Incomplete Nature Part 4

These enigmatic features of the living world are in fact hints that there must be a loophole in the second law. A first step toward discovering it is to recognize that the second law of thermodynamics is actually only a description of a tendency, not a determinate necessity. This makes it different from the inverse-square law that defines the distribution of an electric or gravitational field in space or Newton's law of force (F = ma). For all practical purposes, however, this thermodynamic tendency is so nearly inviolate that it can be treated like a law. It is a tendency that is so astronomically biased as to be almost universally the case. But "almost" is an important qualification. The second law of thermodynamics is only a probabilistic tendency, not a necessity, and that offers some wiggle room.
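
To make this probabilistic character concrete, here is a minimal sketch (my illustration, not the author's; the molecule count and step count are arbitrary assumptions) of why the tendency toward disorder is overwhelming without being a strict necessity.

```python
import random

# Toy model: N gas molecules free to occupy either half of a box.
# The second law's "tendency" is just the statistics of which half
# each molecule happens to be in at a given moment.
N = 20  # with only 20 molecules, rare fluctuations are still observable

def fraction_all_left(n_steps=1_000_000):
    """Estimate how often all molecules spontaneously gather in the left half."""
    all_left = 0
    for _ in range(n_steps):
        # each molecule independently ends up left (True) or right (False)
        if all(random.random() < 0.5 for _ in range(N)):
            all_left += 1
    return all_left / n_steps

# The probability of the fully "ordered" state is (1/2)**N, about 1 in a million
# for N = 20, and unimaginably smaller for a mole-sized N: not impossible,
# merely so improbable that the bias toward mixed states behaves like a law.
print(fraction_all_left())
```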

A superficial appearance of time reversal is implicit in describing functional and purposive processes in terms of the ends that they appear organized to produce. As we saw earlier, being organized for the sake of achieving a specific end is implicit in Aristotle's phrase "final cause." Of course, there cannot be a literal ends-causing-the-means process involved, nor did Aristotle imply that there was. And yet there is something about the organization of living systems that makes it appear this way. This time-reversed appearance is a common attribute of living processes, albeit in slightly different forms, at all levels of function. The highly ordered interactions of cellular chemistry succeed in maintaining living systems in far-from-equilibrium states. They locally counter the increase of internal entropy, as though thermodynamic time were stopped or even reversed within the organism. The increasing complexity of organism structures and processes that characterizes the grand sweep of evolution also appears as though it is running counter to the trend of increasing messiness and decreasing intercorrelations. Furthermore, mental processes recruit energy and organize physical work with respect to the potential of achieving some future general state of things that does not yet exist, thus seeming to use even thermodynamics against itself.

Of course, time is neither stopped, nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states of affairs are not directly causing present events to occur. So, what is responsible for these appearances?

THE LAW OF EFFECT.

Order generation is a necessary property of life, but it is the production of orderliness in the service of perpetuating this same capacity that distinguishes life from any inorganic processes. Thus the localized opposition to thermodynamic degeneration that is exemplified in organisms has a kind of reflexive character. It is this reflexivity that warrants describing organism structures and processes as functions as opposed to mere tendencies. We are familiar with functions that are a product of design, but only since Darwin have we been able to understand function as emerging spontaneously-with its end-directed character arising irrespective of any anticipated end.



The appeal to natural selection logic has long been considered to be the one proven way around the homunculus problem. Though there are still periodic uprisings, mostly from fundamentalist religious interests, as in the case of creationism or Intelligent Design proponents, it can fairly be said that the logic of natural selection has dealt a serious blow to any claim that teleological processes are necessary to explain complex structure and function in the biological world. Daniel Dennett in his book Darwin's Dangerous Idea considers the logic of natural selection to be the intellectual equivalent of a universal acid. And what it universally dissolves is teleology.

The theory of natural selection is ultimately just statistics at work. As Darwin argued, all that is required to explain how organisms become suited to their particular environments is (1) reproduction, with offspring inheriting traits from parents; (2) some degree of spontaneous variation from strict inheritance; and (3) reproduction in excess of the potential support that can be supplied by the local environment. This limitation will inevitably result in reproductive inequality among variant individuals. Those lineages with individual variants that are better suited to utilize available resources in order to support reproduction will gradually replace those lineages that are less well suited. Though Darwinism is often caricatured in contrast to Lamarck's theory of the inheritance of acquired characters, this was not a necessary requirement for Darwin, and he himself entertained the possibility of Lamarckian-like mechanisms in some editions of The Descent of Man in what he called a "gemmule theory." This modern myth of intellectual history, which focuses on inheritance issues, actually obscures the fundamental insight that made Darwin's analysis so revolutionary and challenging to nineteenth-century sentiments.

The core distinguishing feature of Darwin's explanation-and what made it so revolutionary-was, instead, the after-the-fact logic of this mechanism. Where previously it seemed natural to assume that the processes responsible for orderly function and adaptive design necessarily preceded their effects-as in Lamarck's view of functional use and disuse determining the evolution of future functions-Darwin showed that, in principle, antecedent "testing" of functional responses by trial and error was unnecessary to achieve adaptive outcomes. The process that generated variant forms could be completely uncorrelated with the process that determined which variant forms were functionally superior to the others in a given environment. So long as the options with favorable outcomes were preferentially reproduced or retained and re-expressed in future contexts, it did not matter why or how they were generated.
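
As a rough illustration of this after-the-fact logic, the following sketch (my own, with hypothetical numbers such as CAPACITY and TARGET_TEMP) implements the three conditions listed earlier, keeping the routine that generates variants entirely ignorant of the routine that sorts them.

```python
import random

# Minimal sketch of Darwin's three conditions: (1) reproduction with
# inheritance, (2) blind variation, and (3) more offspring than the
# environment can support, which forces differential persistence.
# Note that mutate() knows nothing about fitness(): the process that
# generates variants is uncorrelated with the process that culls them.

CAPACITY = 100          # hypothetical limit the "environment" can support
TARGET_TEMP = 37.0      # an arbitrary environmental optimum

def fitness(trait):
    # after-the-fact evaluation: how well a variant happens to fit its context
    return -abs(trait - TARGET_TEMP)

def mutate(trait):
    # blind variation: a small random change, ignorant of its consequences
    return trait + random.gauss(0, 0.5)

population = [random.uniform(20.0, 50.0) for _ in range(CAPACITY)]

for generation in range(200):
    # every individual reproduces with slight variation (excess reproduction)...
    offspring = [mutate(t) for t in population for _ in range(2)]
    # ...but only CAPACITY of them persist, biased by their after-the-fact fit
    population = sorted(offspring, key=fitness, reverse=True)[:CAPACITY]

print(round(sum(population) / len(population), 2))  # drifts toward ~37.0
```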

This after-the-fact logic for producing adaptive "design" is not just applicable to evolution. The early American behaviorist Edward Thorndike realized that one could even describe the way organisms adapted their behaviors to the environment in response to reinforcement (e.g., reward or punishment delivered after the fact) in these same terms. He called it the "Law of Effect."

One might wonder why such a simple idea had to wait for the mid-nineteenth century to come to the attention of scientists and natural philosophers. One need look no further than the current stridency of anti-Darwinian sentiment that continues to complicate the politics of American education, and even persists among scholars outside biology, to recognize that this form of causal logic remains counterintuitive to this day. On the one hand, the counterintuitive logic of design-without-designing strains credulity. On the other hand, we are entirely familiar with the human ability to conceive and design mechanisms suitable to serve some purpose. So it is only a modest conceptual leap to imagine that a far greater intelligence and technology might be capable of designing mechanisms like organisms. The reverse logic is not so congenial to common sense. It goes against common sense to imagine that something so complex as an organism, which is also extremely well fitted to its environment, could arise spontaneously from something less complex and less in sync with surrounding conditions. Experience relentlessly reminds us of the opposite tendency. For this reason alone, it should not surprise us that it took millennia to systematically describe the logic of such a process.

But Darwin was not the first to think in these terms. In fact, we can probably credit a near contemporary of Aristotle with the first version of such an explanation. The man who first articulated this notion was a Greek philosopher-poet named Empedocles. We know about him because he was both famous and notorious in his own right, and because Aristotle took his ideas to be worthy exemplars of a way of thinking about the natural world that was deeply flawed.

Empedocles' cosmology was based on the four "elements" earth, water, air, and fire (roughly analogized to the four phases of matter: solid, liquid, gas, and plasma) and on forces of attraction and repulsion that he believed ruled their interactions. But what makes his analysis modern-and what made it seem incoherent to Aristotle-was his claim that even the orderliness of living bodies might be able to arise spontaneously, simply by preserved accident. From these sparse beginnings all the more complex features of the world, like living bodies, could arise by combinations of more basic parts which themselves arose and combined by accident.

Empedocles argued that even aimless mechanical processes could produce functionally useful complexity. Initially, all that would be produced would be incongruous combinations and monstrosities. For example, he invokes the possibility of chimeric creatures, such as a man-faced ox, that might willy-nilly be products of such a wild natural shuffling of materials and forms. But, he notes, most such combinations would be awkward and inappropriate to the world, and would quickly perish. Only those combinations that exhibited both an appropriate balance of component features and fittedness to the surroundings would be likely to persist in the world (and presumably, if living, would produce progeny). Further, he suggested that these creatures would sort themselves out into different environments, with those suited to water more likely to spend time in the water and those more suited to air more likely to spend time in the air, where each would then, by the same logic, get honed by elimination into progressively more specialized and distinguishable types. So, he claimed, without prior design, a blind and promiscuous process of mixing of elements could by dint of the elimination of the less coherent, less elegant, less appropriate variations, after the fact, result in forms that were both internally and relationally well designed and well suited to their contexts.

Aristotle, however, found this proposal to be incongruous with the extraordinary design elegance and behavioral sophistication of living things. He therefore devoted considerable effort to refuting this view. As a careful observer of nature, he noted that the elaborate process of organism development-which inevitably produced horses from horses and oaks from acorns from oaks-could not be understood without postulating the presence of an intrinsic end-directed principle (an entelechy). This active principle was inherent in these organisms from the earliest stages of development and was responsible for guiding their maturation toward the adult form. And it wasn't merely a mechanical principle, because it would function to bring about this end form despite the vicissitudes of an unpredictable environment. Thus a tree might grow its limbs through a fence or grow its roots around buried boulders in its striving toward its adult form. If such a principle is necessarily present in each organism, how could it be denied as an intrinsic contributor to the origination of living forms as well?

Darwin, too, apparently struggled with the counterintuitive logic that eventually became enshrined in his theory of natural selection. Most historians locate the point at which this reversal of logic occurred to him in his reading of Thomas Malthus' book on the statistics of population growth. Although Lamarck had previously developed the idea that useful traits should naturally replace less useful traits over the generations, Darwin realized that their usefulness did not need to be involved in their initial generation. There was no need for prior experience or prescient design. Because organisms reproduce, their traits would inevitably be differentially preserved and transmitted irrespective of how they were generated, so long as there was competition among variants for the means of reproduction that favored the preservation of some lineages over others. The mechanism responsible for the origin of a given trait would therefore be irrelevant, so long as that trait got successfully preserved in the future population. In the world of reproducing organisms, an effect could in this way be the explanation for the current existence of functional organization, even if that effect had no role in producing it. The mechanisms for producing forms and for preserving them were entirely separable.

This provided a way to get around both the necessity of prior design and the role of an organism's striving to adapt to local conditions. Adaptation and functional correspondence could even be a consequence of preserved accident, so long as there was a favorable reproductive outcome. Darwin could thus explain the presence of a living form bearing the marks of design, without invoking any hint of intelligence, representation, agency, or prescience. Nothing more than blind chance and being consistent with the necessities of reproduction (Chance and Necessity, as Jacques Monod would much later paraphrase it in the title of his celebrated book) might provide a sufficient basis for replacing all appeals to teleological processes in natural science.

PSEUDOPURPOSE.

This causal dichotomy separating living from non-living nature is real, but the appearance of causal incompatibility is partly an unfortunate accident of conceiving of organisms as though they are machines. Although they are indeed functionally organized, living organisms aren't just complicated chemical machines. In many ways, living systems are the inverses of man-made mechanisms. As designers and users, we determine the form of a machine to be suited or not to a particular task, but this task otherwise has no relation to the machine's existence. Organism forms evolve in the process of accomplishing a task critical to maintaining the capacity to produce this form, so the task space and the form of the organism are essentially inseparable. Machines require assembly from parts that are produced separately and in advance. Organisms spontaneously develop. Their parts differentiate from an undifferentiated starting point. There are almost never additions, and parts almost never need to be brought together for assembly. The functional integration of the components of a machine must be anticipated in advance. Organisms' components are integrated and interdependent from the beginning and, as noted above, they exist as a consequence of having at some point been relevant for already fulfilling some function.

Despite these almost exactly inverted modes of construction, the comparison between organisms and complex machines is almost irresistible. Indeed, I think that it is accurate to say that biologists on the whole accept the machine analogy, at least in a vague sense of the concept, and rely on it to make sense of the obvious end-directedness of living processes. In large part, this satisfaction with the machine analogy reflects the fact that there is now a straightforward way to explain the end-directedness of both organic and mechanical processes without any of the trappings of teleology in the classic sense, even while retaining a vocabulary rich in teleological connotations. Functional organization can be treated as a variation on a theme common to simpler mechanistic processes lacking any whiff of meaning, purpose, or value. In an influential paper published in 1974, the Harvard evolutionary biologist Ernst Mayr argued that we can avoid confusing the different sorts of asymmetrically directed activities exhibited by intelligent agents, living organisms, and even some machines by distinguishing a number of variations on the theme of asymmetric change. Thus, for example, there is the asymmetric change exhibited by thermodynamic processes, the apparent goal-directed behavior of thermostats and guidance systems, the asymmetric development and evolution exhibited by biological processes, and the purposeful design and conscious goal-directed activity of human agents.

As with many of the prominent evolutionary biologists of the late twentieth century, Mayr was eager to complete the metaphysical rout of teleology that Darwin had initiated. He recognized the need to use end-directed and information-based terminology for describing biological phenomena, but wanted a way to do this without also accepting its metaphysical baggage. Mayr, in fact, believed that the problem could be solved by incorporating insights from cybernetics and computational theories into the evolutionary paradigm. He quotes his Harvard colleague, the philosopher-logician Willard Van Orman Quine, as agreeing: "The great American philosopher Van Quine, in a conversation I had with him about a year before his death, told me that he considered Darwin's greatest philosophical achievement to consist in having refuted Aristotle's final cause. . . . At the present we still recognize four teleological phenomena or processes in nature, but they can all be explained by the laws of chemistry and physics, while a cosmic teleology, such as that adopted by Kant, does not exist."2 Whether or not we can credit Darwin with such a monumental philosophical achievement, this assertion nonetheless captures the belief held by most biologists that even the action of organisms can be understood as thoroughly mechanistic and on a par with the other physical processes of the inorganic world, except for being incorporated into a living body.

But recognizing that organisms behave in ways that differ from inorganic processes, Mayr sought a terminological distinction that would exemplify a middle ground between mere mechanism and purpose. He turned to a term coined by Colin Pittendrigh in a paper in 1958: teleonomy. The Greek combining form -nomy merely implies lawlike behavior, and so one could use it to describe behavior that was asymmetrically oriented toward a particular target state, even in systems where there was no explicit representation of that state (much less an intention to achieve it) but only a regular predictable orientation toward an end state. By only specifying the way a mechanical process can be organized so that it converges toward a specific state, rather than behavior for the sake of an end, this conception avoided sneaking mentalistic assumptions about teleology back into biological discourse. Mayr agrees with Pittendrigh that this makes it possible to feel no discomfort in saying that "A turtle came ashore to lay her eggs," instead of just, "She came ashore and laid her eggs." According to Mayr, "There is now complete consensus among biologists that the teleological phrasing of such a statement does not imply any conflict with physico-chemical causality."3 By coining a term which implied target-directedness but was agnostic about how this behavior was produced or came to exist, biologists and engineers could continue to use teleologically loaded descriptions, but without the metaphysical baggage that tended to come with them. Moreover, to the extent that the process of natural selection could be conceived as consequence-blind and accident-driven, this target-directed tendency could also be treated as accidental. It seemed as though teleology had been fully reduced to mere mechanism.

FIGURE 4.1: A diagram of a classic thermostat circuit that uses a bimetal coil and a mercury switch to turn a furnace for heating a room on and off and to maintain it at a constant temperature. As changes of different sorts propagate around this causal circle, the circular iterations of these influences compound with themselves again and again incessantly. This causal analogue to self-reference is what is responsible for the deviation-reduction dynamics of the whole. Reversing the orientation of the mercury switch would alter the circuit from deviation reduction (negative feedback) to deviation amplification (positive feedback).

A classic example of a purely mechanistic teleonomic effect is provided by the cycle of causes and effects constituting an old-style thermostat circuit used to maintain the temperature of a room (see Figure 4.1). We can trace a continuous loop of differences that make a difference in this system (following Gregory Bateson's way of describing this) as follows: The difference in the measured temperature of the room (1) makes a difference in the winding or unwinding of a bimetallic strip (2), which makes a difference in the tilt of a mercury switch (3), which makes a difference in the flow of current through the electric circuit (4), which makes a difference in the fuel introduced into the furnace (5), which makes a difference in the temperature of the room (1), and so on. The result is that the whole open system, consisting of the room and heating system, embedded in a larger context which introduces uncorrelated variations, exhibits a pattern of behavior that is specifically convergent toward a given end state. So, although there is no explicit internal representation of this target state, the structure of the circuit constraining the relationships between room temperature and furnace-energy use produces a pattern of activity that would also be produced by a person periodically flipping the heater switch to adjust room temperature. In other words, from the point of view of the behavior, there is no difference between what a purpose-driven person and a thermostat would produce.
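
A minimal simulation of this loop of differences, using assumed constants rather than anything from the text, shows how the deviation-reducing behavior falls out of the circuit's organization alone.

```python
# Sketch of the loop of "differences that make a difference" in the thermostat:
# room temperature -> bimetal coil -> mercury switch -> current -> furnace -> room.
# All numbers below are made up for illustration.

SET_POINT = 20.0   # degrees C; the target implicitly built into the circuit
room_temp = 12.0
furnace_on = False

for minute in range(120):
    # (1)->(3): the measured temperature tilts the switch one way or the other
    furnace_on = room_temp < SET_POINT          # negative feedback
    # (4)->(5): current flow determines whether fuel is burned
    heat_in = 0.4 if furnace_on else 0.0
    # (5)->(1): furnace heat and losses to the outside change the room temperature
    heat_out = 0.02 * (room_temp - 5.0)         # leakage toward a cold exterior
    room_temp += heat_in - heat_out

    # Flipping the comparison (room_temp > SET_POINT) would turn this
    # deviation-reducing loop into a runaway, deviation-amplifying one.

print(round(room_temp, 1))  # hovers near the set point despite perturbations
```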

The telos-the end-in the case of the thermostat is the minimization of variation from a given target temperature. Of course, in the case of a thermostat, that end, as well as the means to achieve it, is the product of a human mind. Being embodied in the purely mechanical organization of heater and thermostat circuit does not change the fact that its ententional character arose extrinsic to the device. But it is at least conceivable that an analogous feedback circuit could arise by accident in certain inorganic contexts.

Consider, for example, the famous Old Faithful geyser, located in Yellowstone National Park. It erupts on a highly regular basis because of the way that the temperature and pressure of the subsurface water are self-regulated around a mean value by the interplay of geothermal heating, the pressure of steam, and the weight of the column of water in the underground vent. As the water is heated to the point of boiling, the pressure of the released steam reaches a value sufficient to drive the overlying column of water upward; but in so doing it "resets" the whole system, allowing new water to accumulate, boil, and reach this same threshold, again and again. So the water temperature and steam pressure oscillate around a constant value and the geyser erupts at regular intervals.
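
The same point can be made with a toy relaxation oscillator; the constants below are invented for illustration and are not drawn from Old Faithful's actual geology.

```python
# A toy relaxation oscillator: heat steadily raises pressure until a threshold
# is crossed, the eruption resets the system, and the cycle repeats without
# any goal being represented anywhere in the system.

HEATING_RATE = 1.0        # arbitrary units of pressure gained per time step
ERUPTION_THRESHOLD = 60.0 # pressure at which steam can lift the water column

pressure = 0.0
eruption_times = []

for t in range(300):
    pressure += HEATING_RATE                  # geothermal heating builds pressure
    if pressure >= ERUPTION_THRESHOLD:        # steam overcomes the water column
        eruption_times.append(t)
        pressure = 0.0                        # the eruption "resets" the system

# Eruptions recur at a fixed interval set entirely by local contingencies
# (heating rate and threshold), the analogue of accidental local geology.
print(eruption_times[:4])  # e.g. [59, 119, 179, 239]
```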

This demonstrates that such a teleonomic mechanism can arise by accident. If evolution can also be understood as nothing more than retained and reproduced accidental organization, then doesn't this suggest that all ententional processes might be reducible to accidental mechanisms? What are the implications for the source of the end-directed processes in an organism? Or a mind? Can't the purposiveness of a person turning a heater on and off to regulate room temperature also be ultimately traced to an accidental origin? Didn't our capacity to do this evolve by natural selection, and isn't that process reducible to physical-chemical accident? This is indeed the implication that many philosophers and biologists have drawn from this analysis.

Consider a typical biological analogue to the thermostat: the temperature regulating "circuit" of the human body. As in the case of the thermostat, although the material details are considerably different, body temperature regulation also depends on a mechanism that merely embodies this tendency in its overall organization. It isn't governed by an explicit representation of the goal state. Indeed, body temperature regulation is accomplished by a thermostat-like feedback mechanism. But, of course, the optimal temperature for body functions is not an accidental value. This mechanism evolved because this temperature was optimal for the many metabolic processes that ensured health and reproductive success of a long lineage of warm-blooded animals. So, like the electrical-mechanical thermostat, the target state that it tends to converge toward is derived extrinsic to the mechanism itself, even though this target is implicitly embodied in the structure of the circuit. This is not the case for Old Faithful. There the balance of factors that determines its regularity is determined solely by the accidental contingencies of the local geology.

There is, however, an extended sense in which an analogous claim could be made for mammalian temperature regulation. It is the contingent fact that certain temperatures are best for the relevant metabolic biochemical processes that contributed to the preservation of this value rather than some other. This, too, is due to a merely physical-chemical contingency. A biochemical mechanism converging to this value was merely the most successful surviving variant from among many that were its probable competitors in evolution. Each was the product of a genetic accident, which became subject to the culling process of natural selection. In this respect, following a standard neo-Darwinian line of reasoning, we can say that this end wasn't the product of design. Its end-oriented tendency was produced by accident, and was retained merely because of its serendipitous contribution to the reproduction of organisms happening to exhibit this particular variant.

Writing in the early years of cybernetics and systems theory, Pittendrigh and Mayr were aware that target-directed behavior could be embodied in simple mechanisms, such as automatic guidance systems, and that these mechanisms could be designed to alter their output so that they could even track erratic targets or irregularly changing conditions. Analogous organic processes (such as temperature regulation) were already well known, as described in the work of physiologists like Walter Cannon and others. To illustrate the parallelism between goal-directed artifact behavior and organism behavior, Mayr cites the example of a guidance system: "A torpedo that has been shot off and moves toward its target is a machine showing teleonomic behavior." Mayr goes further than Pittendrigh in specifying how he believes this concept can be made more precise in biology. At the time of Mayr's appropriation of the term teleonomy, the informational function of DNA had been firmly established and the development of the science of computing had begun to reach maturity (though laptop computing was still a decade away). This led Mayr to generalize the concept one step further: conceiving biological goal-directedness in software terms. "A teleonomic process or behavior is one which owes its goal-directedness to the operation of a program."4

This latter analogy was to prove highly influential because the computer program analogy can be extrapolated to any given level of complexity. Thus, in principle, there was no biological phenomenon that couldn't be modeled as the output of an algorithm. And like the cybernetic analogy, it could be understood in completely mechanistic terms. But also like the thermostat, when stripped of reference to human users and their interpretations and goals, a computer is simply a machine. Attributing the complex determinate behavior of a computer program to goal-directedness is thus an implicit reference to a potential human user/designer; otherwise (as we've seen), computation reduces to machine operation. Although computer behavior can be highly complicated, and thus can parallel highly organized organically produced behaviors-including those that converge toward specific end states-the appeal to the computer metaphor introduces considerable baggage in the form of non-trivial assumptions about what distinguishes computation from any other machine operation. As we argued in the last chapter, it is a mistake to treat the abstract description applied to the machine's operations as something that is intrinsic to it. And yet without these extrinsic interpretive glosses of its machine operations, its apparent target convergence reduces to mere asymmetric mechanical change.

Teleonomy is, however, more than merely asymmetric physical change. To indicate the distinctiveness of teleonomic processes in both biology and human mechanisms, Mayr contrasted teleonomic processes with processes that converge toward a terminal state that is no longer characterized by asymmetric change. These are natural processes occurring spontaneously in the non-living physical world. In this respect, the fall of an object in a gravitational field or the development of a chemical reaction toward thermodynamic equilibrium have asymmetric tendencies that develop toward a specific stable end stage. He described these sorts of processes as teleomatic, literally, automatically achieving an end:

Many movements of inanimate objects as well as physico-chemical processes are the simple consequences of natural laws. For instance, gravity provides the end-state for a rock which I drop into a well. It will reach its end-state when it has come to rest on the bottom. A red-hot piece of iron reaches its "end-state" when its temperature and that of its environment are equal. All objects of the physical world are endowed with the capacity to change their state and these changes follow natural laws. They are "end-directed" only in a passive, automatic way, regulated by external forces or conditions. Since the end-state of such inanimate objects is automatically achieved, such changes might be designated as teleomatic.5

Mayr thus suggests that we should classify together all spontaneous physical processes that have a terminus at which change stops. This is probably a bit too broad. Others who have used the term teleomatic seem more content to restrict its reference to asymmetric causal processes that converge toward a more or less stable end state. This characterizes thermodynamic processes approaching equilibrium. Although mechanically moved objects may come to rest due to friction and falling objects may be specifically oriented in their movement and cease upon reaching the ground, these terminal states are relational and to some extent artificial, since they are just the result of being extrinsically impeded from continuing. In contrast, a chemical reaction or convection process developing toward thermodynamic equilibrium is defined by precisely excluding outside relational interactions. These are effectively intrinsically determined asymmetries and ends. Time-reversible processes, such as the interactions described by Newtonian mechanics, do not have intrinsic terminal states in the same sense, though field effects like the pull of gravity fall less clearly into one or the other category. This tripartite division of teleologic, teleonomic, and teleomatic is widely accepted (if not always framed in these terms). Many, including Quine and Mayr, would thus even deny that thought processes are anything more than highly complex forms of teleonomy.

The major problem with the term teleonomy is its implicit agnosticism with respect to the nature of the mechanism that exhibits this property. This is, of course, considered to be one of its main advantages, according to Mayr and others. On the one hand, if what is implied is organization on the analogy of a thermostat or computer, there seems little that is intrinsic to these processes to distinguish their principles of operation from those of other mechanisms. With respect to its users, a thermostat is useful because of its highly reliable convergence toward a specific end state from a wide variety of initial conditions and perturbations. While this description might also apply to the second law of thermodynamics, the convergence mechanisms are quite different. Specifically, the thermostat achieves a degree of near stability by doing work to perturb a thermodynamic process in a way that is intended to counter the second law effect as closely as possible. More usefully, a thermostat can be set to many different arbitrary convergence values. Certain of these values will be consistent with the interests of its designers or users.

The analogy to the programmed operation of a digital computer likewise invokes an entirely mechanistic process that can be reduced to a collection of teleomatic processes, which may be even more linearly deterministic and parasitic on extrinsic description than is a feedback circuit, such as is embodied in a thermostat. Ultimately, in both of these interpretations, once we remove the implicit reference to a use, teleonomic processes reduce to teleomatic processes.

If these were presumed to be the only candidates, no wonder Mayr was willing to entertain the idea that teleology doesn't exist. It remains unclear whether other defenders of the concept also assume that teleonomic processes are entirely reducible to teleomatic processes. Justification for denying this reduction comes from those who focus on the special organization of teleomatic processes that constitutes feedback processes. In the case of feedback, one teleomatic process is arranged so that it undermines the activity of another, and vice versa. Again, this is easily demonstrated by the action of a thermostat in which a change in room temperature activates a circuit, which has the effect of activating a furnace to change the room temperature, which ultimately has the effect of inactivating that circuit.

Both the thermostat and the computer analogies leave us with an unanswered question, which is the issue at stake. What is the difference, if any, between attributing the construction of these sorts of mechanisms to human design versus natural selection versus serendipitous accident? As products of human design, we feel justified in locating the source of the telos outside the artifact. As products of pure accident, we feel justified in assuming that there is no telos, only the appearance of it under some description. But as products of evolution, it depends. If we conceive of this process as merely preserved accident, then it follows that the presumed ententional character of the mechanism is likewise merely an imposed descriptive gloss; and if we conceive of evolution as itself somehow intrinsically ententional, we appear to have invoked a variant of the human designer analogy. But of course, that merely begs the question, since human minds themselves evolved.

From the perspective of a kind of evolutionary hindsight, the end-directedness of an organism's adaptive features, even if they arose accidentally, currently exists because of their contribution to something else, a larger system that they happen to be a part of. Importantly, this system has the special and unusual property of being able to reproduce the many details of its organization, as well as this reproductive capacity. This embeddedness is relevant, in somewhat the same way that embeddedness in a human intentional context is relevant to the end-directedness of a thermostat or a computer program. The question is whether this makes the organism adaptation more than just teleonomic. Is a teleonomic mechanism that is preserved because it aids the persistence of a larger self-reproducing system equivalent to one that is entirely generated by accident? The answer depends on whether the larger system can also be adequately explained in this way. This means that determining whether these processes are appropriately considered teleological, teleonomic, or just teleomatic (in the terms of this debate) requires that we carefully identify what comprises the organization of this larger system from which living processes derive their distinctive features.

Within evolutionary biology, it is generally assumed that the ultimate source for the special end-directed appearance of living processes is natural selection. So, examining evolution more closely can offer critical clues for deciding whether life's apparently purposive and representational processes are or are not reducible to teleomatic logic, and thus ententional in appearance only.

BLOOD, BRAINS, AND SILICON.

During the twentieth century, the abstract logic of natural selection was realized to be relevant to other domains as well, ones in which adaptive complexity requires explanation or where unguided "design" might be important. Darwinian processes were identified as major factors in the organization of the humoral immune response and in the development of the nervous system, and Darwinian mechanisms have been employed as computational tools and as molecular genetic tools. Even though Darwinism was instrumental in challenging teleological claims in nineteenth-century biology, the power of Darwin's proposal was not in his critique of prior views of external agency forming the myriad creatures, but in the elegant simplicity of the mechanism he proposed. It was a recipe for replacing purposeful design strategies with mindless selection mechanisms wherever they might arise. And as Darwin had managed to eliminate the need to appeal to an intelligent creator-a grand homunculus-in order to explain the functional consequences of biological evolution, others subsequently invoked the Law of Effect to eliminate homunculi from quite different domains.

One of the most resounding success stories for the Law of Effect was its application to the explanation of the humoral immune response-the basis for disease resistance in us and our close animal relatives. By the late nineteenth century it was known that there were factors in the blood of people who had once contracted a disease and had overcome it that made them immune to a subsequent infection. In addition, it was discovered that a defensive immune response could be built up by exposure (e.g., via inoculation) to a dead or less virulent strain of the disease agent, such as a bacterium or virus. By being exposed to the disease agent in some form, immune molecules were produced that were matched to that agent. Common sense suggested that this immune response was therefore something analogous to learning. Perhaps, it was thought, some molecular mechanism made a sort of mold of the disease agent and then caused many copies of it-antibodies-to be made. This mechanism was analogous to a Lockean blank slate, which was ready to accept whatever form was presented to it. Despite its intuitive appeal, the mechanism turned out to be almost opposite in its organization. Instead, during prenatal life, a vast array of highly variant antibody forms are produced by virtue of a sort of super fragility and shuffling of the bits of DNA that code for the binding ends of the antibody molecules. This is called hypermutation, and it occurs only in the line of somatic cells destined to become antibody-producing cells (thus having no transmission to progeny). Later, during exposure to disease agents, invaders with molecular signatures different from one's own cells bind with those antibodies from among this vast array of preformed variants which just happen to be approximate mirror-image fits to the disease agent's molecular signatures. This binding triggers rapid reproduction of the associated antibody-producing cells, and thus floods the circulation with appropriate antibodies.
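
A cartoon version of this clonal-selection logic might look like the following sketch, in which antibody and antigen "shapes" are reduced to integers and every parameter is invented for illustration.

```python
import random

# The antibody repertoire is generated blindly, in advance, with no
# information about any future invader (a crude analogue of hypermutation).
random.seed(1)
repertoire = [random.randrange(0, 1000) for _ in range(500)]

def respond(antigen_shape, repertoire, tolerance=25):
    # after-the-fact fit: whichever preformed antibodies happen to bind well
    binders = [ab for ab in repertoire if abs(ab - antigen_shape) <= tolerance]
    # binding triggers proliferation of the matching clones (crude amplification)
    return binders * 50

antigen = 713                       # an unpredictable invader
antibodies = respond(antigen, repertoire)
print(len(antibodies))              # a flood of approximately fitting antibodies
```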

All the potential variety of adapted forms is therefore preformed at random before any function is established, and as a result, a huge fraction of this variety will never become activated. Moreover, the fit between antibody and antigen (the molecular shape on the invader that the antibody attaches to, which is the anti-body gen-erator) is only approximate. But the fact that there are already antibodies present for any possible infectious agent gives the body a head start.

The immune response does not have information that anticipates its enemy, nor does it acquire information from this enemy. Yet in many ways it behaves as though both are the case. Because any structural variant is functional only after the fact of a serendipitous fit between randomly generated antibody and unpredictable antigen, there is no intentional end-directedness involved. And if we add to this account the fact that the immune mechanism itself evolved due to the Darwinian process-both of which exemplify a blind generation of variants and after-the-fact fitting to the environment-it would appear to have avoided any homunculus fallacy.

Probably the first serious application of Darwin's strategy to a quite different domain was in the psychological dogma of behaviorism, discussed in chapter 3. B. F. Skinner quite consciously modeled his operant conditioning paradigm on Darwin's after-the-fact logic and appealed to it to justify the anti-mentalist doctrine that was its base. The undirected emission of spontaneous exploratory behaviors was the analogue to spontaneous genetic variation, the rewarding or aversive stimuli that served as reinforcers were the analogues to selection pressures, and the biasing of behavior leading to habit formation and habit extinction that resulted was the analogue of differential reproduction. So, on the analogy to Darwin's natural selection, Skinner believed that by considering nothing else but these factors and their various schedules and contingencies, it might be possible to explain the formation of complex behavioral dispositions without ever invoking mental or other teleological influences. Just as Darwin had seemingly banished the teleology of a divine mind from biology, Skinner believed he had exorcised mind from psychology, leaving only mechanism.

Throughout the latter part of the twentieth century, other writers have also found the Darwinian model to offer a way out of the strictures of both simple mechanistic and homuncular accounts of ententional phenomena. The philosopher of science Karl Popper, for example, suggested that natural selection offered a useful analogy for explaining the accumulation of reliable knowledge which characterized the development of scientific theories. Although individual researchers may be motivated by personal insights and pet hypotheses to explore and promote certain ideas, the collective enterprise could also be understood as having a kind of inevitable Darwinian quality. Comparing mutational variants to conjectures and the culling effect of natural selection to refutations, Popper offered a similarly non-teleological scenario for explaining the progressive "adaptation" of scientific knowledge to its empirical subject matter. Over his career, he generalized this logic to apply to the evolution of knowledge as it is acquired in any domain, and suggested that this might be a way to define knowledge in general, a view that he aptly described as "evolutionary epistemology." This approach was later embraced and generalized further by the American psychologist Donald T. Campbell, who articulated what is probably the most generalized characterization of selection processes as distinct from other forms of knowledge creation. He characterized the essence of selection (as opposed to instruction) theories of adaptation and knowledge generation with the simple aphorism "blind variation and selective retention." This catchphrase highlights two distinguishing features of the selection logic: first, that the source of variation is unrelated to the conditions that favor preservation of a given variant; and second, that there is differential persistence of some variants with respect to others. As we will discuss in greater detail in later chapters, the defining criterion is not the absence of ententional processes, but the absence of any specific control of the variants produced by the process that determines their differential persistence.

A selection approach to global brain function has been explored by neuroscientists as an alternative to computational approaches. Most notably, Gerald Edelman, who received the 1972 Nobel Prize in medicine for his work on the structure of immunoglobulins, has compared the way the nervous system is able to register and adapt to an unpredictable world of stimuli to the strategy that the immune system employs to respond to the unpredictable diversity of disease agents in the world (as described above). He argued that an initial burst of variation in the production of neural connections is generated independent of later function, and that certain of these variant circuits are subsequently selectively recruited with respect to their relative "fit" with input from the senses, while others are eliminated. This was not just an extrapolation from immune function, but also was influenced by evidence of an intermediate-level selection process taking place during the embryonic development of the nervous system. During the 1970s and 1980s, numerous studies demonstrated that in developing mammal brains there is initially an overabundance of neurons and connections produced, and that subsequently many are culled by activity-dependent processes which favor neurons and connections that best reflect converging signal correlations. In all these contexts-immune function, the adaptation of neural circuits to inputs, and development of neural circuits-minimally constrained ("blind") variations are generated before having a significant portion of this variation sculpted away to meet specific functional requirements of the context in which they are embedded. Edelman additionally argued that this could provide a neural computational logic as well.

The combination of the computer theory of mind with the Darwinian paradigm seemed poised to offer the ultimate argument for a homunculus-free mind. In biology, the analogue of the computer designer or software engineer is this mindless, purposeless physical process: natural selection. Edelman argued, in effect, that the design and function of brains were selection all the way up. Evolved computations do not require any external homunculus to determine their specific correspondence relationships to the world. Correspondence results from a process of blind variation and selective preservation of those variant circuits and algorithms that best "fit" within their context, both within the brain and within the world, and not in response to any reflection, purpose, or design foresight. Given that natural selection theory appeared to offer a non-teleological mechanism for improving the correspondence between the organic processes and specific features of the environment, it would seem we might finally have found a way to evade the homunculus trap.

Variations on this theme have been explored extensively in many other fields as well. For example, in a field often described as artificial life and in a related field called evolutionary computing, algorithms are created that randomly generate populations of other algorithms, which are then pitted against one another with respect to their success at achieving some criterion of operation. The relative success at achieving this result is then used to preserve some and discard others, so that the preserved variants can be used as the starting forms from which new variants are generated for a new cycle. Because natural selection is taken to be teleology-free-merely remembered accident-generating these computational systems presumably requires no additional teleological aspect. Of course, in an artificial life simulation, the computation seen as an adaptation, the grounds of competition between algorithms, the reasons that some variants are preserved and others eliminated, and the environmental context providing the limited resource over which there is competition, are all specified extrinsically. They are models of selection for the purpose of pursuing a human-conceived result, even if only to explore this sort of mechanism. And this is not selection based on any physical parameters of the machine operations and their environment-these are bracketed from consideration and not allowed to interfere. Only the logical idealization of selection is being embodied in these determinate machine operations.
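
A stripped-down example of this kind of evolutionary computation (with an arbitrary target word and invented parameters) makes the point visible in the code itself: the criterion of success and the survival rule are supplied from outside by the programmer, and only the bare logic of selection is embodied in the machine operations.

```python
import random
import string

TARGET = "TELEONOMY"                 # the "environment," chosen by the human designer
ALPHABET = string.ascii_uppercase

def score(candidate):
    # extrinsically specified measure of "fit"
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # blind variation: change one randomly chosen letter
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(200)]

for generation in range(500):
    offspring = [mutate(p) for p in population for _ in range(2)]  # excess reproduction
    population = sorted(offspring, key=score, reverse=True)[:200]  # extrinsic survival rule
    if population[0] == TARGET:
        break

print(generation, population[0])
```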

FRACTIONS OF LIFE.

Organisms are not organized like thermostats. To the extent that they do self-regulate, it is in the service of some other outcome: reproduction. But can't this too be reduced to chemistry? Isn't the "secret of life" the replication of DNA? Indeed, according to the so-called central dogma of molecular biology, all forms of life depend on the ability of the DNA molecule to serve as a template for two essential functions: (1) determining the amino acid sequences constituting proteins and thus influencing their shapes and reactive surfaces; and (2) serving as a model for making duplicate copies of itself. Even viruses that contain RNA rather than DNA depend on the cellular DNA-based molecular machinery of their hosts to get their parts synthesized and their RNA replicated. With the discovery of this molecular basis for genetic inheritance, the mechanistic conception of life also was transformed. Life could be understood in information-processing terms. Mayr's conception of teleonomy on the analogy of "control by an algorithm" had an unambiguous molecular interpretation:

* DNA contains algorithms in the form of a triplet "code" of bases (purines and pyrimidines) constituting codons.

* A sequence of codons on a DNA molecule "codes for" a sequence of amino acids that constitute a protein.

* Proteins control the chemistry that does the work of constructing our cells and running our bodies, including the process of reproduction.

* During reproduction, bodies make copies of the DNA molecules that control them and pass them on to offspring, thereby passing on the algorithms that make the whole process repeatable in a new body.

The use of algorithms to control mechanisms that are capable of replicating those algorithms is a pretty reasonable caricature of the capability of a digital computer. And if we expand this vision to include a computer-controlled factory for manufacturing computers, also controlled by algorithms that it can copy and transfer to newly manufactured computers, it seems as though we have pretty thoroughly described the logic of life in purely mechanistic and information-processing terms.
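
For concreteness, here is a fragment of that "algorithmic" reading of DNA, using a deliberately partial codon table; real translation of course involves ribosomes, tRNAs, and far more cellular machinery than this sketch implies.

```python
# A small, intentionally incomplete slice of the standard genetic code,
# used only to illustrate the codon-by-codon reading described above.
CODON_TABLE = {
    "ATG": "Met",  # also the usual start signal
    "TTT": "Phe", "TTC": "Phe",
    "AAA": "Lys", "AAG": "Lys",
    "GGT": "Gly", "GGC": "Gly",
    "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read a DNA coding sequence three bases at a time into amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTAAAGGTTGGTAA"))  # ['Met', 'Phe', 'Lys', 'Gly', 'Trp']
```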

From here, it is a small step to expand the account further to include a computational account of natural selection. Errors in the code alter the algorithm, which alters the function of the computer, which alters the manufacturing process, and so on; if such errors produce more efficient variants of this process, they will tend to replace the less efficient forms. Voila-evolution! This vision leads to an interesting speculation: If the computer-manufacturing machinery is entirely controlled by the algorithms that are being copied, can't we ignore the implementation mechanisms and focus only on the information embodied in the algorithms?

Probably the most famous and popular modern account of the process of natural selection is provided by Richard Dawkins in The Selfish Gene (1976). It is relevant to this discussion because of the way it both appears to invoke ententional properties in evolution and denies them at the same time. With the discovery of the molecular nature of genes-how they replicate and determine the structure of proteins-it became possible to recast evolutionary theory in terms of information and computing, as well as chemistry. The non-repetitive and linear structure of the DNA molecule is in many respects the perfect biological analogue to the classic computer tape, as envisioned in the original Turing machine, which was still in use for data transfer in the early 1970s. So the recognition that genetic inheritance is accomplished by virtue of duplicating the sequence structure of one DNA to make two copies could realistically be understood in information-processing terms. Dawkins' genius was the recognition that it should therefore be possible to think of evolution in informational terms.

A DNA molecule is capable of embodying an arbitrary linear pattern consisting of alternating "coding" elements (its four bases). This pattern indirectly organizes the functional dynamics of the chemistry occurring within living cells by providing a template for specifying the structure of life's workhorse macromolecules: proteins. In this respect, the base sequences embodied in DNA are often treated as the analogue to software for the organism as computer. To make the analogy more complete, one must think of DNA as carrying the code for actually building the computer on which it runs. Because a DNA molecule can serve as a template for the assembly of new DNA molecules with the same base sequence, it can further be claimed that it is analogous to self-writing software. Although the gene concept has become progressively complicated, and to some extent deconstructed, over the past couple of decades as additional variant functions of DNA sequences have been identified, the commonly understood use of the term is applied to that length of a DNA molecule that corresponds to the structure of a given protein product. This turns out not to be the only feature of DNA that is functionally relevant, yet it still serves as a useful approximate way to partition DNA into distinct informational chunks relevant to evolution.

Dawkins describes genes as replicators. The suffix "-or" suggests that genes are in some sense the locus of this replication process (as in a machine designed for a purpose, like a refrigerator or an applicator), or else an agent accomplishing some function (such as an investigator or an actor). This connotation is a bit misleading. DNA molecules only get replicated with the aid of quite elaborate molecular machinery, within living cells or specially designed laboratory devices. But there is a sense in which they contribute indirectly to this process: if there is a functional consequence for the organism to which a given DNA nucleotide sequence contributes, it will improve the probability that that sequence will be replicated in future generations. Dawkins describes genes as active replicators for this reason, though the word "active" is being used rhetorically here to indicate this contribution, because they are otherwise dynamically passive players in this process. So, although Dawkins explicitly employs a terminology that connotes agency in order to avoid what he argues would be tedious circumlocutions, it is largely for heuristic reasons.

Replicator theory thus treats the pattern embodied in the sequence of bases along a strand of DNA as information, analogous to the bit strings entered into digital computers to control their operation. Like the bit strings stored in the various media embodying this manuscript, this genetic information can be precisely copied again and again with minimal loss because of its discrete digital organization. This genetic data is transcribed into chemical operations of a body analogous to the way that computer bit strings can be transcribed into electrical operations of computer circuits. In this sense, genes are a bit like organism software.

Replicators are, then, patterns that contribute to getting themselves copied. Where do they get this function? According to the standard interpretation, they get it simply by virtue of the fact that they do get replicated. The qualifier "active" introduces an interesting sort of self-referential loop, but one that seems to impute this capacity to the pattern itself, despite the fact that any such influence is entirely context-dependent. Indeed, both sources of action-work done to change things in some way-are located outside the reputed replicator. DNA replication depends on an extensive array of cellular molecular mechanisms, and the influence that a given DNA base sequence has on its own probability of replication is mediated by the physiological and behavioral consequences it contributes to in a body, and most importantly how these affect how well that body reproduces in its given environmental context. DNA does not autonomously replicate itself; nor does a given DNA sequence have the intrinsic property of aiding its own replication-indeed, if it did, this would be a serious impediment to its biological usefulness. In fact, there is a curious irony in treating the only two totally passive contributors to natural selection-the genome and the selection environment-as though they were active principles of change.

But where is the organism in this explanation? For Dawkins, the organism is the medium through which genes influence their probability of being replicated. But as many critics have pointed out, this inverts the location of agency and dynamics. Genes are passively involved in the process, while the chemistry of organism bodies does the work of acquiring resources and reproducing. The biosemiotician Jesper Hoffmeyer notes that, "As opposed to the organism, selection is a purely external force while mutation is an internal force, engendering variation. And yet mutations are considered to be random phenomena and hence independent of both the organism and its functions."6 By this token the organism becomes, as Claus Emmeche says, "the passive meeting place of forces that are alien to itself."7 So the difficulty is not that replicator theory is in error (indeed, highly accurate replication is necessary for evolution by natural selection); it's that replicators, in the way this concept has generally been used, are inanimate artifacts. Although genetic information is embodied in the sequence of bases along DNA molecules and its replication is fundamental to biological evolution, this is only relevant if this molecular structure is embedded within a dynamical system with certain very special characteristics. DNA molecules are otherwise just long, stringy, relatively inert molecules. The question that is begged by replicator theory, then, is this: What kind of system properties are required to transform a mere physical pattern embedded within that system into information that is able both to play a constitutive role in determining the organization of this system and to constrain it to be capable of self-generation, maintenance, and reproduction in its local environment? These properties are external to the patterned artifact being described as a replicator, and they are far from trivial. As we will see in subsequent chapters, it can't be assumed that a molecule that, under certain very special conditions, can serve as a template for the formation of a replica of itself exhibits these properties. Even if this were a trivially possible molecular process, it would still lack the means to maintain the far-from-equilibrium dynamical organization that is required to persistently generate and preserve this capacity. It would be little more than a special case of crystallization.

Indeed, the example of prions both qualifies as a form of replicator dynamics and demonstrates the fundamental systemic dependency of both the replication process and information properties. Prions are the cause of some quite troublesome diseases, such as so-called mad cow disease (bovine spongiform encephalopathy, BSE) and kuru, which are acquired by eating body parts (nervous tissue) that contain prions. A prion is not a virus or a bacterium, but a protein molecule with a distinctive three-dimensional shape. It can impose this shape on other protein molecules formed from the same amino acid sequence but folded into a different three-dimensional structure. Only when the protein is in the prion shape-configuration can it exert this shape-templating effect. As a result, adding some prions to a context where there are other non-transformed (pre-prion) molecules will induce differential refolding of more and more of the pre-prion forms as time goes on and the concentration of prions increases. Pre-prion molecules are generated as a consequence of normal brain function and are routinely metabolized; but if transformed into the prion form, they are not only difficult to get rid of, but their accumulation is damaging, eventually producing severe loss of function and death.

In this sense, prions are active replicators: they both get their form copied and contribute to perpetuating this process. Indeed, distinctive lineages of prions have been identified that are specifically associated with different animal species (e.g., cows, sheep, cats and dogs, and humans), having evolved variants that are "adapted" to these species. But prions are not parasitic on animal brains in the way viruses or bacteria are, because prions don't cause other pre-prion or prion molecules to be synthesized (pre-prion molecules are synthesized by cells within the brain); rather, they just catalyze a change in the shape of pre-prion molecules that are already present. If it weren't for the production of this protein by a host brain, there would be no shape replication. In fact, prion proteins are only able to produce this effect because the prion shape is more energetically stable than the pre-prion shape. They do no chemical work. They just influence the less stable forms to transform more easily into a more "relaxed" state. If it were not for the far-from-equilibrium metabolism of the nervous system, prions would be impossible, because there would be no substrates to embody the form. Something physical must be generated and multiplied for evolution to be possible, and this process is necessarily dependent on a special kind of dynamical system: an organism. The form that gets replicated must be embodied, and because generating embodied form is a process that runs counter to the second law of thermodynamics, work must be done to accomplish this. Work can't be done by an inert form. It takes a special kind of dynamical system, a point we will develop in later chapters.
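The dependence of prion "replication" on host metabolism can be caricatured with a crude discrete-time model. This is my own sketch with invented parameter values, not anything from the text: prions synthesize nothing; they only convert pre-prion protein that the host keeps producing, so shutting off host production stops the "replicator" cold.

```python
# Toy caricature of prion-style shape replication. All numbers are arbitrary.
# Prions do no synthesis: they only convert pre-prion protein that the host
# brain continually produces and clears.

def simulate(steps=50, production=100.0, clearance=0.5, conversion=0.001):
    pre, prion = 1000.0, 1.0          # host-made pre-prion pool, one "seed" prion
    for _ in range(steps):
        converted = conversion * prion * pre          # templating scales with contact
        pre = max(pre + production - clearance * pre - converted, 0.0)
        prion += converted                            # converted forms accumulate
    return round(pre), round(prion)

print(simulate())                 # host keeps producing substrate: prions multiply
print(simulate(production=0.0))   # no host production: nothing left to convert
```

Nothing in this model, or in a real prion, does the thermodynamic work of building protein; remove the far-from-equilibrium host (production = 0) and the replication of form simply stops.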

So evolution is not merely differential replication of pattern, and information is not an intrinsic property of physical pattern (even though we may be able to measure the information-bearing potential of a given pattern). Were this so, then crystal growth would count as evolution. Even if we treat DNA base sequences as program code (data plus instructions), there still needs to be something serving the role played by computers and their users for this pattern to count as information. So when Dawkins suggests that evolution is the result of the differential replication of information, he is not incorrect, but equating "information" with pattern smuggles unspecified systemic properties of its context into the account, while pretending they can be provided by a passive artifact.
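A small sketch (again my own illustration, not an argument from the book) of why pattern alone is not information in the relevant sense: the same strings "replicate differentially" only because an external interpreting system, here a fitness function standing in for organism-plus-environment, assigns them consequences. Swap the system and a different pattern prevails, though no string has changed.

```python
import random

def evolve(population, fitness, generations=20, seed=0):
    """Differential replication: copy strings in proportion to externally assigned fitness."""
    rng = random.Random(seed)
    for _ in range(generations):
        weights = [fitness(s) for s in population]
        population = rng.choices(population, weights=weights, k=len(population))
    return population

strings = ["GATTACA", "CCCCCCC", "ATATATA", "GGGCCCA"]

# Two different interpreting "systems" for the very same patterns:
rewards_a = lambda s: 1 + s.count("A")
rewards_c = lambda s: 1 + s.count("C")

print(set(evolve(strings, rewards_a)))  # A-rich strings come to dominate
print(set(evolve(strings, rewards_c)))  # same pool, read differently: C-rich strings dominate
```

The "function" of a string here is entirely a fact about the selecting system; the strings themselves do no work and carry no intrinsic purpose.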

The dependency of gene replicator theory on organism systems and organism-environment relationships to account for the informational, adaptive, and functional features of biological evolution suggests that something critical is ignored in the analytic effort to collapse a systemic relationship down to one of its apparent component parts. In doing so, the material processes that make such a part possible are overlooked.

The embryologist Paul Weiss, writing in the late 1960s, posed this conceptual problem clearly in his description of the effect of uncritical reductionistic interpretations of biological processes:

In trying to restore the loss of information suffered by thus lifting isolated fragments out of context, we have assigned the job of reintegration to a corps of anthropomorphic gremlins. As a result, we are now plagued - or blessed, depending on one's party view - with countless demigods, like those in antiquity, doing the jobs we do not understand: the organizers, operators, inductors, repressors, promoters, regulators, etc. - all prosthetic devices to make up for the amputations which we have allowed to be perpetrated on the organic wholeness, or to put it more innocuously, the "systems" character, of nature and of our thinking about nature.8

Weiss argues that the analytic dissection of living organization into independent parts, which presumably reduces an organism to a mere machine and thereby exorcises the homunculus of an élan vital, only serves to shift its locus to more cryptic contributors of teleological functions: genetic codes, translation, regulation, signaling, and so forth. Even though analytically dissecting the organic wholeness of a living system doesn't remove anything from the material components of life, it nevertheless segregates the whole into parts. This provides a powerful tool for breaking up the work involved in the exploration of the complex system that is an organism, yet it also brackets from analysis precisely what is most relevant: the "organic wholeness." The life of an organism is not resident in its parts. It is embodied in the global organization of the living processes. Moreover, the so-called parts that analysis produces (the individual molecules, organelles, cells, tissue types, and organs) are not parts in the sense that machine parts are.
