Complexity - A Guided Tour Part 10

Wolfram's "New Kind of Science"

I first heard about Cook's result in 1998 when he spoke at a workshop at the Santa Fe Institute. My own reaction, like that of many of my colleagues, was "Very cool! Very ingenious! But not of much practical or scientific significance." Like the Game of Life, Rule 110 is an example of a very simple deterministic system that can create unpredictable complex behavior. But in practice it would be very difficult to design an initial configuration that would result in a desired nontrivial computation. Moreover, Rule 110 would be an even slower computer than the Game of Life.

Wolfram's view of the result is very different. In his 2002 book, A New Kind of Science, Wolfram interprets the universality of Rule 110 as strong evidence for "a new law of nature," namely, his Principle of Computational Equivalence. Wolfram's proposed principle consists of four parts:

1. The proper way to think about processes in nature is that they are computing.

2. Since even very simple rules (or "programs") such as Rule 110 can support universal computation, the ability to support universal computation is very common in nature.

3. Universal computation is an upper limit on the complexity of computations in nature. That is, no natural system or process can produce behavior that is "noncomputable."

4. The computations done by different processes in nature are almost always equivalent in sophistication.

Got that? I admit, it's kind of hard to figure out what this principle means, and a major purpose of Wolfram's 1,200-page book is to explicate this principle and show how it applies in very diverse areas of science. I read the whole book, and I still don't completely understand what Wolfram is getting at here. However, I'll give my best gloss on it.

What Wolfram means by "processes in nature are computing" is something like what you see in figures 10.6 and 10.7. At any given time a cellular automaton possesses information-the configuration of states-and processes information by applying its rule to its current configuration. Wolfram believes that natural systems work much the same way-that they contain information and process that information according to simple rules. In A New Kind of Science (or "NKS" as it is called by the cognoscenti), Wolfram discusses scientific topics such as quantum physics, evolutionary and developmental biology, and economics, to name a few, and he attempts to show how each of these can best be described in terms of computation via simple rules. In essence, his "new kind of science" refers to the idea that the universe, and everything in it, can be explained by such simple programs. This is computation writ large, very large.

Now, according to Wolfram, since extremely simple rules such as Rule 110 can support universal computation, then most natural systems-presumably more complicated than Rule 110-can probably support universal computation too. And, Wolfram believes, there is nothing more complex than what can be computed by a universal computer, given the right input. Thus there is an upper limit on the complexity of possible computations in nature.

As I described in chapter 4, Alan Turing showed that universal computers can in principle compute anything that is "computable." However, some computations are simpler than others. Even though they both could run on the same computer, the program "calculate 1 + 1" would result in a less complicated computational process than a program that simulates the earth's climate, correct? But Wolfram's principle in fact asserts that, in nature, the "sophistication" of all computations actually being performed is equivalent.

Does any of this make sense? Wolfram's theories are far from being generally accepted. I'll give you my own evaluation. As for points 1 and 2, I think Wolfram is on the right track in proposing that simple computer models and experiments can lead to much progress in science, as shown in examples throughout this book. As I'll describe in chapter 12, I think that one can interpret the behavior of many natural systems in terms of information processing. I also find it plausible that many such systems can support universal computation, though the significance of this for science hasn't been demonstrated yet.

Regarding point 3, the jury is also still out on whether there are processes in nature that are more powerful than universal computers (i.e., can compute "uncomputable" things). It has been proved that if you could build truly nondigital computers (i.e., that can compute with numbers that have infinitely many decimal places) then you would be able to build such a computer to solve the halting problem, Turing's uncomputable problem that we saw in chapter 4. Some people, including Wolfram, don't believe that numbers with infinitely many decimal places actually exist in nature-that is, they think nature is fundamentally digital. There's no really convincing evidence on either side.

Point 4 makes no sense to me. I find it plausible that my brain can support universal computation (at least as far as my limited memory capacity allows) and that the brain of the worm C. elegans is also (approximately) universal, but I don't buy the idea that the actual computations we engage in, respectively, are equivalent in sophistication.

Wolfram goes even further in his speculations, and predicts that there is a single, simple cellular automaton-like rule that is the "definite ultimate model for the universe," the primordial cellular automaton whose computations are the source for everything that exists. How long is this rule? "I'm guessing it's really very short," says Wolfram. But how long? "I don't know. In Mathematica, for example, perhaps three, four lines of code." Computation writ small.

NKS made a big splash when it was published in 2002-it started out as Amazon.com's number one bestseller, and remained on the bestseller list for months. Its publication was followed by a large publicity campaign launched by Wolfram Media, the company Wolfram formed to publish his book. Reactions to the book were bipolar: some readers thought it brilliant and revolutionary, others found it self-aggrandizing, arrogant, and lacking in substance and originality (for example, critics pointed out that physicists Konrad Zuse and Edward Fredkin had both theorized that the universe is a cellular automaton as early as the 1960s). However, whatever its other merits might be, we cellular-automata addicts can all agree that NKS provided a lot of welcome publicity for our obscure passion.

CHAPTER 11.

Computing with Particles.

IN 1989 I HAPPENED TO READ AN ARTICLE by the physicist Norman Packard on using genetic algorithms to automatically design cellular automaton rules. I was immediately hooked and wanted to try it myself. Other things got in the way (like finishing my Ph.D. thesis), but working on this idea was always in the back of my mind. A few years later, with thesis finished and a research job at the Santa Fe Institute, I finally had the time to delve into it. A young undergraduate student named Peter Hraber was hanging around the institute at that time, looking for something to work on, so I recruited him to help me with this project. We were soon after joined by a graduate student named Rajarshi Das, who had independently started a similar project.

Like Packard, we used a genetic algorithm to evolve cellular automaton rules to perform a specific task called "majority classification." The task is simple: the cellular automaton must compute whether its initial configuration contains a majority of on or off states. If on states are in the majority, the cellular automaton should signal this fact by turning all the cells on. Similarly, if off has an initial majority, the cellular automaton should turn all cells off. (If the number of initial on and off states is equal, there is no answer, but this possibility can be avoided by using a cellular automaton with an odd number of cells.) The majority classification task is sort of like having to predict which of two candidates will win an election in your city when all you can see are the political signs on your close neighbors' front lawns.

The majority classification task would be trivial for a von-Neumann-style computer. Simply have the central processing unit count the respective numbers of on and off states in the initial configuration, storing the count at each step in memory. When done counting, retrieve the two totals from memory, determine which one is larger, and reset the configuration to all on or all off depending on this comparison. A von-Neumann-style computer can do this easily because it has random access memory in which to store the initial configuration and intermediate sums, and a central processing unit to do the counting, the final comparison, and the resetting of the states.
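To make the contrast concrete, here is a minimal Python sketch of that counting procedure (the function name and the 0/1 list representation of off/on cells are my own illustrative choices):

```python
def majority_classification(config):
    """Return the all-on or all-off configuration, depending on which
    state holds the majority in the initial configuration. Assumes
    len(config) is odd, so a tie cannot occur."""
    on_count = sum(config)                 # count the 1 (on) cells
    off_count = len(config) - on_count     # count the 0 (off) cells
    winner = 1 if on_count > off_count else 0
    return [winner] * len(config)          # reset every cell to the winner

print(majority_classification([1, 0, 1, 1, 0]))   # -> [1, 1, 1, 1, 1]
```

The whole trick, of course, is that this version relies on exactly the centralized memory and counting machinery that a cellular automaton lacks.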

In contrast, a cellular automaton has no random access memory and no central unit to do any counting. It has only individual cells, each of which has information only about its own state and the states of its neighbors. This situation is an idealized version of many real-world systems. For example, a neuron, with only a limited number of connections to other neurons, must decide whether and at what rate to fire so that the overall firing pattern over large numbers of neurons represents a particular sensory input. Similarly, as I describe in chapter 12, an ant must decide what job it should take on at a given time-in order to benefit the colony as a whole-based only on its interactions with a relatively small number of other ants.

The bottom line is that it is in general difficult to design cellular automata to perform tasks that require collective decision making among all the cells. Peter and I were interested in how a genetic algorithm might solve this design problem.

Using Norman Packard's work as a starting point, we coded up simulations of one-dimensional cellular automata in which each cell is connected to three cells on either side-that is, the size of each cell's neighborhood (including the cell itself) is seven cells.

Think for a minute how you might design a rule for a cellular automaton like this to perform the majority classification task.

One reasonable first guess for a rule might be: "Each cell should update to the state that is currently a majority in its local neighborhood." This would be like basing your election prediction on which candidate is supported by the majority of yard signs among you and your local neighbors. However, this "local majority vote" cellular automaton doesn't work, as illustrated by the space-time diagram in figure 11.1. The initial configuration has a majority of black cells, and each cell updates its state according to the local majority vote rule over 200 time steps. You can see that the lattice quickly settles into a stable pattern of black and white regions. The boundaries between regions are at locations where majority black neighborhoods overlap with majority white neighborhoods. The final configuration, containing both white and black cells, does not give the desired answer to the majority classification task. The problem is that there is no way, using this rule, for one black region, say, to communicate its length to the other black regions, so that collectively they can determine whether or not they are in the majority.
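If you want to see this failure for yourself, here is a small Python simulation of the local majority vote rule on a circular lattice (a sketch; the lattice size and the bias toward black cells in the initial configuration are my own illustrative choices, not the exact parameters behind figure 11.1):

```python
import random

def local_majority_step(config):
    """One synchronous update: each cell takes the state held by the
    majority of its seven-cell neighborhood (itself plus three cells on
    each side), with the lattice wrapped into a circle."""
    n = len(config)
    new = []
    for i in range(n):
        neighborhood = [config[(i + offset) % n] for offset in range(-3, 4)]
        new.append(1 if sum(neighborhood) > 3 else 0)   # majority of 7 cells
    return new

n = 59   # an odd number of cells, so there is always a strict majority
config = [1 if random.random() < 0.55 else 0 for _ in range(n)]   # biased toward black (1)
for _ in range(200):
    config = local_majority_step(config)
print("".join("#" if c else "." for c in config))
```

Typically the printout is a frozen mixture of black and white blocks rather than a uniform answer: once the regions form, their boundaries never move, and no region ever learns how big the others are.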

FIGURE 11.1. Space-time behavior of the "local majority vote" cellular automaton starting from a majority black initial configuration. (Figure adapted from Mitchell, M., Crutchfield, J. P., and Das, R., Evolving cellular automata to perform computations: A review of recent work. In Proceedings of the First International Conference on Evolutionary Computation and Its Applications (EvCA '96). Moscow, Russia: Russian Academy of Sciences, 1996.)

It's not immediately obvious how to design a rule for this task, so in the spirit of Robby the Robot from chapter 9 we ran a genetic algorithm in order to see if it could produce a successful rule on its own.

In our genetic algorithm, cellular automaton rules are encoded as strings of 0s and 1s. These bits give the state-update values for each possible neighborhood configuration (figure 11.2).

The genetic algorithm starts out with a population of randomly generated cellular automaton rules. To calculate the fitness of a rule, the GA tests it on many different initial lattice configurations. The rule's fitness is the fraction of times it produces the correct final configuration: all cells on for initial majority on or all cells off for initial majority off (figure 11.3). We ran the GA for many generations, and by the end of the run the GA had designed some rules that could do this task fairly well.
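In rough outline, the fitness calculation looks like the Python sketch below. This is a simplified reconstruction rather than our original code; in particular, the lattice size, the number of test configurations, the time-step cutoff, and the per-cell coin flip used to generate initial configurations are illustrative stand-ins for the actual experimental settings.

```python
import random

def ca_step(config, rule_bits):
    """One synchronous update of a circular one-dimensional cellular
    automaton whose rule is a 128-bit lookup table: the seven neighborhood
    states, read left to right, form a binary index into the table."""
    n = len(config)
    new = []
    for i in range(n):
        index = 0
        for offset in range(-3, 4):                  # leftmost neighbor first
            index = (index << 1) | config[(i + offset) % n]
        new.append(rule_bits[index])
    return new

def fitness(rule_bits, lattice_size=59, num_tests=100, max_steps=200):
    """Fraction of random initial configurations the rule classifies
    correctly: every cell must end up in the initial majority state."""
    correct = 0
    for _ in range(num_tests):
        config = [random.randint(0, 1) for _ in range(lattice_size)]
        answer = 1 if sum(config) > lattice_size // 2 else 0
        for _ in range(max_steps):
            config = ca_step(config, rule_bits)
        if config == [answer] * lattice_size:
            correct += 1
    return correct / num_tests

# A randomly generated rule table almost always scores at or near zero;
# the genetic algorithm's job is to climb from there.
random_rule = [random.randint(0, 1) for _ in range(128)]
print(fitness(random_rule))
```

The genetic algorithm itself then repeatedly selects, mutates, and recombines the highest-fitness rule tables over many generations, just as it did for Robby's strategies in chapter 9.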

As we saw with Robby the Robot, a solution evolved by the genetic algorithm is typically impossible to understand at the level of its "genome." The cellular automata that we evolved for the majority classification task were no different. The genome of one of the highest-performing cellular automata designed by the GA is the following (split into two lines of text):

0000010100000110000101011000011100000111000001000001010101010111
0110010001110111000001010000000101111101111111111011011101111111

Recall that the first bit is the update state for the center cell in the all 0s neighborhood, the second bit is the update state for the center cell in the neighborhood 0000001, and so forth. Since there are 128 possible neighborhoods, this genome consists of 128 bits. Looking at this bit string, there is nothing obvious that gives us a hint as to how this rule will behave, or why it obtained high fitness on the majority classification task.
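To make the encoding concrete, here is a small Python sketch that treats the genome above as a lookup table; the indexing convention, in which the seven neighborhood bits, leftmost neighbor first, are read as a binary number, follows the description just given.

```python
# The 128-bit genome quoted above (its two printed lines joined together).
genome = ("0000010100000110000101011000011100000111000001000001010101010111"
          "0110010001110111000001010000000101111101111111111011011101111111")

def update_state(neighborhood):
    """Look up the center cell's next state: the seven neighborhood bits,
    read left to right, are treated as a binary index into the genome."""
    return genome[int(neighborhood, 2)]

print(update_state("0000000"))   # all-off neighborhood -> the first bit, '0'
print(update_state("0000001"))   # the neighborhood 0000001 -> the second bit, '0'
print(update_state("1111111"))   # all-on neighborhood -> the last bit, '1'
```

Spot checks like these are reassuring (an all-off neighborhood stays off, an all-on neighborhood stays on), but beyond them the table is just as opaque as the paragraph above suggests.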

FIGURE 11.2. Illustration of how a cellular automaton rule is encoded as an individual in the genetic algorithm's population. The 128 possible neighborhood configurations are listed in a fixed order. The update state for the center cell of each neighborhood configuration is encoded as a 0 (off) or a 1 (on). An individual in the genetic algorithm's population is a string of 128 bits, encoding the update states in their fixed ordering.

FIGURE 11.3. To calculate its fitness, each rule is tested on many random initial configurations. The fitness of a rule is the fraction of initial configurations on which the rule produces the correct answer within some number of time steps.

Figure 11.4 gives two space-time diagrams that display the behavior of this rule on two different initial configurations: with (a) a majority of black cells and (b) a majority of white cells. You can see that in both cases the final behavior is correct-all black in (a) and all white in (b). In the time between the initial and final configurations, the cells somehow collectively process information in order to arrive at a correct final configuration. Some interesting patterns form during these intermediate steps, but what do they mean?

It took a lot of staring at pictures like figure 11.4 for us to figure out what was going on. Luckily for us, Jim Crutchfield, a physicist from Berkeley, happened to visit the Santa Fe Institute and became interested in our effort. It turned out that Jim and his colleagues had earlier developed exactly the right conceptual tools to help us understand how these patterns implement the computation.

FIGURE 11.4. Space-time behavior of one of the best-performing evolved cellular automaton rules for the majority classification task. In (a), the initial configuration contains a majority of black cells and the cellular automaton iterates to a fixed configuration of all black. In (b), the initial configuration contains a majority of white cells and the cellular automaton iterates to a fixed configuration of all white. (Figure adapted from Mitchell, M., Crutchfield, J. P., and Das, R., Evolving cellular automata to perform computations: A review of recent work. In Proceedings of the First International Conference on Evolutionary Computation and Its Applications (EvCA '96). Moscow, Russia: Russian Academy of Sciences, 1996.)

Three types of patterns can be seen in figure 11.4: all black, all white, and a checkerboard-like region of alternating black and white states (this appears as a grayish region in the low-resolution figure 11.4). It is this checkerboard pattern that transfers information about the density of black or white cells in local regions.

Like the strategy the genetic algorithm evolved for Robby the Robot, the cellular automaton's strategy is quite clever. Take a look at figure 11.5, which is a version of figure 11.4a that I have marked up. Regions in which the initial configuration is either mostly white or mostly black converge in a few time steps to regions that are all white or all black. Notice that whenever a black region on the left meets a white region on the right, there is always a vertical boundary between them. However, whenever a white region on the left meets a black region on the right, a checkerboard triangle forms, composed of alternating black and white cells. You can see the effect of the circular lattice on the triangle as it wraps around the edges.

Sides A and B of the growing checkerboard region travel at the same velocity (i.e., distance covered over time). Side A travels southwest until it collides with the vertical boundary. Side B just misses colliding with the vertical boundary on the other side. This means that side A had a shorter distance to travel. That is, the white region bordered by side A is smaller than the black region bordered by side B. At the collision point, side A disappears, and the black region is allowed to grow. At the triangle's bottom point, sides B and C disappear, and the entire lattice becomes black, the correct final configuration.

FIGURE 11.5. The space-time diagram of figure 11.4 (a) with important features marked.

If we try to understand these patterns as carrying out a computation, then the vertical boundary and the checkerboard region can be thought of as signals. These signals are created and propagated by local interactions among cells. The signals are what allow the cellular automaton as a whole to determine the relative sizes of the larger-scale adjacent black and white regions, cut off the smaller ones, and allow the larger ones to grow in size.

These signals are the major difference between the local majority voting cellular automaton of figure 11.1 and the cellular automaton of figure 11.5. As I mentioned above, in the former there is no way for separated black or white regions to communicate with one another to figure out which has the majority of cells. In the latter, the signals created by the checkerboard region and the vertical boundary carry out this communication, and the interaction among signals allows the communicated information to be processed so that the answer can be determined.

Jim Crutchfield had earlier invented a technique for detecting what he called "information processing structures" in the behavior of dynamical systems and he suggested that we apply this technique to the cellular automata evolved by the GA. Crutchfield's idea was that the boundaries between simple regions (e.g., sides A, B, C, and the vertical boundary in figure 11.5) are carriers of information and information is processed when these boundaries collide. Figure 11.6 shows the space-time diagram of figure 11.5, but with the black, white, and checkerboard regions filtered out (i.e., colored white), leaving only the boundaries, so we can see them more clearly. The picture looks something like a trace of elementary particles in an old physics device called a bubble chamber. Adopting that metaphor, Jim called these boundaries "particles."
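Crutchfield's filter rests on a formal theory of "regular domains" that is beyond the scope of this book, but a crude stand-in conveys the flavor: flag every cell whose immediate surroundings are not consistent with all-white, all-black, or checkerboard. The Python sketch below is my own simplification, not the published method.

```python
def filter_domains(config):
    """Mark boundary cells ('#') and domain cells ('.') on a circular
    lattice. A cell counts as a domain cell if it and its two nearest
    neighbors look locally like all-white, all-black, or a perfect
    checkerboard; everything else is treated as a boundary ("particle").
    A rough stand-in for Crutchfield's regular-domain filter."""
    n = len(config)
    marks = []
    for i in range(n):
        left, me, right = config[(i - 1) % n], config[i], config[(i + 1) % n]
        uniform = (left == me == right)              # inside a white or black region
        checker = (left != me) and (me != right)     # inside a ...0101... region
        marks.append("." if (uniform or checker) else "#")
    return "".join(marks)

# Boundaries show up where white, black, and checkerboard patches meet:
print(filter_domains([0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0]))
```

Applying a filter like this to every row of a space-time diagram blanks out the three simple patterns and leaves only the moving boundaries, which is essentially what figure 11.6 shows.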

Traditionally in physics, particles are denoted with Greek letters, and we have done the same here. This cellular automaton produces six different types of particles: γ (gamma), μ (mu), η (eta), δ (delta), β (beta), and α (alpha, a short-lived particle that decays into γ and μ). Each corresponds to a different kind of boundary-for example, η corresponds to boundaries between black and checkerboard regions. There are five types of particle collisions, three of which (β + γ, μ + β, and η + δ) create a new particle, and two of which (η + μ and γ + δ) are "annihilations," in which both particles disappear.

Casting the behavior of the cellular automaton in terms of particles helps us understand how it is encoding information and performing its computation. For example, the γ and μ particles encode different types of information about the initial configuration. The α particle decays into γ and μ. The γ particle carries the information that it borders a white region; similarly, the μ particle carries the information that it borders a black region. When γ collides with β before μ does, the information contained in γ and β is combined to deduce that the large initial white region was smaller than the large initial black region it bordered. This new information is encoded in a newly created particle η, whose job is to catch up with and annihilate the μ (and itself).

FIGURE 11.6. The space-time diagram of figure 11.4(a), with regions of "simple patterns" filtered out, leaving the boundaries between these regions ("particles"). (Figure adapted from Mitchell, M., Crutchfield, J. P., and Das, R., Evolving cellular automata to perform computations: A review of recent work. In Proceedings of the First International Conference on Evolutionary Computation and Its Applications (EvCA '96). Moscow, Russia: Russian Academy of Sciences, 1996.)

We were able to apply this kind of analysis to several different cellular automata evolved to perform the majority classification task as well as other tasks. This analysis allowed us to predict things such as the fitness of a given cellular automaton (without having to run the cellular automaton itself, but using only its "particle" description). It also allowed us to understand why one cellular automaton had higher fitness than another and how to describe the mistakes that were made by different cellular automata in performing computations.

Particles give us something we could not get by looking at the cellular automaton rule or the cellular automaton's space-time behavior alone: they allow us to explain, in information-processing terms, how a cellular automaton performs a computation. Note that particles are a description imposed by us (the scientists) rather than anything explicit taking place in a cellular automaton or used by the genetic algorithm to evolve cellular automata. But somehow the genetic algorithm managed to evolve a rule whose behavior can be explained in terms of information-processing particles. Indeed, the language of particles and their interactions forms an explanatory vocabulary for decentralized computation in the context of one-dimensional cellular automata. Something like this language may be what Stephen Wolfram was looking for when he posed the last of his "Twenty Problems in the Theory of Cellular Automata": "What higher-level descriptions of information processing in cellular automata can be given?"

All this is relatively recent work and needs to be developed much further. I believe that this approach to understanding computation, albeit unconventional, will turn out to be useful in other contexts in which computation is distributed among simple components with no central control. For example, it is still a mystery how high-level information about sensory data is encoded and processed in the brain. Perhaps the explanation will turn out to be something close to particle-like or, given the brain's three dimensions, wave-like computation, where neurons are the scaffolding for information-carrying waves of activity and their information-processing interactions.

Brain computation is of course a long jump from one-dimensional cellular automata. However, there is one natural system that might be explained by something very much like our particles: the stomata networks of plants. Every leafy plant's leaves are covered with stomata-small apertures that open or close in response to light and humidity. When open, the stomata let in carbon dioxide, which is used in photosynthesis. However, open stomata cause water to evaporate from the plant's fluid stores. Botanist Keith Mott, physicist David Peak, and their colleagues at Utah State University have long been observing the patterns of opening and closing of stomata on leaves, and believe that the stomata constitute a network that is something like a two-dimensional cellular automaton. They also observe that the temporal patterns of opening and closing on the leaves look very much like two-dimensional versions of interacting particles. They hypothesize that plants perform a distributed, decentralized computation via their stomata-namely, how to optimally open and close stomata in order to best balance carbon dioxide gain and water loss-and that the computation may be explainable in terms of such particles.

CHAPTER 12.

Information Processing in Living Systems.

EVER SINCE SZILARD'S INSIGHT THAT information might be the savior of the second law of thermodynamics from the attack of Maxwell's demon, information and its cousin computation have increasingly infiltrated science. In many people's minds information has taken on an ontological status equal to that of mass and energy-namely, as a third primitive component of reality. In biology in particular, the description of living systems as information processing networks has become commonplace. In fact, the term information processing is so widely used that one would think it has a well-understood, agreed-upon meaning, perhaps built on Shannon's formal definition of information. However, like several other central terms of complex systems science, the concept of information processing tends to be ill-defined; it's often hard to glean what is meant by information processing or computation when they are taken out of the precise formal context defined by Turing machines and von Neumann-style computers. The work described in the previous chapter was an attempt to address this issue in the context of cellular automata.

The purpose of this chapter is to explore the notion of information processing or computation in living systems. I describe three different natural systems in which information processing seems to play a leading role-the immune system, ant colonies, and cellular metabolism-and attempt to illuminate the role of information and computation in each. At the end I attempt to articulate some common qualitative principles of information processing in these and other decentralized systems.

What Is Information Processing?

Let me quote myself from chapter 10: "In what sense do natural systems 'compute'? At a very general level, one might say that computation is what a complex system does with information in order to succeed or adapt in its environment. But can we make this statement more precise? Where is the information, and what exactly does the complex system do with it?" These questions may seem straightforward, but exploring them will quickly force us to dip our feet into some of the murkiest waters in complex systems science.

When we say a system is processing information or computing (terms which, for now, I will use synonymously), we need to answer the following:

What plays the role of "information" in this system?

How is it communicated and processed?

How does this information acquire meaning? And to whom? (Some will disagree with me that computation requires meaning of some sort, but I will go out on a limb and claim it does.)

INFORMATION PROCESSING IN TRADITIONAL COMPUTERS.

As we saw in chapter 4, the notion of computation was formalized in the 1930s by Alan Turing as the steps carried out by a Turing machine on a particular input. Ever since then, Turing's formulation has been the basis for designing traditional von-Neumann-style programmable computers. For these computers, questions about information have straightforward answers. We can say that the role of information is played by the tape symbols and possible states of the tape head. Information is communicated and processed by the tape head's actions of reading from and writing to the tape, and changing state. This is all done by following the rules, which const.i.tute the program.

We can view all programs written for traditional computers at (at least) two levels of description: a machine-code level and a programming level. The machine-code level is the set of specific, step-by-step low-level instructions for the machine (e.g., "move the contents of memory location n to CPU register j," "perform an or logic operation on the contents of CPU registers j and i and store the result in memory location m," and so on). The programming level is the set of instructions in a high-level language, such as BASIC or Java, that is more understandable to humans (e.g., "multiply the value of the variable half_of_total by 2 and store the result in the variable total," etc.). Typically a single high-level instruction corresponds to several low-level instructions, which may be different on different computers. Thus a given high-level program can be implemented in different ways in machine code; it is a more abstract description of information processing.
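For example, a single programming-level statement and a hypothetical machine-code-level expansion of it might look like this (the register names and instruction mnemonics are invented for illustration; real machine code varies from computer to computer):

```python
half_of_total = 21   # an arbitrary value for the variable named in the text

# Programming level: one human-readable statement.
total = half_of_total * 2

# Machine-code level: a hypothetical step-by-step expansion of that single
# statement, written here as comments:
#   LOAD  R1, half_of_total   ; copy the variable's memory contents into register R1
#   MUL   R1, 2               ; multiply the contents of R1 by 2
#   STORE total, R1           ; copy R1 back to the memory location for total

print(total)   # -> 42
```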

The meaning of the input and output information in a Turing machine comes from its interpretation by humans (programmers and users). The meaning of the information created in intermediate steps in the computation also comes from its interpretation (or design) by humans, who understand the steps in terms of commands in a high-level programming language. This higher level of description allows us to understand computations in a human-friendly way that is abstracted from particular details of machine code and hardware.

INFORMATION PROCESSING IN CELLULAR AUTOMATA.

For non-von-Neumann-style computers such as cellular automata, the answers are not as straightforward. Consider, for example, the cellular automaton described in the previous chapter that was evolved by a genetic algorithm to perform majority classification. Drawing an analogy with traditional computers, we could say that information in this cellular automaton is located in the configurations of states in the lattice at each time step. The input is the initial configuration, the output is the final configuration, and at each intermediate time step information is communicated and processed within each neighborhood of cells via the actions of the cellular automaton rule. Meaning comes from the human knowledge of the task being performed and interpretation of the mapping from the input to the output (e.g., "the final lattice is all white; that means that the initial configuration had a majority of white cells").

Describing information processing at this level, analogous to "machine code," does not give us a human-friendly understanding of how the computation is accomplished. As in the case of von-Neumann-style computation, we need a higher-level language to make sense of the intermediate steps in the computation and to abstract from particular details of the underlying cellular automaton.

In the previous chapter I proposed that particles and their interactions are one approach toward such a high-level language for describing how information processing is done in cellular automata. Information is communicated via the movement of particles, and information is processed via collisions between particles. In this way, the intermediate steps of information processing acquire "meaning" via the human interpretation of the actions of the particles.

One thing that makes von-Neumann-style computation easier to describe is that there is a clear, unambiguous way to translate from the programming level to the machine code level and vice versa, precisely because these computers were designed to allow such easy translation. Computer science has given us automatic compilers and decompilers that do the translation, allowing us to understand how a particular program is processing information.

For cellular automata, no such compilers or decompilers exist, at least not yet, and there is still no practical and general way to design high-level "programs." Relatively new ideas such as particles as high-level information-processing structures in cellular automata are still far from constituting a theoretical framework for computation in such systems.

The difficulties for understanding information processing in cellular automata arise in spades when we try to understand information processing in actual living systems. My original question, "In what sense do natural systems 'compute'?" is still largely unanswered, and remains a subject of confusion and thorny debate among scientists, engineers, and philosophers. However, it is a tremendously important question for complex systems science, because a high-level description of information processing in living systems would allow us not only to understand in new and more comprehensive ways the mechanisms by which particular systems operate, but also to abstract general principles that transcend the overwhelming details of individual systems. In essence, such a description would provide a "high-level language" for biology.

The rest of this chapter tries to make sense of these ideas by looking at real examples.

The Immune System.

I gave a quick description of the immune system way back in chapter 1. Now let's look a bit more in depth at how it processes information in order to protect the body from pathogens-viruses, bacteria, parasites, and other unwelcome intruders.

To recap my quick description, the immune system consists of trillions of different cells and molecules circulating in the body, communicating with one another through various kinds of signals.

Of the many, many different types of cells in the immune system, the one I focus on here is the lymphocyte (a type of white blood cell; see figure 12.1). Lymphocytes are created in the bone marrow. Two of the most important types of lymphocytes are B cells, which release antibodies to fight viruses and bacteria, and T cells, which both kill invaders and regulate the response of other cells.

FIGURE 12.1. A human lymphocyte, whose surface is covered with receptors that can bind to certain shapes of molecules that the cell might encounter. (Photograph from National Cancer Institute [http://visualsonline.cancer.gov/details.cfm?imageid=1944].)

FIGURE 12.2. An illustration of a lymphocyte (here a B cell) receptor binding to an antigen.

Each cell in the body has molecules on its surface called receptors. As the name implies, these molecules are a means by which cells receive information. The information is in the form of other molecules that chemically bind to the receptor molecules. Whether or not a receptor binds to a particular molecule depends on whether their physical shapes match sufficiently.

A lymphocyte's surface is covered with identical receptors that bind to a particular range of molecular shapes. If a lymphocyte happens by chance to encounter a special pathogen molecule (called an "antigen") whose shape fits the lymphocyte's receptors, then the molecule binds to one of the lymphocyte's receptors, and the lymphocyte is said to have "recognized" that antigen-the first step in killing off pathogens. The binding can be strong or weak, depending on how closely the molecule's shape fits the receptor. This process is illustrated in figure 12.2.

The main problem facing the immune system is that it doesn't know ahead of time what pathogens will invade the body, so it can't "predesign" a set of lymphocytes with receptors that will bind strongly to just the right shapes. What's more, there are an astronomical number of possible pathogens, so the immune system will never be able to generate enough lymphocytes at any one time to take care of every eventuality. Even with all the many millions of different lymphocytes the body generates per day, the world of pathogens that the system will be likely to encounter is much bigger.

Here's how the immune system solves this problem. In order to "cover" the huge space of possible pathogen shapes in a reasonable way, the population of lymphocytes in the body at any given time is enormously diverse. The immune system employs randomness to allow each individual lymphocyte to recognize a range of shapes that differs from the range recognized by other lymphocytes in the population.
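A toy model gives a feel for how a diverse, randomly generated repertoire can "cover" a huge space of shapes. In the Python sketch below, shapes are bit strings and binding strength is simply the number of matching bits; the string length, repertoire size, and recognition threshold are my own illustrative choices, not biological values.

```python
import random

SHAPE_LEN = 32    # a toy "shape" is a 32-bit string
THRESHOLD = 22    # a receptor "recognizes" an antigen if at least 22 bits match

def random_shape():
    return [random.randint(0, 1) for _ in range(SHAPE_LEN)]

def affinity(receptor, antigen):
    """Number of matching bits: a crude stand-in for how well two shapes fit."""
    return sum(r == a for r, a in zip(receptor, antigen))

# A diverse repertoire of randomly generated receptors...
repertoire = [random_shape() for _ in range(5000)]

# ...tested against antigens the repertoire has never "seen" before.
trials = 200
recognized = sum(
    any(affinity(receptor, antigen) >= THRESHOLD for receptor in repertoire)
    for antigen in (random_shape() for _ in range(trials))
)
print(f"{recognized} of {trials} novel antigens recognized by at least one receptor")
```

No single receptor covers much of the shape space, but because each random receptor covers a slightly different region, the repertoire as a whole recognizes nearly every antigen thrown at it.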
