If this program succeeds, we would have an uncontroversial sense of information, via Shannon, that applies to all sorts of physical correlations. This picture can then be developed by identifying a subset of cases in which these signals have been co-opted or produced to drive biological processes. In addition, perhaps we can appeal to rich semantic properties in cases where we have the right kind of history of natural selection to explain the distinctive role of genes, and perhaps other factors, in development.
Genes and a handful of non-genetic factors would have these properties; most environmental features that have a causal role in biological development would not. There remain many problems of detail, but the appeal of the overall picture provides, at least for some, good reason to persevere with some account along these lines. These structural features of DNA and its relation to amino acids are not central to some of the ideas about information in biology, even when the focus is on development and inheritance.
As noted above, the enthusiasm for semantic characterization of biological structures extends back before the genetic code was discovered (see Kay for a detailed historical treatment). Both Peter Godfrey-Smith and Paul Griffiths have argued that there is one highly restricted use of a fairly rich semantic notion within genetics that is justified (Godfrey-Smith; Griffiths). Genes specify amino acid sequence via a templating process that involves a regular mapping rule between two quite different kinds of molecules: nucleic acid bases and amino acids.
This mapping rule is combinatorial, and apparently arbitrary in a sense that is hard to make precise (though see Stegmann for discussion of different versions of this idea). Figure 1 below summarizes the standard genetic code. This very narrow understanding of the informational properties of genes is basically in accordance with the influential early proposal of Francis Crick. The argument is that these low-level mechanistic features make gene expression into a causal process that has significant analogies to paradigmatic symbolic phenomena.
Some have argued that this analogy becomes questionable once we move from the genetics of simple prokaryotic organisms (bacteria and archaea) to those of eukaryotic cells (Sarkar). Mainstream biology tends to regard the complications that arise in the case of eukaryotes as mere details that do not compromise the basic picture we have of how gene expression works (for an extensive discussion of these complexities, by those who think they really matter, see Griffiths and Stotz). The protein is made by stringing a number of amino acid molecules together.
Moreover, much of the DNA in a eukaryotic organism is not transcribed and translated at all. The extent of wholly nonfunctional DNA remains unclear. So the argument in Godfrey-Smith and Griffiths is that there is one kind of informational or semantic property that genes and only genes have: coding for the amino acid sequences of protein molecules. It does not vindicate the idea that genes code for whole-organism phenotypes, let alone provide a basis for the wholesale use of informational or semantic language in biology.
Genes can have a reliable causal role in the production of a whole-organism phenotype, of course. But if this causal relation is to be described in informational terms, then it is a matter of ordinary Shannon information, which applies to environmental factors as well.
That said, it is possible to argue that the specificity of gene action, and the existence of an array of actual and possible alternatives at a given site on a chromosome, mean that genes exert fine-grained causal control over phenotypes, and that few other developmental resources exert this form of causal control (Waters; Maclaurin; Stegmann). We return to this issue in section 6. The notion of arbitrariness figures in other discussions of genetic information as well (Maynard Smith; Sarkar; Stegmann). But the very idea of arbitrariness, and the hypothesis of a frozen accident, have become problematic.
For one thing, as noted in our discussion of Bergstrom and Rosvall above, the code is not arbitrary in the sense that many others would work as well. To the contrary, the existing code is near-optimal for minimising error costs. So the very idea of arbitrariness is elusive. And empirically, the standard genetic code is turning out to have more systematic and non-accidental structure than people had once supposed (Knight, Freeland, and Landweber). The notion of arbitrariness has also been used in discussions of the links between genes and phenotypes in a more general sense.
Kirschner and Gerhart discuss a kind of arbitrariness that derives from the details of protein molecules and their relation to gene regulation. Proteins that regulate gene action tend to have distinct binding sites, which evolution can change independently. To bind, a protein must be able to attach to a site, but that requires congruence only with a local feature of the protein, not a sharp constraint on its overall shape (Planer). This gives rise to a huge range of possible processes of gene regulation.
So there is a perennial temptation to appeal to the idea of arbitrariness when discussing the alleged informational nature of some biological causation. The Skyrms framework fits these designed cause and response systems within biological agents for three reasons. First, as noted, this framework shows that signalling does not require intelligence or an intelligent grasp of signal meaning. Second, the simplest cases for models of signalling are cases in which there is common interest.
The sender and receiver are advantaged or disadvantaged by the same outcomes. While complete common interest is atypical of organism-to-organism communication, the cells and other structures within an organism share a common fate (with complex exceptions). So in this respect the base models might fit organ-to-organ communication better than they fit between-organism phenomena.
Third, in many of these biological systems, the abstract structure specified as a signalling system—environmental source, sender, message, receiver, response—maps very naturally onto concrete biological mechanisms. For example, Ron Planer has recently argued that we should see gene expression as the operation of a signalling system (Planer). His view is quite nuanced, for the identity of the sender, receiver, and message varies somewhat from case to case. For example, when a protein is a transcription factor, the gene counts as a sender.
He treats other gene-protein relations differently. The details of his view need not concern us here. The point is that there is machinery in the cell—genes, proteins, mRNA transcripts, ribosomes and their associated tRNAs—that can plausibly be mapped onto sender-receiver systems. There is nothing forced about mapping the information-processing sender-receiver structure onto the molecular machinery of the cell.
However, while this framework fits within-cell and between-cell processes very naturally, it is much less clear how well other suggestions mesh with it. For example, in the Bergstrom-Rosvall picture of intergenerational transmission, who or what are the senders and receivers?
Perhaps in the case of multicelled organisms, the receiver exists independently of and prior to the message. For an egg is a complex and highly structured system before gene expression begins in the fertilised nucleus, and that structure plays an important role in guiding that gene expression (Sterelny; Gilbert). But most organisms are single-celled prokaryotes, and when they fission, it is not obvious that there is a daughter who exists prior to and independently of the intergenerational genetic message she is to receive.
For Shea, the sender of genetic messages is natural selection operating on and filtering the gene pool of a population; messages are read by the developmental system of the organism as a whole (Shea). But the less clearly a sender-receiver or producer-consumer framework maps onto independently recognised biological mechanisms, the more plausible a fictionalist or analogical analysis of that case becomes.
So we see an important difference between the reader's being the developmental system as a whole and the reader's being a receptor on a cell membrane. But this realization of the causal schematism applies only at the cell level, the level at which the transcription and translation apparatus shows up as a definite part of the machinery.
One of the most extraordinary features of ontogeny is that it proceeds reliably and predictably without any central control of the development of the organism as a whole. There is nothing, for example, that checks whether the left-side limbs are the same size as the right side limbs, intervening to ensure symmetry. Some biologists and philosophers have argued that the introduction of informational and semantic concepts has had a bad effect on biology, that it fosters various kinds of explanatory illusions and distortions, perhaps along with ontological confusion.
Here we will survey some of the more emphatic claims of this kind, but some degree of unease can be detected in many other discussions (see, for example, Griesemer). The movement known as Developmental Systems Theory (DST) has often opposed the mainstream uses of informational concepts in biology, largely because of the idea that these concepts distort our understanding of the causal processes in which genes are involved.
These theorists have two connected objections to the biological use of informational notions. One is the idea that informational models are preformationist. Preformationism, in its original form, in effect reduces development to growth: within the fertilized egg there exists a miniature form of the adult to come. Preformationism does not explain how an organized, differentiated adult develops from a much less organized and more homogeneous egg; it denies the phenomenon. See Francis for a particularly vigorous version of the idea that the appeal to information leads to pseudo-explanation in biology.
DST theorists think that informational models of genes and gene action make it very tempting to neglect parity, and to attribute a kind of causal primacy to these factors, even though they are just one of a set of essential contributors to the process in question. Once one factor in a complex system is seen in informational terms, the other factors tend to be treated as mere background, as supports rather than bona fide causal actors. It becomes natural to think that the genes direct, control, or organise development; other factors provide essential resources.
Sometimes a gene will have a reliable effect against a wide range of environmental backgrounds; sometimes an environmental factor will have a reliable effect against a wide range of genetic backgrounds. Sometimes both genetic and environmental causes are highly context-sensitive in their operation. Paul Griffiths has emphasised this issue, arguing that the informational mode of describing genes can foster the appearance of context-independence:
Genes are instructions—they provide information—whilst other causal factors are merely material…. A gay gene is an instruction to be gay even when [because of other factors] the person is straight. (Griffiths) The inferential habits and associations that tend to go along with the use of informational or semantic concepts are claimed to lead us to think of genes as having an additional and subtle form of extra causal specificity.
These habits can have an effect even when people are willing to overtly accept the context-dependence of most causes in complex biological systems. To say this is almost inevitably to treat environmental factors as secondary players. The parity thesis has been the focus of considerable discussion and response. In a helpful paper, Ulrich Stegmann shows that the parity thesis is really a cluster of theses rather than a single thesis (Stegmann). Some ways of interpreting parity make the idea quite uncontroversial, as no more than an insistence on the complex and interactive character of development, or as pointing to the fact that just as genes come in slightly different versions, with slightly different effects (holding other factors constant), the same is true of nongenetic factors.
Epigenetic markers on genes, due to nutritional environments, litter position and birth order, may also come in slightly different variants, with slightly different effects. Other versions of the claim are much more controversial. One response to the parity thesis has been to accept the view that genes are just one of a set of individually necessary and collectively sufficient developmental factors, but to argue nonetheless that genes play both a distinctive and an especially important role in development (Austin; Lean; Planer). As noted above, perhaps the most promising suggestion along these lines is that genes exert a form of causal control over development that is universal, pervasive and fine-grained.
Many features of the phenotypes of every organism exist in an array of somewhat different versions, as a result of allelic variation in causally relevant genes. No other developmental factor exerts control that is similarly universal, pervasive and fine-grained (Woodward; Stegmann). Polymerase is critically important causally, but varying its concentration will modify the rate of synthesis, not the sequences produced. That is not true of modifications of the DNA sequence itself, so the DNA sequence is more causally specific than polymerase.
Shea takes a different approach, arguing that different causal factors have different evolutionary histories. Some causal factors are simply persisting features of the environment (gravity being one example). Others are experienced by the developing organism as a result of histories of selection.
Burrows, for example, ensure that eggs and nestlings develop in fairly constant temperature and humidity. But burrows are not naturally selected inheritance mechanisms. They have not come into existence to ensure that a seabird chick resembles its parents. In contrast, some other developmental features are present and act in development because of histories of selection in which the selective advantage is that these mechanisms help ensure parent-offspring similarity. Shea argues that genes, probably epigenetic markings on genes, and perhaps a few other developmental resources are shaped by this form of natural selection.
So genes, and perhaps a few other developmental factors, play a distinctive developmental role, even though many other factors are causally necessary (Shea). In sum, then, there are good reasons to be cautious about the use of informational terminology in thinking about development.
But it is also possible to over-estimate the strength of the connection between informational conceptions of development and the idea that genes play a uniquely important role in development. There are ways of defending the idea that genes play a special role while acknowledging the interactive character of development. Moreover, an ambitious use of informational concepts is not confined to those within mainstream biological thinking. Such theorists suggest that one of the useful features of informational descriptions is that they allow us to generalize across different heredity systems, comparing their properties in a common currency.
In addition, one of the present authors has used informational concepts to distinguish the evolutionary role of genes from that of other inherited factors, whilst demonstrating the evolutionary importance of non-genetic inheritance (Sterelny). So, in various ways, an informational point of view may facilitate discussion of unorthodox theoretical options, including non-genetic mechanisms of inheritance.
Often, the idea is just a vivid but perhaps misleading way of drawing attention to the orderly, well-controlled and highly structured character of development. In its overall results, development is astonishingly stable and predictable, despite the extraordinary complexity of intracellular and intercellular interactions, and despite the fact that the physical context in which development takes place can never be precisely controlled.
There are attempts to draw closer and more instructive parallels between computational systems and biological development. In particular, Roger Sansom has made a sustained and detailed attempt to develop close and instructive parallels between biological development and connectionist computational models (Sansom). This view has the merit of recognising that there is no central control of development; organisms develop as a result of local interactions within and between cells.
However, the most promising ideas about program-development parallels seem to us to be ones that point to an apparently close analogy between processes within cells, and the low-level operation of modern computers. One crucial kind of causal process within cells is cascades of up and down-regulation in genetic networks. One gene will make a product that binds to and hence down-regulates another gene, which is then prevented from making a product that up-regulates another… and so on.
What we have here is a cascade of events that can often be described in terms of Boolean relationships between variables. One event might only follow from the conjunction of another two, or from a disjunction of them. Down-regulation is a kind of negation, and there can be double and triple negations in a network. Gene regulation networks often have a rich enough structure of this kind for it to be useful to think of them as engaged in a kind of computation.
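The Boolean reading of such regulatory cascades can be made concrete in a toy model. The sketch below is purely illustrative (the gene names and rules are invented, not drawn from any real network): each gene's next activation state is a Boolean function of its regulators, with down-regulation appearing as negation and joint regulation as conjunction.

```python
# A toy Boolean gene-regulation network. Gene names and regulatory
# rules are invented for illustration only.

def step(state):
    """Compute each gene's next activation state from the current one."""
    return {
        # gene_a is treated as an external input: it holds its value.
        "gene_a": state["gene_a"],
        # gene_b is up-regulated by gene_a.
        "gene_b": state["gene_a"],
        # gene_c is down-regulated by gene_b: negation.
        "gene_c": not state["gene_b"],
        # gene_d requires both gene_a and gene_c: conjunction.
        "gene_d": state["gene_a"] and state["gene_c"],
    }

state = {"gene_a": True, "gene_b": False, "gene_c": True, "gene_d": False}
state = step(state)
# After one update gene_b switches on; on the next update that will
# switch gene_c off, propagating the cascade down the network.
print(state)
```

Iterating `step` plays out the cascade the text describes: one gene's product switching another off, which in turn releases a third, and so on.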
While talking of signalling networks rather than programs, Brett Calcott has shown that positional information in the developing fruitfly embryo depends on this kind of Boolean structure, with limb bud development depending on the cells that differentiate into limb buds integrating one positive with two negative signals, so the buds develop in a regular pattern on the anterior midline of the embryo. Calcott shows that thinking of development in terms of these signalling networks with their Boolean structures has real explanatory value, for it enables us to explain how positional information, for example, can easily be reused in evolution.
Wing spots on fruitflies can evolve very easily, for the networks that tell cells where they are on the wing already exist, so the evolution of the wing spot needs only a simple mutational change that links that positional information to pigment production (Calcott). Ron Planer agrees that gene regulation has this Boolean structure, and that we can, in effect, represent each gene as instantiating a conditional instruction.
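The signal-integration and reuse points above can be sketched as two Boolean rules. This is a deliberately schematic illustration (the function and signal names are invented, not Calcott's or Planer's notation): a cell forms a limb bud only where one positive signal is present and two inhibitory signals are absent, and evolutionary reuse amounts to wiring the same positional signal to a new output.

```python
# Illustrative only: Boolean rules for signal integration and reuse.

def forms_limb_bud(positive, negative_1, negative_2):
    """A cell differentiates into a limb bud only where it receives the
    positive signal and neither of the two inhibitory signals."""
    return positive and not (negative_1 or negative_2)

def produces_pigment(positional_signal, spot_allele):
    """Evolutionary reuse: a single mutational change (a hypothetical
    'spot allele') links existing positional information to a new
    output, pigment production."""
    return positional_signal and spot_allele

# A midline cell receiving the positive signal and no inhibition:
print(forms_limb_bud(True, False, False))  # True
# The same positional signal, rewired to pigment by the new allele:
print(produces_pigment(True, True))        # True
```

Each rule is also a conditional instruction in Planer's sense: "if these inputs, then this product". Crucially, nothing in the pair of rules fixes an order of execution, which is part of his reason for resisting the program analogy.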
As with Calcott, Planer goes on to point out that these conditional instructions can be, and often are, linked together to build complex networks of control. Even so, Planer argues that while these are signalling networks, they should not be thought of as computer programs. For example, the combinations of instructions have no intrinsic order; we can represent each of the genes as a specific conditional instruction, but there is nothing in the set of instructions itself that tells us where to begin and end (Planer).

A younger generation fluent in both classical pragmatism and the latest neuroscience, such as Anthony Chemero, W. Teed Rockwell, and Tibor Solymosi, was in the best position to take stock of matters. From its grounding in the current behavioral and brain sciences, neuropragmatism confirms many core views of traditional pragmatism. Neuropragmatism continues to reform philosophical views about such things as the mind-body relation, the function of intelligence, the nature of knowledge and truth, the nature of voluntary agency and responsibility, the function of social morality, and the ethical ways of dealing with new technologies. Along the way, it distinguishes itself from other neuroscientifically-based philosophical outlooks.
The first three are grounded in biology and anthropology. Many theoretical views across cognitive science and neuroscience regard them as foundational. The measure of this neural learning is improved habitual efficiency at specific routine tasks. All these theories have prototypes in the works of classical pragmatists. Combating any philosophy of mind that depicts mind as fundamentally passive, receptive, representational, cognitivist, or mechanistic, the classical pragmatists sought to understand the mind in its biological medium.
All of the nervous systems in all of their functionings for living must be taken into account.
William James lent scientific respectability to the notion that the fringes and margins of consciousness extend deep down into entirely unconscious emotional and intuitive cognition. The pragmatists affirmed that cognition is basically about applying learned habits to ongoing situations demanding immediate active responses from the organism. Also recognizing how centers of the brain are typically involved in many kinds of coordinated tasks, the classical pragmatists resisted the notion that each part of the brain deals only with narrow tasks or specific sorts of representations.
As integrated phases within the continuity of brain processes, the traditional schema of perception, reasoning, emotion, and will cannot be mechanically separated and only temporally related in a series leading to action. Sensation, thought, feeling, and volition are interfused; they are discriminable but not separable aspects of the continuous flow of neural activity (Gazzaniga; Damasio). This fusion makes it impossible to draw a definitive line between the world beyond the skin of an organism and where cognition begins.
Although the brain is obviously the locus of cognition, it does not necessarily follow that only brain events suffice to account for all the functions and features of cognition. Every such body exists in a natural medium to which it sustains some adaptive connection. The natural medium is thus one which contains similar and conjunctive forms.
Pragmatism has always refused to treat neurons, or any other brain cells (such as glia) which may modulate brain activity, as the exclusive place where cognitive meaning is enacted — neurons are essential to, but not entirely constitutive of, cognition. Neuroscience properly studies the interrelated processes of brain activity, but cognitive neuroscience cannot explain the processes of learning and knowing by referencing brain activity alone, in isolation from any context.
Philosophy, for its part, will be unable to show how to integrate body and mind if knowledge is examined quite apart from any bodily context. The same goes doubly for the functions in which such systems take part, such as cognition. Cognition, therefore, is not something done solely within the head, but is rather to be understood in terms of life and living within environments.
Grounding mind in biology takes life seriously. What are the existential truths of life? As Michael Schwartz and Osborne Wiggins describe life, there cannot be any firm or fixed divisions between organic bodies and their environment. Schwartz and Wiggins offer the following existential truths about life: being vs. world-relatedness vs. dependence vs. By studying those modes of modification the mind is studied, and nowhere else. There is no point to first specifying what the external world is like and then asking how an organism cognizes that world.
Neuropragmatism, like classical pragmatism before it, studies cognition as it actually transforms the lived environment. Wilson recounts the ways that cognition is for action. The function of the mind is to guide action, and things such as perception and memory must be understood in terms of their contribution to situation-appropriate behavior.
Cognition must be understood in terms of how it functions under the pressure of real-time interaction with the environment. Human cognition can off-load cognitive work onto the symbolic environment so that it holds or even manipulates information for us.
We harvest that information on a need-to-know basis. That makes the environment part of the cognitive system. These interactions become part of our cognitive systems. Our thinking, decision making, and future are all impacted by our environmental transactions. For human cognition, managing the lived environment is not just biological but social as well.
We must regularly manage each other and our institutions. Distinctively human cognition is from birth (and perhaps before birth) a matter of brains cognizing together in concert. For humans, experience is culture — cognizing the environment is thoroughly shaped by the transmitted modes of cultural activities engaging the human nervous system.
The brain exhibits much dedicated modular architecture, but massively parallel and networked processing is dominant. The brain is not hierarchical, but more democratic. Nerve centers across the brain are intricately interconnected with each other, so almost any part of the brain has some direct or indirect systemic link to every other part of the brain. There is no inner Cartesian theater where all information is gathered and simultaneously experienced; experience at best displays rough continuities.
There is no executive command center giving orders to the rest of the brain; deliberation at best guides habitual motor action. Ordinary cognition does not primarily aim at static representation in general but at dynamic adequacy in specific situations. These complex modes of thought, seemingly far from mere matter or biology, remain embodied and functional for practical success.
Higher self-conscious cognitive processes (reflection, inference, hypothesis testing) are socially invented and taught capacities to attentively focus on ways to generalize practical habits for flexible use. These higher social capacities serve to coordinate group cooperative practices where some creativity is needed to maintain efficiency in the face of unstable conditions. Among these social practices are linguistic communication, symbolic representation, and logical inference.
Even pure imagination, conceptual play, and aesthetic contemplation are creative capacities existing to refine practice, even though we can also perform them in isolation from practical concerns. These creative modes permitted, among other things, the fixation of concepts and of select relations among concepts, leading to reasoning. The most complex modes of rational thinking are no exception: such things as logic, science, and all sophisticated modes of creative intelligence are culturally-designed and educationally-transmitted technologies.
The epistemic criterion for knowledge is the technological test of practicality. Scientific knowledge is continuous with technology and ordinary practical skill. Much of human experience, most of morality, and all of knowledge are emergent features of social epistemic practices. All a priori, conceptual, and linguistic truths are internal to a social epistemic practice, and cannot be directly or simply used to criticize some other practice.
Because no a priori conceptual rigidity can dictate terms of empirical adequacy, only the practical adequacy of a knowledge system is relevant to its validity. For example, no folk belief system rules over any scientific field, and scientific fields should respect pluralism and seek coherence, not unity.
By avoiding epistemic dualism and reductivist monism, both epistemology and ethics can be naturalized, by showing how they fit in the natural world of encultured humans.
Evolution produced the infant human brain capable of speedily acquiring crucial functional habits because all humans need them, and additional functional habits are acquired when culture indoctrinates them into children. Habits are not unyielding reflexes; advanced learning is capable of questioning and amending any a priori truth through empirical inquiry and science.
Because the a priori does not float freely from actual brain development, learning, and language, there is no logic-practice gap. Reason can be naturalized, because its processes and results can be shown to fit in the natural world of embodied and encultured humans. Having stated these core theses of neuropragmatism, we may step back and survey wider intersections of neuroscience and philosophy. The Cartesian claim that mind and body have entirely different properties is demonstrably false. Lingering claims that consciousness has unnatural properties similarly rest on philosophical confusions and ignorance of brain science.
Mental activity, conscious and unconscious, is a natural process involving the nervous system — as such it is entirely open to scientific inquiry. The old metaphysical formula demanding identity of all properties for genuine identity was rejected early on by pragmatism and is no longer taken seriously beyond arm-chair philosophy. For science, functional identity is quite sufficient: where two phenomena are strongly correlated and display the same functionalities, the two phenomena are rightly regarded as the same natural process observed from different perspectives. Qualitative feelings happen where nervous systems achieve certain degrees of complexity in their transaction with their respective bodies.
Subjectivity and perspective are precisely what would be naturally expected when specific brains generate specific experiences. Unscientific philosophies point to features of experience or thought allegedly lacking dynamic functionality or integration with action. Worse, anti-naturalistic philosophies further claim that scientific naturalism can never integrate them with energetic matter.
However, neurological investigations (much less any sound phenomenology, such as that of the pragmatists) have not been able to confirm such static and aloof features of consciousness. There simply is no avoiding dynamic and creative cognition. Consciousness is intensely qualitative, to be sure, precisely because the brain puts so much work into that phase of experience.
Theories of mind comfortable with taking purity, passivity, receptivity, or representation as basic modes of cognition must be rejected as incompatible with neuroscience. All the same, neuroscience is at liberty to develop specialized theories about micro and macro brain systems, borrowing and modifying terms as it may require. No folk psychology or linguistic conventionalism can dictate terms of scientific inquiry into the nexus of brain, body, and world.
The dream of the unity of science having dissipated, teleological and intentional terms can be legitimate features of successful empirical studies at every level from the social to the synaptic (although mechanistic causality seems to dominate at molecular levels). Indeed, the choice between teleological and mechanistic modes of explanation may not be forced.
After all, wholes typically have genuine powers and properties that no aggregate of parts could have. This is not duplication of causal powers, as reductionists fret, but only the recognition of compatible kinds of causal powers at different scales and systems of nature. The pluralistic stance of pragmatism and neuropragmatism is hospitable to continuities of terminology and causality at multiple levels of brain science. Neuropragmatism cannot deny that humans can do these things. Yet it must undertake explanations for their existence without permitting them to assume any fundamental role in ordinary cognition.
Neuropragmatism tends to favor the idea that sophisticated symbolic capacities of human intelligence are the scaffolding on which the extended mind of linguistic sociality operates. Basic cognition is not symbolic or representational; but human societies design their environments in ways that offload cognitive work onto the manipulation of external symbols. Rationalism in general makes it difficult to account for cognition and knowledge in any natural terms.
Cartesianism was the height of presumptive rationalism, taking our most sophisticated forms of communication, replete with analytic meanings and necessary truths, as essential to all consciousness and cognition. Later representationalisms sustained this obsession with static symbols, rendering it difficult to naturalistically explain even how children acquire linguistic competence. Reliance on representation leads to a postulation of foundational perceptions. Connectionism comes closer to dynamical and distributed cognition but may still contain aspects or elements of representationalism.
Neuropragmatism, like other neurophilosophies, takes close notice of the way that the brain rapidly merges diverse streams of stimuli from all sources in order to guide effective action in the lived moment. All cognitive processes and hence all conscious experiences too are fusings of information about external sensations, motor control processes, and internal feedback from the body. There is no pure sensation, no pure will, and no pure feeling. There are no dichotomies between sensation, emotion, and reason — these aspects of cognition work together as they guide behavior.
Even in the simplest case of behavior, these fusions are evident. Simplistic associationism is inadequate because organic circuits create new wholes that are not merely sums or sequences of their parts. In a genuine organic circuit of perception, action, and consequence, such as a child's first painful reach toward a flame, the consequence transforms subsequent perception. The next time the child sees the flame, he sees a hot flame, and when he reaches for that flame, he reaches for a painful touch. Organic circuits result in holistic organic wholes of experience. Experience is thoroughly imbued with prospective values of action.
That is why we directly experience meanings and values in the world around us. If meanings or values were only interior mental states, then our experience of an external object would be stereoscopic, a sort of double perception. Does lived experience ever seem like this? Meanings and values are where they appear to be: embodied in the things that we know how to use.
Meanings and values are instances of achieved practical knowledge through learning. Knowledge is built up from our experimental attempts to productively manage our deliberate modifications to the environment. Static representationalism, correspondence theories of knowledge, and Cartesian materialism are not viable theories of mind and intelligence. Even aspects of connectionism and dynamic systems theory may contribute to the proper synthesis of these positions (Bechtel and Abrahamsen), provided excessive representationalism is avoided (Freeman; Rockwell). Localized mind is where brains act; the philosophical options are common substantial cause, dual-aspect monism, or outright ontological identity.
Networked mind is wherever brains are coordinating action through communication, and therefore much of intelligence is an emergent feature of human communities modifying environments. Mind is dependent on brains, and cognitive functions are brain functions, either of single or multiple brains.
Neurons are all about systemic communication, across synapses and across the room. Many cognitive functions, and all higher cognitive functions, operate only through people. Human psychology must be social and ecological, for babies on their own could never do any such thing.

Agosta

II. It is time for an expansion and enrichment of evolutionary theory. The "back to the future" proposal contained herein is based on three postulates: (1) Neo-Darwinism is too impoverished for this task; (2) its predecessor, Darwinism, contained the necessary breadth of vision and metaphor to be the basis for an inclusive and unifying theory of biology; and (3) the necessary framework for this new stage in the evolution of evolutionary theory is largely in place.
We make our case through the use of a number of metaphorical dualisms designed to help focus discussions toward a more cooperative and productive approach to the study of living systems. Along the way, we suggest a number of self-induced paradoxes in neo-Darwinian accounts of evolution that are resolved by our perspective.
Key words: Darwinism; nature of the organism; nature of the conditions; extended synthesis; metaphors; time; sequency; simultaneity; thermodynamics; metabolism; space; information; function; history; complexity; cohesion; compensatory changes; evolutionary transitions; cooperation; division of labor; ecological hierarchy; genealogical hierarchy; self-organization.
Modern science labors under a self-imposed duality. One part is highly conservative. A working framework is accepted as true, or at least provisionally true, and research programs elaborate ways in which that framework is useful. All such pursuits have shown that our frameworks are either false or incomplete. This necessitates the second part of the duality, finding alternatives that are less false or less incomplete.
Reflecting this duality, scientists use two kinds of language: nomenclature (labels) and metaphor. Scientists use nomenclature to reduce ambiguity within a theoretical framework assumed to be fundamentally true. If a North American and a European call a bird a "robin," they are referring to two distantly related species. If, however, they both say Turdus migratorius, there is no confusion.
Many scientists mistrust metaphors because they allow too many possible meanings, introducing unwanted ambiguity. Scientific change, however, is creative, and that requires metaphorical language. By extending existing nomenclature to accommodate new concepts and empirical findings, metaphors give people a reason to learn new ideas and the means to learn them. And finally, metaphors present natural truths using language that allows understanding by non-specialists.
Darwin masterfully linked everyday experience and knowledge with technical observations through metaphors. The widespread acceptance of Darwinism, however, shifted the language of evolution from metaphor to nomenclature. Today's normative framework for evolutionary biology is neo-Darwinism, or the Synthetic Theory of Evolution. At present, there is considerable discussion among evolutionary biologists about the need for an "Extended Synthesis." The diversity of viewpoints about evolution parallels the diversity of life. In this review, we will be provocative in an attempt to make readers feel ambivalent about their own views of evolution.
We do not believe that theoretical advances will emerge from ecclesiastical dialogues among different entrenched viewpoints in which each side expects the other to convert at some point, until an eventual winner emerges. Metaphor is essential for this. Fortunately, metaphors cannot disappear: forgotten or set aside, they can always be recovered and reexamined in light of new information or challenges to an existing framework. So, we will return to the metaphors elaborated by Darwin and neglected during the development of the New Synthesis. Then we will discuss briefly the notion that Herbert Spencer's influence initiated neo-Darwinism.
Next, we show how Darwin's metaphors provide a common link among some late 20th century reactions against the "hardened synthesis." The belief that neo-Darwinism is a refinement of Darwinism begins with the final sentence of the sixth chapter of the Origin of Species: "The former seems to be much more the important; for nearly similar variations sometimes arise under, as far as we can judge, dissimilar conditions; and, on the other hand, dissimilar variations arise under conditions which appear to be nearly uniform." Darwin believed that organisms produced offspring similar but not identical to each other; transmitted those similarities and differences to their offspring; and acted in their own behalf.
Most importantly, those capacities held regardless of the Nature of the Conditions. Without substantial autonomy from the surroundings, there could be no reproductive overrun, hence no struggle for survival, thus no natural selection. Natural selection was the outcome of conflicts created a priori by the conditions of existence. It was a consequence of the higher law. The final paragraph of the sixth chapter of the Origin supplies the essential context. On my theory, unity of type is explained by unity of descent.
The expression [our bold] of conditions of existence ... The higher law encompasses all interactions between the nature of the organism and the nature of the conditions. Darwin lacked mechanisms of inheritance and ontogeny, yet understood that organisms were genealogically and developmentally cohesive. It was in the nature of the organism to produce offspring that were all highly similar to each other and their parents and other ancestors.
He also postulated that reproduction produced variation without regard for environmental conditions, and therefore it was in the nature of the organism to produce offspring in numbers far exceeding the resources available for their support. This cannot happen in a Panglossian Lamarckian world, so there must be constraints on responses to the surroundings. Darwin resolved this conundrum by postulating that the nature of the organism created those constraints. And yet, they are not absolute.
Information and Living Systems: Philosophical and Scientific Perspectives
All reproducing organisms have positive Darwinian fitness, but some are fitter than others in their particular environments, where they predominate numerically over their merely adequate relatives. However selection-challenged, those relatives survive and play decisive evolutionary roles. When the conditions change, the fittest in the old environment might not survive at all, whereas some of the merely adequate might flourish. Natural selection was thus an emergent property of the inevitable conflict created by the conditions of existence and also a metaphor for the ways in which such conflicts are resolved, setting the stage for resolution of conflicts yet to come.
Darwin believed evolution was an outcome of interactions between two classes of phenomena, each following its own rules yet spatially and temporally entwined. Furthermore, the two classes of phenomena were not co-equal, the more important inevitably creating conflict, the "lesser" leading to resolutions of those conflicts. Perhaps understanding how heterodox his views were, Darwin proposed two rich metaphors to help visualize his theory of the fundamental complexity of evolution. The Tree of Life metaphor is a symbol of a major part of the evolutionary process.
Living systems are capable of acting in their own behalf, but more importantly, they regularly take the initiative, using what they have inherited. Metaphorically, the present is the state in which biological systems create their own futures based on their own pasts. Particular origins in space and time play integral roles in explaining the properties of organisms and the species they form, and how they interact with their surroundings, including other species.
In Europe, "sycamore" is a maple (Acer pseudoplatanus) and "plane tree" (Platanus orientalis) is what North Americans call "sycamore" (Platanus occidentalis). Darwin's metaphor of natural classification being a phylogeny enables us to understand why North American sycamores and European plane trees resemble each other so closely, why their ecologies are so similar, and why they hybridize so readily. Darwin's phylogenetic tree metaphor contrasted with a progressive view of diversity embodied in the Scala Naturae, underscoring the notion of evolution as one of selective accumulation of diversity rather than selective replacement.
These laws, taken in the largest sense, being Growth with reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less improved forms.
Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone circling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved. The "Tangled Bank" statement evokes selective accumulation of diversity producing complex ecosystems.
It also reinforces Darwin's view that natural selection is an emergent property. Bowler wrote an influential text about the "eclipse of Darwinism" and its end, which he linked to the rise of the Modern Synthesis. We are concerned with the origin of the eclipse. STOCKING noted that anthropologists and sociologists of the period considered "evolution" synonymous with progressive historical sequences.
A sociologist of that period, Herbert Spencer, interpreted natural selection as a phenomenon in which the fittest out-competed their rivals. As "survival of the fittest" gained popularity, the North American Lamarckian E. Cope raised the question (COPE): "If evolution is survival of the fittest, what explains the origin of the fittest?"
Darwinians would have dismissed Cope's arguments, saying evolution was not survival of the fittest but survival of the adequate. For Spencerians, by contrast, natural selection could be construed as a creative and progressive process. We, therefore, trace the origin of neo-Darwinism to the Spencerian Heresy in the last quarter of the 19th century. Spencer's views were popular enough that Darwin tried to reinterpret Survival of the Fittest in metaphorical rather than nomenclatural terms. He was unsuccessful. With his metaphor of formless blocks of marble, Simpson exposed the fundamental difference between Darwinism and neo-Darwinism with respect to the nature of the organism.
Darwinians would have characterized lineages of inheritance as the sculptors and natural selection as more an art critic than an artist. For Darwin, inheritance introduced historical contingency into evolutionary explanations. This perspective was consistent with a Humean interpretation of history as a series of causal events that may influence the future without assuming it to be pre-determined; a perspective highlighted by Darwin's phylogenetic tree metaphor.
Neo-Darwinism fit well with a more Hegelian view that history is a passive record of the emergence of inevitable events. It is now a science. Having eliminated the nature of the organism by making selection creative, neo-Darwinists killed time by eliminating phylogeny as explanatory. To underscore our perspective, we present the following statements about evolution expressed by self-described neo-Darwinians, preceded by Darwin's views in the 6th edition of the Origin.
The "hardening of the synthesis" led to a simple view of evolution: function follows the nature of the conditions and form follows function, blurring the distinction between Darwinian and Lamarckian explanations ELDREDGE , Darwin's metaphors allow us to understand how Darwinism and neo-Darwinism have been conflated.
They also clarify connections between Darwinism and some major late 20th century critiques of neo-Darwinism, such as the Hierarchy View (ELDREDGE): Living systems are simultaneously part of an informational hierarchy of replicators (the Genealogical Hierarchy - the Nature of the Organism) and an energetic hierarchy of interactors (the Ecological Hierarchy - the Nature of the Conditions). All of these proposals are radical in the literal sense that they return to the roots of Darwinism. We believe they failed because neo-Darwinism is not rich enough metaphorically to accommodate the full diversity of life and life functions.
We suggest three classes of metaphors (time, space, and complexity) that we believe can frame productive discussions about the future of evolutionary theory. Everything that happens has a material cost. Physics treats this reality as an accounting system, where transactions can be measured in terms of the transformation of energy (most easily seen as heat loss) or of the movements of particles in the system affected by the transformation of energy. No matter the particular manifestation of these transformations, there is always a net cost, called "entropy." This accounting can only be done retrospectively, using the temporal record produced by each causal event.
Early thermodynamics treated the usable energy of a system as a limited account. When the usable energy inside a system was exhausted, "equilibrium" had been reached, the system having achieved maximum entropy. Just as the Darwinian reality had inspired the unsavory vision of "Nature red in tooth and claw," this view of thermodynamics inspired the terrifying vision of the "heat death of the universe." LOTKA characterized biological systems as metabolic systems, maintaining themselves in highly organized states by exchanging matter and energy with their surroundings.
He suggested that the inevitable structural decay that accompanies such transactions could be delayed, although not reversed, by the system's accumulation of energy from outside to do work within the system. Organisms undergo heat-generating transformations, involving a net loss of energy from the system, and conservative transformations, changing free energy into states that can be stored and utilized in subsequent transformations.
All conservative transformations in biological systems are coupled with heat-generating transformations, but the reverse is not true; maintaining structure is expensive (BROOKS et al.). For closed thermodynamic systems, once the matter inside the system is maximally dispersed, given the boundaries of its container, equilibrium is reached and the bank account is empty. All work ceases. Equilibrium systems show no duality in energy use and are clearly inadequate for understanding biological systems. Open systems are those through which new energy and matter can flow, allowing the system to continue to function so long as the flows continue.
Total entropy change, dS (entropy production), is subdivided into two components: one accounting for exchanges between the system and its surroundings (d_eS: heat-generating transformations) and the other for production by processes internal to the system (d_iS: conservative transformations). Exchanges between organisms and their surroundings are accompanied by a great deal of waste dissipated into the surroundings; hence, d_eS is large compared with d_iS. Open systems, however, must produce entropy internally.
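This two-term bookkeeping is the standard decomposition from nonequilibrium thermodynamics (it goes back to Prigogine); written out:

```latex
\mathrm{d}S = \mathrm{d}_{e}S + \mathrm{d}_{i}S, \qquad \mathrm{d}_{i}S \ge 0
```

The Second Law constrains only the internal term: d_iS can never be negative, while the exchange term d_eS may take either sign, which is what allows an open system to remain in a low-entropy state by exporting entropy to its surroundings.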
Internal production (d_iS), manifested as storage and transmission of information, is critically important in biological evolution, even though it represents a tiny portion of an organism's energy budget. Biological systems maintain themselves in highly organized states far from thermodynamic equilibrium through causal engagement with the surroundings.
That engagement must be mediated by a physical distinction between the "inside" and an "outside" of the organism, allowing for autonomous internal processes. This boundary is provided by cell membranes, which are not only physical barriers between the inside and outside of the organism but are also mechanisms for modulating the exchange of matter and energy between the organism and its surroundings. Production Rules in biological systems govern the internal processes. Organisms mitigate these effects by "exporting" entropy to the surroundings; if all the heat generated by processes associated with bringing matter and energy into an organism stayed in the organism, it would rapidly die.
Conservative transformations are characterized by energy and entropy flowing in the same direction, entropy production being retained within the system and tending to move the system towards more structured states. As entropy and energy flow through biological systems at different rates, structure accumulates at different levels of organization; furthermore, the structure at any given level is constrained by energy and entropy flows at other levels.
Organisms maintain themselves through time by exploiting "resource gradients" in the surroundings (ULANOWICZ), determined by interactions between abiotic and biotic factors. Abiotic factors can be structured in part by ca; for example, metabolic processes are involved in the degradation of high-grade energy sources into lower grade forms of energy, including heat. Both the capture of incoming solar energy by biological systems and the mass re-radiation of heat by these organisms affect the thermal profile of the earth.
Additionally, the production of oxygen as a byproduct of photosynthesis and of carbon dioxide as a byproduct of aerobic metabolism affect the composition of the earth's atmosphere. Biological systems produce entropy at different rates because energy stored by conservative transformations is degraded at different rates. At the lowest organizational levels, the shortest time intervals, and the smallest spatial scales, the greatest relative contribution to c is ca. For cellular or sub-cellular structures over short time intervals, physiological processes dominate explanations.
Most entropy production at these scales is dissipated as metabolic heat loss. At higher organizational levels, longer time intervals, and larger spatial scales, most entropy production is dissipated into the accumulation and maintenance of biomass. From the perspective of the environment, such patterns of biodiversity tend to be organized with respect to energy gradients, whereas from the perspective of the genealogical system, biodiversity is organized with respect to sister-group relationships and patterns of geographical distribution that mirror geological evolution occurring on similar temporal and spatial scales.
We're all children of Time. Systems showing irreversible behavior have a sense of time generated by the thermodynamic costs of their behavior. Exchanges of matter and energy between the system and surroundings (d_eS) generate cyclical time. Metabolic and other homeostatic mechanisms are examples. Shevek, the physicist in Ursula K. Le Guin's The Dispossessed, called this Simultaneity, because the endpoint of a complete cycle of irreversible processing of matter and energy is the starting point - beginning and end are the same.
These cycles maintain living systems, allowing them to persist long enough to be able to change, but they do not create change by themselves. For such changes to occur, biological systems must also be able to make linear time, which Shevek called Sequency. Sequential time is made by thermodynamic production (d_iS); reproduction, inheritance, ontogeny, and speciation are examples. Both simultaneity and sequency processes can be envisioned in terms of energy flows. More often, however, we express sequency processes in terms of the flow of information. The relationship between information and material phenomena, including thermodynamics, has had a long and turbulent history.
Information theory has developed from two general perspectives, "communications theory" and "measurement theory". These perspectives overlap in proposing that information (1) is anything transmitted from a "source" through a "channel" to a "receiver" and (2) is an abstraction rather than a material part of the system. Neither of those conceptions is adequate for describing biological systems. In communications theory, the amount of information sent from a source is calculated using a statistical entropy function. Errors in transmission can result from poor encoding at the source or from noise in the transmission channel.
Meaningful information is that subset of the transmitted information actually recorded by the receiver (there may or may not be a separate decoder). All of the processes affecting the transmission and reception of the information thus decrease the entropy of the message from its maximal value at the source.
Physical entropies are expected to increase as a result of work done on the system, so either information transmission is not a physical process or the communications view of entropy is non-physical. Measurement theory provides a second formalism. Bound information is determined with respect to the "complexions" (microstates) of the system. Information points to, but is not a material part of, the system.
They used a general mathematical formalism summarizing changes in the number of parts, the number of kinds of parts, and the relative frequencies of the different kinds of parts, key elements of biological complexity. Simple heuristic simulations emulated biological processes associated with the storage and transmission of information.
If H_max is a function of the capacity, or potential, of a system and H_obs is a function of the expression of some of that potential, the difference between information capacity and information content is proportional to the constraints, inherent and extrinsic, on the system. For example, additive genetic variance could be construed as an indication of population-level entropy, while genetic correlations would be an indication of organizing principles constraining that variance.
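A minimal numerical sketch of the capacity/content distinction (the four-kinds system and its frequencies below are illustrative assumptions, not data from the text): with k kinds of parts, capacity can be taken as H_max = log2(k), content as the Shannon entropy H_obs of the observed frequencies, and the difference as a measure of constraint.

```python
from math import log2

def shannon_entropy(freqs):
    """Shannon entropy (in bits) of a list of relative frequencies."""
    return -sum(p * log2(p) for p in freqs if p > 0)

# Hypothetical system with four kinds of parts at unequal frequencies.
observed = [0.55, 0.25, 0.15, 0.05]

H_max = log2(len(observed))        # capacity: all kinds equiprobable
H_obs = shannon_entropy(observed)  # content: the realized distribution
constraint = H_max - H_obs         # gap attributed to organizing constraints

print(f"H_max={H_max:.3f} bits, H_obs={H_obs:.3f} bits, constraint={constraint:.3f} bits")
```

On this reading, a population whose variation is spread evenly across states sits near H_max, while organizing principles such as genetic correlations pull H_obs below it.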
COLLIER related information to the causal capacity of a system, its ability to impose distinctions on its surroundings. Requirement 2 is satisfied by showing that energy and information are interconvertible. Conservative processes within biological systems are coupled with heat-generating processes, so there is an energetic cost associated with the production and maintenance of biological information.
Intropy and enformation are interconvertible. The number of accessible microstates is increased by the production of new components, either at a given level or through the opening up of new levels of organization. For example, auto-catalytic processes producing monomers make "monomer space" available. Some monomers have high chemical affinities for each other, and will spontaneously clump into dimers and polymers. Once polymers begin to form, "polymer space" becomes available to the evolving system. At this level, polymers are macrostates and monomer and dimer distributions are microstates.
Causal interactions among polymers create new levels of organization in which polymer distributions are the microstates and new levels of organization are the macrostates, and so on.
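The combinatorial flavor of this expansion can be sketched with a toy calculation (the four monomer types and the chosen lengths are illustrative assumptions): each step up in polymer length multiplies the number of accessible sequence microstates.

```python
def polymer_space_size(m, max_len):
    """Number of distinct linear polymer sequences of lengths 1..max_len
    built from m monomer types: the sum of m**L over lengths L."""
    return sum(m ** L for L in range(1, max_len + 1))

m = 4  # e.g., four nucleotide-like monomer types (an assumption)
for max_len in (1, 2, 3, 6):
    print(max_len, polymer_space_size(m, max_len))
```

The point is qualitative: opening "polymer space" makes the accessible state count grow geometrically, so each new organizational level dwarfs the one below it.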
Each new functional level creates a hierarchy of increasing structural intricacy, manifested by increasing allocation of the entropy production to structure. Cohesive mechanisms, ranging from molecular affinities to cell-cell adhesion to genetic compatibility, mate recognition, and genealogy, provide resistance to fluctuations from lower levels, allowing macroscopic properties to emerge.
Cohesion is thus analogous to inertia (we discuss cohesion below). This introduces an irreducible hierarchical structure to the evolution of living systems. Interactions between Simultaneity and Sequency. We're just recycled history machines. Theoretical studies in nonequilibrium thermodynamics addressed the apparent conflict between life and the Second Law.
These physicists argued that life was so improbable it demanded a special explanation. They accepted the progressive nature of evolution, which they also considered contrary to the expectations of the Second Law. Prigogine and colleagues developed a model by which life could originate as an improbable event and evolve into increasingly improbable states. Looking to flows between the system and its surroundings (d_eS) for insights, they discovered that near equilibrium, random fluctuations in the exchanges between the system and surroundings could theoretically produce states of lowered entropy.
This view became so popular that when BRODA discussed Boltzmann's lecture, he changed "entropy" to "entropy [negentropy]" throughout the text. And yet, Boltzmann stated that life was a struggle for entropy. It seems Boltzmann was able to see the organism's perspective, focusing on the source rather than the fate of the matter and energy needed to sustain life.
Living systems must find usable energy. Plants find this in the form of photons coming from the sun. The source of those photons is thermonuclear reactions in the sun. Being relatively low energy products of the sun's thermonuclear reactions "exported from the system to the surroundings," photons are part of the sun's entropy production. Photonic energy used by plants to build biomass is part of plant entropy production, so when herbivores eat, they are feeding on entropy.
In exchanging matter and energy with their surroundings, organisms degrade their surroundings more than themselves, maintaining a low-entropy state relative to their surroundings. This may look like the Darwinian duality, but "self-organization" in this context means the tendency for the system to organize itself according to the nature of the conditions. The principle of maximum entropy production asserts that systems will utilize resources from the surroundings as rapidly as possible; those sequestering the maximum amount of energy fastest win, starving out their slower competitors.
Absent from these formulations is thermodynamic production, d_iS, the nature of the organism. It is more complete to recognize that biological systems maintain low-entropy states relative to their surroundings, but not relative to their own previous states. Entropy can, and does, increase through time - the inevitable structural decay that accompanies such transactions can only be delayed (LOTKA). But the entropy increase that characterizes biological systems is far from maximum entropy production. Early in ontogeny, organisms exhibit high metabolic rates, corresponding to something like maximum entropy production.
Through time, however, this "immature" stage is always replaced by a "mature" or "steady state" phase, characterized by reduced metabolic rate. Finally, all organisms enter a "senescent" stage in which metabolic rate decreases further. Decreasing rates of entropy production are determined by interactions between the surroundings and the "sense of self" the organism inherits from its parent(s). The material information system of inheritance is thus critical in determining the ways in which the organism interacts with the surroundings to produce its actual lifespan.
Evolutionary persistence is associated with decreasing rates of entropy production, not maximal entropy production. BLUM introduced the metaphor of time as an arrow, being propelled into the future by power applied from "outside."