Biological Information and Intelligent Design: Meyer, Yarus, and the Direct Templating Hypothesis

Hi Eddie -

You seem to be advocating that scientists stop using Bayesian methods. Given the remarkable success of Bayesian methods across the sciences, I don’t think your campaign is going to succeed. I give it a 1% chance of success.

But of course, your campaign could in fact go viral, knock out the competition, and sweep the field of philosophy of science, even if it’s not very “fit” in this competition between methods…


Interesting how the widespread use of contraceptives has made that statement problematic in humans.

Bayesian methods are precisely (or I suppose one should really say “approximately”!) as good as the scientific model into which they’re plugged. They are also still vulnerable to the number of variables involved and to how well those variables can be defined.

@glipsnort
@Eddie
@Jon_Garvey
@Chris_Falter
@Jonathan_Burke

I haven’t commented here for a while, but I have caught up on most of the comments, and I think I see a pattern that might be worth mentioning.

The argument seems to boil down to whether or not it’s possible to predict the results of natural selection without a priori (or a posteriori, for that matter) knowledge of some quantitative estimate of fitness (which I think Steve has defined pretty well).

The answer is no, for the reason that Steve has given: it is extremely difficult to make accurate predictions in very complex scenarios. Since biology sets the standard for complexity among the sciences, it isn’t surprising that the central theory of biology, evolution by natural selection, suffers from this problem. But biology is not alone. The three-body problem in physics suffers similarly, and so does the solution of the Schrödinger equation for atoms much larger than hydrogen.

This lack of predictability is not the hallmark of a bad science. The old idea that science always makes testable predictions has been modified greatly over the past century, ever since the uncertainty principle showed that in some cases unpredictability is the rule. Chaos theory treats non-predictability in mathematical terms and demonstrates its application to a large number of deterministic processes throughout the physical and social sciences.
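To make the chaos-theory point concrete, here is a minimal sketch in Python (my own toy numbers, not anything from the literature) of the logistic map, a standard textbook example of a fully deterministic rule whose long-run behaviour is effectively unpredictable because tiny differences in the starting point blow up.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), a deterministic recurrence
# that behaves chaotically for r near 4.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ by one part in ten million.
a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)

# The tiny initial difference grows until the two trajectories are unrelated,
# even though every step is computed by exactly the same rule.
for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (diff {abs(a[n]-b[n]):.2e})")
```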

So what all of this means is that, yes, natural selection is a highly complex phenomenon whose outcomes cannot be predicted on empirical grounds from quantitative assessments of fitness. But, again as Steve has shown, this doesn’t mean that such measures don’t exist: both absolute and relative fitness can be measured quantitatively and used in population genetics and Hardy-Weinberg calculations to make useful predictions about how evolution works.

Here is an illustration from my own work. My group discovered a new allele of a human metabolic gene. We assessed its frequency in several populations and found that it followed Hardy-Weinberg equilibrium, which implied that it was not undergoing natural selection. We therefore predicted that it might not have any effect on fitness. After some further study, we found that indeed, as predicted, this allele had no effect on the health of the population (because of the activity of the gene in question, we had thought it might). So, while we never actually measured the fitness of the allele, the genotype, or the phenotype of the people who carried the allele, it was still possible to make predictions about its role in natural selection based on biological law.
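For readers who want to see what “followed Hardy-Weinberg equilibrium” means in practice, here is a minimal sketch in Python with invented genotype counts (not our actual data): estimate the allele frequencies from the sample, compute the expected genotype proportions p², 2pq and q², and compare them to the observed counts with a chi-square statistic.

```python
# Hypothetical genotype counts for a biallelic SNP (A = common allele, a = new allele).
counts = {"AA": 812, "Aa": 176, "aa": 12}
n = sum(counts.values())

# Allele frequencies estimated by gene counting.
p = (2 * counts["AA"] + counts["Aa"]) / (2 * n)   # frequency of A
q = 1 - p                                          # frequency of a

# Hardy-Weinberg expected genotype counts: p^2, 2pq, q^2 times the sample size.
expected = {"AA": p * p * n, "Aa": 2 * p * q * n, "aa": q * q * n}

# Chi-square goodness-of-fit statistic (1 degree of freedom for a biallelic locus).
chi_sq = sum((counts[g] - expected[g]) ** 2 / expected[g] for g in counts)

print(f"p = {p:.3f}, q = {q:.3f}")
print("expected:", {g: round(e, 1) for g, e in expected.items()})
print(f"chi-square = {chi_sq:.2f} (values below ~3.84 give no evidence of departure from HW)")
```

With counts like these the statistic comes out tiny, so the sample is consistent with Hardy-Weinberg proportions, which is exactly the kind of result that suggested our allele was not under selection.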

In other words, Bayesian

Eddie, take a deep breath. You are right that there is some confusion about words and about what is being discussed. I do not in fact disagree with anything that Steve has written, so the confusion is not between us. My example was about the effect of the allele on people’s health. I wanted to show that biologists are not totally in the dark when it comes to mathematical analysis of the effects of genetic changes.

We made a guess (based on the position of the SNP and the activity of the gene) that the new allele could possibly have a deleterious effect on the health of the carriers. If so, then we would have expected to see a loss of HW equilibrium due to negative selection (which is more common than positive selection, btw). We did not see that. By itself, this does not prove that the new allele is neutral, but it is consistent with that. And that “prediction” (used loosely) turned out to be correct.

Your comment about humans choosing (based on cultural factors) how many kids to have is quite right, and such choices could overwhelm any strictly genetic factors. But strongly deleterious alleles will still, in many cases, reduce survival (of newborns or children, for example), and, averaged over an entire population, that will show up as a disequilibrium in allele frequencies as selection plays a role. I would suggest checking out the HW equilibrium and its relation to selection online. It’s a very simple equation, and very useful in population genetics.
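As a rough illustration of how strong negative selection leaves its mark, here is a sketch in Python (illustrative numbers only, and a deliberately severe fitness cost): one generation of viability selection against a recessive deleterious allele, starting from Hardy-Weinberg proportions and then comparing the surviving genotype frequencies with what Hardy-Weinberg would predict from the survivors’ allele frequencies.

```python
# Illustrative allele frequencies and genotype-specific survival (viabilities).
p, q = 0.7, 0.3                        # frequencies of A and a before selection
w = {"AA": 1.0, "Aa": 1.0, "aa": 0.5}  # aa homozygotes survive only half as often

# Zygotes start out in Hardy-Weinberg proportions.
zygotes = {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

# Apply viability selection and renormalise to get adult genotype frequencies.
mean_w = sum(zygotes[g] * w[g] for g in zygotes)
adults = {g: zygotes[g] * w[g] / mean_w for g in zygotes}

# Allele frequency among the survivors, and the HW proportions it would predict.
p_adult = adults["AA"] + adults["Aa"] / 2
hw_pred = {"AA": p_adult ** 2,
           "Aa": 2 * p_adult * (1 - p_adult),
           "aa": (1 - p_adult) ** 2}

print("adults observed:", {g: round(f, 3) for g, f in adults.items()})
print("HW prediction  :", {g: round(f, 3) for g, f in hw_pred.items()})
# The excess of heterozygotes and deficit of aa among the adults is the kind of
# disequilibrium that selection can leave in a sampled generation.
```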

No, that’s not quite right, Eddie. We did quantify the relative fitness of the new allele. It was 1.0; in other words, it had no selective effect, either positive or negative. That was the result of the HW calculation. If we had found disequilibrium, we could have quantified the relative fitness as something like 1.2 or 0.95, etc. But note that my example is only tangentially related to evolutionary outcomes, because I wasn’t working on evolution at the time, but on population genetics in disease. Steve, who does work directly in evolutionary biology, uses far more sophisticated models to determine the quantitative fitness of genetic and phenotypic changes. My point was solely to illustrate that there are many ways to come up with numerical estimates of fitness, which can then be used for various purposes. I’m sorry if this isn’t clear, but feel free to ask more questions.


Sorry about that, Eddie. I think some of us have been trying to keep the discussion as non-technical as possible, but that often involves leaving out some information. Steve mentioned that we can talk about relative and absolute fitness, with relative fitness being much more commonly used. I will leave further explanation of absolute fitness to Steve or someone else, since I know little about it. For relative fitness, the factor (usually denoted “w”; the related selection coefficient “s” measures how far w departs from 1) is 1 when there is no selective effect, greater than 1 when the effect is positive (meaning that the allele in question will increase in frequency over time in the population), and less than 1 when the allele frequency will decrease over time. Most beneficial alleles have w values only slightly above 1, although sometimes, as with the lactose-tolerant genotypes, that value can be as high as 2 or more. The higher the relative fitness value, the faster the allele will spread through the population. I am pretty sure this is covered in basic online articles on population genetics, though I haven’t checked.
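To show how the size of the relative fitness value controls the speed of spread, here is a toy sketch in Python (a haploid one-locus model with made-up numbers, much simpler than anything used in real population genetics): each generation the focal allele’s frequency is updated as p' = p·w / (p·w + (1 − p)).

```python
# Haploid one-locus selection: p' = p*w / (p*w + (1 - p)), where w is the
# relative fitness of the focal allele and the alternative allele has fitness 1.

def spread(p0, w, generations=100):
    """Return the allele frequency after the given number of generations."""
    p = p0
    for _ in range(generations):
        p = p * w / (p * w + (1 - p))
    return p

start = 0.01  # the new allele begins rare
for w in (1.0, 1.01, 1.1, 1.5):
    print(f"w = {w:4.2f}: frequency after 100 generations = {spread(start, w):.3f}")
# w = 1.00 leaves the frequency unchanged (no selection); the further w rises
# above 1, the faster the allele sweeps toward fixation.
```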

Possibly, depending on how much is known about the phenotype (or even the genotype) of the new population. We made a guess based on the new genotype, which turned out to be wrong, since we were wrong about the phenotype. But if, for example, the new population of rodents had white fur (like lab rats), we could estimate a pretty low value for relative fitness, based on their lack of camouflage. On the other hand, if they were much bigger than the indigenous population, it would be hard to predict, since ecology, animal behavior, and all the other biological fields involved are too complex to be easily modeled. It might depend on the specific jungle, what else is living there, what the temperature is, what food sources exist, and so on.

As you know, there have been many natural experiments (often unintended) that have illustrated how hard it is to predict the consequences of ecological change or of human environmental interventions.

I might point out, shifting the topic just a bit, that some theologians have noticed this amazing complexity of our natural world and have seen it (as I do) as a sign of the majesty of God’s creation. Whenever I hear that some simple natural law has been found to be not so simple, or that some model we have made to fully understand a natural phenomenon turns out not to be always true, I say “Thank God.” So, getting back to @Jon_Garvey’s point, the fact that contingency is so universal in the natural world is to me a wonderful, indeed a holy, thing. It doesn’t mean that science is useless or that some things we call science aren’t really scientific. It means that (as Jon has said for some time now) we need to expand our definition of science to include the omnipresent fact of contingency in nature, and somehow (I have no idea how) to consider God’s providence in our naturalistic worldview. But that could be a topic for another thread.

@glipsnort

Fitness is not a scientific concept as you have defined it, because we have no theory to explain what makes a particular organism fit. We know what creates heat; we do not know what creates fitness per Darwinism.


Almost, but not quite, Sy. In this case I’m trying to work with Joshua’s concept of science as a discipline with a good, but definitionally limited, set of tools and purposes. And if we ask why it should be that acts of God should be properly excluded from science, it’s because science has to do with the patterns that are reproducible and “lawlike” in nature. As Asa Gray wrote, favourably quoting Bishop Butler, “The only distinct meaning of the word ‘natural’ is stated, fixed or settled.”

That’s why I said, about a million posts back, that what is contingent and NOT reducible to lawlike processes may certainly be observed and recorded within science (like the endless lists of Enlightenment naturalists) or else there would be no data on which to form new theories. But contingent causes properly belong outside the scientific method. They will be termed “random”, which is proper if everyone recognises that word as meaning always (in science) “unknown and beyond the scientific method”, rather than vaguely allowing ideas of ontological randomness like “undirected” or “purposeless” into scientific discourse.

A classic example was Steve’s mention of half-life, in which the statistical pattern is pretty precise and absolutely measurable: element A decays to element B, and experiments measuring the proportions over time will determine the half-life. Science rules, OK.
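A small simulation in Python (invented parameters, nothing to do with any real isotope) makes the same point: each simulated atom’s decay time is drawn at random and is individually unpredictable, yet the half-life recovered from the aggregate proportions comes out right.

```python
import math
import random

# Invented ensemble: 100,000 atoms of "element A" with a true half-life of 10 time units.
true_half_life = 10.0
n_atoms = 100_000

# Each atom decays at an exponentially distributed random moment; no individual
# decay can be predicted, only the statistical behaviour of the whole ensemble.
decay_rate = math.log(2) / true_half_life
decay_times = [random.expovariate(decay_rate) for _ in range(n_atoms)]

# "Measure" the surviving fraction of A at a few times and infer the half-life
# from the aggregate: N(t)/N0 = exp(-lambda * t), so lambda = -ln(fraction) / t.
for t in (5.0, 10.0, 20.0):
    fraction = sum(1 for dt in decay_times if dt > t) / n_atoms
    estimated = math.log(2) * t / -math.log(fraction)
    print(f"t = {t:4.1f}: surviving fraction = {fraction:.4f}, "
          f"estimated half-life = {estimated:.2f}")
```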

But decay of the individual particles is absolutely unpredictable, to the extent that someone like Lou Jost used to say it was “uncaused”. But that is not a scientific, but a philosophical or theological statement, as would be any idea that it is “undirected”, “spontaneous”, or any other explanation beyond “cause unknown and unrepeatable, ergo beyond science’s purview - ask a philosopher or theologian.”

The philosopher might well comment that “no cause” is incoherent. The theologian might well invoke the doctrine of providence. Neither would need to apologise for not providing empirical data for their view, because empirical data has only to do with repeatable causes - and contingency ain’t one.

For “quantum events”, you can substitute any “random” contingency, such as the hypermutation of immune cells, neutral changes in genes, or anything (in fact) that does not yield a lawlike set of causes at the level under consideration. And that’s where, theologically speaking, God is making choices, just as where lawlike processes occur, that’s where God is creating dependable regularity.

The question is not whether God is active in nature (if one is an EC that ought to be a given), but in what manner: science discerns the patterns in nature (or, as George D rightly points out, constructs patterns it hopes match the real ones), but it cannot do more than record contingencies, and must look to other disciplines for their causation, even though the causes are not necessarily supernatural.

As you rightly say, contingency is a “holy thing”. Possibly science could be expanded to include it (but not without changing its ground rules against final and formal causes); or, if that’s not useful, perhaps scientists could just learn to do more than science, recognising and stating when they are crossing the boundaries, or at least being more aware that science does not in any way exhaust the understanding of creation.


Sy,

Ohno gave a figure of 30K genes as the maximum that could, in the nature of things, be under selection at any one time (presumably meaning in any one population). I understand that to be one basis on which Kimura founded neutral theory: most alleles would be fixed or lost completely independently of natural selection.

Since Ohno’s time, the number of genomic features known to be undergoing evolution has vastly expanded through non-coding elements, overlapping genes and so on (though perhaps the number of coding genes has shrunk since the HGP). Either way, those like Eugene Koonin suggest that selection necessarily remains blind to many beneficial and moderately deleterious mutations simply because of the elements that are under stronger selection. Hence, in the small populations of most higher species, most selection is said to be purifying, and most change neutral.

On the face of it, then, whether a particular gene is selected seems to depend as much on the other 30K (or whatever figure is now accepted) elements under selection, mostly not under study, as on the value or otherwise of the particular gene being researched. This appears to me to break any clear correlation, in many cases, between the tangible benefits or detriments of a trait and its being selected (and hence the idea of “reproductive success”).

Do population-genetic fitness calculations take this limited capacity of natural selection into account in some way?

Hi Eddie,

Bayesian analysis can move in either direction in time. If I know that a new allele confers resistance to streptomycin, for example, I can predict that it will provide greater fitness to a bacterial population in my body when I’m taking antibiotics. If a new allele reduces the leg-to-body ratio in male humans, I can predict that it will provide greater fitness to the lucky Valentinos who have it.
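To keep the Bayesian point concrete, here is a minimal sketch in Python of a Beta-Binomial update, using invented colony counts from a purely hypothetical sampling experiment (the prior, the counts, and the scenario are all assumptions of mine, not anyone’s real data).

```python
# Beta-Binomial updating: with a Beta(a, b) prior on the resistant fraction and
# k resistant colonies out of n sampled, the posterior is Beta(a + k, b + n - k).

def update(prior_a, prior_b, k, n):
    """Return the posterior Beta parameters and posterior mean."""
    a, b = prior_a + k, prior_b + (n - k)
    return a, b, a / (a + b)

# Flat Beta(1, 1) prior, and invented counts from a hypothetical experiment.
before = update(1, 1, k=12, n=200)    # sampled before antibiotic exposure
during = update(1, 1, k=180, n=200)   # sampled during streptomycin treatment

print(f"posterior mean resistant fraction before treatment: {before[2]:.3f}")
print(f"posterior mean resistant fraction during treatment: {during[2]:.3f}")
# The answer is a probability distribution, not a yes/no verdict: the shift in
# the posterior quantifies how strongly the data support the resistance allele
# being fitter in the antibiotic environment.
```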

That said, any predictions would be probabilistic rather than binary. The overall fitness of a trait in a population in a particular environment is highly complex. And as our friend Steve stated earlier,


Not the entire difference, as Hayek pointed out in his Nobel lecture. The complexity of physical systems nevertheless remains tractable to science because the variables are generally independent, and most can be safely ignored in an experimental or modelling situation.

However, the complexity of human systems (he was an economist) is not, because their complexity is what he called “organised”: all the variables affect all the others. In that case, unless one knows all the individual factors completely (impossible), one can intrinsically only make generalised approximations rather than predictions; in other words, it’s not “more difficult” but “qualitatively dissimilar”.

The problem is, one cannot know for sure just how approximate one’s conclusions are: you don’t know what you don’t know until you know it, and so even Bayesian procedures don’t protect one from being wildly wrong.

Biology, of course, lacking willful humans, is somewhere in between the physical and the human sciences, but in its complexity it is closer to the human-science situation than to the physical-science one. That’s been my point all along.
