Biological Information and Intelligent Design: Signature in the Ribosome

This.

The deception employed (wittingly or unwittingly) by Cornelius and other evolution denialists is that the terabytes of sequence data we have only amount to some vague “similarity.” The reality is that these trees are constructed from differences.


As for new body plans (which is a Humpty Dumpty term), since you question the ancestry of vertebrates, you might want to check out tunicates.

As for new protein complexes, do you have any examples? And are you aware of the time required to evolve (using variation random with respect to fitness and selection) a highly-specific novel binding site with nanomolar affinity?


[quote=“Billcole, post:76, topic:5974”]
How can we explain these complex sequences appearing through neutral mutations that occur in isolated populations? The math simply does not work.[/quote]
What math? Why would you limit consideration to neutral mutations and not consider selection?

[quote]I think the evidence is pointing to multiple origins of life as an alternative hypothesis to universal common descent, or multiple origins of life in combination with common descent.
[/quote]Then let’s discuss some actual evidence, not what Cornelius says about evidence.


I can’t speak to individual people on this but I do not think this is usually deception per se. I think many of them really think this is true. Though, of course, this betrays some big misunderstandings about evolutionary theory and genetic data.

For example, on a basic level, if you can mathematically model the similarity, that is equivalent to modeling the difference: a similarity of 80% carries the same information as a difference of 20%.
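To make the equivalence concrete, here is a toy sketch with made-up aligned sequences (the helper function and sequences are hypothetical, purely for illustration):

```python
def identity_and_difference(seq_a: str, seq_b: str) -> tuple[float, float]:
    """Return (fraction identical, fraction different) for two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    identity = matches / len(seq_a)
    return identity, 1.0 - identity

ident, diff = identity_and_difference("ACGTACGTAC", "ACGTACGTTT")
print(ident, diff)  # 0.8 0.2 -- the two numbers carry the same information
```

Whether you report the 80% identity or the 20% difference, the underlying alignment data are identical.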


@Billcole we have covered this elsewhere with others. It turns out that those new complex DNA sequences came from prior DNA sequences. All the de novo genes in humans have counterparts in apes (in non-coding regions).

We are still waiting for a response about this. Perhaps you could give it a try.

And before we give it a full go, keep in mind that even if this were a good argument, it is not an argument against common descent. At best, it is an argument for an as-yet-undiscovered mechanism of change.


Dr. Hunter,

Hope you are doing well by God’s grace today.

That’s a claim I cannot accept. The mapping of 64 codons to 20 amino acids might be a telos to Richard Dawkins, but I don’t think it would be a telos to Aristotle or Aquinas. It certainly doesn’t seem like a telos worthy of the God I worship.

My general observation is that biology, like basically every other branch of science, has some unsolved mysteries that are being researched. Your examples have pointed out that in something as complex as life, much variance (“noise”) exists alongside or within the theory (“signal”). Your examples scarcely begin to address the existence of the signal, as far as I can tell.

To provide one example, Nakhleh (2013) describes the considerable progress made in the reconciliation of phylogenies, while acknowledging the challenges that remain. Your approach, as far as I can tell, is to cite the remaining challenges without acknowledging the progress that has been made. This is how I am interpreting benkirk’s response to your series of articles:

The mountain of evidence for common descent includes the fossils showing the development of fins in Cetacea, the patterns in the distribution of endogenous retroviruses and pseudogenes, etc. If you want to convince the 99% of the biologists who disagree with you, I suspect you’ll have to address the strong evidence at the heart of the theory of evolution, rather than nibble around the edges and proclaim its defeat.

My $.02


De novo genes are indeed a fascinating area. One of the debates concerns the mechanisms that can produce these changes. Our limited understanding of DNA makes this even more difficult. The genome is a sequence of four chemical bases, abbreviated A, T, C, and G, and the arrangement of these four bases makes up the genetic code. The challenge is the number of ways they can be arranged versus the arrangements that function properly. Since the genome is a sequence, the number of possible arrangements is 4^3,200,000,000 in the case of the human genome (4^n being the standard count for a sequence of length n over a four-letter alphabet). How does this massive sequence of chemicals become arranged to create a unique animal?
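The 4^n counting claim can be verified by brute force for tiny sequence lengths (genome-scale spaces are, of course, unenumerable):

```python
from itertools import product

bases = "ATCG"
for n in range(1, 6):
    # Enumerate every possible sequence of length n over the four bases.
    count = sum(1 for _ in product(bases, repeat=n))
    assert count == 4 ** n
    print(n, count)  # 1 4, 2 16, 3 64, 4 256, 5 1024
```

The count grows fourfold per added base, which is why the exponent, not the base, dominates at genome scale.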

Let’s start with just the common ancestry of chimpanzees and humans. Do you dispute this? What specific genetic evidence do you have to offer against it?

Hi Joshua
Yes, this is a fascinating transition, or special creation. I would like to have this discussion not to push toward a conclusion, but to better learn about the evidence. If you believe there is a reasonable hypothesis that humans and chimps descended from a common ancestor, can you state that hypothesis clearly?

I think there are probably sequences in chimps similar to human de novo genes. Through the history of life there are significant jumps that appear to require significant engineering to come up with the proper DNA codes for both the protein-coding exon regions and the splicing code regions. I think it is exceedingly unlikely that current evolutionary theory can explain this. We don’t have an identified mechanism that creates novel functional sequences. This gets even messier when we talk about regulatory sequences like microRNAs.

Hi Ben
I have looked at models like Michael Lynch’s, and they require very large populations and long times to get even a few mutations fixed in a population. If you have a suggested model, I would be very interested to look at it.

Well, the obvious answer is to start smaller. Nevertheless, for the purpose of your question, let’s just look at the current size of our genome. The average number of new mutations in the genome of each human is 60. The overall genetic variation between humans is several million differences. A fairly small number of these differences are being selected upon strongly, a larger proportion are subject to weak selection, and the rest (neutral variations) are just ‘drifting’ with the fortunes of their carriers.

Obviously, the number of possible “unique animals” that can be formed using our genetic code is enormous! How do you think we should mathematically quantify this proportion? And how can we track through time the changing probabilities of viability as the complexity of our organism population grows?
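A back-of-envelope sketch of the mutation supply implied by those figures (the 60 mutations per birth is from the post above; the births-per-generation number is an assumed round figure, not a demographic statistic):

```python
mutations_per_birth = 60             # figure quoted in the post
births_per_generation = 100_000_000  # assumption: ~1e8 births, for illustration only
genome_sites = 3_200_000_000         # the post's round figure for genome length

new_mutations = mutations_per_birth * births_per_generation
hits_per_site = new_mutations / genome_sites
# About 1.9 new mutations per genome position per generation across the population.
print(new_mutations, hits_per_site)
```

On those assumptions, essentially every position in the genome is mutated in someone, somewhere, each generation, which is what supplies the raw variation that selection and drift then act on.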

5 posts were split to a new topic: Gene Tree Incongruence

Meaning that the code’s degeneracy would not be created by a designer worth his salt? Interesting argument, but I’m afraid if the code is biochemically determined, then teleology is inescapable. This is particularly true given that the code is so unique and ~optimal. It would be difficult to imagine a better example of design.

OTOH, if it is not biochemically determined (which I think is the case), then it demolishes evolution, since you would literally have to evolve, somehow, through an astronomically large, rugged search space.

Hi Cornelius, hope you are doing fine today. I would like to respond to something you just said to @Chris_Falter.

I thought we had agreed before that the DNA code used by most branches of the evolutionary tree is not unique and that there are many alternative versions of it being employed by all kinds of micro-organisms.

Variation in the DNA code for the earliest forms of life could explain (1) why there are so many present-day variations in the genetic code among relatively simple micro-organisms and (2) why the code employed by our line of descent appears to show some degree of optimization. I don’t see how these findings are problematic for the evolutionary paradigm. On the contrary, it seems to be evidence for emergence of the genetic code through evolutionary processes.

Casper

Hi Casper:

So a couple of points here. First, (Ling, 2015), which I think you cited earlier, does demonstrate many alternate versions of the DNA code, as you say. However, they are defining “DNA code” differently than we are. In addition to the actual code, they also are including codon bias, ambiguous decoding, and recoding. Those three things are not really part of the code, per se, so much as how the code is used. And those three things are certainly important. There are all kinds of subtleties, like those, at the molecular level, which do not fit CD. But in terms of questions about how the DNA code itself evolved, those three things would typically be considered a different question.

So, second, what about the actual different codes? They aren’t all that common, and the different codes we have discovered are all minor variants of the canonical code. Again, these are interesting and important. In fact, not surprisingly, their pattern also contradicts CD. BTW, Ken Miller once argued that they, in fact, do fall nicely into a CD pattern. That is false, and he later reworded/elaborated his claim, saying that convergence shows a CD pattern. :confused:

So, third, we really don’t see a CD pattern that would show different codes going back to the LUCA, or LUCAs, or a network, or whatever. From an evolutionary perspective, the conclusion has always been that the canonical code goes back to the LUCA.

If the code were not ~optimal, it would solve a lot of problems. You would still have the problem of how a code, any code, would evolve in the first place. But at least you could stop there. Any old code would do. Instead, once you have a code up and running (somehow), you then have to traverse, and search through, an astronomically large code design space, chock-full of local optima in a rugged fitness landscape. It’s not going to happen.
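The trap that “rugged landscape” language refers to can be shown with a toy greedy search (the landscape values are invented; this illustrates the concept only, not any claim about the genetic code itself):

```python
# A 1-D landscape with several peaks; values are arbitrary.
landscape = [0, 1, 2, 3, 2, 1, 2, 3, 4, 5, 4, 3, 6, 7, 9, 8]

def greedy_climb(start: int) -> int:
    """Move to a strictly better neighbor until none exists (a local optimum)."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i
        i = best

print(greedy_climb(0))  # 3: stuck on the first small peak (value 3)
print(max(range(len(landscape)), key=lambda i: landscape[i]))  # 14: the global peak
```

A climber that only ever takes uphill steps stops at whichever peak it reaches first; whether that intuition transfers to real fitness landscapes is exactly the point under dispute in this thread.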

The one way around this is to say the code is biochemically determined, so that (again, somehow) it basically self assembles. It doesn’t evolve, so much as merely comes together. That is just a real stretch, but for those who want to go there, it would mean an incredible confirmation of design.

So there you have it. The DNA code either (i) demolishes evolution, or (ii) confirms design.

Hello Bill,

IIRC we are discussing your agreement with Cornelius’s claim, “There is a vast amount of evidence against common descent,” and your claim, “I think the evidence is pointing to multiple origins of life as an alternative hypothesis to universal common descent, or multiple origins of life in combination with common descent.”

May we please focus on evidence, not models or arguments, since you have explicitly specified evidence?

My question is purely evidentiary: “And are you aware of the time required to evolve (using variation random with respect to fitness and selection) a highly-specific novel binding site with nanomolar affinity?”

The answer is a number and unit of time. Are you aware of it?

A post was split to a new topic: What is Universal Common Descent?

I split some posts into other threads. Please check the new threads before you post.

This is a very interesting point. The key to getting our arms around the sequence-space challenge is conceptualizing how big the number 4^3,200,000,000 is. Written out in decimal it has roughly 1.9 billion digits; on pages of 100 letters across and 100 lines down, just writing the number out would take roughly 190,000 pages. By contrast, stating the number of organisms that have ever lived on Earth would take less than one additional line on the final page.

Now your point is logical that change can occur one gene at a time, but the average human gene still takes around 1,500 nucleotides to code for an average-sized protein. So the number of possible ways to arrange the DNA coding for an average protein is 4^1500. This number is smaller, but still hundreds of orders of magnitude larger than the number of subatomic particles in our universe, roughly 10^80 (about 4^133). The question is how many ways you can arrange the DNA and still get the required protein function. Unless that number is close to the total number of possible arrangements, a non-directed search will almost certainly fail.
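These figures are straightforward to sanity-check with logarithms (the integers themselves are far too large to write out, so only digit counts are computed):

```python
import math

genome_length = 3_200_000_000
digits = math.floor(genome_length * math.log10(4)) + 1  # decimal digits of 4**genome_length
pages = math.ceil(digits / (100 * 100))                 # 100 letters x 100 lines per page
print(digits, pages)  # ~1.93 billion digits, ~193,000 pages

# Digits of 4**1500 (the sequence space of an average coding region) versus
# the ~10**80 subatomic particles commonly estimated for the observable universe:
protein_space_digits = math.floor(1500 * math.log10(4)) + 1
print(protein_space_digits)  # 904 digits, versus 81 digits for 10**80
```

Note that the disputed question is not the size of the space, which everyone agrees is vast, but what fraction of it is functional and how search through it is structured.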

Hi Dr. Hunter,

I hope you and yours enjoy a day of thanksgiving for all the blessings we enjoy. My daughter is visiting this weekend, for which we are truly grateful. When the medical problems seem so large and the pain is so great, it is hard to give thanks. So we are learning to persevere together in this grace of thanksgiving.

You seem to be saying biochemistry could be added to the list of finely-tuned parameters in the philosophical fine-tuning argument for God’s existence. I’m fine with that. I would hasten to add, however, that if we make that argument, I think we have to acknowledge that the DNA code is no more teleological than gravity or the speed of light or the mass of a proton. Does that make sense?

Your form of argumentation is interesting to me because I am giving attention in my master’s studies to how to infer causality from complex data. The thorny problem with modeling highly complex systems is that the data always contain a lot of noise that a very simple model struggles to capture and explain. So how do you distinguish the signal from the noise? How do you know that your model is explaining real forces that are truly at work when there is never a lack of exceptions to the rule? Do you just throw up your hands and say, “Whatever happened, Goddidit–that’s all I know”?

The way forward, I suggest (not that I’m the first!), is to adopt this rule of thumb: if applying a model to a significantly large data set significantly reduces the variance (noise), we can reasonably conclude that the model has real explanatory power for those data. The model does not have to completely eliminate the variance; the requirement is just that it has to significantly reduce the variance.

To provide a mathematical example: the principal component analysis (PCA) model-building process relies explicitly on explained variance. That is, you find the direction that captures the most variance, then recursively find orthogonal directions that capture the most remaining variance, stopping before further components would overfit. Ordinary least squares (OLS) regression is another example of model-building that relies on variance reduction for explanatory power.
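A minimal NumPy sketch of the idea, using synthetic data with a strong linear signal plus noise (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)  # signal plus noise
data = np.column_stack([x, y])

# PCA via SVD of the centered data: squared singular values are
# proportional to the variance captured by each component.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained)  # first component captures nearly all the variance
```

The first component captures the lion’s share of the variance precisely because the data contain a real underlying relationship, which is the sense in which variance explanation signals explanatory power.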

To see how this works in practice, let’s look for an example outside the field of biology: the question of whether the global climate is warming. (Please note: I will not apply the rule of thumb to the question of whether climate change is anthropogenic; this example addresses only the question of whether a long-term warming trend has been occurring.) The chairman of the Senate Environment and Public Works Committee steps to the podium in Feb. 2015, throws a snowball, and says, “It’s very, very cold out! Here’s your proof that global warming is a hoax.” Has the honorable Senator Inhofe proven that climate scientists are the captives of their paradigm, and they cannot account for all the data because of their ideological commitments? Are the climate scientists just another band of flat-earthers and geocentrists? Or is it the honorable Senator who is not accounting for the data, and is blind to the signal because he is focusing solely on the noise?

The scientific response is quite simple: weather is a highly complex, global system with lots of variance. Consequently, the occurrence of a few bitterly cold, record-low-temperature days in the midst of the overall warmest winter on record to that point does not disprove the long-term, global warming trend.* Those few bitterly cold days are noise, not signal. Look at enough observations across the entire globe since the advent of the industrial revolution, and the warming trend is apparent–even though there are plenty of local, shorter-term observations that seem to contradict the trend. Moose Jaw, Saskatchewan had a cold summer in 2016? It was unusually cold in Coalinga, California? Sure, but the vast majority of cities had the warmest summer (overall) on record. The numerous exceptions do not disprove the far more numerous observations explained by rule.
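The snowball-versus-trend point can be sketched with synthetic data: a small assumed warming trend buried in much larger year-to-year noise still emerges from a simple least-squares fit (none of these numbers are real climate data):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2020)
true_trend = 0.01  # assumed +0.01 degrees/year, for illustration only
temps = true_trend * (years - 1900) + rng.normal(scale=0.3, size=years.size)

slope, intercept = np.polyfit(years, temps, 1)  # OLS fit, degree 1
cold_snaps = int(np.sum(np.diff(temps) < 0))    # year-over-year drops ("snowballs")
print(slope, cold_snaps)  # slope near 0.01 despite dozens of colder-than-last-year years
```

Roughly half the years are colder than the year before, yet the fitted slope recovers the underlying trend, because the fit pools all the observations instead of cherry-picking one.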

Similarly, the question of whether common descent (CD) is a valid scientific explanation does not hinge on whether it explains all the data perfectly–i.e., whether it completely explains all the variance. Instead, a reasonable person should be willing to accept the theory of common descent if it significantly reduces (not eliminates) the variance in a wide variety of biological and paleontological data. Does it do so? Yes, overwhelmingly so. It explains patterns in the distribution of endogenous retroviruses, the patterns of distribution in pseudogenes, the patterns of limb morphology in cetaceans since the K-Pg boundary, the appearance of land-lubbin’ tetrapods in the Devonian, the overall congruence between phenotype and genotype (hope I’m saying that correctly), and a mountain of other data.

Thus I see your list of “Darwin’s predictions” as snowballs thrown in the midst of the overall warmest winter on record. They are interesting, worthy of scientific research. But they do not reduce the overall explanatory power of common descent, as the replies of your fellow scientists in this thread indicate.

If you want to convince the scientific community that common descent is truly the equivalent of flat-earthism and geocentrism, you could start by addressing the specific issues commonly cited as strong evidence (e.g., distribution of ERVs and pseudogenes, evolution of limb morphology in cetaceans), rather than exercising your selection bias to find a few examples of small amounts of variance that have not yet been fully explained.

My $.02,

EDIT: I realize that this proposal for inferring causality does not address covariance and other conundrums. Instead of turning this post into a book, I think it sufficient to note that no objection to the theory of evolution has ever turned on the issue of covariance, AFAIK.


  • Again, I point out that I am only discussing temperature trends. The question of whether global warming is anthropogenic or not is very interesting, but I am not asking it in this post.