Biological Information and Intelligent Design: Signature in the Ribosome

Well, the obvious answer is to start smaller. But for the purpose of your question, let’s just look at the current size of our genome. The average number of new mutations in the genome of each human is 60. The overall genetic variation between humans is several million differences. A fairly small number of these differences are being selected upon strongly, a larger proportion are subject to weak selection, and the rest (neutral variations) are just ‘drifting’ with the fortunes of their carriers.

Obviously, the number of possible “unique animals” that can be formed using our genetic code is enormous! How do you think we should mathematically quantify the proportion of those possibilities that would be viable? And how can we track through time the changing probabilities of viability as the complexity of our organism population grows?

5 posts were split to a new topic: Gene Tree Incongruence

Meaning that the code’s degeneracy would not be created by a designer worth his salt? Interesting argument, but I’m afraid if the code is biochemically determined, then teleology is inescapable. This is particularly true given that the code is so unique and ~optimal. It would be difficult to imagine a better example of design.

OTOH, if it is not biochemically determined (which I think is the case), then it demolishes evolution, since you would literally have to evolve, somehow, through an astronomically large, rugged search space.

Hi Cornelius, hope you are doing fine today. I would like to respond to something you just said to @Chris_Falter.

I thought we had agreed before that the DNA code used by most branches of the evolutionary tree is not unique and that there are many alternative versions of it being employed by all kinds of micro-organisms.

Variation in the DNA code for the earliest forms of life could explain (1) why there are so many present-day variations in the genetic code among relatively simple micro-organisms and (2) why the code employed by our line of descent appears to show some degree of optimization. I don’t see how these findings are problematic for the evolutionary paradigm. On the contrary, it seems to be evidence for emergence of the genetic code through evolutionary processes.

Casper

Hi Casper:

So a couple of points here. First, (Ling, 2015), which I think you cited earlier, does demonstrate many alternate versions of the DNA code, as you say. However, they are defining “DNA code” differently than we are. In addition to the actual code, they are also including codon bias, ambiguous decoding, and recoding. Those three things are not really part of the code, per se, so much as how the code is used. And those three things are certainly important. There are all kinds of subtleties like those at the molecular level which do not fit common descent (CD). But in terms of questions about how the DNA code itself evolved, those three things would typically be considered a different question.

So, second, what about the actual different codes? They aren’t all that common, and the different codes we have discovered are all minor variants of the canonical code. Again, these are interesting and important. In fact, not surprisingly, their pattern also contradicts CD. BTW, Ken Miller once argued that they, in fact, do nicely fall into a CD pattern. That is false, and he later reworded/elaborated on his claim, saying that convergence shows a CD pattern. :confused:

So, third, we really don’t see a CD pattern that would show different codes going back to the LUCA, or LUCAs, or a network, or whatever. From an evolutionary perspective, the conclusion has always been that the canonical code goes back to the LUCA.

If the code were not ~optimal, it would solve a lot of problems. You would still have the problem of how a code, any code, would evolve in the first place. But at least you could stop there. Any old code would do. Instead, once you have a code up and running (somehow), you then have to traverse, and search through, an astronomically large code design space, chock-full of local optima in a rugged fitness landscape. It’s not going to happen.

The one way around this is to say the code is biochemically determined, so that (again, somehow) it basically self assembles. It doesn’t evolve, so much as merely comes together. That is just a real stretch, but for those who want to go there, it would mean an incredible confirmation of design.

So there you have it. The DNA code either (i) demolishes evolution, or (ii) confirms design.

Hello Bill,

IIRC we are discussing your agreement with Cornelius’s claim, “There is a vast amount of evidence against common descent,” and your claim, “I think the evidence is pointing to multiple origins of life as an alternative hypothesis to universal common descent, or multiple origins of life in combination with common descent.”

May we please focus on evidence, not models nor arguments, since you have explicitly specified evidence?

My question is purely evidentiary: “And are you aware of the time required to evolve (using variation random with respect to fitness and selection) a highly-specific novel binding site with nanomolar affinity?”

The answer is a number and unit of time. Are you aware of it?

A post was split to a new topic: What is Universal Common Descent?

I split some posts into other threads. Please check the new threads before you post.

This is a very interesting point. The key to getting our arms around the sequential space challenge is conceptualizing how big the number 4^3.2 billion is. If you were to fill pages with 100 characters per line and 100 lines per page, it would take 3,200 pages to write the number. If you wanted to state the sequential space of all the organisms that ever lived on earth, it would take less than one additional line on page 3,201.

Now, your point that change can happen one gene at a time is logical, but the average human gene still takes around 1,500 nucleotides to code for an average-sized protein. So the number of possible ways to arrange the DNA code for the average protein is 4^1500. This number is smaller, but still orders of magnitude larger than the number of subatomic particles in our universe, roughly 10^80 (about 4^133). The question is how many ways you can arrange the DNA and still get the required protein function. Unless that number is close to the total number of possible ways to arrange the DNA, a non-directed search will almost certainly fail.
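
For anyone who wants to check these orders of magnitude, here is a quick Python sketch; the ~10^80 figure is the commonly cited estimate for particles in the observable universe, used here only for comparison.

```python
import math

def decimal_digits(base, exponent):
    """Number of decimal digits needed to write out base**exponent."""
    return math.floor(exponent * math.log10(base)) + 1

# Sequence space of an average gene (~1,500 nucleotides): 4**1500
print(decimal_digits(4, 1500))   # 904 digits, i.e. roughly 10**903

# Commonly cited estimate of particles in the observable universe: ~10**80
print(decimal_digits(10, 80))    # 81 digits
```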

Hi Dr. Hunter,

I hope you and yours enjoy a day of thanksgiving for all the blessings we enjoy. My daughter is visiting this weekend, for which we are truly grateful. When the medical problems seem so large and the pain is so great, it is hard to give thanks. So we are learning to persevere together in this grace of thanksgiving.

You seem to be saying biochemistry could be added to the list of finely-tuned parameters in the philosophical fine-tuning argument for God’s existence. I’m fine with that. I would hasten to add, however, that if we make that argument, I think we have to acknowledge that the DNA code is no more teleological than gravity or the speed of light or the mass of a proton. Does that make sense?

Your form of argumentation is interesting to me because I am giving attention in my master’s studies to how to infer causality from complex data. The thorny problem with modeling highly complex systems is that the data always contain a lot of noise that a very simple model struggles to capture and explain. So how do you distinguish the signal from the noise? How do you know that your model is explaining real forces that are truly at work when there is never a lack of exceptions to the rule? Do you just throw up your hands and say, “Whatever happened, Goddidit–that’s all I know”?

The way forward, I suggest (not that I’m the first!), is to adopt this rule of thumb: if applying a model to a significantly large data set significantly reduces the variance (noise), we can reasonably conclude that the model has real explanatory power for those data. The model does not have to completely eliminate the variance; the requirement is just that it has to significantly reduce the variance.

To provide a mathematical example: the principal component analysis (PCA) model-building process explicitly relies on variance reduction to infer causality. I.e., you find the vector that explains (removes) the most variance, then recursively find orthogonal vectors that explain the most remaining variance, until you reach the point where further vectors would overfit. Ordinary least squares (OLS) regression is another example of model-building that relies on variance reduction for explanatory power.
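
To make that concrete, here is a minimal sketch of the idea using only NumPy; the data are synthetic, with a single made-up underlying factor plus noise, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 observations of 5 variables, driven mostly by one
# underlying factor (the "signal") plus independent noise.
signal = rng.normal(size=(500, 1))
loadings = np.array([[1.0, 0.8, 0.6, 0.4, 0.2]])
data = signal @ loadings + 0.3 * rng.normal(size=(500, 5))

# PCA via the singular value decomposition of the centered data matrix.
centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

print(np.round(explained, 3))   # first component dominates; the rest is mostly noise
```

In this toy case the first component soaks up the bulk of the variance with a single ingredient, which is the point of the rule of thumb: a model with real explanatory power reduces the unexplained variance substantially without needing many components.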

To see how this works in practice, let’s look for an example outside the field of biology: the question of whether the global climate is warming. (Please note: I will not apply the rule of thumb to the question of whether climate change is anthropogenic; this example addresses only the question of whether a long-term warming trend has been occurring.) The chairman of the Senate Environment and Public Works Committee steps to the podium in Feb. 2015, throws a snowball, and says, “It’s very, very cold out! Here’s your proof that global warming is a hoax.” Has the honorable Senator Inhofe proven that climate scientists are the captives of their paradigm, and they cannot account for all the data because of their ideological commitments? Are the climate scientists just another band of flat-earthers and geocentrists? Or is it the honorable Senator who is not accounting for the data, and is blind to the signal because he is focusing solely on the noise?

The scientific response is quite simple: weather is a highly complex, global system with lots of variance. Consequently, the occurrence of a few bitterly cold, record-low-temperature days in the midst of the overall warmest winter on record to that point does not disprove the long-term, global warming trend.* Those few bitterly cold days are noise, not signal. Look at enough observations across the entire globe since the advent of the industrial revolution, and the warming trend is apparent–even though there are plenty of local, shorter-term observations that seem to contradict the trend. Moose Jaw, Saskatchewan, had a cold summer in 2016? It was unusually cold in Coalinga, California? Sure, but the vast majority of cities had the warmest summer (overall) on record. The numerous exceptions do not disprove the far more numerous observations explained by the rule.

Similarly, the question of whether common descent (CD) is a valid scientific explanation does not hinge on whether it explains all the data perfectly–i.e., whether it completely explains all the variance. Instead, a reasonable person should be willing to accept the theory of common descent if it significantly reduces (not eliminates) the variance in a wide variety of biological and paleontological data. Does it do so? Yes, overwhelmingly so. It explains patterns in the distribution of endogenous retroviruses, the patterns of distribution in pseudogenes, the patterns of limb morphology in cetaceans since the K-Pg boundary, the appearance of land-lubbin’ tetrapods in the Devonian, the overall congruence between phenotype and genotype (hope I’m saying that correctly), and a mountain of other data.

Thus I see your list of “Darwin’s predictions” as snowballs thrown in the midst of the overall warmest winter on record. They are interesting, worthy of scientific research. But they do not reduce the overall explanatory power of common descent, as the replies of your fellow scientists in this thread indicate.

If you want to convince the scientific community that common descent is truly the equivalent of flat-earthism and geocentrism, you could start by addressing the specific issues commonly cited as strong evidence (e.g., distribution of ERVs and pseudogenes, evolution of limb morphology in cetaceans), rather than exercising your selection bias to find a few examples of small amounts of variance that have not yet been fully explained.

My $.02,

EDIT: I realize that this proposal for inferring causality does not address covariance and other conundrums. Instead of turning this post into a book, I think it sufficient to note that no objection to the theory of evolution has ever turned on the issue of covariance, AFAIK.


  • Again, I point out that I am only discussing temperature trends. The question of whether global warming is anthropogenic or not is very interesting, but I am not asking it in this post.
3 Likes

I think you have missed or not responded to my point, though. I was asking about the proportion of possible viable outcomes. You gave me the proportion of actual outcomes out of all the possible outcomes. I think it is a grave error, in the kind of math you are trying to present, not to appropriately distinguish between these numbers.

For example, I could calculate the probability that each molecule of paint on a masterwork would be in its exact current location, and conclude that the existence of this painting is a mathematical impossibility. Or that the chances of my parents meeting, and contributing the individual egg and sperm cells that make me, are pretty small and get worse with each generation you go back. Should my conclusion therefore be that I shouldn’t exist, or that my existence is a miracle? Of course not! It’s not about the exact outcome we wound up with, it’s about the many similar paths that events could have taken to reach an outcome in the present day.

Indeed, the chances of the exact forms of life we see today existing are miniscule. But that doesn’t say anything about the chances of life itself existing!

Since you did not try to put any numbers out regarding this question, I’ll see if I can put some rough ones in the ring. I mentioned above that the average human has 60 new mutations. Let’s say the chances of a miscarriage are 11%. Now, for a precise number we should subtract out miscarriages not caused by novel mutations and add in cases where novel mutations are severely deleterious after birth, but that much precision isn’t practical or necessary to get a ballpark figure. This means that out of a hundred new humans with 60 new mutations each (a pool of 6,000 new mutations), only about 11 will be severely deleterious or nonviable. This means about 99.8% of random mutations are viable. So let’s fill in the vast majority of your 3,200-page book, shall we?
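
Spelled out as a back-of-the-envelope calculation, using only the ballpark figures quoted above:

```python
# Rough estimate of the fraction of new mutations that are viable,
# using only the ballpark figures quoted above.
new_humans = 100
mutations_per_person = 60        # average new mutations per person
miscarriage_rate = 0.11          # assume ~11% of pregnancies miscarry

total_new_mutations = new_humans * mutations_per_person   # 6,000
# Worst case: every miscarriage is blamed on a single new mutation.
severely_deleterious = new_humans * miscarriage_rate      # ~11

viable_fraction = 1 - severely_deleterious / total_new_mutations
print(f"{viable_fraction:.1%}")  # ~99.8%
```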

Of course the next question is what percentage of those will actually do something interesting and/or useful, which brings us to your second question.

How do you define required protein function? I would argue that virtually no proteins are ‘required’ when they are first generated. The organism’s ancestors all got along perfectly fine without that protein, even if subsequent generations gradually adapt around its function until it becomes indispensable to them. Again we see, in the nature of your question, a results-focused evaluation that ignores the real variety of possibility in favor of looking for a particular outcome.

The better question is, how many possible strings of 1,500 nucleotides will make a protein that does something that might be useful?

Answering that question will show you the real beauty of evolution.

1 Like

Hi Lynn,
Thanks for the thoughtful reply. While I agree with you that backward probabilities are always 100%, when we look at the result we are trying to understand the cause. In the case of Lynn and Bill, the chance of all those events happening, counted forward as statistics, is vanishingly small, yet looking backward it is 100% certain that we are now sharing ideas. Let’s ask a different question: what is the cause of Lynn and Bill? Well, thanks to the birds-and-bees explanation we have this one nailed :slight_smile: If I look at a yeast cell and want to know the origin of the spliceosome, then the cause is more difficult to nail down. I know the DNA sequence that needs to be organized is at least 150,000 nucleotides.

While, as you claim, the first protein of the 200-protein complex may have some sequence flexibility, the next several have very little because they need to fit with both shape and charge.

Most nuclear proteins need to bond with several different proteins and are inherently mutation sensitive. So when I look at this marvel of engineering I am almost certain that it was not the result of a stochastic process.

Since the theory of common descent relies on a stochastic process to produce new innovative features, I think the sequential space problem is a showstopper.

While your comment about neutral mutations is valid it ignores the problem of them becoming fixed in the population. Also being neutral is not enough. They must find function through enormous sequence space.

Happy Thanksgiving

@Jay313

If you follow the scientific articles about a “Y-Chromosomal Adam” (sample article below) … you read that the person who possessed the Y chromosome ancestral to those of virtually all males in existence today lived about 250,000 years ago. No one writes that this is the first Man . . . but by the inevitable workings of mathematics and combinations and re-combinations … 250,000 years (or 300,000 years, etc.) is enough time for that “Y” chromosome to spread to all human males.

So, ironically, it is only when proposing a Young Earth, or a Young History of humanity that we would have the problem of “a Man-with-A-Soul” sharing the Earth with another identical looking Man - - who doesn’t have a soul!

The advantage of Old Earth scenarios is that long before we get to the era of written history … the first hominid deemed by God to have a soul would have had plenty of time to people all the rest of Soul-bearing humanity!

Hi Cornelius!

First of all, why do you think that evolution and design are diametrically opposed to each other? Isn’t it possible that evolutionary processes themselves were designed? If you looked at it that way, it would dissolve some of the arguments you base purely on incredulity. Is God’s Creation incredible at times? Yes! :slight_smile:

You seem to lean very strongly on this supposed optimality. But we can never know for sure whether the DNA code in its current form is absolutely optimal in the global search space. We can only establish that it is a local optimum to some degree, given the other constraints of our biochemical system. For all we know, there could be totally different codes out there that can do a better job. For evolution, it does not really matter whether a global or a local optimum is attained, as long as it’s sufficient for its purposes. Do you completely reject the possibility that a sufficient code occupying a local optimum in the search space was arrived at through some early form of variation and natural selection?
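
To illustrate the local-versus-global point, here is a toy hill-climbing sketch over a completely made-up fitness landscape; a simple “keep any improvement” search settles on whichever peak happens to be nearby, which can be perfectly serviceable without being the global best.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # A made-up rugged landscape with several peaks of different heights.
    return np.sin(3 * x) + 0.5 * np.sin(7 * x) - 0.05 * (x - 2) ** 2

# Naive hill climbing: accept a small random step only if it improves fitness.
x = rng.uniform(-4, 4)
for _ in range(10_000):
    step = x + rng.normal(scale=0.05)
    if fitness(step) > fitness(x):
        x = step

print(f"settled at x = {x:.2f}, fitness = {fitness(x):.2f}")
# Different starting points typically end on different peaks: a local
# optimum that is good enough, not necessarily the global optimum.
```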

[quote=“Cornelius_Hunter, post:100, topic:5974”]
The one way around this is to say the code is biochemically determined, so that (again, somehow) it basically self assembles. It doesn’t evolve, so much as merely comes together. [/quote]

Hmm… You say “It doesn’t evolve, so much as merely comes together”. That’s odd. Very odd. The evolutionary history of mankind is all about things coming together just right. It does not make sense to split such contingencies from evolutionary processes and then dismiss them as incredible.

You aim most of your arguments at making a case against evolution. In the end, the real meat of your arguments seems to boil down to just expressing a bare disbelief that event A or process B could have occurred in such and such a way. But an argument from incredulity is not a convincing argument in itself.

More importantly, trying to “demolish” the evolutionary paradigm does not actually help you in building your case for your brand of “design”, whatever that may be. I think it would be more fruitful for you to spend more time presenting a positive case for the view you espouse. To put it frankly, chipping away at a mountain does not make your sand castle look any more majestic.

Greetings,
Casper

2 Likes

@Casper_Hesp

PERFECT !!! Drop the microphone and walk away !!!

2 Likes

I’m still not entirely sure what your sequential space problem is. The number of possibilities is enormous, yes. The number of manifest results is a tiny fraction of that. But I am still unconvinced that it follows that the options for evolving complex life are similarly small.

Here, I suppose, is where I have the opportunity to dazzle everyone with my detailed knowledge of the spliceosome. But sadly, I am not an expert on it, and am unlikely to become one by tomorrow.

But what your argument reminds me of is the “irreducible complexity” of the bacterial flagellum. Behe staked a great deal of his argument on saying, in essence, that it was too complex, with too many interrelated moving parts, to have evolved stochastically. Because it was indeed very complex, and on the frontiers of scientific understanding of the day, scientists took a few years to identify a pathway for how it could have evolved piece by piece. But it wasn’t a shocking development when they did find a reasonable explanation for the evolution of the flagellum.

My question to you is, how confident are you that the argument based on the spliceosome you present will follow a different path? And furthermore, is that really what God is to you? An explanation you can whip out every time something looks complicated? A retreating battle on the frontiers of knowledge?

This is what I said back in comment 92: “The overall genetic variation between humans is several million differences. A fairly small number of these differences are being selected upon strongly, a larger proportion are subject to weak selection, and the rest (neutral variations) are just ‘drifting’ with the fortunes of their carriers.”

I’m not sure why you thought I was only referring to neutral mutations?

A very happy Thanksgiving to you too!

1 Like

[quote=“Billcole, post:112, topic:5974”]
While, as you claim, the first protein of the 200-protein complex may have some sequence flexibility, the next several have very little because they need to fit with both shape and charge. [/quote]
This is incorrect and shows a lack of knowledge of the most basic protein biochemistry. Proteins are very, very sticky. They bind to themselves, for example; there are only a few known proteins that are very soluble!

Proteins bind to each other all the time–it’s the default! If you’re going to discuss this in a factual way, you need to be talking about affinities.

“Bond” is a term referring to covalent interactions, which isn’t what you’re talking about at all.

I don’t think you’ve looked carefully enough to draw such a sweeping conclusion. What are the dissociation constants for all of the interactions?
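
For readers unfamiliar with the term, a dissociation constant (Kd) is how binding strength is quantified, and the fraction of a simple single-site receptor that is occupied follows directly from it. A minimal illustration, with made-up concentrations chosen purely for the example:

```python
def fraction_bound(ligand_nM, kd_nM):
    """Single-site equilibrium binding: fraction of receptor occupied."""
    return ligand_nM / (ligand_nM + kd_nM)

# Made-up concentrations, purely for illustration: the same 100 nM partner
# gives very different occupancy for a tight versus a weak interaction.
print(fraction_bound(100, 1))        # Kd = 1 nM  -> ~0.99 occupied
print(fraction_bound(100, 10_000))   # Kd = 10 uM -> ~0.01 occupied
```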

Have you looked at the evidence for the extent of sequence space that has been explored?

[quote]While your comment about neutral mutations is valid it ignores the problem of them becoming fixed in the population. Also being neutral is not enough. They must find function through enormous sequence space.
[/quote]No, they don’t have to. Again, there is evidence you’re missing which shows that very little of the available sequence space has been used.

2 Likes

Hi Lynn, Happy Thanksgiving :slight_smile:
This is a hard concept, and it took me a while to internalize. A sequence offers possibilities so enormous that any random search will get lost. There is software in the public domain you can play with that can help familiarize you with this concept. I would be happy to get you a link if you are interested.
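
To see concretely what “getting lost” means: the chance that one blind random draw is acceptable is simply the ratio of acceptable sequences to total sequences. Here is a tiny sketch with purely hypothetical numbers for both; how large the acceptable set really is, of course, is exactly what is in dispute in this thread.

```python
from math import log10

def random_hit_chance_log10(sequence_length, log10_acceptable):
    """log10 of the chance that one uniform random sequence is acceptable."""
    log10_total = sequence_length * log10(4)   # 4 possible nucleotides per position
    return log10_acceptable - log10_total

# Purely hypothetical numbers for illustration: a 1,500-nucleotide sequence
# (total space ~10^903) with, say, 10^850 acceptable sequences.
print(random_hit_chance_log10(1500, 850))      # about -53: one hit per ~10^53 draws
```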

I disagree that any reasonable argument has been offered against Behe’s flagellum. I know the phrase irreducible complexity is a buzz phrase, but the problem of evolving the flagellum stochastically remains unsolved. I have had the fortune to discuss this issue with Mike Behe and think he is a very solid scientist and a very clear thinker. It takes around 100,000 nucleotides to code for a flagellar motor, so the problem is not as large as the spliceosome, but it is still not evolvable stochastically.

I am open to any explanation that is not stochastic. I have looked into James Shapiro’s ideas and find them interesting but lacking sufficient evidence. I believe that God created the universe based on the design properties of subatomic particles. I enjoy learning how he did this and I truly enjoy the discussion we are having. I think evolution as it is taught ignores some really severe obstacles and needs to be cleaned up based on scientific principles. I think the spliceosome is a real evolutionary mechanism and would be happy to share this with you.

Hi Ben, Happy Thanksgiving :slight_smile:

The cell’s function is based on specific shapes and charges working together. I have done research in the cancer area based on vitamin D and how a sufficient amount in our blood can prevent cancer and autoimmune disease. Vitamin D is a steroid that binds to a protein called the vitamin D receptor (VDR), which is a DNA transcription factor. Vitamin D is made in our skin, then a hydroxyl (OH) group is added in the liver and another in the kidney. If it is missing just one of those OH groups, it will not bind to the VDR. If complex molecules like proteins were not able to discriminate which molecules they bind to, life would not be possible.

Hi Chris:

I think you make an interesting point that the DNA code can be cast as a fine-tuning example. Of course, fine-tuning can get fairly complicated and I think your conclusion, that the code is no more teleological than gravity, etc., would require some work to establish firmly. I’m not against it. It just is not clear to me how to compare teleological-ness.

One reason why this would be complicated is the obvious one: these simply are very different things. Maybe not apples vs oranges, but more like apples vs walnuts. Some of the interesting aspects of the DNA code, I think, would include: It entails disparate molecular entities (e.g., DNA sequences and aminoacyl tRNA synthetases)–change one of them and things get fouled up; and we think that it evolved somehow. But I think you make an interesting point.

Well I wouldn’t make that claim. I’ve worked on plenty of very complex systems, with large volumes of data, where the models fit very well. I think in this age of Big Data, where the value of data is well recognized, there is a mandate to squeeze every drop of value out of data, so we’re searching, in many cases, for the tiniest of signals that can be used for some purpose. That’s great, but it doesn’t mean there are no systems with models that work quite well. I think this is relevant to this thread because, as I’m sure you can appreciate, the argument that “well, we are always swamped with noise, that’s just the nature of science” could be a protectionist move to protect a failed theory.

Well there are plenty of very good approaches that have been developed.

I hope this is in jest, and that you realize this is a tedious, trivial complaint that evolutionists routinely charge against skeptics. Like the Epicureans, evolutionists are making extremely heroic claims which are not at all obvious. Unfortunately, they also casually dismiss their critics with this type of tedious posturing.

Well that alone (and I’m sure you will know this, but just for clarity and for other readers I will spell it out) would fail badly. Reduction of variance, in isolation, won’t do the job, because it ignores parsimony in allowing for open-ended model complexity.

Right, it is this overfitting that must be objectively incorporated into your approach. You might consider approaches such as AIC/BIC. These are implementations of objective, quantitative approaches to comparing variance reduction with model complexity, and finding the model that optimizes the tradeoff between these two, according to a criterion. This is very important because one can always reduce variance in isolation (at the cost of increased complexity), but all you are doing is producing ever more accurate models of noise in the training set, with no explanatory power outside the training set.
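
As a minimal sketch of that tradeoff, here is a toy example that fits polynomials of increasing degree to noisy synthetic data and scores them with the least-squares form of AIC, n * ln(RSS/n) + 2k; the data-generating model and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy synthetic data generated from a simple quadratic relationship.
x = np.linspace(-3, 3, 60)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(scale=1.0, size=x.size)

for degree in range(1, 9):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1                                   # number of fitted parameters
    aic = x.size * np.log(rss / x.size) + 2 * k      # least-squares AIC
    print(f"degree {degree}: RSS = {rss:7.2f}, AIC = {aic:7.2f}")

# RSS keeps shrinking as the degree rises, but AIC typically bottoms out
# near the true model (degree 2 here) and then worsens: the extra
# parameters are fitting noise, not signal.
```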

So here is a good example of the failure of focusing on reducing variance. Thales said all is water, and that reduced variance. Likewise, the flat earth and geocentrism greatly reduce variance. There is no question that one can come up with (and scientists have, over and over) models that reduce variance and yet have no correspondence to reality. And there is nothing wrong with that; people have long since realized such models can be quite useful. But then there are those who will use this to advance truth claims. It is interesting that you would make an appeal to “a reasonable person,” because by any reasonable measure the failures of common descent are so obvious and ubiquitous that there would be no question about its truth status.

So Chris, you’re probably aware of theory protectionism, confirmation bias, and the question of what is normative versus anomalous. There is, and always has been, a tendency for proponents to view contradictory evidence as non-normative and anomalous. And to be clear, good theories do often have such anomalous data which at some point become resolved. So it is very tempting to apply this remedy to every problem. But when we see the ship disappearing over the horizon, our first reaction should not be to ignore it because we’ve reduced the variance, or because it is not random, etc. The problems with common descent are of this nature, and they are everywhere in biology. These are not minor, and they are not rare.