Hi Dr. Hunter,
I hope you and yours enjoy a day of thanksgiving for all the blessings we enjoy. My daughter is visiting this weekend, for which we are truly grateful. When the medical problems seem so large and the pain is so great, it is hard to give thanks. So we are learning to persevere together in this grace of thanksgiving.
You seem to be saying biochemistry could be added to the list of finely tuned parameters in the philosophical fine-tuning argument for God’s existence. I’m fine with that. I would hasten to add, however, that if we make that argument, I think we have to acknowledge that the DNA code is no more teleological than gravity or the speed of light or the mass of a proton. Does that make sense?
Your form of argumentation is interesting to me because I am giving attention in my master’s studies to how to infer causality from complex data. The thorny problem with modeling highly complex systems is that the data always contain a lot of noise that a very simple model struggles to capture and explain. So how do you distinguish the signal from the noise? How do you know that your model is explaining real forces that are truly at work when there is never a lack of exceptions to the rule? Do you just throw up your hands and say, “Whatever happened, Goddidit–that’s all I know”?
The way forward, I suggest (not that I’m the first!), is to adopt this rule of thumb: if applying a model to a significantly large data set significantly reduces the variance (noise), we can reasonably conclude that the model has real explanatory power for those data. The model does not have to completely eliminate the variance; the requirement is just that it has to significantly reduce the variance.
To provide a mathematical example: the principal component analysis (PCA) model-building process is built explicitly around explained variance. I.e., you find the vector that explains the most variance in the data, then recursively find orthogonal vectors that explain the most remaining variance, stopping at the point where further vectors would be fitting noise rather than signal. Ordinary least squares (OLS) regression is another example of model-building that relies on variance reduction for explanatory power: the fitted line is precisely the one that minimizes the residual variance.
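To make the rule of thumb concrete, here is a minimal sketch in Python (entirely made-up data; the numbers are illustrative, not from any real study). A simple OLS fit does not eliminate the variance in the noisy data, but it reduces it substantially, which is the criterion for saying the model has real explanatory power:

```python
# Sketch of the variance-reduction rule of thumb using OLS.
# Hypothetical data: a true linear signal buried in substantial noise.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.normal(0, 3, 500)       # signal plus noise (sd = 3)

slope, intercept = np.polyfit(x, y, 1)    # fit the simple linear model
residuals = y - (slope * x + intercept)

total_var = np.var(y)                     # variance before the model
residual_var = np.var(residuals)          # variance the model leaves unexplained
explained = 1 - residual_var / total_var  # fraction of variance explained (R^2)

print(f"total variance:           {total_var:.2f}")
print(f"residual variance:        {residual_var:.2f}")
print(f"variance explained (R^2): {explained:.2f}")
```

The residual variance stays near the noise variance (about 9 here), so the model never explains everything; but because it removes most of the total variance, we reasonably conclude it is capturing a real relationship in the data.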
To see how this works in practice, let’s look for an example outside the field of biology: the question of whether the global climate is warming. (Please note: I will not apply the rule of thumb to the question of whether climate change is anthropogenic; this example addresses only the question of whether a long-term warming trend has been occurring.) The chairman of the Senate Environment and Public Works Committee steps to the podium in Feb. 2015, throws a snowball, and says, “It’s very, very cold out! Here’s your proof that global warming is a hoax.” Has the honorable Senator Inhofe proven that climate scientists are the captives of their paradigm, and that they cannot account for all the data because of their ideological commitments? Are the climate scientists just another band of flat-earthers and geocentrists? Or is it the honorable Senator who is not accounting for the data, blind to the signal because he is focusing solely on the noise?
The scientific response is quite simple: weather is a highly complex, global system with lots of variance. Consequently, the occurrence of a few bitterly cold, record-low-temperature days in the midst of the overall warmest winter on record to that point does not disprove the long-term, global warming trend.* Those few bitterly cold days are noise, not signal. Look at enough observations across the entire globe since the advent of the industrial revolution, and the warming trend is apparent–even though there are plenty of local, shorter-term observations that seem to contradict the trend. Moose Jaw, Saskatchewan had a cold summer in 2016? It was unusually cold in Coalinga, California? Sure, but the vast majority of cities had the warmest summer (overall) on record. The numerous exceptions do not disprove the far more numerous observations explained by the rule.
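The snowball fallacy is easy to demonstrate with simulated data (to be clear: the series below is synthetic, and the trend and noise magnitudes are assumptions chosen for illustration, not real climate measurements). Even when many individual years come out colder than a year a decade earlier, a fit over the whole record still recovers the underlying trend:

```python
# Synthetic "temperature anomaly" series: a slow warming trend buried in
# large year-to-year noise. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1880, 2021)
trend = 0.008 * (years - years[0])                 # ~1.1 degrees over 140 years
anomaly = trend + rng.normal(0, 0.3, years.size)   # noisy annual variation

# Count the "snowball" years: years colder than the year a decade earlier.
colder_than_decade_ago = int(np.sum(anomaly[10:] < anomaly[:-10]))

# Yet a least-squares fit over the full record still recovers the trend.
slope = np.polyfit(years, anomaly, 1)[0]

print(f"years colder than a decade before: {colder_than_decade_ago}")
print(f"estimated warming per decade:      {10 * slope:.3f}")
```

There are dozens of year-pairs that, taken in isolation, look like cooling; but with enough observations the noise averages out and the signal stands. That is exactly the sense in which local cold snaps fail to count against the long-term trend.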
Similarly, the question of whether common descent (CD) is a valid scientific explanation does not hinge on whether it explains all the data perfectly–i.e., whether it completely explains all the variance. Instead, a reasonable person should be willing to accept the theory of common descent if it significantly reduces (not eliminates) the variance in a wide variety of biological and paleontological data. Does it do so? Yes, overwhelmingly so. It explains patterns in the distribution of endogenous retroviruses, the patterns of distribution in pseudogenes, the patterns of limb morphology in cetaceans since the K-Pg boundary, the appearance of land-lubbin’ tetrapods in the Devonian, the overall congruence between phenotype and genotype (hope I’m saying that correctly), and a mountain of other data.
Thus I see your list of “Darwin’s predictions” as snowballs thrown in the midst of the overall warmest winter on record. They are interesting, worthy of scientific research. But they do not reduce the overall explanatory power of common descent, as the replies of your fellow scientists in this thread indicate.
If you want to convince the scientific community that common descent is truly the equivalent of flat-earthism and geocentrism, you could start by addressing the specific issues commonly cited as strong evidence (e.g., distribution of ERVs and pseudogenes, evolution of limb morphology in cetaceans), rather than exercising your selection bias to find a few examples of small amounts of variance that have not yet been fully explained.
My $.02,
EDIT: I realize that this proposal for inferring causality does not address covariance and other conundrums. Instead of turning this post into a book, I think it sufficient to note that no objection to the theory of evolution has ever turned on the issue of covariance, AFAIK.
* Again, I point out that I am only discussing temperature trends. The question of whether global warming is anthropogenic or not is very interesting, but I am not asking it in this post.