I just quoted from their website where they say specifically that they are trying to predict the impact on the structure and function of the protein. If you think that is not what they really are trying to do, feel free to contact them yourself and then quote them.
Which is why in his book Behe examines what we know about adaptive mutations in nature.
I agree, which is why ID cannot validly conclude that God or gods were the designers. But it can conclude that the designers, whether gods, aliens, or whatever, were purposely arranging things in organisms.
I guess I accidentally deleted the rest of what you said, but yes, physics can be much more precise than SETI or ID in predicting unseen things, even though it can still be mistaken, as with Maxwell’s ether. But that does not mean that SETI or ID cannot produce reasonable belief.
[Someday I’ll figure out how to do this quote thing correctly]
Yes, all with the goal of predicting effects on phenotypes.
Perhaps that is the ultimate goal, but the immediate goal seems to be what they say it is: to predict the impact on the structure and function of the protein.
Yes, it’s a good place to start looking for what researchers use the tool for - to predict the effect of a mutation on phenotype. The tool itself cannot distinguish between a “positive” and a “negative” effect on phenotype, only that phenotype is likely to change. That’s why PolyPhen-2 isn’t the best tool to use for showing what Behe (and now Gauger) are trying to show.
Sure, because they’re assuming that any change to the function of a protein is going to be damaging – which is almost always correct. If you’re dealing with a change that has been selected for, however, that assumption ceases to be valid.
A tool that tries to predict the impact on the structure and function of a protein? That sounds exactly like the kind of tool to show what Behe and Gauger are trying to show.
It is a tool unable to distinguish between “positive” and “negative” effects on phenotype when a mutation occurs. Behe takes the “damaging” output as gospel, even though the tool’s only readout options are “neutral” and “damaging”; it is incapable of predicting a “positive, not damaging” outcome.
Earlier I referenced a review Behe did of existing lab work on adaptive mutations, that showed most were the result of damaged or broken genes. He followed that up in his book with examples from nature showing the same thing. If you think the assumption ceases to be valid, you will need to cite evidence to the contrary.
Behe then cites the knock-out experiment in mice to support his “gospel.”
This doesn’t address my current point. PolyPhen-2 is not a good tool for assessing whether a mutation is “positive” or “negative”, because it does not give output as “positive” or “negative”. Can you agree or disagree with this before moving on to the next point?
Yes, I’ve read his review. I made no comment at all about Behe’s overall claim – I’m just pointing out that this particular piece of software cannot tell you whether a mutation involves loss of function or not.
No. The tool is attempting to predict whether a mutation is neutral, possibly damaging, or probably damaging. Behe’s review showed that the fact that a mutation damages or breaks a gene does not mean that it cannot be adaptive. In fact, most adaptive mutations did damage or break the gene. So when PolyPhen-2 shows that an adaptive mutation is probably damaging to a protein, there is good reason, based on our previous experience, to believe that the protein probably is damaged.
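To make the disagreement about the tool’s readout concrete, here is a minimal sketch of a three-way classifier of the kind being described. The score thresholds below are illustrative placeholders, not PolyPhen-2’s real cutoffs (the actual tool reports a posterior probability with its own calibrated thresholds); the point is only that every possible score maps to one of the three labels named above, and no score can map to a “beneficial” label:

```python
def classify(score: float) -> str:
    """Map a damage score in [0, 1] to one of three labels.

    The cutoffs are made-up placeholders, not PolyPhen-2's real
    thresholds. The structural point: every score lands in
    'neutral', 'possibly damaging', or 'probably damaging';
    there is no 'beneficial' readout at all.
    """
    if score < 0.45:
        return "neutral"
    elif score < 0.90:
        return "possibly damaging"
    else:
        return "probably damaging"

# Sweep the whole score range: only three labels are ever produced.
labels = {classify(s / 100) for s in range(101)}
assert labels == {"neutral", "possibly damaging", "probably damaging"}
```

This is why the dispute keeps circling: whether “probably damaging” is the right reading of an adaptive mutation is exactly the question the output format cannot settle by itself.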
This software is trying to predict whether the mutation is damaging. It could be mistaken. It could be it is not damaging. However, Behe’s review strongly suggests that the software is getting it right.
Ok, so if I ask you a hypothetical “yes” or “no” question and the only answers you are allowed to give are “possibly yes” or “probably yes”, that might be a bit of a problem. Yes or no?
Actually, I tried just answering “no,” but was told that my reply had to have at least 11 characters. So I went crazy. How about this:
No, no, no, no. I think that’s enough characters.
The problem is that no software can actually do that. So what it is actually doing is predicting whether a change will affect the protein’s ability to carry out its present function, based on protein structure and conservation. Under most circumstances, such a change represents a simple loss of function. Under the present circumstances, however, that conclusion does not follow. So even if you keep repeating what PolyPhen-2 is trying to do, it can’t do it here.
As I recall, Behe’s review relies heavily on laboratory studies of mutations in organisms under strong selection. Is that correct? If so, I would not attempt to generalize the findings (if accurate) to naturally occurring selection, which is usually much less intense.
Very well, I’ll amend my statement to this: most unbiased readers could see how the PolyPhen-2 output is problematic for showing what Behe wanted to show.
Now let’s move on to the mouse study - I agree that this one study in a different mammal model can give us a hint of a possible loss of function in the polar bear APOB gene. Sorry, but that’s about as concrete as the evidence gets.
Clearly, the APOB gene has not been knocked out of the polar bear genome, as was the case for the heterozygote mouse genome in the cited study, so this is not really an apt comparison. A significant amount of further research is necessary to substantiate the claim that Behe has tried to make. I’m not saying he’s wrong, I’m just saying his conclusions are premature.
I certainly hope the polar bear APOB gene is not the strongest piece of evidence in his book, but ENV is sure busy writing up articles trying to convince readers that his conclusions are beyond reasonable criticism.
And yet, as Gauger notes, “PolyPhen-2 is designed to detect disruption and does a pretty good job of it. In a study where it was tested against positive and negative controls, PolyPhen-2 had an accuracy of .72, a sensitivity of .8 and a specificity of .7.”
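For readers unfamiliar with those three figures, they come straight from a confusion matrix of the tool’s calls against known positive and negative controls. A minimal sketch of the standard formulas (the counts below are invented for illustration and are not from the study Gauger cites; note that with equal class sizes, sensitivity 0.8 and specificity 0.7 would give accuracy 0.75, so the quoted 0.72 implies an unequal class balance):

```python
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        # fraction of all calls that are correct
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        # fraction of truly damaging variants the tool flags
        "sensitivity": tp / (tp + fn),
        # fraction of truly neutral variants the tool clears
        "specificity": tn / (tn + fp),
    }

# Invented counts: 80 of 100 damaging variants flagged,
# 70 of 100 neutral variants cleared.
m = metrics(tp=80, fp=30, tn=70, fn=20)
print(m)  # accuracy 0.75, sensitivity 0.8, specificity 0.7
```

The key point for the argument: all three numbers measure how well the tool separates “damaging” from “neutral” controls. None of them measures whether it can recognize a beneficial change, which is the category in dispute.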
Right, so let’s investigate nature and see what we can learn. So far, indications are that it mimics lab results.
How many of those controls had been positively selected?