Quite correct, as I understand it: to the degree that randomness is introduced before the expression of said gene, to that same degree the new de novo protein would contain randomness.
Depending on the amount of original data remaining, though, it still may conceivably accomplish its intended purpose.
But I guess I am also curious why taking code from a part of the genome that is believed not to code for proteins, but then (accidentally?) transcribed, would not still be considered “essentially random.”
If I took the bits and bytes of data that encode, say, an audio recording, and “translated” them into their ASCII/text equivalent, I would get something like h^ŒxJ$£ç]dl7§’¢6>hR’6fSu, which, for my purposes, is essentially random. Why would the same not be true of “transcribing” a section of the DNA that was never intended or used to code for proteins? I’m not sure why we would not call that “essentially random.”
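To make the bytes-to-text analogy concrete, here is a toy Python sketch. The “audio” bytes are simply pseudo-random values invented for illustration; the point is only that arbitrary binary data reinterpreted as text yields character salad with no linguistic structure:

```python
# Toy sketch of the analogy above: bytes that encode one kind of data
# (here, stand-in "audio" bytes generated pseudo-randomly) come out as
# meaningless character salad when reinterpreted as text.
import random

random.seed(0)  # reproducible stand-in "recording"
audio_bytes = bytes(random.randrange(256) for _ in range(24))

# latin-1 maps every byte value to some character, so decoding never
# fails -- but the resulting string carries no linguistic meaning.
as_text = audio_bytes.decode("latin-1")
print(repr(as_text))
```

The decode step never raises an error, which mirrors the point of the analogy: you always get *some* output, it just isn’t meaningful.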
In his book “Only a Theory” Kenneth Miller discusses Nylonase and manages to shoot himself in the foot.
He said in the book that experiments have shown that bacteria will consistently and quickly (in a matter of weeks) develop the ability to process nylon under the right conditions. This shows that the adaptation is not particularly difficult to achieve, and it rules out a de novo appearance of a new gene, which should be rare and, on average, have a long waiting time.
Since then it has been found that many bacteria have enzymes with some limited effect on nylon and it is well within the edge of evolution for natural selection to fine tune these to have much greater specificity.
Mutation + Natural selection is a good tinkerer but a lousy innovator.
By the way, this also applies to the Cit+ trait in Lenski’s LTEE. It has now been shown that this trait will also develop consistently and quickly (a matter of months) when the experimental conditions favour it. It took 15 years in the LTEE because the conditions were only moderately favourable for its appearance.
I concur. Examples like that of true evolution, which are observable, repeatable, and testable, I certainly believe and wholeheartedly embrace.
But yes, if some feature like that is set up in the gene sequence, such that given the right conditions the outcome is all but guaranteed, it must mean the said endstate isn’t particularly difficult or improbable to achieve.
Respectfully, I think their argument is rather more nuanced. I recall Meyer’s frequent use of the qualified statement “significant amounts” regarding new functional information. I searched through “Darwin’s Doubt” and found 5 times he used the specific, qualified phrase “significant amounts” in relation to genetic information.
So, to be fair, let’s agree that he isn’t so unsophisticated as to make an argument that no information whatsoever can arise by random chance to be selected by nature.
And philosophically speaking, this seems natural, common sense, and almost self-evident to me. Of course an information system, already functional, may take on a new function by the adjustment, insertion, or deletion of small amounts of information that makes small or minor changes to the information already extant. But this is a far cry from suggesting the entire system could arise de novo or by some kind of radical change. For instance…
I recall reading the following “church bulletin blooper” when I was younger, it went something like:
“The rose on the altar is for the birth of David Jones, the sin of Mr. and Mrs. Robert Jones.”
Clearly this sentence has taken on a “new function” of sorts (in this case, a comical - or terribly judgmental! - function) from the original intended, whose function was simply a statement of fact or observation:
“The rose on the altar is for the birth of David Jones, the son of Mr. and Mrs. Robert Jones.”
But the “new information” needed was simply the substitution of an “i” for an “o”, something clearly not terribly unlikely and not particularly insurmountable.
I could plug this into a random letter generator, and let it mutate individual letters, and check any new words against the dictionary to find such “functional sequences”, and very conceivably get other new sentences that also had “new functions” from “new information”.
I might get one with a more macabre function:
“The nose on the altar is from the birth of David Jones, the son of Mr. and Mrs. Robert Jones.”
Or one making a statement about gay marriage…
“The rose on the altar is for the birth of David Jones, the son of Mr. and Mr. Robert Jones.”
Or one with a certain amount of poetry:
“The rose on the altar is for the mirth of David Jones, the sun of Mr. and Mr. Robert Jones.”
In one sense, sure, all these sentences have a somewhat modified “function,” but really there isn’t a radical difference between any of the sentences themselves. They all have essentially the same structure (sentence diagram), with the small mutations making interesting changes of effect. But the minor adjustment of information merely modifies a pre-existing function, rather than in any sense making a function arise de novo.
But this is a far cry from getting an English sentence of that length, or anything that resembles one, out of a random letter generator de novo.
The first situation is likely. I could run it on my home computer and get similar sentences. The second would not happen de novo with the fastest computer over 15 billion years by random insertions, deletions, etc., without some kind of teleological information being inserted into the algorithm.
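For what it’s worth, the mutate-and-check procedure described above can be sketched in a few lines of Python. The tiny hard-coded word list standing in for a real dictionary, and the simplified sentence, are my own assumptions for illustration only:

```python
# Minimal sketch of the "mutate one letter, keep the result if every word
# is still valid" idea. The small WORDS set is a stand-in for a dictionary.
import random
import string

WORDS = {"the", "rose", "nose", "on", "altar", "is", "for", "from",
         "birth", "mirth", "of", "son", "sun", "sin"}

def mutate_once(sentence, rng):
    """Substitute one randomly chosen letter with a random lowercase letter."""
    chars = list(sentence)
    i = rng.randrange(len(chars))
    if chars[i].isalpha():  # leave spaces untouched
        chars[i] = rng.choice(string.ascii_lowercase)
    return "".join(chars)

def all_words_valid(sentence):
    """A sentence is 'functional' here if every word is in the dictionary."""
    return all(word in WORDS for word in sentence.split())

rng = random.Random(42)
sentence = "the rose on the altar is for the birth of son"
for _ in range(10000):
    candidate = mutate_once(sentence, rng)
    if candidate != sentence and all_words_valid(candidate):
        print(candidate)  # e.g. rose -> nose, birth -> mirth, son -> sun
        break
```

Single-letter neighbours of existing words turn up quickly, which is exactly the “first situation”; nothing in this loop would ever assemble a valid sentence from scratch.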
Hence, for this observer, Meyer’s contention that small amounts of information can adjust, tinker, or help organisms adapt seems perfectly reasonable, as does his contention that large, significant amounts of novel information simply don’t arise this way. The former is analogous to a few point mutations in a duplicated gene that already codes for a protein; the latter, to an entirely new protein arising from entirely new de novo information produced by a frameshift. Thus, respectfully, I don’t think the irony really exists as you see it.
No analogy is perfect, but to be useful an analogy’s conclusions should depend on the points of contact. The conclusions from this analogy seem to depend on what is different between English sentences and genetic sequences.
For instance, the example sentence doesn’t have a single wasted letter. Every word is meaningful. To more closely match DNA, we’d need to insert a few segments of apparent gibberish, a few long strings of one-letter sequences, and some short repeating patterns. Then include some copies of what look like meaningful sentences that have accumulated various levels of changes rendering them more or less intelligible. We should also switch to a language with only four letters, where most combinations of those four letters are valid words, and where most words have multiple valid spellings. This language would also need a flexible grammar in which word order does not matter.
After making changes like this, it becomes far more likely that accumulated changes will surface new meaningful statements. In other words, once the analogy is adjusted to be more like what it pictures, its conclusion no longer follows.
To be sure, you are quite correct. However, I simply intended the analogy to illustrate the two extremes, as illustrated by the nylonase case. ID proponents have no issue with 2 or 3 point mutations changing a gene to make a minor modification, as my sentences did. Only meaning that this is different than their main contention, which is the rejection of wide swaths of new information arising de novo.
We have good evidence that new information does arise de novo: new coding genes arising from non-coding sequence, duplicated genes taking on new functions, and so on. Usually the argument from the ID folks is that it’s “not new” or “not enough”.
One of my favourite examples is the evidence for 2x whole-genome duplication at the base of the vertebrates - there has not been a decent reply to that evidence from ID folks despite the evidence being there for many years.
Appreciate the thoughts, will follow the links and explore more as time permits.
For what it is worth, though, Meyer used the same qualified phrase in “Signature”… e.g., “Undirected materialistic causes have not demonstrated the capacity to generate significant amounts of specified information.”
As far as I know, one protein coding gene arising de novo is enough to qualify as “significant” according to their criteria, but it’s been a while since I looked at their definition.
Certainly many hundreds of paralogs arising through WGD qualifies. The ID argument is that I can’t prove that the designer didn’t do the designing in the middle of that process. That seems like a pretty weak argument to me, but YMMV.
Sir, by the way, I wanted to mention my appreciation for your kind and sincere engagement in these discussions. I realize how busy you must be, so the time spent to answer one inquisitive skeptic is certainly appreciated.
Per the question of “significant information”: one whole gene arising de novo, confirmed as arising by natural processes, would certainly be significant, if I understand the argument at all. That’s why Nylonase was so interesting a potential counterexample. That, to me, would have been unqualified functional information arising from practically nowhere.
My only point with the sentences above is that a few substitutions, or small modifications, while creating new overall effects, are really not in any sense an infusion of “significant” new information, but rather modification (with interesting effect) of previously-existing information. Hence some point mutations altering a pre-existing functional protein would not be in any way problematic to ID as I understand it.
As for WGD, much of the way I personally analyze these things is to consider how I would work as a computer programmer. … I’m interested in the raw “intelligent design” theory itself, so I don’t try to limit myself to “how would an omniscient, omnipotent deity have done it.” Hypothetically, I think it conceivable that God could have used any sort of known or unknown intermediary to accomplish his purposes, one without the benefit of absolute omnipotence or omniscience. An angel, an alien, a super-intelligent shade of the color blue, etc., and for such a one, it is quite conceivable for them to take the kind of “programming shortcuts” that I have used in my own programming.
And certainly in my own programming, I have at many times simply copied whole programs and then rewritten them for entirely new purposes, if the new purpose was even remotely close enough to my new intended program. So a heavily modified gene that was copied from preexisting code is not inconsistent with how I would design a program.
If I understand rightly “orphan genes” are simply a fact. But so would “orphan programs” on my hard drive that I’ve created… some would be literally de novo out of nowhere, (except my mind), some could be heavily modified versions of “whole program duplication”, modified by my intellect (what little there is). But the mere existence of such de novo genes or proteins wouldn’t demonstrate anything “significant” to this skeptic one way or the other. This is entirely consistent with ID as I understand it. If there were an intelligent designer, wanting to develop a new function, then by definition they would have to create some kind of de novo information, whether that came completely from scratch or by heavily editing a copy.
De novo genes or proteins confirmed as unquestionably having arisen through entirely natural or entirely random processes, however, would be very convincing, to me at least. When I first read about the Nylonase frameshift some 6 years ago I recognized the implications, and I was entirely ready to abandon any sympathy I had for ID if that had proven to be the case.
How would you distinguish whether any particular mutation (whether frameshift or chromosomal/genome duplication or just a point mutation) came from entirely natural/random processes or from a super-intelligent shade of the colour blue doing a spot of debugging?
Just speaking as a philosopher and programmer… For some scenarios, I don’t know how it could be demonstrated one way or the other, others with more knowledge would have to help me out there. But some examples would be clear enough…
Data arising from a frameshift, for instance, seems unquestionably accidental and undesigned. If I took any lines of code on a computer and somehow decoded them improperly (they were written in hexadecimal but I translated them using decimal, or the like), the resulting “mistranslated” code clearly arose instantaneously and directly from the essentially random sequences that would result in such cases. In the computer world it would typically come out as complete gobbledygook.
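The reading-frame point can be shown with a short Python sketch. The 15-letter sequence is invented for illustration; the point is that slipping the frame by a single position changes every downstream “codon”:

```python
# Toy illustration of a frameshift: the same letter string read in
# 3-letter "codons" yields a completely different codon series when
# the reading frame slips by one position.
def codons(seq, frame=0):
    """Split seq into 3-letter codons starting at the given offset."""
    return [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]

seq = "ATGGCTTTAGGCTAA"
print(codons(seq, 0))  # ['ATG', 'GCT', 'TTA', 'GGC', 'TAA']
print(codons(seq, 1))  # every codon downstream of the shift is different
```

A one-position slip scrambles the entire downstream message at once, which is why the output of such an event looks effectively random with respect to the original sequence.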
Also, hypothetically, if there were some non-protein-coding section of the DNA that unquestionably had a very particular, discrete function (transcribing sequence-specific RNA for some purpose, or the like), and this code “accidentally” got decoded just as it was, this would be comparable to taking the data that encoded a jpg image but instead translating it into, say, an Excel document. In that case too, the output would be gobbledygook.
In those two cases it would clearly be natural/accidental, and involvement of intentional design would be very difficult for me to accept.
De novo sequences, whether entirely brand new or major modifications to existing code, I couldn’t say, depending on evolutionary theory. They are certainly consistent with a designer, but others here suggest they are also consistent with evolutionary theory; that, however, would require that functional sequences are not particularly rare, for such significantly random variation to be able to produce function. I personally remain very dubious of this, but these seem quite consistent with intentional design to me.
Point mutations could in principle be the result of design (I have occasionally modified programs by only one variable), but from what I know of evolutionary science, these kinds of mutations are very common, regularly occurring, and clearly observed happening through random natural events, and thus need no appeal to a designer whatsoever.
However, if numerous yet very specific point mutations were necessary for a specific function, and probability suggests they could never so arise by random chance, in those cases I would be more sympathetic to the possibility of intentional design.
Do you see how it’s kind of odd, though, that the way you’d be convinced of the power of these generally gradual processes is by seeing cases where they are not gradual and where the results appear immediately without benefit of any incremental or iterative process? It seems like you’d only accept the ability of natural processes to give rise to new features if they did so as directly and immediately as a programmer.
Have you looked for other examples? Have you read reviews about de novo gene birth? What happens when you discover that new genes can be born out of non-coding DNA? No need to answer these questions. They’re meant to point you to the fact that there are other examples and that there are whole review articles about de novo gene birth.
This is why it totally reminds me of that “superhero” from Mystery Men… who could turn invisible. But only when no one was looking.
You are free to have faith in the unseen process, of course. But the simple existence of de novo phenomena is quite consistent with a designer, and only observation or other confirmation that it did in fact arise via unguided natural means can falsify that hypothesis.
But for skeptics like me who have numerous reasons to doubt the evolutionary process, and who see good reasons to consider an alternate hypothesis, it simply doesn’t instill confidence to hear the scientific community say:
Observation? We ain’t got no observation! We don’t need no observation! I don’t have to show you any stinking observation…!
Have you ever seen a raindrop ascend into heaven? We can’t see water molecules go up to the clouds before rain comes back down. I hope that you don’t believe in the godless process of evaporation as there’s a much better explanation - the Intelligent Designer opens windows in the firmament to let the waters above come down to the earth below. And be sure not to try and compromise to just add the Intelligent Designer on to your faith-based science where you become a theistic water cyclist: The Dangers of Theistic Water Cyclism
But it can’t falsify that hypothesis. If the designer can be making the changes we see between two sequences, there’s no reason why the designer isn’t also triggering just the precise frameshift mutations that would accomplish said designer’s purposes. There could never be an observation direct enough to rule out this kind of designer.
If you believe God could use mediate causes such as angels, aliens and super-intelligent colours, why can’t God also use or work through natural processes? Perhaps God made natural processes extremely capable of doing what God wants them to do.