Different kinds of gaps

What about the first part? Mutations triggered by a cosmic ray that is caused by an event that just happens within a star 100,000 light years away?

What about it?

Is it a possibility in theory?

My subsequent comment about crossing out ‘methodological’ addressed what I think my confusion was.

Yes. Ionizing radiation is capable of causing mutations, no matter the source. If we are being hit by cosmic rays from distant cosmic events then they have a chance of causing a mutation. They also have a chance of chemically altering any number of things in our environment.

It’s a bit like rain. If you look up when it’s raining there are a few drops that followed just the right path to hit you right in the middle of the eyeball.

And the second part, about it just happening in a star?

Sorry, not going down this silly rabbit hole.

:joy: I’ll take that as an “I don’t know”.

@T_aquaticus
Thus far, you have only mentioned differences with respect to morphology.

I’m a bit surprised by your comment. Mainly I’ve mentioned changes in embryonic development, so my then saying that differences in these reflect changes in genomic information is clearly relevant. And as changes in morphology are due to changes in embryonic development, I’m not sure what you’re saying/asking.

@T_aquaticus
You need to factor in the number of neutral mutations that occur within a population. The overall rate of fixation for neutral mutations is about the mutation rate, so 50-100 neutral mutations will fix in the human population per 25 years.

First, for clarification. The rate of generation of neutral mutations (per nucleotide, per generation) is about the same as the rate of fixation of neutral mutations within the population (each new neutral mutation has a fixation probability of about 1/(2N), while about 2Nμ new neutral mutations arise per generation, so the dependence on N cancels). However, this is much lower than the overall mutation rate. Most mutations are detrimental (a term I prefer to deleterious, which I think implies something has been deleted, which may or may not be the case). But I agree with your figure for the number of neutral mutations accumulating in the human population.
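The cancellation behind that equivalence can be checked directly. A minimal sketch (the 1/(2N) diploid fixation probability is the standard neutral-theory result; the values of μ and N are illustrative, not figures from this thread):

```python
# Neutral fixation rate equals the neutral mutation rate, independent of N.
mu = 1e-8   # neutral mutation rate per site per generation (illustrative)
N = 10_000  # diploid population size (illustrative)

new_per_gen = 2 * N * mu   # expected new neutral mutations at a site per generation
p_fix = 1 / (2 * N)        # fixation probability of any one neutral mutation

fixation_rate = new_per_gen * p_fix
assert abs(fixation_rate - mu) < 1e-15  # the factors of 2N cancel
print(fixation_rate)
```

Changing N leaves `fixation_rate` unchanged, which is why the neutral fixation rate tracks the mutation rate rather than population size.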

However, is this going to help explain the proposed diversification of embryonic development (the initial discussion) and/or evolutionary progress? Why are the mutations neutral? Because either (1) they occur in protein-coding sequences but do not change the coded-for amino acid, or (2) they occur in stretches of DNA that do not have another function. This paper Neutral Evolution: The randomness that shapes our DNA | eLife estimates that only 5% of the genome is susceptible to neutral mutations.

So why do you think that these sorts of mutations are going to lead to constructive changes in embryonic development (whether altering it while still converging on a similar phylotypic stage, or making some substantial change to the final morphology)? This paper is ‘explaining’ why most of the millions of nucleotide differences between chimps and humans have arisen through neutral mutations, rather than through mutations with a selective advantage. But how credible is it that mutations that arose and fixed without any function should, in due course, fortuitously (magically?) provide constructive changes in embryonic development?! I think the only reason the idea is given any space at all is because all else fails, and ‘neutral evolution’ is enough of a ‘black box’ to hope that it will get evolution out of its explanatory hole.

@T_aquaticus
That really doesn’t address the issue at hand. The problem is the Sharpshooter fallacy.

No. As I said previously, mutations will probably occur at all locations, but only those that confer resistance without deactivating the enzyme will work. It’s natural selection that does the selecting, not me.

@T_aquaticus
If there are many, many combinations of 2-4 mutations that can produce a beneficial phenotype then it isn’t surprising that we see them evolve, especially if those mutations are beneficial on their own.

The 2-3 mutations I mentioned previously were only for modifying an enzyme. To reprogramme developmental mechanisms you’ve got to generate/modify promoter sequences and/or corresponding transcription factors. The core promoter region is usually at least 100 base pairs long, with several specific sequences within it. For example:

Promoters can be about 100–1000 base pairs long, the sequence of which is highly dependent on the gene and product of transcription, type or class of RNA polymerase recruited to the site, and species of organism. [Wikipedia: Promoter (genetics)]

@T_aquaticus
There are also tons of neutral mutations at different frequencies in any given population, some of which may have reached fixation 10’s of millions of years ago.

It’s you who’s using a sharpshooter fallacy: assuming that sequences that have arisen and spread by chance just happen to be useful when you want them, when by definition they’re just random sequences (unless coding for synonymous codons in protein sequences), whether they were fixed recently or long ago.

@T_aquaticus
Why? With a large enough population you can have a mutation at every available base in a single generation.

That’s what I said:

@Leyton
You have to wait 50 times as long for the mutation(s) to arise that many times. If you want the attempts to run in parallel, then you have to consider how unlikely it is for the beneficial mutation(s) to arise simultaneously. Which depends on the size of the population:

If you roll 6 dice then on average you’ll get one six each time; to get 2 sixes on average you have to roll the six dice twice, or roll 12 dice.
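The dice figures are easy to verify by simulation; a quick sketch (the trial count and seed are arbitrary choices of mine):

```python
import random

random.seed(1)

def expected_sixes(n_dice, trials=100_000):
    """Average number of sixes seen when rolling n_dice fair dice."""
    total = sum(sum(1 for _ in range(n_dice) if random.randint(1, 6) == 6)
                for _ in range(trials))
    return total / trials

print(expected_sixes(6))   # close to 1.0: six dice give one six on average
print(expected_sixes(12))  # close to 2.0: doubling the dice doubles the sixes
```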

@T_aquaticus
The chances of two mutations happening at the same base and moving towards fixation in a relatively short period of time is very unlikely.

That’s not what I was meaning. I was thinking in terms of a potentially favourable mutation arising at one location, potential in the sense that it could have a function if in due course appropriate bases arose in neighbouring locations. But while waiting for that to happen, the first one will be susceptible to being lost.

@Leyton:
Not at all. As indicated above, I assume that mutations will occur throughout the genome. But unless they confer a benefit, they will be lost/randomised.

That would be a bad assumption.

On the contrary, unless you can give a reason why it will not be susceptible to being lost. This is like Dawkins’ METHINKS IT IS LIKE A WEASEL, where, when a correct letter arises, he assumes there’s some way of preventing it from changing to something else while the rest of the sequence arises.
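For readers who haven’t seen it, the program being criticised here works roughly like this (a minimal sketch; the population size and per-letter mutation rate are my own choices, not necessarily Dawkins’ originals). Note that the ‘locking’ being objected to isn’t explicit anywhere in the code: it emerges because the best-scoring offspring is kept each generation, so a matching letter is almost never carried forward in a mutated state:

```python
import random

random.seed(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space

def mutate(s, rate=0.05):
    """Copy s, replacing each character with a random one with probability rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET and generations < 10_000:
    # breed 100 mutant copies and keep the best: cumulative selection
    parent = max((mutate(parent) for _ in range(100)), key=score)
    generations += 1

print(generations, parent)  # reaches the target after a modest number of generations
```

Whether this retention of partial matches is a fair model of selection in real populations is, of course, exactly the point under dispute in this thread.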

Hmmm… Almost 20 years ago, I had this atheist take to calling me a weasel on Facebook in a debate on religion. So I changed my profile picture to a weasel with a rat in its mouth…

I didn’t read much of Dawkins, but I did read his objections to the philosophical arguments, as well as Harris and Hitchens. I liked Hitchens the best as a person. Doug Wilson’s debate with him was excellent, and Harris may be the one I took the most seriously.

In case you’re interested, as it’s early and I might not come back to this: I’m thinking back on those Facebook threads. It was there that I first began to get a sense that an uncaused cause would be unobservable by nature, and it was there that this aged philosopher, Roger, and I had a moment of silence as we came to the understanding that it was rationally possible for an uncaused cause to be unaware of its action.

Some time later, watching science try to close the gap (the LHC, for example), it felt like fleas chasing the end of the cat’s tail. And yet the tail keeps growing longer.

How does it get lost? Once it has spread through the population, either every member with that mutation has to have another mutation at the same location, or all of the carriers have to die off. Both seem to be low-probability events. Evolution happens in populations, not individuals.

So what specific differences are you talking about, and in what sense do you think those specific differences can not be produced by common descent and evolution?

If we are talking about the human genome, then the vast majority of mutations are neutral. Only about 10% of the human genome shows evidence of selection against deleterious mutations. The rest is evolving at a rate consistent with neutral fixation.

That’s not what that paper says. What that paper is saying is that neutral mutations will only evolve neutrally in 5% of the human genome. The paper describes mechanisms that will cause neutral mutations to deviate from what we would consider truly neutral evolution. For example:

Epistasis. Mutations that are initially neutral can interact with future mutations and result in a beneficial change.

Why do you think this involves magic? Epistasis is a known mechanism in biology. It has been observed in the lab. Even Behe thinks it happens. I don’t see why embryonic development would be immune to this mechanism.

The only hole is the one you have invented.

It has nothing to do with what I want. We can see the mutations that separate genomes, and those are the ones responsible for the differences in morphology. We know that neutral mutations can reach fixation, and we also know that beneficial phenotypes that result from the interaction of two or more mutations will be selected for. We can observe that they are useful. There’s no Sharpshooter fallacy here.

You are still pretending as if there is only one possible beneficial mutation in any given genome.

It will also be susceptible to fixation.

The greatest gap here is between God as He is and the biblicist God.

A bigger gap is between the rational possibility of theism and solipsism.

@Daniel_Fisher
Would you mind sharing more specifically or in more detail (or point me to where you’ve already posted): exactly what it was that you studied during your university time that planted doubts in your mind about the scientific plausibility of the theory?

At university I studied biochemistry and maths, which included statistics. I don’t know what prompted me to try to work out the specificity/improbability of a typical protein, but I chose cytochrome c, which at just over 100 amino acids (aas) is fairly short. An obvious question I realised quite early on is: how specific does the sequence need to be? One approach is to look at which aas are the same across all the sequences we know, which for cytochrome c span a wide range of plants and animals. But there’s still the possibility (I would guess likelihood) that a completely different structure, with a completely different aa sequence, could work. How can we quantify this? Maybe, with our increasing ability to model protein folding, and knowledge of how aas work at an enzyme’s active site, one day we’ll be able to answer this. However, with our present knowledge I think there are two strong reasons against an evolutionary origin of biochemical mechanisms.

  1. One is that we now know the structure of some proteins, and how their active sites work. For me, a key example is DNA polymerase III (Poly 3), which is the main enzyme for DNA replication. The main subunit is >1000 aas; the key amino acids for its active site are three aspartates at positions 401, 403 and 555, and there are other aas which are also essential for its function, including for base-pairing and for making sure that the appropriate deoxyribonucleotide is used rather than the ribonucleotide. It is not surprising that the protein needs to be this sort of size, because it wraps around the nascent double helix (like the fingers of a right hand) in such a way that the end/start of the double helix is between the thumb and forefinger, where the active site is; and the forefinger closes off the site when the right nucleotide is in place (but not for a wrong one) and activates the active site. For more info you can see DNA Polymerase III. I expect there are other structures which could do this (although all DNA polymerases we know of have this right-hand conformation), but whatever they are, in order to work they’re going to be of a size and specificity that is totally unrealistic to find opportunistically. Because of its core role in cell function, some may dismiss this as part of the origin of life and not an evolutionary question; but I think that’s a cop-out, and I’m sure soon (if not already) we’ll know enough about proteins that have arisen subsequently to raise similar challenges.

  2. The other aspect is the two-tier complexity of molecular biological systems. It isn’t just that the components have specified complexity, but so do the systems. For example, the Poly 3 I’ve just mentioned cannot replicate DNA by itself: it’s part of the DNA polymerase holoenzyme, which has 10 different proteins, and the overall process requires other proteins to e.g. open up the double helix. Most if not all of the proteins by themselves prima facie present an insuperable challenge to an evolutionary origin. Then most of the components are essential for a functioning system, so all of these need to be available before there is any function on which natural selection could operate. So in the unlikely event of one of them arising by itself, it’s of no value by itself, so it won’t conveniently hang around waiting for the others to evolve. So all of the essential components need to arise together (at the same time and place) opportunistically. I think that to believe that could happen is like believing in fairies at the bottom of the garden (to misquote Dawkins).
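For what it’s worth, the raw size of the sequence space behind the cytochrome c estimate above is easy to reproduce; what fraction of that space is functional is, as the post says, the open question. A sketch (the 104-residue length is approximate):

```python
from math import log10

AA_ALPHABET = 20  # standard amino acids
LENGTH = 104      # approximate length of cytochrome c

total_sequences = AA_ALPHABET ** LENGTH  # exact integer in Python
print(f"~10^{log10(total_sequences):.0f} possible sequences")  # ~10^135
```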

That’s a meaninglessly detailed bull’s eye you’ve just drawn on LUCA on which to build another fallacy.

In this case, it might be nice to mention the vast number of fixed differences between the human and chimpanzee genome (hundreds of millions). Given the mutation rate (10^-8 per base per generation) and population sizes, this cannot be reached in about 600,000 generations (the waiting-time problem). In addition, humans have some genes that are completely unique, without ancestors in the animal kingdom.

Sorry, how does that affect the rational narrative? And show your ‘math’.

How so?

Such as?
