Adam, Eve and Population Genetics: A Reply to Dr. Richard Buggs (Part 1)

Allele frequency spectra (AFS) do not give a solid view of ancient bottlenecks, but they do of recent population structure. Ironically, very recent bottlenecks are not well ascertained by MSMC, PSMC, and LD blocks, but they are clear in AFS. This is covered pretty well here:

So yes, in the ancient past you cannot really infer much from AFS, but that has never been @glipsnort’s claim. His claims are consistent with what I showed with argweaver.

  1. @glipsnort has not made any claims of heliocentric certainty.

  2. He would agree that past about 500 kya, we do not expect allele frequency spectra to detect a bottleneck of a single couple. That is where he places a tentative cutoff. So his results are essentially the same as argweaver's, though the evidence from argweaver is much stronger.

  3. His original reason for delving into AFS was to respond to some young earth creationists who claimed the AFS was inconsistent with a large ancient population and required a single couple origin just 6,000 years ago: (Can someone explain like I'm 5 yo, what's wrong with this refutation of Biologos?).

  4. His response to Ola Hossjer (colleague of @agauger) has been very well measured, and entirely correct. (Glipsnort responds to a critical article) Notice that he does not press a case against ancient bottlenecks, but only for common ancestry of great apes and humans, and against a recent bottleneck. Both those claims are very well supported by the evidence, and he produces analysis of his own all the time.

I know you are not attacking @glipsnort personally, or even leveling an unfair scientific critique. I do think, however, it is important to clarify that he has been a measured and careful voice. In my opinion, he has not drawn incorrect conclusions from the AFS work, nor has he overstated his certainty of those results.

2 Likes

A couple technical updates:

ArgWeaver Does Not Assume Large Population. The computed TMR4A is biased downwards, not upwards, by the prior.

The Correct Mutation Rate. ArgWeaver uses an experimentally confirmed mutation rate.

And, more importantly, this improvement of the estimate…

Correctly Weighting Coalescents. An improved estimate of the TMR4A is about 500 kya.

I finally got around to correcting this part of the code, and recomputing the TMR4A. Here is what we arrive at, a TMR4A of 495 kya, nearly 500 kya. This is a better estimate.

https://discourse-cdn-sjc2.com/standard9/uploads/peacefulscience/original/1X/94c9420257f170b3e5f847aff3363ba3451568a2.png

1 Like

An actual H. erectus (or heidelbergensis) named “Adam” might have been capable of naming “Eve” and the animals, but not much more. Of that much, we are certain …

1 Like

Hi Joshua,

I’m just catching up with this dialogue on a train. I should be marking essays, but will just take a moment to quickly respond to a couple of points.

Thanks, I had not seen that exchange before between Ola Hossjer and @glipsnort. Very interesting. However, it does pre-date the current discussion, and I am keen to hear Steve’s own response to the papers I have referenced on the AFS method. I agree with you that he has been a measured and careful voice in this discussion and I have great respect for his expertise.

But would you agree that in the analyses reported in the paper they have assumed a constant effective population size? If not, how do you understand the footnote to the table that I referenced above?

My train has just arrived at King’s Cross Station - sorry to have to sign off. I greatly appreciate your work on this thread, and the honesty and open-mindedness that you have shown.

1 Like

@Swamidass:

Once you go back beyond 6,000 years, and especially 10,000 years, what’s the point of trying to prove a bottleneck “older than 10,000 years, and hidden in a shadow”?

If it creates a motivation for YECs to preserve their position in an Old Earth Scenario… good… let them work for that.

Our job has been to show that the “Young Earth” part of any Christian’s world view is untenable. The more YECs work to legitimize an Old Earth Scenario, the better it will be for everyone!

1 Like

I hope to get back to this thread within a few days.

2 Likes

Hi Joshua @Swamidass
I am taking a look at the ARGweaver paper more thoroughly. It is very clear that the ratio of mutation rate to recombination rate is critical to the accuracy of the method, as the authors comment in the paper, and as several of their supplementary figures (S4-S8) show. When the mutation rate is high relative to the recombination rate, they have much more power than when it is low. However, I am struggling to see what recombination rate they used or estimated when analysing the 54 human genome sequences. Do you know what recombination rate was used? I notice that on page 8 they comment that ARGweaver has “a slight tendency to underestimate the number of recombinations, particularly at low values of mu/rho” and also that they say that other sources give a low value of mu/rho for human populations. This suggests that in their analysis of the 54 human genomes they may well have estimated a lower rate of recombination than the correct rate. However, I can’t find the figure. Is this something that you have looked at, please? If they have underestimated the recombination rate, how do you think that would affect the TMR4A?
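For a rough sense of the scale of the mu/rho ratio in question, one can use commonly cited genome-wide averages for humans. To be clear, the two rates below are my own illustrative assumptions, not the values used or estimated in the ARGweaver paper:

```python
# Back-of-envelope mu/rho ratio from commonly cited human genome-wide
# averages. These are ASSUMED illustrative values, not the rates used
# (or estimated) in the ARGweaver analysis itself.

MU = 1.2e-8   # assumed mutation rate, per bp per generation
RHO = 1.0e-8  # assumed recombination rate, per bp per generation (~1 cM/Mb)

# A ratio near 1 puts humans in the low-mu/rho regime the authors flag,
# where the method has less power to recover recombinations.
ratio = MU / RHO  # ~1.2
```

On these assumed averages the two rates are of the same order, which is consistent with the paper's comment that other sources give a low mu/rho for human populations.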
best wishes
Richard

2 Likes

Steve, that’s great news. I would also be really glad to hear your view on Joshua’s analyses of the ARGWeaver data, if you have time.

1 Like

@RichardBuggs please excuse the delay in responding to you. I’d normally put a high priority on it, but my father unexpectedly passed away this last Saturday. I will return with haste, but have more pressing matters at the moment. Peace.

@Swamidass,

My deepest sadness to hear this news. Prayers for you and your family! George Brooks

3 Likes

Joshua, I am so sorry to hear this. You and your family are in my thoughts and prayers.

1 Like

Josh, so sorry to hear this. I will be praying for you and your family.

1 Like

Just to come back to points raised by @GJDS and @Jon_Garvey that I did not get a chance to respond to earlier:

I am not sure if this is relevant to your question, and you probably are well aware of this already, but just in case it is useful to the discussion, here are some comments.

There is quite a large literature modelling the population genetic effects of severe bottlenecks on genetic diversity in populations, by, amongst others, Alan Templeton, Brian Charlesworth, Nick Barton and Masatoshi Nei. This was partly motivated by a debate about whether or not founder event bottlenecks can cause speciation (note, the debate was not about whether or not severe bottlenecks can happen - it was about whether they drive evolutionary change). This led to quite a lot of empirical studies on natural populations that were known to have passed through bottlenecks (evidenced by past human observation and records) and on experimental populations. For example, here is a recent paper that experimentally shows that populations do much better after a bottleneck if the founding couple are outbred rather than inbred prior to the bottleneck: Szűcs, M., Melbourne, B. A., Tuff, T., Weiss‐Lehman, C., & Hufbauer, R. A. (2017). Genetic and demographic founder effects have long‐term fitness consequences for colonising populations. Ecology Letters, 20(4), 436-444.

I think it is fair to say that models of the effects of bottlenecks on genetic diversity are well developed and well tested. Of course, there are inherent limits to how well we can test the long term effects of bottlenecks in natural populations or experiments, as we are limited in the number of generations that we can study. I guess this is the major problem that you were both pointing out.

Perhaps the best empirical study available to us on the effects of bottlenecks is the Lenski long-term evolution experiment. Though this has the disadvantage of being on an asexual organism, it has the advantage of having run for 60000 generations. This experiment started with an extreme bottleneck, as each of the 12 parallel populations came from the same bacterial colony. Lenski et al (1991) wrote: “over all the founding populations, there was essentially no genetic variation either within or between populations, excepting only the neutral marker.”

Recently a fantastic study was done by Lenski and his collaborators tracking the genetic changes that have occurred in each of the 12 populations that all originated at the same time with the same bottleneck.
https://www.nature.com/articles/nature24287
The results are quite startling, in that very different dynamics have occurred in each population. Here are the allele frequency trajectories for just three of the populations, from Figure 1 of the paper:


The authors found that the different dynamics were for several reasons, including: changes in mutation rates, periodic selection, and negative frequency dependent selection. The final paragraph of the paper reads:

“Together, our results demonstrate that long-term adaptation to a fixed environment can be characterized by a rich and dynamic set of population genetic processes, in stark contrast to the evolutionary desert expected near a fitness optimum. Rather than relying only on standard models of neutral mutation accumulation and mutation–selection balance in well-adapted populations, these more complex dynamical processes should also be considered and included more broadly when interpreting natural genetic variation.”

I think this perhaps supports the point you were making. It is a very very different system to human populations, but in many ways it should be a simpler system, and therefore easier to model. It underlines the difficulty of going from models to real evolution.

If we were presented with the twelve different Lenski LTEE populations that exist today and asked to reconstruct their past, I very much doubt we would be able to detect the fact that they all went through the same bottleneck 60000 generations ago.

4 Likes

@RichardBuggs Thanks for the reply, Richard.

That’s a truly astonishing graphic, given the tight constraints in the Lenski experiment.

@RichardBuggs,

Those are impressive numbers! And now we actually have a baseline for fuller future discussions when someone inevitably asks “Have we tried to demonstrate evolution in a laboratory?”

But there are those amongst us who are interested in how this lab demonstration applies to a 6,000 year time frame.

So I thought I would take the scale of the three sample results, and “zoom in” as required.

Taking the first 5000 generations as my starting point (and to provide context), I then made an approximate division of the 5000 generations in two, indicating where 2,500 generations would end.

I then divided 2,500 in half, to show where 1,250 generations would end. This was followed by another division in half, showing the end of 625 generations.

If we use the aggressive number of 20 years to a generation, 6,000 years would translate into about 300 generations. So rather than insert yet another confusing red line, I placed a bold red dot “in the middle” of the Zero-to-625 generations area of each chart.

I wonder if anyone would care to comment what these three samples can tell us about a proxy for 6,000 years, or 300 generations, as the time scale of the genetic experiment?
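The scale arithmetic above can be sketched in a few lines, using round-number assumptions (6,000 years, the "aggressive" 20 years per generation, and repeated halving of the first 5,000 generations):

```python
# Round-number sketch of the scale arithmetic above: 6,000 years at an
# assumed 20 years per generation, and repeated halving of the first
# 5,000 generations of the Lenski charts.

YEARS = 6000
YEARS_PER_GENERATION = 20  # the "aggressive" figure used above
generations = YEARS // YEARS_PER_GENERATION  # 300 generations

halvings = [5000]
for _ in range(3):
    halvings.append(halvings[-1] // 2)  # 5000 -> 2500 -> 1250 -> 625

# 300 generations falls well inside the final 0-625 slice of each chart.
```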

Readers, be sure to click on the image to see it at its largest magnification!

Thanks Richard; you have provided a great deal of information and it will take me some time to digest it.

I will respond in a general way at this time (note I am not questioning any technical aspect, or making any criticism of the modelling approach(es)). My interest is in “imagining” how a population of a species that appears to be dispersed over a large area would somehow come together to form a relatively stable population, and then from there undergo further modification to form a bottleneck that may indicate a shrinking number. At least, that is how I envisage the modelling: a population whose mixing leads to genetic diversity, followed by a bottleneck that leads to new genetically relevant species. I wish I could make the comment clearer, but I cannot.

Is the proposed bottleneck (whatever its size) a result of hunters forming communities of thousands, to be followed by some type of shrinking? Is a bottleneck a device required by models of one sort or another? Or am I asking the wrong questions?

“All models are wrong, but some are useful.”

My original point was about how accurate population genetics is over prolonged periods (and how it could be verified). Approximations or neglected factors in models can tend (one hopes) to be self-correcting over time, or to lead to increasing divergence (as in uncalibrated carbon dates), which one may have to live with if no calibration can be found. That was what I mainly had in mind.

But Lenski’s results are astonishing because they appear to show that the neglected factors (“changes in mutation rates, periodic selection, and negative frequency dependent selection”) seem (to me, at least) to result in a chaotic type of divergence over 60K generations.

Would that not suggest that such things cannot be factored in successfully, in order to correct the model over such timeframes, any more than additional factors would enable one to describe the weather a year ago from calculations based on the last three days weather?

I would add that this chaotic divergence is seen in Lenski’s model system, where the environment is entirely stable, reproduction asexual and the original population genetically uniform. To apply it to humans (or anything else) in the wild, one must also consider sexual (and non-random) reproduction, migration that’s far more uncertain after recent discoveries than this time last year (with the separation and rejoining of multiple breeding populations), known (and unknown) hybridization events, and an environment changing in entirely unknown ways.

“Certainty” seems a little hard to come by in all that. Can one even produce useful ranges of possibilities?

1 Like

Hi Jon,

I would point out that @Swamidass often provides his conclusions in terms of a range, such as 300-400 kya. This tells me that he is taking stochastic factors into account, such as the ones Lenski mentions, in communicating his results. Translating his phrasing into a number, I would guess that the error in his estimates might be on the order of ±15%.

To acknowledge some uncertainty in the estimate does not open the door to speculation from the peanut gallery that the numbers might be off by orders of magnitude.

Moreover, if the error were substantial enough to get us from 500kya to 7kya, I am sure that a well-informed skeptic of the modeling such as @RichardBuggs would have brought that to our attention.

Your fellow member of the peanut gallery,
Chris

3 Likes

Hello All,

Going to try to work through some of this in the coming days.

I do not think they assumed constant population size, but I do agree they used that word “assume” imprecisely. What they did was compute an estimate of the trees using a weak prior, which was overwhelmed by the data, by design. This is a standard approach in statistical modeling and is not correctly called an assumption.

This is important because there is no modeling of the population taking place in argweaver; it’s just computing trees. Contrast this with, for example, the ABC method. In the ABC method (e.g. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach) populations are explicitly modeled, and assuming Ne > 10,000 would make detection of lower Ne impossible.

As I explain here: Heliocentric Certainty Against a Bottleneck of Two? - #10 by swamidass - Peaceful Science

  1. As a prior, this is not an assumption, but a starting belief that is meant to be overridden by the data. The only way the ArgWeaver program uses the population size is in computing this prior. Population size is neither simulated nor modeled in the program, except for placing this weak prior on population size. Remember, priors are not assumptions or constraints.

  2. The ArgWeaver output files tell us the strength of the prior vs. the data, and it is just about 5%. That means the model output is dominated 95% by the data, and not by the prior (by design).

  3. The prior distribution for TMR4A is at about 100 kya, but we measured the TMR4A at about 420 kya. That means the data is pulling the estimate upwards from the prior, not downwards.

This last point should end any confusion. To draw an analogy, it’s like we measured the weight of widgets, with the weak starting belief that the average weight of these widgets is 200 lb. After weighing several of them, and taking the prior into account, we compute the average weight is 420 lb. The fact we used a prior could be an argument that the real average is greater than 420 lb, but that is not a plausible argument that the true average is less than 420 lb. The prior, in our case, is biasing the results downwards, not upwards.
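The widget analogy can be sketched numerically. The 5% prior weight matches the figure from the ArgWeaver output files, but the "data mean" of 431.6 lb below is an invented number, chosen so that the combined estimate lands near 420 lb:

```python
# Numerical sketch of the widget analogy: a weak prior (~5% of the total
# weight) combined with the data mean. The 431.6 lb data mean is an
# invented, illustrative number.

def posterior_mean(prior_mean, prior_weight, data_mean):
    """Precision-weighted average of a weak prior and the data."""
    return prior_weight * prior_mean + (1 - prior_weight) * data_mean

estimate = posterior_mean(prior_mean=200.0, prior_weight=0.05, data_mean=431.6)
# estimate ~ 420: the weak 200 lb prior pulls the result slightly BELOW the
# data mean, so the true average is, if anything, above the estimate.
```

The same logic applies to the TMR4A: a low prior mean can only bias the combined estimate downwards, never upwards past what the data alone would give.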

The paper is imprecise in its use of the word “assume,” but the way it is actually used in the code, it is a weak prior, not an assumption.

That means the TMR4A (and all TMRCAs) are determined primarily by the formula D = T * R, where D is mutational distance, T is time, and R is the mutation rate. Those are the key determinants of the TMR4A. The prior has only a tiny impact on this, pushing the estimated T lower (not higher) than what the data indicate.
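As a minimal sketch of inverting D = T * R to get a time estimate: the per-year rate below is an illustrative assumption (roughly a per-generation rate of ~1.2e-8/bp spread over ~25-year generations), not the calibrated rate used in the actual analysis:

```python
# Minimal sketch of T = D / R, inverting the formula D = T * R above.
# The rate is an ASSUMED illustrative value, not the one used in the
# actual ArgWeaver analysis.

MUTATION_RATE = 0.5e-9  # assumed mutations per bp per year

def time_from_distance(d_per_bp):
    """Time T (years) implied by a per-site mutational distance D."""
    return d_per_bp / MUTATION_RATE

# Under this assumed rate, a per-site mutational distance of 2.5e-4
# implies a coalescence time near 500,000 years.
```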

Of course, we could try to redo the analysis without a prior, or with a weaker prior. We would not expect much to change, except for the TMR4A estimate to increase.

Remember, also, as you pointed out…

So we expect high Ne, even if there was a bottleneck. This is a pretty important point. Even if the method assumed Ne is high, there is no reason to doubt the TMR4A we compute from the data. Because Ne is largely decoupled from a single generation bottleneck in the distant past.
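One way to see that decoupling: long-term Ne behaves like a harmonic mean of per-generation population sizes, so a single tiny generation deep in a long history barely moves it. A sketch with assumed, illustrative numbers:

```python
# Sketch of why long-term Ne is largely decoupled from a one-generation
# bottleneck deep in the past: long-term Ne behaves like a harmonic mean
# of per-generation sizes. All numbers here are illustrative assumptions.

def harmonic_mean_ne(sizes):
    """Harmonic mean of per-generation sizes (classic long-term Ne)."""
    return len(sizes) / sum(1.0 / n for n in sizes)

history = [10_000] * 100_000      # 100,000 generations at N = 10,000
bottlenecked = history[:]
bottlenecked[0] = 2               # one ancient generation of a single couple

# harmonic_mean_ne(history) is 10,000; with the one-generation bottleneck
# it only drops to roughly 9,500 -- the ancient event is nearly invisible
# in a long-term Ne estimate.
```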

And I appreciate you bringing the question forward. It has been fun to get to the bottom of this.

More to come when I can.