Adam, Eve and Population Genetics: A Reply to Dr. Richard Buggs (Part 1)

I have to wonder about this pair of sentences:

“… [it] does not mean that if there were such a bottleneck it would not have profound population genetic effects…” < [ Meaning, it could have profound genetic effects ]

“… even if it were undetected by our methods.”

How profound could the effects be if we couldn’t detect them?

What would be an example of such a profound effect?

1 Like

I thought this post required a more detailed follow up…

This is a great hypothesis, but it is not borne out by the data marshaled. Instead, the data confirm my point that the effect on TMRCA is small. At the core of this is a misunderstanding of the statistics at play.

Case in point is S4:


Look at the last column, which shows the true ARG branch length. We see that the variance of the estimate increases, but there is no systematic error shifting estimates upward from the true value. This is a critical point. Remember, we are taking the median of the TMR4A, which depends only on where the distribution is centered, not on its spread. That is why we chose the median, not the mean. This graph is very strong evidence that the recombination inference errors are NOT increasing the TMR4A, just as I hypothesized.

In fact, even when recombinations are under-identified (bottom row, middle column), ARG length is still estimated at about the same value, just with higher variance (bottom row, right column). We chose an estimator (the median) that is insensitive to this variance, however, so this has essentially no impact on TMR4A.

We see the exact same pattern in S7. Even though you write…


The correlation between the true and estimated TMRCA drops a little, from 0.87 to 0.78, but any systematic error is very low. We cannot tell this precisely from this graph, but we can from the prior graph. The medians of the two distributions are going to be very close to each other. Taken with the prior figure, this is evidence that under-inference of recombination is not a major source of error. In fact, these data points show that recombination inference mistakes do not shift the average/median TMRCA estimates. Also, TMRCA has much higher variance than TMR4A, so TMR4A will be even less susceptible to these types of errors.

In the end, these figures validate pretty clearly my alternate hypothesis, which I formed based on knowledge of how this algorithm works.

Remember, I am a computational biologist, and I build models of biological systems like this in my “day job,” so I am working from a substantial foundation of firsthand experience with how these algorithms work. This is not an appeal to authority (trust or distrust me as you like), but an explanation of why I had some confidence in this in the first place. What we see is what I guessed: there is increased variance in the estimate, but no clear evidence that the TMRCA is increased more than it is decreased.

This hypothesis seems false. We can see from figures S4 and S7 that about 50% of the inferred trees are less parsimonious than the correct tree (higher TMRCA) and about 50% are more parsimonious (lower TMRCA). Remember, we do not expect the trees to be precisely correct. They are just estimates, and we hope (with good reason) that the errors one way are largely cancelled by the errors the other way when we aggregate lots of estimates.

Finally, we are aggregating a lot of estimates together to compute the TMR4A across the whole genome. This is important, because by aggregating across 12.5 million trees, we reduce the error. While our estimate in a specific part of the genome might have high error, that error cancels out when we measure across the 12.5 million trees. This is a critical point.

The statistics here substantially increase our confidence in these numbers.

Just about any source of error we can identify will push some of the TMRCA estimates up and some of them down. However, because we are looking at the median of all these estimates, this increase in variance does not affect the accuracy much. A great example of this is mutation rates.

Yes, there is variation in mutation rate. We can measure it in different populations, and we can even detect some differences in the past. In humans, however, these variations are all relatively small. They also do not always push toward higher mutation rates; they push toward lower rates as well. So yes, it is likely that mutation rates were slightly higher in particular populations or at points in the past (let’s say within 2-fold per year), but it is also likely they were slightly lower at times too. For the most part, this just averages out over long periods when looking at the whole human population. That is not 100% true, but this averaging is why variation in mutation rate is not going to widen our confidence interval on TMR4A by much.
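To make the statistical point concrete, here is a minimal, purely illustrative Python sketch (the numbers and the noise model are invented, not taken from the actual analysis): each tree's estimate is perturbed by symmetric mutation-rate jitter and ordinary estimation noise, yet the median across many trees barely moves, while the mean drifts upward with the skew.

```python
import numpy as np

rng = np.random.default_rng(0)

true_tmr4a = 500_000   # hypothetical "true" value, in years
n_trees = 100_000      # stand-in for the many trees across the genome

# Symmetric per-tree mutation-rate jitter: anywhere from half to double the
# average rate, centered (in log space) on no change at all.
rate_factor = np.exp(rng.uniform(np.log(0.5), np.log(2.0), n_trees))

# Ordinary per-tree estimation noise (median 1, so it is also symmetric).
noise = rng.lognormal(mean=0.0, sigma=0.3, size=n_trees)

estimates = true_tmr4a * rate_factor * noise

print(f"spread of single-tree estimates (std): {estimates.std():,.0f}")
print(f"mean across trees:   {estimates.mean():,.0f}")   # pulled upward by the skew
print(f"median across trees: {np.median(estimates):,.0f}")  # stays close to 500,000
```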

Let’s remember why we are here:

I think it’s fair to say at this point that I did rigorously test the bottleneck hypothesis. Right?

Perhaps there will be improved follow-on analysis that refines my estimates, and I encourage that. However, TMR4A is a feature of the data. It is the length (in units of time, computed as mutational length / mutation rate) of the most parsimonious trees of genome-wide human variation. This is not an artifact of a population genetics modeling effort. Rather, it is a way of computing the time required to produce the amount of variation we see in human genetics.

Also, this analysis is very generous to the bottleneck hypothesis. Though I’m not certain yet (and plan to do the studies to find out), bottlenecks going back as far as 800 kya might be inconsistent with the data we see. There are some large unresolved questions about how a bottleneck affects coalescence rate signatures before the median TMR4A, and whether they are detectable.

I hope there can be some agreement on these points, as a conclusion to this portion of the conversation would be valuable. It would be great to move on to more interesting data.

4 Likes

I should also add that the referenced supplementary figures (S4 and S7) appear to be using only 20 sequences. Accuracy improves dramatically as more sequences are added. For the data we used, there were 108 sequences, so we expect better accuracy than the figures shown.

Also, S6 is an important figure that shows the inferred vs. true recombination rates for a simulation using the known distribution of recombination rates across a stretch of the genome.

A few things to note about this.

  1. For most of the genome, the recombination rate is low (corresponding to high u/p), and it only jumps up at recombination hotspots (the places where u/p is low).

  2. The model picks up some of the recombinations, but not all of them. Most of the recombination inference errors are there, in the recombination hotspots, which are confined to a very small proportion of the genome.

  3. At recombination hotspots, the trees will span shorter stretches of the genome than elsewhere. Trees with low bp span are a signature of high recombination rate.

  4. That means that most of the genome has a high u/p and is being estimated accurately; the difficulty is only at recombination hotspots, where u/p is low.

  5. By weighting trees by the number of base pairs they cover, we can dramatically reduce any error that might be introduced by recombination inference. That’s because recombination hotspots are where the vast majority of the errors are, and these hotspots are just a few percent of the genome.

And that is exactly what I did. Rather than reducing the TMR4A estimate, downweighting the error-prone recombination hotspots (by weighting trees by their bp span) increases the TMR4A estimate.
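As a rough illustration of the bp-weighting idea (this is not my actual analysis code, and the per-tree numbers are made up), a weighted median simply lets each tree count in proportion to the number of base pairs it spans:

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` where each value counts in proportion to its weight."""
    order = np.argsort(values)
    values, weights = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(weights)
    return values[np.searchsorted(cum, 0.5 * cum[-1])]

# Hypothetical per-tree results: (TMR4A estimate in years, bp span of the tree).
# The short-span trees stand in for recombination hotspots, where estimates are
# noisier and (on real data) pulled toward the prior.
trees = [(350_000, 120), (480_000, 5_000), (510_000, 8_000),
         (200_000, 80),  (495_000, 6_500), (470_000, 4_200)]

tmr4a = [t for t, _ in trees]
spans = [s for _, s in trees]

print("unweighted median: ", np.median(tmr4a))
print("bp-weighted median:", weighted_median(tmr4a, spans))
```

In this toy data, the two short-span "hotspot" trees drag the unweighted median down, and weighting by bp span removes most of that pull.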

I’m going back over all this to point out that I was already thinking about the effect of recombination and correcting for it in a plausible way. There are always sources of error in any measurement; this is no exception. The fact that there is error, however, does not mean the error is large. Clearly, we are only computing an estimate, but it is a good estimate of TMR4A.

https://discourse-cdn-sjc2.com/standard9/uploads/peacefulscience/original/1X/94c9420257f170b3e5f847aff3363ba3451568a2.png

Of note, correcting for recombination errors by downweighting recombination hotspots increases the TMR4A estimate. It does not decrease it. That’s because trees spanning only short segments of the genome are more influenced by the prior: in short segments there is not enough data/evidence to overwhelm the prior, so it takes over. On longer genome segments, the data are strong enough to disagree with the prior. As we have seen, the prior pulls the TMR4A estimates downwards on real data. So in the end, reducing the effect of recombination hotspots just increases the TMR4A estimate. This is appropriate, because we want the TMR4A estimate that is least dependent on the prior.

This may seem surprising, and in conflict with the S4 and S7 data. It is not. In the S4 and S7 experiments, the prior matched the simulation, and did not pull the results up or down. In the real data, the prior pulls the TMR4A estimates down, and pulls them down most in recombination hotspots, because their bp spans are smallest. So this counterintuitive effect makes sense as an interaction between the prior and recombination hotspots. This error is important to understand, because unlike most types of errors:

  1. it is biased in one direction (towards artificially lowering TMR4A)

  2. its impact is large (about 70 kya, or about 15% relative effect)

Note, also, that I identified this source of error and corrected for it several weeks ago. Even in my first estimate, I disclosed it was going to be an issue.

Before I looked at the prior, however, I guessed wrong on the direction of the effect. I cannot identify any other sources of error likely to have this large an effect. Also, this adjustment was within my +/-20% confidence interval, which shows that even my original estimate was not overstated.

Moreover, I have at this point corrected for it. A better correction might take this further by simply excluding the trees with small bp spans, thereby excluding the regions where the recombination rate is high. This refinement would almost certainly increase the TMR4A estimate. I’m more inclined to improve the estimate with a different program first, though; that would likely have more value in the long run.

4 Likes

Hi Joshua,
Thank you for your patience with me regarding Ne and ARGweaver. I think I have misunderstood something, and I am just having more of a think about this. As I go back over your posts, I am struck by how many times you have made the same point to me, without me really taking it on board:

Sorry I have not taken this on board sooner! Can I try to paraphrase it? What you seem to be saying is that they are simply taking a molecular clock approach to estimating TMRCA: time is the number of differences divided by the mutation rate. They are building phylogenetic trees and dating them.

The reason I have been so preoccupied with Ne is that I thought this was a coalescent analysis, where time to coalescence is proportional to effective population size: the bigger the population size, the longer it takes to get back to an MRCA, even in the absence of mutation. The reason I was thinking that a bottleneck followed by exponential population growth to 7 billion individuals would reduce TMRCA in such an analysis is encapsulated in this figure from Barton et al.’s textbook “Evolution,” published by CSHL (note especially part C):

If ARGweaver is not doing coalescent analysis in this sense, then I can see that Rasmussen et al. are simply taking a molecular clock approach, as you seem to be saying.

I am not sure that you are saying that exactly though, as you also seem to be saying that the Ne value they choose is placing a prior on the TMRCA:

This sounds to me like a coalescent analysis, not a simple phylogeny and molecular clock.
I’m sorry, but I seem to be misunderstanding something here. This is why you have had to repeat yourself so much, and I am sorry it is taking me so long to understand what is going on here.

2 Likes

@RichardBuggs

Are the results of this abstract consistent with your expectations? Or do you think they are making a fundamental error somewhere?

GENETICS journal
2016 Nov; 204(3): 1191–1206.
Published online 2016 Sep 15.

Inferring Past Effective Population Size from Distributions of Coalescent Times
by Lucie Gattepaille, Torsten Günther, and Mattias Jakobsson

Abstract
Inferring and understanding changes in effective population size over time is a major challenge for population genetics. Here we investigate some theoretical properties of random-mating populations with varying size over time.

In particular, we present an exact solution to compute the population size as a function of time, Ne(t), based on distributions of coalescent times of samples of any size. This result reduces the problem of population size inference to a problem of estimating coalescent time distributions.

To illustrate the analytic results, we design a heuristic method using a tree-inference algorithm and investigate simulated and empirical population-genetic data. We investigate the effects of a range of conditions associated with empirical data, for instance number of loci, sample size, mutation rate, and cryptic recombination.

We show that our approach performs well with genomic data (≥ 10,000 loci) and that increasing the sample size from 2 to 10 greatly improves the inference of Ne(t) whereas further increase in sample size results in modest improvements, even under a scenario of exponential growth. We also investigate the impact of recombination and characterize the potential biases in inference of Ne(t). The approach can handle large sample sizes and the computations are fast. We apply our method to human genomes from four populations and reconstruct population size profiles that are coherent with previous findings, including the Out-of-Africa bottleneck. Additionally, we uncover a potential difference in population size between African and non-African populations as early as 400 KYA.

In summary, we provide an analytic relationship between distributions of coalescent times and Ne(t), which can be incorporated into powerful approaches for inferring past population sizes from population-genomic data.

@RichardBuggs thanks for your last post. I think you homed in on the point of confusion. Thanks for elucidating it.

You are right, that is the key point. I’m glad we are getting chance to explain it.

That is exactly right. That is what they are doing, with a few bells and whistles. Essentially, this is exactly what MrBayes does (http://mrbayes.sourceforge.net/), except that unlike MrBayes, ArgWeaver can handle recombination. Technically, it is constructing ARGs (ancestral recombination graphs), not phylogenetic trees. ARGs (of the sort ArgWeaver computes) can be represented as sequential trees along the genome. That’s a convenient representation that is easier for most of us to wrap our heads around, but the actual entity it is constructing is the ARG.

Except, as you are coming to see, this is not a coalescence simulation at all.

To clarify for observers, there are three types of activities/programs relevant here.

  1. Phylogenetic tree inference. Starting from DNA sequences → find the best-fitting phylogenetic tree (or ARG when using recombination) → assign mutations to legs of the tree (or ARG) → use the number of mutations to determine the length of the legs. (See, for example, MrBayes.)

  2. Coalescence simulation. Starting from a known population history → simulated phylogenetic trees (or ARGs when using recombination) → simulated DNA sequences. (see for example ms, msms, and msprime)

  3. Demographic history inference. There are many methods, but one common way starts by computing #1. Starting from DNA sequences → infer phylogenetic trees / ARGs (task #1) → compute the coalescent rate in time windows in the past → Ne is the reciprocal of the coalescent rate. (See, for example, psmc and msmc; a small sketch of this last step follows the list.)
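To make #3 concrete, here is a minimal, hypothetical sketch (it is not psmc or msmc, and the input times are invented) of the last two steps: estimate the coalescent rate within time windows from a set of pairwise coalescence times, then take Ne as the reciprocal of twice that rate, since for a diploid population a pair of lineages coalesces at rate 1 / (2 * Ne) per generation.

```python
import numpy as np

def estimate_ne(coal_times, windows):
    """Crude Ne(t) from pairwise coalescence times (in generations).

    In each window, the coalescent rate is (# coalescences in the window) /
    (total pair-time at risk in the window), and Ne = 1 / (2 * rate).
    """
    times = np.asarray(coal_times)
    ne_per_window = []
    for start, end in windows:
        in_window = (times >= start) & (times < end)
        events = np.sum(in_window)
        # time each pair spends "at risk" of coalescing inside this window
        exposure = np.sum(times[in_window] - start) + np.sum(times >= end) * (end - start)
        if events == 0:
            ne_per_window.append(np.nan)
            continue
        rate = events / exposure
        ne_per_window.append(1.0 / (2.0 * rate))
    return ne_per_window

# Invented input: pairwise coalescence times as they might come out of task #1,
# drawn here as if the true Ne were a constant 10,000.
rng = np.random.default_rng(1)
coal_times = rng.exponential(scale=2 * 10_000, size=50_000)  # mean 2*Ne generations

windows = [(0, 10_000), (10_000, 30_000), (30_000, 80_000)]
print(estimate_ne(coal_times, windows))  # each entry should land near 10,000
```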

It seems that there was some confusion about what ArgWeaver was doing. Some people thought it was doing #2 or #3, but it is actually just doing #1. The confusion arose because it uses a fixed Ne as a parameter, which would seem to make sense only if it were doing #2, and might make its results suspect if it were doing #3. However, ArgWeaver was never designed to do #2 or #3. Instead, it is doing #1.

So what is the Ne for? One of the features of ArgWeaver is that it uses a prior, which is good statistical practice. They were using Ne to tune the shape of the prior, but ultimately this does not have a large effect on the trees. It only matters, in the end, when there are low amounts of data. And as I’ve explained several times, the prior they used actually pushed the TMR4A downwards from what the data show.

How This All Gets Confusing…

In defense of the confused, one of the confusing realities of population genetics is that the same quantities can be expressed in several different units. Often they are all used interchangeably without clear explanation, and it’s really up to the listener to sort out by context what is going on.

At the core of this is the unit we choose to measure the lengths of the legs of a phylogenetic tree. To help explain, let’s go back to a figure from much earlier in the conversation:

https://discourse-cdn-sjc2.com/standard9/uploads/peacefulscience/original/1X/7a137bd8ef95f0a198251ddb8480d0ad6f8ca0d9.jpg

In this figure, the dots are mutations assigned to legs of the tree, the scale bar is in units of time (years in this case), and the leaves of the tree are observed DNA sequences obtained from actual humans. I’ve seen several units of tree length pop up in this conversation and in the literature…

  1. Number of mutations (dots in figure, or D in my formula)
  2. Years (scale bar in figure)
  3. Generations (argweaver)
  4. Coalescence units (number of mutations / sequence length, or D in my formula)

A critical point is that the mutations are observed in the data, and the number along each leg is used to estimate the time. These are all just unit conversions, provided we specify the mutation rate, the length of the sequence, and (sometimes) the generation time. So all these units are essentially interconvertible if we know the mutation rate. If we just express lengths as coalescence units or number of mutations, then they do not even require specifying a mutation rate, and they are a fundamental property of the data itself.

And, as we have discussed, we have reasonable estimates of mutation rates. For example, ArgWeaver uses a generation time of 25 years / generation and a mutation rate of 1.25e-8 / bp / generation. This is equivalent to using a mutation rate of 0.5e-9 / bp / year.

Maximum Likelihood Estimation (MLE) of Lengths

One of the easiest ways to estimate a leg length is with an MLE estimate. Let’s imagine we observe 10 mutations in a 10,000 bp block (or 1e4 bp). For illustration, we can convert this to all the units we’ve mentioned, using the ArgWeaver defaults (a short code sketch of these conversions follows the list).

  1. 1e-3 coalescent units (or 10 mutations / 1e4 bp).
  2. 2,000,000 years (1e-3 coalescent units / 0.5e-9 mutation rate per year)
  3. 80,000 generations (1e-3 coalescent units / 1.25e-8 mutation rate per generation)
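Here is a short sketch of those conversions, using the ArgWeaver defaults and the 10 mutations / 10,000 bp example above (the function and constant names are just for illustration):

```python
MUTATION_RATE_PER_GEN = 1.25e-8   # mutations / bp / generation (ArgWeaver default)
YEARS_PER_GENERATION = 25         # ArgWeaver default
MUTATION_RATE_PER_YEAR = MUTATION_RATE_PER_GEN / YEARS_PER_GENERATION  # 0.5e-9

def leg_length_units(n_mutations, n_bp):
    """Convert an observed mutation count over a bp span into the units discussed."""
    coalescent_units = n_mutations / n_bp                 # mutations per bp
    years = coalescent_units / MUTATION_RATE_PER_YEAR     # D = R * T  =>  T = D / R
    generations = coalescent_units / MUTATION_RATE_PER_GEN
    return coalescent_units, years, generations

print(leg_length_units(10, 10_000))  # (0.001, 2,000,000 years, 80,000 generations)
```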

In actual trees, it is a little more complex, because some branch points have multiple descendant legs. In these cases, we average the lengths computed from the data in each leg if we are building an ultrametric tree (one where the distance from the root to each leaf is the same). In this application, the ultrametric constraint makes a lot of sense (because we all agree these alleles are related), and it gives us a way to pool data together to get a higher-confidence estimate that is not sensitive to population-specific variation in mutation rates.

Nonetheless, these units are so trivially interchangeable that they are not consistently used. While coalescence units are the most germane to the data, they are also the most arcane. So it is very common for programs to use different units to display results more understandably. ArgWeaver and msprime, for example, use “generations.”

Maximum A Posteriori (MAP) Length

So MLE is great when we have lots of data, but it is very unstable when there are only small amounts of data.

  1. For example, what if the number of bp we are looking at is really small, let’s say exactly zero? In this case, what is the mutation rate? 0 mutations / 0 bp is mathematically undefined, and this creates real problems when taking recombination into account, because some trees can end up with 0 bp spans in high-recombination areas.

  2. How about if the number of bp is just 100, and the observed number of mutations is zero? What is the mutation rate then? From the data alone we would say zero, but that’s not true. We know it is low, but it’s not zero.

So how do we deal with these problems? One way is to add a weak prior to the mutation rate computation. There is a whole lot of math involved in doing this formally (using a beta prior), but I’ll show you a mathematically equivalent approach that uses something called pseudocounts.

With pseudocounts we preload the estimate with some fake data, pseudo data. If the mutation rate is 0.5e-9 / year and we think this leg should be about 10,000 years long, we can use this to make our fake data. In this case, we will say the fake data is a 100 bp stretch, where we observed 0.0005 mutations (100 * 10000 * 0.5e-9). This is fake data so we can make fractional observations like this. We choose 100 bp to make this easily overwhelmed by the actual data.

Now, we estimate the mutation rate by looking at the data + pseudo-data, instead of the data alone. If, for example, we are looking at no data at all, we end up with a length of 10,000 years instead of the nasty undefined 0/0 we get from the MLE. Likewise, consider a real tree over a 2,000 bp region where 3 mutations are observed (a code sketch of this calculation follows the list):

  1. We can make an MLE estimate of the length in coalescent units, at 0.0015 (or 3 / 2000), which is equivalent to 3 million years.
  2. We can also make a MAP estimate of its length (using our pseudocounts), at about 0.001429 (or 3.0005 / 2100), which is equivalent to about 2.86 million years.
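Here is a minimal sketch of that calculation, using the same made-up pseudocount (0.0005 mutations spread over a fake 100 bp stretch) and the per-year mutation rate given earlier:

```python
MUTATION_RATE_PER_YEAR = 0.5e-9   # mutations / bp / year (from the ArgWeaver defaults above)

# Weak prior expressed as pseudocounts: a fake 100 bp stretch carrying the number
# of mutations expected if the leg were 10,000 years long.
PSEUDO_BP = 100
PSEUDO_MUTATIONS = PSEUDO_BP * 10_000 * MUTATION_RATE_PER_YEAR   # 0.0005

def mle_length_years(mutations, bp):
    return (mutations / bp) / MUTATION_RATE_PER_YEAR

def map_length_years(mutations, bp):
    return ((mutations + PSEUDO_MUTATIONS) / (bp + PSEUDO_BP)) / MUTATION_RATE_PER_YEAR

print(mle_length_years(3, 2_000))  # ~3,000,000 years (the MLE estimate above)
print(map_length_years(3, 2_000))  # ~2.86 million years: the prior barely moves it
print(map_length_years(0, 0))      # 10,000 years: with no data, the prior takes over
# mle_length_years(0, 0) would raise ZeroDivisionError: the 0/0 problem described above.
```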

There are a few observations to make about this example.

  1. These numbers can be converted into other units as discussed above.

  2. The MLE estimate and MAP estimate are pretty close. The more data there is, the closer they will be.

  3. Even though our prior was 10,000 years, it’s totally overwhelmed by the data in this case, to give an estimate of millions of years.

  4. Only a few mutations are enough to increase the estimate of the length, which is why individual estimates have very high error (they will fall both above and below the true value). We really need to see estimates from across the whole genome. Nonetheless, this example is not quite typical (it is just for illustration): it had 3 mutations in a tiny stretch of 2,000 bp, which is a really high number of mutations.

  5. In the end, we want to choose a prior that will have little impact on the final results, but will help us in the corner cases where the MLE estimate blows up. That is why we use a weak prior (low pseudocounts).

This is just an illustration, designed to be easy to understand without requiring statistical training. It is not precisely how argweaver works, for example, but is a very close theoretical analogy.

ArgWeaver Works Like MAP

ArgWeaver works very much like a MAP estimate, and our median TMR4A estimate is very much like a MAP estimate of TMR4A. How, then, does what ArgWeaver does differ from a plain MAP estimate…

  1. ArgWeaver is not making a single MLE or MAP estimate (as described above). Instead, it is sampling ARGs based on their fit to the data (likelihood) and the prior. This is called Markov chain Monte Carlo (MCMC), and it is closely related to a MAP estimate when a prior is used in sampling (as it is here).

  2. The ArgWeaver prior is not implemented using pseudocounts; instead, it uses an explicit prior distribution. Using a prior distribution (rather than pseudocounts) is the preferred way of doing this, as it is less ad hoc, more flexible, has clear theoretical justification, and makes the starting point of the algorithm explicit up front.

  3. The ArgWeaver prior does not use a fixed time (we used 10,000 years above), but a distribution of times. This is how the Ne comes in: they use the distribution of coalescence times expected from a fixed population of 11,534. I have no idea why they chose such a specific number.

  4. The ArgWeaver prior is on the time of coalescence, not the length of a single leg in the tree. This is a subtle distinction, because the TMR4A is actually the sum of several legs in the tree. The prior ArgWeaver uses says that we expect (before looking at data) that TMR4A time (a sum of leg lengths in the tree) to be at about 100 kya. As implemented, it is a weak prior, and it is overwhelmed by the data. Ultimately, the tree lengths computed by ArgWeaver are not strongly influenced by the prior.

  5. Though I have explained this as actions on trees, ArgWeaver applies this to branch lengths on the ARG (the ancestral recombination graph). This is important because an ARG ends up using more information (longer stretches of sequence) to estimate each length than naively estimating phylogenetic branch lengths independently for each tree. The trees we have been using are an alternative representation of an ARG that is less efficient, but easier to use for many purposes (like estimating TMR4A).

In the end, to ease interpretation, ArgWeaver reports results in “generations,” but it is converting using the equations I’ve already given. So we can easily convert back and forth between any of these units. Most importantly, at its core, we are just using the fundamental formula:

D = R * T

Mutational distance (D) is the product of the mutation rate (R) and time (T). That’s all that is here, and that is what enables the conversions. The fact that ArgWeaver makes the perhaps surprising decision to use Ne to parameterize its weak prior is a non-issue. As I have explained, the prior it uses for TMR4A is lower than the TMR4A the data support, so it is just pulling the estimate down anyway. Getting rid of it would only increase the estimate, and only by a small amount. MAP estimates, moreover, are generally considered superior to MLE estimates, so it makes no sense to doubt this result for using a better statistical technique.

A Prior Is Not an Assumption

It should be clear now why it is incorrect (despite that footnote in the paper) to call a prior an assumption. It is also incorrect to say that ArgWeaver is “simulating” a large population. All it is doing is using a weak prior on the tree lengths, which is a good thing to do: it makes the results more stable.

As an aside, the language of prior and posterior is chosen intentionally. The terms are defined in relation to taking the data into account. In Bayesian analysis, the prior is updated by the data into the posterior. Then the posterior becomes the new prior. We can then look at new data to update it again. So priors, by definition, are not assumptions. They are starting beliefs that are updated and improved as we look at more data. It is just an error to call them assumptions.
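For readers who want to see prior-to-posterior updating in the simplest possible setting, here is a generic beta-binomial sketch (this is not ArgWeaver's model, and the batches of data are invented) showing how a weak prior is updated by data, and how each posterior then becomes the prior for the next batch:

```python
# Beta(a, b) prior over an unknown rate, with conjugate (beta-binomial) updating.
prior = (1.0, 1.0)  # a, b: a weak, nearly uninformative starting belief

batches = [(3, 2000), (7, 5000), (1, 800)]  # hypothetical (events, trials) per batch

a, b = prior
for events, trials in batches:
    # The posterior after seeing a batch becomes the prior for the next batch.
    a, b = a + events, b + (trials - events)
    posterior_mean = a / (a + b)
    print(f"after {trials} more trials: posterior mean rate = {posterior_mean:.5f}")
```

With each batch, the data accumulate and the influence of the weak starting prior shrinks, which is exactly the behavior we want from a weak prior.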


Okay, I know that is a lot, but I figure that some people will find this useful. This is a good illustrative case of the fundamentals of Bayesian analysis. While the rigorous treatment requires a lot of math, this should give enough for most observers to follow what is going on here.

7 Likes

My favorite post ever! Thanks a gazillion for taking the time to explain the analysis so clearly.

1 Like

Agreed. Thanks, @Swamidass!

1 Like

Thank you Joshua @swamidass for such a clear explanation. I am very glad to have got to the bottom of where I was misunderstanding the ARGweaver paper. I will have to have a bit of a think now about what this means for the various critiques I was offering before.

2 Likes

Swami’s posting … link below… is now for the history books:

Posting 481 in Thread 37039 !

Hello all. When I make mistakes, I like to correct them as quickly as possible, even if they do not have an impact on my overall point. I try to do so promptly, but please do keep in mind that this is not my real job. I do this on the side to serve everyone who cares about these questions. So, unfortunately, sometimes it takes me a bit longer than I’d prefer.

Fixing the Prior

For a while, I had been saying that the ArgWeaver prior was using a Ne = 10,000, leading to a prior TMR4A of 100 kya. That turns out not to be precisely correct. Instead…

This was pointed out to me in private a couple of weeks ago, and since then I have been able to confirm it. So some of my earlier figures (and statements) were not precisely correct. The median of the prior on TMR4A is 123 kya, and the median estimate (the posterior) of TMR4A is 495 kya. You can see the figure below.

So the prior on TMR4A is 123 kya, but the data update this to a posterior of 495 kya. Does this change affect any of my key points? Not that I can see. Still, I did want to make the correction. I wish I had had the time to retract it sooner.

About Retractions in Science

One of the counterintuitive things about science is that we respect those who retract their errors quickly. Scientific work is difficult, and we know firsthand that even the best of us make mistakes. Though our instinct is to never admit mistakes, we really reward scientists that admit their mistakes.

As surprising as this may be, I’m not sure BioLogos as an organization is accustomed to this part of scientific culture. It is a very non-intuitive thing. Making a retraction ultimately increases our reputation. I do hope that, given what we are doing here, some thought will be given to retracting statements that have gone beyond the evidence.

I think, for example, that there is a “Part 3” of @DennisVenema’s response to @RichardBuggs scheduled. It’s curious that it has not yet been published. I’m hopeful that figuring out the right way to do this (and perhaps getting it approved) is why it has been delayed. That would be a very good thing, and a great outcome of this conversation. If that is what eventually happens, it’s important to remember that the best scientists make retractions; it is one of the ways we recognize honesty, and it’s something worth respecting.

2 Likes

It’s been delayed because I’ve been working with Charles Cole - the person that @RichardBuggs cited regarding PSMC modelling - to use PSMC models to directly test Richard’s hypothesis as best we can. Charles has been busy, I’ve been busy, the modelling wasn’t straightforward, and it’ll be a bit yet before I’ve got it together. I’ll probably invite @Swamidass and @RichardBuggs to look over the data before putting the post up so we can perhaps put our heads together on it. I’m hoping Cole will also join us here for that discussion. It should be interesting. Intuitively, one would think that PSMC modelling should see something if Ne went to 2 - but testing is better than intuition.

I also think we’ve reached a point in the conversation where the evidence is solidly showing to @RichardBuggs 's satisfaction that we can reasonably exclude a bottleneck to 2 in the last 350,000 years - am I correct there, Richard? If so, that pretty much means that we are in agreement. My certainty level in Adam and the Genome was only to 200,000 years ago, though I’ve said that I’m ok pushing that back to 300,000 plus or minus. This is of course excluding interbreeding with Neanderthals and Denisovans. @Swamidass, as I mentioned to you via PM, I’m really talking about ancestors to present-day sub-saharan Africans over this timeframe. Any species definition is going to break down, especially with hybridization going on. Once we include hybridization, we’re back past 500,000 years as far as I can see.

Thanks again for your really nice exposition of the Argweaver paper. Kudos. I don’t know that I’ll actually do anything on LD - I think the argweaver paper more or less covers that territory better. I think the next part will be the PSMC results and we’ll leave it at that.

8 Likes

@DennisVenema I’ll take this as your version of a retraction.

You made two claims, both of which you have backed off of in this post. That is good news, and should be received as such.

Claim 1: Homo sapiens never go to a single couple

  1. Homo sapiens specifically do not dip down to a single couple in 300 kya to the confidence we have in heliocentrism.

But population size estimates are always of Homo sapiens + all of our other ancestors at the time. The finding that our ancestors do not go to a single couple tells us nothing about Homo sapiens specifically, because Homo sapiens are not our only ancestors past about 50 kya.

The Ecological Fallacy: Homo sapiens go to zero, so why couldn’t they go to two?

Now, as you explain here…

Which seems like a long way of saying that you cannot demonstrate with heliocentric certainty that Homo sapiens never go to a single couple. After all, they go to zero, so they might very well start with a single couple by some definitions.

That is a pretty important concession, as claims of heliocentric certainty really seem to have provoked the whole debate in the first place. It looks like you have backed off that claim, because you cannot defend it.

Claim 2: Our ancestors never go to a single couple after 3 mya.

  1. Our ancestors as a whole do not dip down to a single couple between 300 kya and 3 mya, with very high confidence, though perhaps not as high as for claim 1.

That is a bit of a soft-pedal too, because at times you have made the claim that they never go to a single couple since well before they diverged from chimpanzees. However, now…

It is really excellent that you are doing this. It shows a sensitivity to the question and a desire to engage the data. I think this is a really important effort, and I’ll look forward to seeing the results. Of course, it’s on my to-do list too, so we’ll see who finishes it first.

However, that study is also an admission that you are going on instinct, not settled scientific work. Given that the TMRCA for humans is about 1.8 million years, we just do not expect anything based on coalescence inference, like PSMC or MSMC, to be able to detect a couple more than 1.8 million years ago, which is clearly short of 3 million years, and also well after Homo erectus arises: the first “human” as understood by @agauger. Maybe there will be a surprise here, but it seems that this claim too is ending up unsubstantiated.

The fact that new research is being commissioned is a good thing, but it also makes clear that we are at the frontiers of scientific inquiry, not established scientific findings. Clearly, a mistake was made when instincts about this frontier were presented as settled scientific findings.

Retractions are Good

So, of course, it is a good thing that the original claims are being walked back. It would be better to acknowledge the mistake more clearly, because I think that @RichardBuggs deserves some credit here. Though it took some help from me to make the case, his instincts on the big points appear to have been borne out. Honestly, this is not what I expected. @RichardBuggs deserves some credit for helping us see this more clearly.

What about Trans-species Variation?

To observers, this might seem premature in light of additional data (e.g., HLA haplotypes). However, there has been substantial behind-the-scenes conversation showing that this is not nearly as strong evidence as I first thought. At this point, you may have to just take my word for it. Hopefully, we will get a chance to get into it. Of course, if my assessment (not yet justified here) ends up being wrong, it’s possible that Dennis might gain some ground on claim 2.

I would certainly agree with this. This whole conversation has been very helpful and has increased my understanding of this area, to be sure. Richard’s questions and contributions have been a significant part of that, so credit where credit is due.

I do think Richard is being a bit too skeptical, though. Perhaps he can clarify - he seems to be looking for any possible reason to doubt the evidence - even going back to his final replies to @glipsnort about the allele frequency spectrum. He also seems to be doing the same sort of thing with your discussion of the Argweaver paper - I just don’t see how the Lenski work relates to that at all. I’m all for critical thinking and skepticism, but there comes a point where it looks like a duck and quacks like a duck. I haven’t even yet seen @RichardBuggs say that he’s in agreement with no bottleneck to 2 in the last 300-350KYA - but I might have missed it. Are we in agreement, Richard?

The next thing to consider, in my mind, is how reasonable Richard’s proposed bottleneck is (biologically - not really thinking theology here, but that’s an issue too). A population drop from ~10,000 down to 2 in a single generation followed by exponential population growth - how exactly did this happen? I can’t think of a reasonable biological explanation for this. Our lineage was widely dispersed in Africa at the proposed time of this event - what happened to wipe all of them out but just two? Richard - thoughts?

2 Likes

Are there any published studies which say homo sapiens emerged from a single couple rather than emerging from a population? Thus far I have seen no evidence to contradict the statement that “Homo sapiens specifically do not dip down to a single couple in 300 kya to the confidence we have in heliocentrism”. All I’ve seen in response is “Well maybe it happened but it did so in a special way which left absolutely no evidence and is totally undetectable”. That’s just YEC reasoning, like the idea that God did a big “cleanup” after the flood to remove all the evidence of meteors and volcanoes and comets and other silly ideas.

1 Like

@DennisVenema do you agree with @Jonathan_Burke on this assessment?

If we are so certain that it did not happen, why can no evidence be marshaled in support of the claim?

Actually, my statement was aimed at your claim, not his. You seem to be the one saying “Well maybe a recent homo sapiens bottleneck did happen but it did so in a special way which left absolutely no evidence and is totally undetectable”. In terms of evidence, all the genetic population studies (including your own) seem to demonstrate repeatedly that there’s no evidence for such a bottleneck, even when testing specifically and robustly for one.

Additionally, you keep saying that since the homo sapiens population was zero at one point, it might very well have been a single couple at one point. I don’t understand the reasoning for this.

I don’t understand this either. If the boundary lines between species are fluid, it is populations that get designated a divergent species, not individuals. Using the language analogy, you could never identify the “first couple who spoke French.” It would be a whole population that would be called French speakers that diverged at some point from the ancestral Latin form that became French over time. The only way I can imagine a population of two homo sapiens is if the rest of the homo sapiens (or whichever species we are talking about) got killed off somehow, not as two special individuals emerging from a population of non-homo sapiens.

6 Likes

@DennisVenema I believe the YEC position is that God could have arranged for Adam and Eve to have the diversity they would have had if they were the only survivors of a population of 10,000. We could even assume that God made the pair to genetically emulate the result of surviving a population of 10 million, right?

But does this really change things much? Isn’t there a point in the curve where it really doesn’t matter how big the “hypothetical prior population” is? There is only room for a certain number of alleles… so maybe even 10,000 is well past that point in the curve?

1 Like

That is exactly why I find this fixation on the idea that homo sapiens emerged as a single couple so odd. The language analogy is widely used to explain the non-intuitive idea that homo sapiens did not emerge as a single couple. The only reason I can see for insisting on a single-couple origin of homo sapiens is theological: specifically, to give the YECs a foot in the door and to imply that they can legitimately oppose evolution.

3 Likes