Adam, Eve and Population Genetics: A Reply to Dr. Richard Buggs (Part 1)

I see the conversation has continued in my absence - I’ve been away from email for two days. Hunting, actually. Hauling a rifle up and down steep cliffs seeking elusive deer is remarkably therapeutic and very good exercise…

Steve, thanks for those simulations. Very nice.

Richard - I see you’re still at me for specific references. The paper that specifically undergirds the “pick a few genes” part of the broad-brush summary statement about allele methods is this one, which, like PSMC, is a coalescent-based approach:

Darrel Falk and I also discussed that one way back when I first started writing for BioLogos.

Now, is PSMC an allele-based method? Does it “count” alleles? I guess it’s a bit semantic at this point - but that’s one of the challenges of writing for a non-specialist audience. Explain coalescence, or talk about it in simpler terms? I went with simpler terms. Sorry if it was confusing.


Also, for this paper, note that it gives an Ne of about 18,000, not the usual “10,000” - and it’s based on polymorphic Alu insertions, and thus not on the standard forward nucleotide mutation rate (mu). So, it’s another independent measure of ancestral population sizes that, again, does not (at all) support a bottleneck to two.


Also note this from the paper: they considered, tested, and rejected a strong bottleneck hypothesis in the time frame we are discussing. Note how they cite the allele frequency spectrum for the Alu variant sites:

“The disagreement between the two figures suggests a mild hourglass constriction of human effective size during the last interglacial since 6000 is very different from 18,000. On the other hand our results also deny the hypothesis that there was a severe hourglass contraction in the number of our ancestors in the late middle and upper Pleistocene. If humans were descended from some small group of survivors of a catastrophic loss of population, then the distribution of ascertained Alu polymorphisms would show a preponderance of high frequency insertions (unpublished simulation results). Instead the suggestion is that our ancestors were not part of a world network of gene flow among archaic human populations but were instead effectively a separate species with effective size of 10,000-20,000 throughout the Pleistocene.”


@RichardBuggs,

Really? Why don’t we conclude that the speed of light gets faster and faster with time while we are at it?
6000 years ago, it took 4 hours for you to see the stampeding wildebeests… and naturally, you were run over by them before you could literally see them.

For you to assume that mutation rates were different, you would need evidence for why that would be, yes?

Conversely, if we correlated mutation rates across several types of animals and types of phenotypes, and we saw convergence and general agreement, your theory would be proved wrong.

So… instead of “what-if”-ing scientists to death … maybe you could collect the evidence that shows you something?

George, the quote you are responding to is from Dennis’s book, which I was quoting; I did not pen those words. However, I would point out that there is a considerable literature on the evolution of mutation rates. Michael Lynch has done a lot of work on this. This is very different from the speed of light.

Hi Dennis, good to hear you have had a nice hunting trip. I trust you are enjoying some venison now.

I am indeed.

I cannot underline enough how important this issue is. If you are making unsubstantiated or mistaken claims about science in your book, just lines after saying “given the importance of this question for many Christians”, I don’t think it is just me who views that as quite a serious issue. This is why I am so keen to give you every opportunity to substantiate this passage.

For readers struggling to follow all the different threads in this comment stream, let me remind them that this is the passage from Adam and the Genome that we are discussing:

So far, these are the attempts you have made to substantiate this passage from your book.[quote=“DennisVenema, post:13, topic:37039”]
Some of the citations you’re looking for are just working familiarity with published data sets.
[/quote]

I have argued that the plain meaning of that passage in your book, backed up by your Part I blog post, is that it is not a summary statement and not a reference to PSMC, and that merely referring to datasets without the described analyses is not an adequate citation.

I don’t know how much time you have had to read through all the posts since your hunting trip. To make sure that others would agree with me about the plain meaning of the passage from your book about allele counting methods, I posed three simple questions for others to answer about it.[quote=“RichardBuggs, post:35, topic:37039”]
The questions I ask you are, when you read the extract from Adam and the Genome in bold below, which I show in its context:

1. Does the passage make you think that it is referring to a scientific study where a few genes have been selected and the number of alleles of those genes in current day human populations have been measured?
2. Does the passage make you think that someone has done calculations on these genes on a computer that have indicated that the ancestral population size for humans is around 10,000?
3. Does the passage make you think that this is a different method to the PSMC method?
[/quote]
To which your BioLogos colleague Ted Davis answered:

And another reader also agrees with my reading of the passage: [quote=“tallen_1, post:38, topic:37039”]
As far as I can tell, Dennis makes three claims most relevant to your point: One, that there is a method to estimate minimum ancestral population sizes based upon measurements of number of alleles across various genes present in a population, and that this method indicates a population of approx. 10,000. Two, that an independent method exists that does not rely upon estimates of past mutation rates, involving “linkage disequilibrium,” that converges upon the same ancestral population size of 10,000. Three, that there has been a more recent method that is similar (not identical) that is not independent of mutation rate but also converges on similar results, namely the PSMC method.

Of these three approaches, Dennis’s support for the first seems to derive mostly from calculations on collected data, presumably done by himself or others. The latter two approaches do seem to be something that is published and to which he could (and I think did) direct you. But I’m unclear as to whether the published studies for those two methods explicitly state Dennis’s conclusions or if he is drawing primarily on their collected data for support. I’m perhaps at a bit of a handicap here, as I’m relying on only excerpts of his book on this thread. But to your point, I do believe he describes three distinct methods. I’m eager to hear more about the sort of calculations conducted in these methods and how they may or may not support Dennis’s argument. That is what I am looking forward to in his remaining parts to this topic.
[/quote]
No one, so far, has defended your reading of the passage. This is making me think that your reading of the passage is what you wish you had written, rather than what you actually wrote.

And now in your latest posts you are saying:

My first response was to think “Well, thank you, why didn’t you say so before?” But a quick skim of the paper convinces me that, again, this is not an adequate citation to support the passage we are discussing.

  1. It was published before the Human Genome Project, and before we had “sequenced the DNA of thousands of humans”.
  2. Most of the genes (if we may loosely call a retrotransposon a “gene”) in the paper are monomorphic in the human population studied, and a handful are dimorphic. Thus the maximum number of alleles at any locus in the study is two. The allele counting method as described in your book, and elaborated upon in your Part I blog, explicitly requires higher numbers of alleles.

So again, I don’t think this is an adequate citation.

Dennis, I have to say the conclusion I am coming to is that you made a mistake in your book. If so, I would have huge respect for you if you were willing to admit it, then we could all move on and discuss the interesting science of the other methods you have written about, and the work that Steven Schaffner is doing. We all make mistakes, and those of us active in research are very used to having them forcibly pointed out to us when we get back peer review comments on our manuscripts and grant proposals. It is never much fun to have them pointed out, but part of being a good scientist is being willing to correct our mistakes and move on.


Richard, I agree the passage is not clear. My mistake was trying to shove too much into a short summary in a way that would be accessible. I was over my word count as it was, and things needed to be concise. Obviously that part was concise to the point of confusion.

Like I said, it’s my (in hindsight, poor) attempt at a summary of the field as a whole, for all allele-based methods.

I’m not sure why you continue to insist that that summary excludes PSMC methods. It doesn’t. I was primarily thinking of the 1000 genomes papers, but also all of the older literature prior to the human genome project work. Are you really saying that you know better than I what was in my mind as I wrote that passage?

What should be a bigger issue than my unclear writing is that there is no evidence in the literature that supports your hypothesis, and plenty of evidence that supports my conclusion in the book - which is the whole point that that passage is trying to convey, albeit in too compressed a fashion. Early work and the massive results from the Human Genome Project agree: humans are too diverse to have come from just two people.

So yes, by all means, let’s discuss the science. How about that Alu paper? It specifically tests the hypothesis you’re asking about, which counters your claim that researchers have not considered your hypothesis. Do you think that Alu polymorphisms in present-day humans are compatible with a population bottleneck to two people within the last 200,000 or 300,000 years? Why or why not? My take on it is a resounding “no”. It’s also nice that it’s not based on the nucleotide mutation rate, so it provides a check against papers that have to estimate that. It fits right in with the allele frequency spectrum data for SNPs that Steve is laying out for you. If there had been a human bottleneck, we would see a skewed frequency spectrum for Alu insertions as well as for SNPs.

After we’re done discussing that paper, we can also discuss this other older one if you like (and then the 1000 genomes papers, and so on):

And since we’re on the topic of the pre-genome-project literature, here are two other papers that I consider “older” papers, although they are sort of genome project papers, since they were published based on studies of specific genome regions while the HGP was underway/nearly done. These papers were also part of the older literature that formed my understanding of the data, and they are based on allelic variation in small genome regions. Even though they are older, they remain relevant. One even explicitly says there was no severe bottleneck in the last 500,000 years.

http://www.pnas.org/content/97/21/11354.full.pdf

https://academic.oup.com/mbe/article/18/2/214/1079293

And last, but not least, another early paper that is part of the body of knowledge of the field as a whole.

https://academic.oup.com/mbe/article/10/1/2/1030040

These last three papers are also under the surface of the “pick a few genes” statement, FYI.

Edit: a few more early papers, also part of the discussion. Remember, that summary statement is a gloss of the field using allele diversity methods. There might be other papers I’m not remembering at the moment too, but these at least give a sampling.

http://www.pnas.org/content/96/6/3320.abstract

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1712470/


Dennis,

As a non-specialist here, I’m doing my best to follow along. But I’m hoping you can clear things up.

In my response to Richard above, I noted that you seem to describe three distinct methods for estimating a minimum bottleneck size in the human population over the past (presumably) hundreds of thousands of years, with one of them being independent of mutation rate, and all three converging on the same result. In the back-and-forth on this thread, I’m having a difficult time separating the evidence and claims presented into these three methods; it all muddles together a bit for a non-specialist. I’m also struggling to gain clarity on how some of the studies discussed here connect to the distinct methods in your book, and on whether they explicitly back up your claims for at least the latter two methods, or whether there is some chain of inference and extrapolation that distances us somewhat from the authors’ conclusions in those studies. And perhaps I could better understand exactly how you intended all of that to come out. I’m doing my absolute best here, but am hoping for some help. I presume most other readers with passing familiarity with the material, but no professional expertise, find themselves in a similar quandary. Looking forward to your thoughts. Thanks Dennis!


Yes, it does get a little muddled. Let’s give this a try, and then maybe you can respond for additional clarification if needed.

Early studies on human variation, prior to the Human Genome Project (HGP), were restricted to working with alleles of single “genes” (in reality, generally short stretches of DNA that included a gene but also some DNA around it). These studies depended on the researchers actually going out and sequencing a specific gene in a large number of people, and then making sense of the allele diversity they found for that region (by modelling with the mutation rate, and so on). These are not PSMC methods, but earlier coalescent-based methods.
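
To give a feel for the arithmetic these studies rely on, here is a minimal sketch (my own illustration with invented numbers, not a calculation from any particular paper) of the standard Watterson approach: count the variable sites in a sample of sequences, convert that count into the population mutation parameter theta = 4 x Ne x mu, and solve for Ne.

```python
# Sketch of the Watterson-style calculation behind these single-gene studies.
# All inputs are hypothetical, chosen only to illustrate the method.

def watterson_theta(segregating_sites, sample_size, seq_length):
    """Watterson's estimator of theta = 4*Ne*mu, per site."""
    # Harmonic number a_n = sum_{i=1}^{n-1} 1/i corrects for sample size.
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return segregating_sites / (a_n * seq_length)

theta = watterson_theta(segregating_sites=45, sample_size=60, seq_length=10_000)
mu = 2.5e-8                    # assumed per-site, per-generation mutation rate
Ne = theta / (4 * mu)
print(f"theta per site = {theta:.2e}; implied Ne ~ {Ne:,.0f}")
```

With plausible single-locus inputs like these, the implied Ne lands right around the 10,000 figure that keeps coming up.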

For example, this early paper looks at a few such genes for which data was available at the time and concludes this (from the abstract, my emphases):

“Genetic variation at most loci examined in human populations indicates that the (effective) population size has been approximately 10^4 for the past 1 Myr and that individuals have been genetically united rather tightly. Also suggested is that the population size has never dropped to a few individuals, even in a single generation. These impose important requirements for the hypotheses for the origin of modern humans: a relatively large population size and frequent migration if populations were geographically subdivided. Any hypothesis that assumes a small number of founding individuals throughout the late Pleistocene can be rejected.”

Later pre-HGP papers were in agreement with these results. For example, this paper looked at another gene (the PDHA1 gene) and reports a human effective population size of ~18,000.

Another paper from this timeframe looked at allelic diversity of the beta-globin gene and found it to indicate an ancestral effective population size of ~11,000, concluding that “There is no evidence for an exponential expansion out of a bottlenecked founding population, and an effective population size of approximately 10,000 has been maintained.” They also state that the allelic diversity they are working with cannot be explained by recent population expansion - the alleles are too old to be that recent. (This also fits with the genome-wide allele frequency data we see later from the HGP.)

It is in this timeframe that the Alu paper is also published. It looks at allelic diversity of a different kind. Alu elements are transposons - mobile DNA - and they can generate “alleles” where they insert. Generally, if an Alu is present, that’s an allele, compared to when an Alu is absent (the alternative allele). This paper is also nice because it does not depend on a forward nucleotide substitution rate - i.e. the DNA mutation rate, since Alu alleles are not produced by nucleotide substitutions. This paper concludes that the human effective population size is ~18,000. They also state (my emphases):

“The disagreement between the two figures suggests a mild hourglass constriction of human effective size during the last interglacial since 6000 is very different from 18,000. On the other hand our results also deny the hypothesis that there was a severe hourglass contraction in the number of our ancestors in the late middle and upper Pleistocene. If humans were descended from some small group of survivors of a catastrophic loss of population, then the distribution of ascertained Alu polymorphisms would show a preponderance of high frequency insertions (unpublished simulation results). Instead the suggestion is that our ancestors were not part of a world network of gene flow among archaic human populations but were instead effectively a separate species with effective size of 10,000-20,000 throughout the Pleistocene.”
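
To see why a severe bottleneck would produce the “preponderance of high frequency insertions” the authors describe, here is a toy simulation of my own (simplified assumptions, not the paper’s unpublished simulations): pass insertion polymorphisms through two diploid founders (four sampled chromosomes) and look at the frequencies of the polymorphisms that survive.

```python
# Toy bottleneck-survivorship sketch; my own illustration, not the paper's.
import random

random.seed(1)

def neutral_frequency(fmin=0.01):
    # Rejection-sample a frequency with density proportional to 1/x on
    # [fmin, 1] - a rough stand-in for a neutral frequency spectrum.
    while True:
        x = random.random()
        if x >= fmin and random.random() < fmin / x:
            return x

pre, post = [], []
for _ in range(50_000):
    p = neutral_frequency()
    pre.append(p)
    # Two diploid founders = 4 chromosomes sampled from the old population.
    copies = sum(random.random() < p for _ in range(4))
    if 0 < copies < 4:          # the insertion survives as a polymorphism
        post.append(copies / 4)

high = sum(f >= 0.5 for f in post) / len(post)
print(f"mean frequency before bottleneck: {sum(pre)/len(pre):.2f}")
print(f"mean frequency of surviving polymorphisms: {sum(post)/len(post):.2f}")
print(f"fraction of survivors at 50% frequency or higher: {high:.2f}")
```

In this toy equilibrium population only about 15% of insertions sit at 50% frequency or higher; among bottleneck survivors it is nearly half. That skew toward high-frequency insertions is what the authors tested for and did not observe.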

From here, we start to get into what are really HGP papers but are focused studies on small DNA regions, rather than genome-wide variation. These are still not PSMC studies. For example, this paper looks at a small section of an autosomal chromosome (chromosome 22). They conclude (my emphases):

"The comparable value in non- Africans to that in Africans indicates no severe bottleneck during the evolution of modern non-Africans; however, the possibility of a mild bottleneck cannot be excluded because non-Africans showed considerably fewer variants than Africans. The present and two previous large data sets all show a strong excess of low frequency variants in comparison to that expected from an equilibrium population, indicating a relatively recent population expansion. The mutation rate was estimated to be 1.15 10 9 per nucleotide per year. Estimates of the long-term effective population size Ne by various statistical methods were similar to those in other studies. "

A second paper of this type looked at a region of chromosome 1. They also do a variety of estimates of population size for this region, and they conclude the following (my emphases):

An average estimate of ∼12,600 for the long-term effective population size was obtained using various methods; the estimate was not far from the commonly used value of 10,000. Fu and Li’s tests rejected the assumption of an equilibrium neutral Wright-Fisher population, largely owing to the high proportion of low-frequency variants. The age of the most recent common ancestor of the sequences in our sample was estimated to be more than 1 Myr. Allowing for some unrealistic assumptions in the model, this estimate would still suggest an age of more than 500,000 years, providing further evidence for a genetic history of humans much more ancient than the emergence of modern humans. The fact that many unique variants exist in Europe and Asia also suggests a fairly long genetic history outside of Africa and argues against a complete replacement of all indigenous populations in Europe and Asia by a small Africa stock. Moreover, the ancient genetic history of humans indicates no severe bottleneck during the evolution of humans in the last half million years; otherwise, much of the ancient genetic history would have been lost during a severe bottleneck.

In other words, the alleles we see in the present day cannot be explained as arising after a severe bottleneck in the last 500,000 years.
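
A back-of-the-envelope check (my arithmetic, not the paper’s) shows why TMRCA estimates over a million years fit a long-term Ne of ~10,000 but not a recent severe bottleneck:

```python
# Back-of-envelope TMRCA check; assumed round numbers, not from the paper.
Ne = 10_000    # long-term effective population size
gen = 25       # assumed human generation time, in years

# Under neutrality the expected TMRCA of a large sample is ~4*Ne generations,
# which matches the >1 Myr ages reported for these regions.
print(f"expected TMRCA: ~{4 * Ne * gen:,} years")

# A bottleneck to two people within the last 500,000 years would instead
# force every locus through at most 4 chromosomes at the bottleneck date,
# making genealogies this deep the rare exception rather than the rule.
```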

From here, we’re on to the HGP papers and later the 1000 genomes papers, which extend this sort of analysis to the genome as a whole, show the allele frequency spectrum for a much, much larger dataset, and begin to include PSMC analyses. There’s a lot to summarize in those papers, but the take-home message is that they support the same conclusions as the previous work, now using a massive data set. No one looked at the HGP/1000 genomes work and said it was time to revisit the previous conclusion that a sharp bottleneck had been ruled out. On the contrary - the HGP/1000 genomes papers provide additional evidence that the prior work was solid.

So, there’s a fuller treatment of what is glossed in a few sentences in Adam and the Genome.

I’ll cover linkage disequilibrium (LD) (which is independent of the nucleotide substitution rate) and the single-genome PSMC approaches in my upcoming replies to Richard. Hopefully this gets you (and everyone else) up to speed thus far. Let me know if you’d like clarification on any of the above.


No assumption is needed to deal with variation in mutation rate across the genome. Both the mutation rate and the genetic variation data include contributions from (more or less) the entire genome. It doesn’t matter whether the different parts of the genome contribute uniformly or not – they’re all contributing to both. (Unless you have to worry about multiple mutations at sites, but that’s not the case here.)
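
A minimal numeric illustration of this point, with toy numbers of my own choosing: give every genomic window its own mutation rate, let the expected diversity in each window follow pi = 4 x Ne x mu, and the genome-wide ratio of average diversity to average mutation rate still recovers the true Ne exactly.

```python
# Toy demonstration that regional mutation-rate variation cancels out when
# both diversity and mutation rate are averaged over the same windows.
import random

random.seed(0)
Ne_true = 10_000

# Per-site mutation rates varying several-fold across 1,000 genomic windows.
mus = [random.uniform(0.5e-8, 3e-8) for _ in range(1_000)]
# Expected pairwise diversity in each window: pi = 4 * Ne * mu.
pis = [4 * Ne_true * mu for mu in mus]

Ne_est = (sum(pis) / len(pis)) / (4 * sum(mus) / len(mus))
print(f"recovered Ne = {Ne_est:,.0f}")   # 10,000, despite the rate variation
```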

Variation in mutation rate with time could cause problems, provided the variation were large enough. There are good reasons to think it’s not in fact an issue, though. First, the high-end mutation rate I mentioned (2 x 10^-8) was calculated by comparison with the chimpanzee genome, so it would include any previous higher rate. As I showed with one plot, using that rate does not qualitatively change the situation. Second, there is no biologically plausible mechanism for changing rates for different mutational processes in sync. If one process had changed rate, I would expect to see that reflected in the proportions of different kinds of mutation over different time scales, but I don’t. In particular, the ratio of mutations at CpG sites to mutations at other sites is the same in intra- and inter-species data, even though they are caused by very different processes.
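
The CpG comparison amounts to a simple ratio check. The counts below are invented placeholders, purely to show the shape of the test, not real data:

```python
# Shape of the CpG consistency check; counts are hypothetical placeholders.
def cpg_fraction(cpg, non_cpg):
    """Fraction of observed mutations that occur at CpG sites."""
    return cpg / (cpg + non_cpg)

# Within-species polymorphisms reflect recent mutations; human-chimp
# divergence averages mutations over several million years.
recent = cpg_fraction(cpg=170, non_cpg=830)          # hypothetical counts
long_term = cpg_fraction(cpg=1_700, non_cpg=8_300)   # hypothetical counts

print(f"CpG fraction, recent: {recent:.2f}; long-term: {long_term:.2f}")
# Agreement is what we actually observe; if CpG deamination (a chemical
# process) had changed rate relative to replication errors, the recent and
# long-term fractions would disagree.
```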

Structure can indeed be important, but you have the sign wrong. There is a body of theoretical work on the effect of population structure on detecting bottlenecks, and as far as I know, it all points to structure causing spurious signals of bottlenecks, not erasing the signatures of actual ones. (See this paper, for example, and references therein, in particular John Wakeley’s 1999 paper, in which he concludes that we underestimate the ancestral human population size when we fail to consider population structure and migration.)[quote=“RichardBuggs, post:53, topic:37039”]
3) As far as I can see the model currently also assumes no admixture from outside of Africa.
[/quote]
This is really just another version of (2), I think. In general, a fragmented population (inside or outside Africa) creates two classes of genomic regions: those with genetic ancestry entirely within one population, and those with ancestry from a second population. The former will have coalescence times (and therefore diversities) characteristic of the single population, while the latter will have longer coalescence times and higher diversities; their most recent common ancestor has to lie before the time the populations diverged, or at least far enough back for earlier migration to have carried the lineage into the second population. This signature – many regions with low diversity, some with much higher diversity – is also the signature of a bottleneck, in which some regions have variation that made it through the bottleneck and some don’t.
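
Here is a toy structured-coalescent sketch of my own (not from the literature) that makes the two classes concrete: follow a pair of lineages in a two-deme population, where a same-deme pair can coalesce in any generation but a separated pair must wait for migration to reunite it.

```python
# Toy structured coalescent: two demes of N diploids, per-lineage migration m.
import random

random.seed(2)
N, m = 2_000, 5e-5    # assumed deme size and migration rate

def pair_coalescence_time():
    same_deme, t = True, 0
    while True:
        t += 1
        if same_deme and random.random() < 1 / (2 * N):
            return t
        # With two demes, the pair's configuration only changes when exactly
        # one of the two lineages migrates in a generation.
        if sum(random.random() < m for _ in range(2)) == 1:
            same_deme = not same_deme

times = sorted(pair_coalescence_time() for _ in range(400))
print(f"median: {times[200]:,} generations; "
      f"90th percentile: {times[360]:,} generations")
# Many quick coalescences plus a long tail of ancient ones - the same
# low-diversity/high-diversity mixture that a bottleneck produces.
```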

While positive selection has certainly occurred in the human lineage, its effect on the overall landscape of genetic diversity is actually pretty hard to pick out, and is almost certainly smaller than the effect of background selection (which acts more or less to reduce the effective population size relative to the census size near functional elements in the genome), and smaller still than the effect of neutral drift. There has been debate about whether the effect of positive selection is even detectable.

I assumed that all variants in the founding couple were what they inherited from their ancestors, who were part of a large, constant-sized population. For each simulation, I included as much as was needed to match the predicted and observed data for the higher portion of the allele frequency distribution.
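
For intuition, here is a toy version of that setup (illustrative only, far smaller and cruder than the actual simulations): founder variants start at 1, 2, or 3 copies among the couple’s four chromosomes and then drift in a recovered population.

```python
# Toy founder-couple spectrum; my own sketch, not the simulation code itself.
import numpy as np

rng = np.random.default_rng(3)
N, GENS, LOCI = 1_000, 200, 5_000    # assumed toy sizes, small for speed

# Each surviving founder variant starts at 1, 2, or 3 copies out of the
# founding couple's 4 chromosomes.
p = rng.choice([0.25, 0.5, 0.75], size=LOCI)
for _ in range(GENS):
    p = rng.binomial(2 * N, p) / (2 * N)   # Wright-Fisher binomial drift

seg = p[(p > 0) & (p < 1)]
print(f"{seg.size:,} of {LOCI:,} variants still segregating")
print(f"fraction below 10% frequency: {np.mean(seg < 0.1):.0%}")
# Founder-derived variants stay mostly at intermediate and high frequencies;
# the low-frequency classes that dominate real human data must be refilled
# slowly by new mutations, which is why the low end of the observed spectrum
# is the hard part for a bottleneck scenario to match.
```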


@RichardBuggs,

But both statements are “deus ex machina” objections… you pull this notion out of nowhere… what if things were different a million years ago? Or just 3000 years ago?

I don’t know… what if? When Galileo had the Pope’s representatives look through the telescope, he asked if they could see the imperfections of the lunar sphere: craters and mountains and jagged hillsides on the supposedly pristine lunar plains.

Their answer was that they could just detect an invisible layer of Lunar material covering over these imperfections, to render the Moon, once again, as a divinely perfect object.

Galileo, with his eyes flaring, bends over to look through the telescope again. Then he steps back and concludes: “But gentlemen, I see invisible mountains and craters on top of your invisibly perfect lunar plains!”

Propose the fringiest fringe ideas you would like… but you have to start showing results that would support these and related contentions.

@DennisVenema

I think it is pretty clear that if there is a bottleneck, it happened within Africa, and not in the out-of-Africa diaspora.


Hi Dennis,

Thank you very much, that clears things up for me considerably. I look forward to your future discussion of the linkage disequilibrium and PSMC approaches. Also, does this then leave us with four methods being discussed here? Earlier coalescent-based methods involving (1) allelic diversity via nucleotide polymorphisms (mutation rate dependent) & (2) allelic diversity via Alu insertions (mutation rate independent), as well as (3) linkage disequilibrium (also mutation rate independent) & (4) single-genome PSMC? So (2) & (3) could both be considered independent checks irrespective of mutation rate? Thanks!

You’re welcome. You’ve basically got it, yes, though be aware that there are a variety of related coalescent methods in the papers cited above, and it’s a bit fuzzy to draw sharp distinctions between them. They do, however, use different raw data sets. The PSMC approach in the 1000 genomes papers is also a form of coalescent analysis, as is single-genome PSMC. But you’re right that the LD and Alu analyses are independent of the nucleotide mutation rate. They are also independent of each other. So at a minimum, we’re looking at three independent lines of evidence (if we want to lump all coalescent modelling together). Obviously, population geneticists don’t lump them all together, otherwise they wouldn’t keep improving them and applying them to larger and larger data sets.


Ah, yes. The point is that if the method is powerful enough to exclude a sharp bottleneck in non-Africans, which have an effective population size (Ne) around 1200, then it is amply able to exclude one for African populations which have a much higher Ne.


Hi Steve, @glipsnort, thanks for your responses to the points I raised about your model. I will respond more in due course, but for now I will just focus on the issue of population sub-structure.[quote=“RichardBuggs, post:53, topic:37039”]
2) Also, as far as I can see (Steve, do correct me if I am wrong), this approach depends on the assumption of a single panmictic population over the timespan that is being examined. I think it would be fair to say that there has been substantial population substructure in Africa over that timespan and that this has varied over time. To my mind, this population substructure could well boost the number of alleles at the frequencies of 0.05 to 0.2.

Let me just try to explain that in a way that is a bit more accessible to our readers. I am saying that Steve’s model (at least in its current preliminary form) is making the approximation that there is one single interbreeding population that has been present in Africa throughout history, and that mating is random within that population. However, the actual history is almost certainly very different to this. The population would have been divided into smaller tribal groups which mainly bred within themselves. Within these small populations, some new mutations would have spread to all individuals and reached an allele frequency of 100%. In other tribes these mutations would not have happened at all. Thus if you treated them all as one large population, you would see an allele frequency spectrum that would depend on how many individuals you sampled from each tribe. It is more complicated than this because every so often tribes would meet each other after a long time of separation and interbreed, or one tribe would take over another tribe and subsume it within itself. Such a complex history, over tens or hundreds of thousands of years, would be impossible to reconstruct accurately, but would distort the allele frequency spectrum away from what we would expect from a single population with random mating. It gets even more complicated if we also start including monogamy or polygamy.
[/quote]

I think you will find that John Wakeley’s paper supports the point I am making. My point is only about the approach that you are using: modelling of allele frequency spectra. It is not (for now) about other methods of detecting bottlenecks. The problem for the bottleneck hypothesis that you are posing is the high number of intermediate frequency alleles in present day Africa. I am suggesting that past population structure (post-bottleneck) could explain this. Similarly, Wakeley is seeking in his 1999 paper to explain the fact that in a dataset he is looking at, “nuclear loci show an excess of polymorphic sites segregating at intermediate frequencies (Hey 1997). This is illustrated by Tajima’s (1989) statistic, D, which is positive…”. Wakeley then goes on to explain this pattern as “due to a shift from a more ancient subdivided population to one with less structure today”. As far as I can see, this supports the point I am making: population subdivision can cause intermediate allele frequencies.

In addition, a paper which built on Wakeley’s work shows that “in simulations with low levels of gene flow between demes… Tajima’s D calculated from samples spread among several demes was often significantly positive, as expected for a strongly subdivided population” (Pannell, Evolution 57(5), 2003, pp. 949–961).

Thus I think it is fair to say that strong population sub-structure for a prolonged period at some point subsequent to a bottleneck would shift allele frequency spectra towards having more alleles at intermediate frequencies.

No, that’s exactly the opposite of the problem. Note that in this context “intermediate frequency” means not close to 0% or 100% (look at the Hey paper Wakeley cites if you doubt this). After your tight bottleneck, you’ve still got a substantial number of intermediate frequency alleles, but you’ve lost almost all of the low frequency alleles.

Tajima’s D for the post-bottleneck scenarios is positive – very positive initially, because heterozygosity wasn’t reduced very much by the bottleneck, while rare alleles were wiped out. The real human population, meanwhile, has a modestly negative D, thanks to the excess rare alleles from population expansion. You’re proposing to add a process to the bottleneck scenario that will make D even more positive.
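
For anyone who wants the statistic itself, here is a compact implementation of the textbook formula (Tajima 1989), with invented inputs chosen only to show the sign behaviour we’re discussing:

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's D from sample size n, segregating sites S, and mean pairwise
    differences pi (standard constants from Tajima 1989)."""
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Hypothetical inputs: 50 sequences, 40 segregating sites. S/a1 is ~8.9 here.
print(f"{tajimas_d(n=50, S=40, pi=12.0):+.2f}")  # pi high -> D positive
print(f"{tajimas_d(n=50, S=40, pi=6.0):+.2f}")   # pi low  -> D negative
```

Excess intermediate-frequency variation (pi above S/a1) pushes D positive, as after a bottleneck; excess rare variation (pi below S/a1) pushes it negative, as after an expansion.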


Hi Steve, perhaps I have misunderstood which aspect of your simulations is not fitting the data. I was going on this comment that you posted near the beginning:

Then putting this comment together with this one:

I got the impression that you were saying that the problem with the model in your 100kya_16K simulation was that, between 0.05 and 0.2 on the x-axis, the model does not predict enough variants. This is why I suggested that one could invoke population subdivision over part of the last 100 kyr to increase the numbers of these intermediate-frequency alleles; if this were included, it might be possible to fit the data.

Have you calculated Tajima’s D for the data and simulation in the 100kya_16K chart? How do they compare?

I completely agree with you that the immediate effect of the bottleneck would be a positive Tajima’s D, but I thought your argument was that 100 kyr later the intermediate-frequency alleles derived from the bottleneck had a very small, almost negligible, effect on the allele frequency spectrum, which was by then dominated by new mutations.

I am sure I must be misunderstanding something here.