Adam, Eve and Population Genetics: A Reply to Dr. Richard Buggs (Part 1)

@tallen_1

I doubt if he has anything substantive on the lower range … that’s the way an Apologist frames a sentence so he can later say he “intimated at a time frame even lower than 200,000” … which leaves him plenty of dancing room.

I don’t think any of you are going to get anything substantive from Dr. Buggs - - because he won’t even publish a blog posting on any of his agreements here that contradict the YEC party line.

He and @agauger could be a force for “unity” between the ID and BioLogos camps … but it doesn’t look like that is his or her plan!

Despite what any of us write, the only person he will actually direct his thread postings to is @DennisVenema - - because he is the “target” to bring down.


I pray, Dr. Buggs, you may enjoy this season of the true light entering the world, and the peace the angel promised on earth.

But I have to ask: Why others? Why not you, Dr. Buggs? I understand you don’t have the time this weekend, but maybe during the summer?

Best,
Chris Falter


Hi Dennis,

I would very much appreciate it if you would engage with my comments about the coalescence analysis at the end of the Zhao et al (2000) paper, and let me know if you accept my points, and if not, why not? As I have said, I think it is critical that we conclude our discussion of this paper, given that you still seem to hold that it is a citation that you can use to support your case. If you will admit that it is not in fact a suitable citation, then I am happy to move on and look at the other citations you have made in this discussion. You were keen for us to discuss data, and now that we are, it would be a pity for you to walk away from our discussion.

I think I have demonstrated that the Zhao et al (2000) data do not pose me any difficulties. Would you not agree?

But don’t forget I have queried every one of your “converging lines of evidence” in my blog, and you have yet to respond adequately to any of my critiques.

Please could you illustrate this claim? Perhaps this appears to be shoehorning to a non-specialist: [quote=“RichardBuggs, post:197, topic:37039”]
Regarding time, in Zhao et al, the coalescent analysis for this region gave a mean estimate of time to the most recent common ancestor (MRCA) for this region of 1,356,000 years ago; and the 95% confidence interval was between 712,000 and 2,112,000 years ago. This is assuming a constant effective population size of 10,000. To date a bottleneck of two, we do not need to go back to a single MRCA - we need to go back to four haplotypes in two individuals. As you will know, it is the final coalescence events that take the longest time in a coalescent analysis - so much of the time to the MRCA can be after (going backwards, i.e. before in time) the bottleneck. In addition, if we are testing the hypothesis of a bottleneck of two, followed by rapid population expansion, we clearly do not have a constant effective population size of 10,000. The bottleneck will cause coalescence events to occur more rapidly than they would in a constant-sized population. In fact, we would need to use the multiple-merger coalescent to model the coalescence events in the early generations after the bottleneck. All this would reduce the time taken by the coalescence process. As I have said before, I am not putting forward a particular hypothesis about the timing of when a bottleneck could have occurred - I am just querying your assertion that one has never occurred in the human lineage - but it seems to me that a timeframe of low hundreds of thousands of years would be reasonable for this particular region of the genome, and perhaps lower.
[/quote]
If you think that is “shoehorning” let me assure you that I am simply referring to well known features of coalescent analysis. You can find them summarised quite nicely in Figure 15.8 of Barton et al’s textbook “Evolution” published by CSHL. These are just the basic features of a coalescence analysis involving a bottleneck and rapid expansion.
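[Editor's note: to make these basic features concrete, here is a small illustrative sketch - my own, not from either party's analysis - of expected coalescence intervals under a standard neutral (Kingman) coalescent with constant population size. The diploid size, sample size, and 25-year generation time are assumptions chosen purely for illustration.]

```python
# Expected waiting times in a standard neutral (Kingman) coalescent with a
# constant diploid population size N: while k lineages remain, the expected
# time to the next coalescence is 4N / (k * (k - 1)) generations.
# All parameters below are illustrative assumptions, not values from Zhao et al.

N = 10_000      # diploid effective population size (assumed)
GEN_YEARS = 25  # years per generation (assumed)
SAMPLE = 50     # number of sampled lineages (arbitrary)

def expected_interval(k, n=N):
    """Expected number of generations during which exactly k lineages remain."""
    return 4 * n / (k * (k - 1))

total = sum(expected_interval(k) for k in range(2, SAMPLE + 1))
last_three = sum(expected_interval(k) for k in (4, 3, 2))  # from 4 lineages to 1

print(f"E[TMRCA] for the sample:  ~{total * GEN_YEARS:,.0f} years")
print(f"coalescing 4 -> 1 alone:  ~{last_three * GEN_YEARS:,.0f} years")
print(f"fraction in final events: {last_three / total:.0%}")
```

Under these assumptions the final three coalescence events (four lineages down to one) account for roughly three quarters of the whole expected time to the MRCA, which is the feature being appealed to here.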

I have slapped together a description of the code and put it on github:

I haven’t really proofread the description, so caveat emptor.


I’m still not seeing how you can fit everything you need into the timeframe you’ve allowed yourself. If there is a bottleneck to 2, every haplotype in the Zhao (2000) data set has to come from your four ancestral haplotypes. Why did you decide that three mutations was an acceptable deviation from those types? How do you have time for three mutations, each interspersed with drift? Why did some of the intermediates (presumably) drift to an intermediate frequency and then drift down to zero (since we don’t see some of the intermediates)? How do you have time for all of this to plausibly happen? Don’t forget that if you lower Ne to get a faster coalescence time, you also lower the number of forward mutation events that are plausible.


@RichardBuggs:

Since I know you aren’t going to respond to my post, let me just tell you what I see as “shoehorning”:

  1. Reverse engineering the numbers from current diversity to the diversity of 1 mating pair,
  2. according to some very specific sequence of mutations, drift and rates of change,
  3. that would require, at a bare minimum, 200,000 years,
  4. and concluding that you have materially changed the parameters of @DennisVenema’s analysis.

What you are doing is showing what cherry-picked changes are necessary, at the least, to materially change the conclusions of Dennis’ work, without actually making a 6,000-year time frame any more possible.

It would seem that you seek the “spoiler” role, doing whatever you can to throw dust into the air and make @DennisVenema somehow less credible . . . while risking none of your own credibility.

I’m not so sure you are being fair to Dennis or to your audience.


Dennis,

Since Richard’s made such an effort to hold your feet to the fire over the Zhao et al (2000) paper, I’ve done my best to try to understand what’s being discussed there. I’m hoping you can tell me if I’m on the right track or fill in the gaps.

As far as I can tell, here are the relevant conclusions of the authors discussed on this thread, and how they’ve arrived there:

  1. The conclusion that there was no severe bottleneck during the evolution of non-African humans rests on a straightforward measure of nucleotide diversity (π) of 0.082%, not on any coalescent analysis.

  2. The conclusion that the long-term effective population size was around 10,000 relies on Watterson’s and Tajima’s estimates of θ (which are derived through coalescent methods), in conjunction with the estimated mutation rates. Neither of these calculations seems to use TMRCA values in any way, though both rest on coalescent theory.

  3. The TMRCA values provided, and the accompanying analysis, do not give any explicit description of bottlenecks or effective population sizes. In the second installment of your response you mention that TMRCA values can be used to detect potential bottlenecks; it just looks like the authors didn’t take that approach in this particular paper.

Also, I’d picked up (if I understand this right) that long-term effective population sizes are harmonic means of a range of idealized population sizes across a period of time, which means that acute minimum population sizes could in theory be substantially lower than these estimates.
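[Editor's note: that harmonic-mean property is easy to demonstrate. The sketch below is my own illustration with invented population-size histories, not anything from the paper.]

```python
# Long-term effective population size is (roughly) the harmonic mean of the
# per-generation sizes, which is dominated by the smallest values.
# The two population-size histories below are invented for illustration.

def harmonic_mean(sizes):
    return len(sizes) / sum(1.0 / n for n in sizes)

steady = [10_000] * 10_000                 # 10,000 generations at N = 10,000
with_crash = [10_000] * 9_999 + [2]        # same, but one generation at N = 2

print(harmonic_mean(steady))       # stays at 10,000
print(harmonic_mean(with_crash))   # ~6,667: one extreme generation drags the
                                   # long-term estimate down by about a third
```

So a single acute bottleneck can sit far below the long-term estimate while only partially depressing it, which is the point being made.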

So it looks like, if one relies on these authors’ conclusions, one cannot say that TMRCA values, or any coalescent methods, were used by them to detect or reject potential bottlenecks; only the measure of genetic diversity accomplished that, which may be a weaker argument. Reasonable coalescent methods were used to estimate long-term effective population sizes, but that on its own doesn’t map exactly onto Richard’s argument unless one does further analysis of the raw data, or further extrapolation from the underlying theory, of the kind someone such as yourself might carry out.

And as things now stand, you’re both analyzing the raw sequence data and coming to differing conclusions from it, yet thus far neither of you has performed a rigorous analysis?

Anyway, that’s what I’ve been able to gather so far, and it maybe goes to some of the confusion or frustration surrounding this paper on this thread. Can you let me know if I’m on track here, or clear some things up for me if I’m not? Thanks Dennis!


Correct. Note that the measure for the African sequences would also preclude a severe bottleneck. In general, what I’m trying to figure out from Richard is why he disagrees with this conclusion. This has sent us into the weeds of the data, as it were - but we’ve mostly been discussing whether the haplotypes we see in the data set could be reasonably fit into four ancestral haplotypes within human history. One of the things that’s relevant here is what the population size is after the proposed bottleneck to 2 (as well as the mutation rate). If, as I understand it, Richard wants an exponential population increase after the bottleneck to minimize loss of heterozygosity, then presumably the population would bounce back up to Ne ~ 10,000 in short order - but I’m not sure what Richard is thinking. I’m also not sure if he wants to use 1.1x10^-8 or the lower mutation rate in the paper itself, which is estimated from comparisons to chimpanzees and orang-utans. That estimate depends on these species sharing common ancestral populations. It’s an issue we haven’t yet broached, but I don’t know for sure that @RichardBuggs accepts common ancestry for humans and other species. Richard was widely quoted some years ago for claiming that human-chimpanzee genome identity would eventually be recognized as far lower than the accepted value. I suspect - and this is an inference, so @RichardBuggs can correct me if I’m off base - that this claim was intended to cast doubt on common ancestry. Perhaps Richard can clarify whether he’s ok with common ancestry, and thus with the estimate of the mutation rate for this region of the genome that is in the paper.

Yes - coalescent models can use TMRCA clustering to reveal population size changes. PSMC and related methods are an example. You’re right that this paper doesn’t use that type of approach. Part 3 - which is nearly ready - will get into PSMC modelling in depth.

Also correct. This is partially why we’re in the weeds of the data. A bottleneck to 2 will throw the entire population into extreme linkage disequilibrium (LD) - all the surviving alleles will be in one of four possible patterns, which will be very common thereafter. (I will discuss LD in part 4.) Looking at the haplotype data in this paper I see more than 4 types, and Richard sees 4. That’s where the conversation has been of late. I’m trying to figure out why Richard set three mutations from a starting haplotype as a cutoff. Three mutations would need to occur in this way:

  1. Wait for a rare mutation.
  2. Wait for drift to make the first rare mutation reasonably common, such that a second rare mutation would be probable on one of the copies of the first.
  3. The second rare mutation occurs.
  4. Wait for drift to make this new double-mutant variant reasonably common, such that a third rare mutation would be probable on one of the copies with two mutations.
  5. The third rare mutation occurs.
  6. Wait for drift to make the new triple mutant common enough to be picked up in the limited sample size that the paper uses.

Also, some of the intermediate forms would have to later be lost from the population, even though they were once common enough to allow a rare second or third mutation to happen on the previously mutated haplotype. Waiting for mutations takes time. Waiting for drift takes time. I don’t think there’s enough time for all that. Richard disagrees.
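[Editor's note: to give a feel for the "wait for drift" steps in that sequence, here is a toy Wright-Fisher simulation - my own sketch with invented parameters, not Dennis's or Richard's analysis. It follows a single new neutral mutant until it either reaches a target frequency or is lost.]

```python
import random

def drift_to_target(n=500, target=0.10, seed=None):
    """Follow one new neutral mutant (1 copy out of 2n) under Wright-Fisher
    drift until it reaches `target` frequency or is lost.
    Returns (reached_target, generations_elapsed). Parameters are assumptions."""
    rng = random.Random(seed)
    total = 2 * n                 # diploid population: 2n gene copies
    copies = 1                    # the brand-new mutation
    gens = 0
    while 0 < copies < target * total:
        freq = copies / total
        # binomial resampling of the allele each generation
        copies = sum(rng.random() < freq for _ in range(total))
        gens += 1
    return copies > 0, gens

runs = [drift_to_target(seed=i) for i in range(100)]
survivors = [g for ok, g in runs if ok]
print(f"{len(survivors)}/100 new mutants reached 10% frequency")
if survivors:
    avg = sum(survivors) / len(survivors)
    print(f"mean wait among those survivors: {avg:.0f} generations")
```

Most new mutants are lost within a handful of generations; the few that do reach an appreciable frequency take many generations to get there, which is the "waiting for drift takes time" point.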

Finally, I think the protracted conversation over this paper is a bit pointless. There are other regions of the genome with even more diversity and more haplotypes, which would be harder to explain with a bottleneck of 2 in human history. The chromosome 21 paper, the Alu paper, Zhao 2006, and so on. Then there are the 1000 Genomes papers, which use PSMC modelling and are based on a much larger data set. Why we’re beating Zhao 2000 to death when we should be tackling the stronger data is something of a mystery to me. I was willing to grant Zhao for the sake of argument to move to that stronger data, but here we are. Hopefully once Part 3 goes up we can move on to the PSMC (and related) papers (though I still want Richard to deal with the chromosome 21 paper, the Alu paper, and Zhao 2006 at a minimum).

Hopefully that helps orient things for now.


@DennisVenema,

Why would @RichardBuggs want to switch to discuss stronger data ?


I have just discovered something which makes a lot more sense of Dr Buggs’ responses. Up to now I had the idea that Dr Buggs was a secular scientist. However, prompted by Dennis’ comments about Dr Buggs’ views on chimpanzee DNA, today I looked around a bit and discovered the following facts.

  • Dr Buggs is a Christian
  • He argues that Intelligent Design is a science
  • He says that “If, as an explanation for organised complexity, Darwinism had a more convincing evidential basis, then many of us would give up on ID”
  • He served on the science panel of Truth In Science, a creationist organization promoting Intelligent Design and “Teach the Controversy”, during which time he defended the “information packs” which Truth In Science made for teaching ID in schools, and said “We’re seeking to have intelligent design and criticisms of Darwinism taught in science lessons” (just in case it wasn’t clear)

This explains a great deal.


Thanks Dennis, that helps a ton!

So do you think some of the initial characterizations of this paper may have fed into some of the frustration over this paper on this thread? Looking back over your early comments, there are a couple statements that may be relevant here: [quote=“DennisVenema, post:87, topic:37039”]
Have a look at Table 5, which shows their data for the distribution of TMRCA values. This is the data and analysis they are basing their conclusions on. Bottlenecks increase the probability of coalescence (this is also how PSMC methods work). We see a distribution of TMRCA values for the alleles in the study. This is basically what a PSMC analysis does sequentially for an entire genome to get a much larger sample size.
[/quote] & [quote=“DennisVenema, post:88, topic:37039”]
I disagree. The methods used are capable of detecting bottlenecks - that’s why they are used.
[/quote]

If you could reword those characterizations now, how would you phrase them? The TMRCA values, while listed in the paper, weren’t used for the authors’ analysis regarding effective population size or bottlenecks. And their conclusion that there was never a severe bottleneck didn’t rely on the sort of coalescent methods that might be sensitive to detecting one, such as you’d discussed, but rather on a single measure of genetic diversity. Were you thinking of the coalescent papers as a whole when you made those statements, with perhaps some bleed-over onto the Zhao et al (2000) paper you didn’t intend? Or perhaps unintentionally conflating your own analysis of the data in the paper with the analysis the authors performed themselves? I think what you’ve put out there so far has been pretty compelling, but these two characterizations at least of the Zhao et al (2000) paper seem a little at odds with what was just discussed. If I’m missing something though, please let me know.

As a side note, I’m still trying to wrap my head around how much bearing effective population sizes have on acute bottlenecks, and why. Just given how frequently they have been mentioned here, they must carry some substantial significance.

BTW, I do agree it’s strange that Richard is refusing to move on to your stronger papers. There’s been some discussion of that here, and it seems most readers are converging on the same conclusion. If Richard would like to preserve any remaining benefit of the doubt that he’s engaging in this conversation in good faith, it may be in his interest to deal with this.

Thanks!


Not only that, but purposefully mischaracterizing this move as Dennis “walking away from the conversation.”


That would require me to remember what was on my mind at the time. :slight_smile:
Pointing out the TMRCA values was shorthand for flagging up the nucleotide diversity issue - seeing a spread of long TMRCA values is a way to “visualize” the nucleotide diversity of the sample that Zhao was working with. So, this is the data set that they based their rejection of a sharp bottleneck on, even if they just used nucleotide diversity to do it. This is what a PSMC analysis does across a whole single genome - here they’ve done it on one short region in numerous people. Pointing out the nucleotide diversity then led to the conversation between Richard and me about trying to fit the haplotypes in Zhao into a maximum of 4 ancestral types. This paper can exclude a severe bottleneck for this small region for non-Africans (and presumably Africans, but that is left unstated). At least, that’s what the authors claim - because nucleotide diversity is too high (it’s the same in this paper for Africans and non-Africans). I think they’re also suggesting that the TMRCA values support this conclusion, but you’re right, on re-reading the paper I don’t see that explicitly stated. Messing about with haplotypes was then my attempt to show the problems with having coalescence back to 4 types within a reasonable timeframe.

This paper, of course, is really only of historical interest at this point. We have way more data, and it’s genome-wide. I do see some (possible) value in continuing to hash things out, though - perhaps we can establish why Richard thinks 3 mutations from a haplotype is ok. (We’d need to know mutation rates, population size (Ne), and a proposed time for the bottleneck.) Then, my question would be: are 4 mutations too many? How about 5? And so on. This might be useful to get settled, because then we could port that discussion over to other papers (for example, the Zhao 2006 one or the chromosome 21 one, which have more haplotypes, which would require more mutations from a set of 4).

Of course, eventually Richard will have to deal with the more recent data - papers using PSMC, MSMC (PSMC on multiple genomes), site frequency spectrum (SFS) methods, and methods that blend some of these approaches together in different ways, including some that use LD-type data. If you want to see a recent paper that compares some of these approaches on the same human data sets - humans are actually the best model organism for this sort of thing because we’ve done so many studies on our demography - have a look here. This paper wasn’t out when I wrote Adam and the Genome, and it’s way more technical than I would have wanted to get into anyway even if it had been, but for the purposes of this conversation it’s worth a look. I’ll give you a spoiler, though: even though the various methods have strengths and weaknesses, all of them show no sign at all of a bottleneck to 2.


This point should be brought up whenever possible.

To use an analogy, let’s say you are a forensic scientist and you find DNA, fiber, footprint, tire-print, and fingerprint evidence at a murder scene. Of that evidence, the DNA matches the defendant’s DNA, the fibers match a bloody shirt in his laundry room, the footprints match his exact shoes, the tire prints match the tires on his car, and the fingerprints match the defendant’s fingerprints. Each piece of evidence has its strengths and weaknesses, and perhaps no single piece of evidence would lead to a conviction on its own. However, when you have multiple pieces of independent evidence all pointing to the same conclusion, the guilt of the defendant is pretty clear.


Unless you can find a good lawyer who can cause a jury to have doubts about the evidence…


It’s just the same as the evidence for the age of the earth.

Hi Dennis,
Thanks for responding to me again on Zhao et al (2000). I think our discussion of this paper continues to hold value because it is helping us both to engage directly with data. For the time being, therefore, I would prefer not to move on to other papers (nor other topics, for that matter). I would remind you that when you introduced this paper there was no mention that it was a “weaker” source of evidence for your view. Indeed, it appeared to be one of the strongest contenders for an appropriate reference for your statement about allele-counting methods in Adam and the Genome. The fact that you continue to think that their dataset and coalescent analysis support your case is proving very helpful in allowing us to come down to a detailed understanding of what evidence you think supports your case. It seems that you have an intuition that three successive mutations of an ancestral haplotype preclude a bottleneck of two in the human lineage. If this were so, then I could see why you would conclude that a bottleneck of two is impossible (with a high degree of certainty). Thus our discussion of this paper is helping me to understand your thinking better.

You are misreading my posts about the Zhao et al paper if you think I have “allowed myself” a time frame. The time frames I am pointing out are those that arise from their coalescent analysis, and thinking through how a bottleneck followed by a population expansion would affect this.

Please let me repeat my argument (already outlined above), based on Zhao et al’s own analysis. In their own analysis, all the mutations in the 10kb sequence have occurred within the last 712,000 to 2,112,000 years. The different haplotypes currently found in human populations all coalesce back to one haplotype within this timeframe according to their analysis. As I have pointed out, it is well known that in a coalescence analysis, it is the final coalescence events that take the longest time. In other words, the coalescence from two ancestral haplotypes to one ancestral haplotype takes longer than the coalescence from three haplotypes to two haplotypes. And the coalescence from three to two takes longer than the coalescence from four to three. And so on. So within their own analysis, this 10kb sequence would be down to four haplotypes within roughly 300,000-1,000,000 years before present. Thus in their analysis, three cumulative mutations have occurred in this space of time, and indeed more (remember that there are also mutations present in one or two individuals that were not relevant to us when trying to figure out what the ancestral haplotypes could have been).

Their analysis is entirely reasonable. Let’s do a quick back-of-the-envelope calculation. If we say that there were four haplotypes 500,000 years ago, and call this 20,000 generations ago, in a 10,000bp region with a mutation rate of 1.1x10^-9 mutations per bp per generation, with an effective population size of 10,000 in each generation, then we would expect around 2,200 new mutations to occur in total over the 500,000 years. You will recall that the total number of variants that they found in the population was 78. So they can have many, many mutations lost via drift, and still see the number of variants that they do.
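[Editor's note: that arithmetic checks out; here is the same calculation spelled out - my restatement, treating the effective size as the number of gene copies carrying the region. With 2Ne diploid copies the figure would simply double.]

```python
# Back-of-the-envelope expected mutation count for the 10kb region,
# using the numbers quoted in the post above.

GENERATIONS = 20_000   # ~500,000 years at 25-year generations
REGION_BP = 10_000     # length of the sequenced region (bp)
MU = 1.1e-9            # mutations per bp per generation
COPIES = 10_000        # effective number of copies of the region (assumption:
                       # Ne taken as a count of gene copies, not diploids)

expected_mutations = GENERATIONS * REGION_BP * MU * COPIES
print(round(expected_mutations))   # 2200 -- versus the 78 variants observed
```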

Now, their analysis assumes a constant effective population size of 10,000. A bottleneck of two, followed by a population expansion to 7 billion individuals will obviously look rather different. The question therefore is: will a bottleneck followed by a rapid expansion increase or decrease the time from a coalescence of four haplotypes to the present? A bottleneck increases the rate of coalescence, as you know, which is why I have said that a bottleneck will decrease the likely timing of coalescence to four haplotypes from the present. I don’t make this point because I am restricting myself to a certain time frame, I am making this point because it is a simple fact about coalescence analyses. In other words: If there was a bottleneck in our past, all haplotypes in the present human populations will (on average) coalesce to four ancestral haplotypes in a shorter length of time than they would if the human population had a constant effective population size through history.
I think you agree with this point. However, your counter-argument is that the low effective population size after the bottleneck will reduce the number of mutations that can happen.

Yes, in a smaller population size, a lower number of new mutations are possible in terms of absolute numbers. But we also have to take into account two things:

(1) a rapid expansion causes a higher proportion of new mutations to be preserved in a population than would be possible in a population of constant size. By virtue of the rapid increase of the population as a whole, new mutations will be held by higher and higher numbers of offspring. If the population expansion is accompanied by a geographical expansion, there is also an effect sometimes called “allele surfing” (reviewed here) which can push new alleles up to high frequencies in newly colonised areas.

(2) the low population size will only last a few generations - a rapidly expanding population will soon reach sizes of well over 10,000 individuals. For example, if the population doubles every generation, within 14 generations we will have 16,384 individuals. Thus in the course of human history, the low population size of the human population in the first few generations after the bottleneck will have little impact on the total number of mutations that are possible from the time of the bottleneck until now.
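[Editor's note: the contribution of those early low-population generations can be quantified with a quick sketch of my own; the plateau size, region length, and mutation rate are assumptions carried over from the earlier back-of-the-envelope numbers.]

```python
# How much of the total mutational opportunity comes from the generations
# before a doubling population regains a size of 10,000?

REGION_BP = 10_000     # length of the region (bp)
MU = 1.1e-9            # mutations per bp per generation
CAP = 10_000           # assumed plateau population size after the expansion
TOTAL_GENS = 20_000    # ~500,000 years at 25-year generations

growth = []
n = 2                  # the bottleneck pair
while n < CAP:
    growth.append(n)
    n *= 2             # population doubles each generation
sizes = growth + [CAP] * (TOTAL_GENS - len(growth))

per_copy = REGION_BP * MU            # expected mutations per copy per generation
total = per_copy * sum(sizes)
early = per_copy * sum(growth)

print(f"generations spent below {CAP:,}: {len(growth)}")
print(f"share of expected mutations from those generations: {early / total:.4%}")
```

Under these assumptions the sub-10,000 generations contribute well under 0.1% of the total expected mutational input over the 500,000 years, which is the point being made about the bottleneck's limited effect on mutation counts.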

Therefore, it seems to me that your intuition that three cumulative mutations would be impossible (i.e. very, very unlikely) after a bottleneck of two early in the human lineage is mistaken. If your intuition were correct, then I would have to agree with you that a bottleneck was more or less an impossibility. But as far as I can see, your intuition is wrong, and Zhao et al’s own analyses show this.

Richard,

I’ve noticed that in your replies you continue to refuse to state what you feel the lower plausible bound of a timeline for a bottleneck to two is. This has not gone unnoticed by readers, who have (correctly, I surmise) inferred that you’re not allowing yourself to be held to the same standards of transparency and intellectual honesty to which you’re holding Dennis. If the intent of this dialogue is to discredit Dennis (and I believe it is … otherwise why continually avoid his stronger arguments?), you may want to take a look in the mirror and see how your own reputation is faring under your unapologetic application of these double standards.


@RichardBuggs doesn’t appear to have explained this position very clearly. All of a sudden, it is @DennisVenema’s intuition that is faulty … not @RichardBuggs’s.

I’ve read this sentence and the preceding paragraph five times … and I still don’t see how he gets to this sentence!

I have been cautious about weighing in here because this is such a significant conversation. As things continue on, I wanted to make a couple of observations from my perspective as a scientist in the Church, on how we can better understand and engage with what is going on here.

  1. This is a very important conversation; as is evident, for example, in the large number of views this thread is getting.

  2. @RichardBuggs is entirely correct when he explains that this is a “question that the religious community is asking.” He is appropriately sensitive to this question. The insensitivity of many others to it is a failing most of us share. The fact that this question arises from theology is not a reason to ignore it, but to take it more seriously. As scientists in the Church, the only correct response is empathy toward this question.

  3. It has been pointed out by some that @RichardBuggs is associated with the ID movement and skeptical of “Darwinism.” Of course Darwinism (atheism) is something of which we are all skeptical here. This also is well known, and ignoring a valid question because of its source is the worst type of ad hominem. Of course he cares about this because he is a Christian, and he is willing to publicly question this partly because he has already taken controversial positions (by associating with ID). I am the first to dismantle bad ID arguments, but this is not even about design. It is a valid scientific question, and his personal views are not a reason to dismiss it.

  4. It is very rare to see conversations like this in public. In science, these conversations happen all the time, but in private. It is rare to see the established science questioned by a scientist of @RichardBuggs’s stature in public this way. I have had similar conversations with other scientists regarding evolution, but always in private. This is equivalent to having Francis Collins or Richard Dawkins or Jim Tour enter the fray personally, with all the other commitments they have. There is real risk here, which is why it is so rare. Respect what is happening here, and perhaps we can all learn from it.

  5. As many have noted, there are real risks to one’s scientific reputation in joining this conversation. Rather than use that as a weapon, it should increase our empathy for those asking valid questions about the mainstream position. Even though I have argued against ID arguments many times, ID advocates have come to me with genuine concern about my personal safety as a non-tenured scientist at a secular institution. That is exceedingly kind and meaningful to me personally. We should be approaching this with the same genuine empathy toward @RichardBuggs, who is our brother, even though we might disagree with him.

  6. It is respectful to let @DennisVenema hash this out with @RichardBuggs without distracting with side issues and personal assessments of their relative positions. Material contributions (such as those from @glipsnort) are helpful and should be offered. However, there is a tendency for non-scientists to weigh in, “cheering” or “adjudicating” the positions raised. This is, fundamentally, going to be unfair to @RichardBuggs, as this forum is dominated by those who affirm evolutionary science. Nonetheless, he has decided to brave this forum, so we should continue to treat him as a guest. Ultimately, science is not resolved by public debates of any kind, not even this one. It does not matter what a BioLogos-skewed forum feels about the arguments here, but it does matter what observers in the Church see in how you treat @RichardBuggs. If you must comment on or attempt to adjudicate this, consider doing it on another thread.

  7. @RichardBuggs has been pretty clear about several things here: (1) this is not merely about the science, but also about ensuring accurate communication to the public in Adam and the Genome by @DennisVenema; (2) he is not arguing for de novo creation or some special biology in Adam and Eve; (3) he concedes up front that the evidence appears to preclude a bottleneck within the last couple hundred thousand years; (4) he has not proclaimed that he has the answer to this (which he does not have) but wants to ask questions.

  8. In addition to the scientific question, he has also been clear that this is about @DennisVenema’s representation of the science. This explains, for example, why @RichardBuggs has not taken @glipsnort to task, and has asked @DennisVenema specifically to explain himself. It is not really about the science, per se, but about whether or not @DennisVenema has honestly represented the science and is competent to be making this case. It might seem rude, but this is fairly standard to do to other scientists (usually in private). I would also add that I share similar concerns (even though I certainly affirm the consensus science here). I do not believe our case is made stronger when we overstate what science does say, and neglect to clarify what it does not say.

  9. The reason he has focused on this specific reference is that it is what Dennis used in his book; @RichardBuggs is concerned that @DennisVenema overstated the science. @DennisVenema has pretty much conceded this, saying that in communicating with the public he was not focused on referencing the most up-to-date and comprehensive science. In @DennisVenema’s defense, he is right on that point. It is very common when communicating established science to the public to give historical or easier-to-understand references. @DennisVenema’s work here has never been a “novel contribution to science” but simply an attempt to explain what others have seen in the data, and references serve different purposes in a published scientific study and in communicating to the public. I understand why @RichardBuggs wants a larger concession, but I would offer that if the stronger data against a population bottleneck could be explained, this reference would be a trivial matter. @DennisVenema has already admitted directly that he did not use the best references.

  10. There is more than enough information in public, right now, to determine whether @DennisVenema is a trustworthy voice to the Church. Given this, I do hope that we can move past the personal referendum on the weak references from Adam and the Genome to deal with the larger questions. In particular, it is critical for anyone purporting to speak to the Church to engage the questions of the Church with empathy, not ambivalence and incuriosity. Let’s not lose the bigger questions in the smaller things.

The question on the table is actually quite interesting from a purely scientific point of view. Population genetics is very non-intuitive. Engaging this question can help us all get this straight, even if (as I expect) we will see the mainstream position continue to be supported by the evidence. Taking the questions seriously gives us more certainty, and also more credibility with skeptics. Frankly, it is also fun.
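To give one concrete illustration of how non-intuitive population genetics can be (this example is my own, not from the thread, and uses the standard Wright-Fisher expectation that heterozygosity declines by a factor of 1 − 1/(2N) per generation):

```python
def heterozygosity_after(h0, pop_sizes):
    """Expected heterozygosity after passing through the given
    sequence of per-generation population sizes, under the standard
    Wright-Fisher expectation: H declines by (1 - 1/(2N)) each generation."""
    h = h0
    for n in pop_sizes:
        h *= 1.0 - 1.0 / (2.0 * n)
    return h

# A single-generation crash to just 2 individuals still retains 75%
# of the expected heterozygosity:
print(heterozygosity_after(1.0, [2]))        # 0.75

# It takes roughly ten generations at N = 2 to lose most of it:
print(round(heterozygosity_after(1.0, [2] * 10), 3))   # 0.056
```

The surprise for many readers is that a brief, severe bottleneck erases far less genetic diversity than intuition suggests, which is partly why detecting (or excluding) ancient bottlenecks requires subtler signals such as allele counts and linkage patterns rather than heterozygosity alone.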

With that, I expect this conversation can continue, but I want to reemphasize how I think it could be most productive:

  1. Let’s focus on the strongest evidence, unless there is a helpful reason to deal with weaker evidence. I will say that there are very interesting scientific nuances arising everywhere, some of which are best understood by thinking carefully about simple examples. This will be profoundly educational as it gets deeper, for all of us.

  2. Let’s move past the personal referendum on @DennisVenema. If he is not trustworthy, engaging the substance of the response will make that clear. He has already admitted to having left out the strongest references (which is fairly standard in this case) and to having excluded material information. This, however, is not ultimately about @DennisVenema. It is about the questions of the Church.

  3. Let’s hold off on observers “adjudicating” who is right or wrong or behaving well, especially if we are not scientists. This disagreement will not be adjudicated by us, nor is this forum a fair balance of views. Let them do their work, and recognize that we have an amazing opportunity to watch two scientists hash out a scientific disagreement in public; something that rarely happens. I will also say that both @RichardBuggs and @DennisVenema (at least in public) have been engaging in normal ways, as I see happening among scientists all the time. If it seems disrespectful, it is just because science has a different culture than the Church.

Of course, I am just a bystander too. Perhaps everyone will ignore me. However, I really hope that we can see the value of what is happening here, and do what we can to make the most of it. From here, I will largely stay out of this thread, except in a few rare moments to make a critical technical point, or if I am asked to weigh in by the primaries here.

7 Likes