How to Fine-Tune Arguments for God’s Existence

Noah

Since you’ve sent me an alert, I’ll comment. I see the fine tuning argument as a generic form of natural theology, just as convincing or unconvincing as Paley’s watch or irreducible complexity according to one’s presuppositions (or, particularly, how intact one’s sense of “Wow” is in the face of materialistic education!).

I don’t accept the line of argument that suggests it’s more legitimate than the others because it involves the question of creation ex nihilo rather than “natural causes” in the world. It stands or falls on its own merits - and so, whilst it’s as attractive to the ID people as to ECs, William Dembski (for example) considers it lacks weight because one cannot calculate probabilities for non-existent universes.

CFT, in my view, is a weaker argument than Aquinas’s deductive arguments about the existence of anything, or of change, or of any causation whatsoever, simply because it’s probabilistic and inductive. However, what it lacks in logical rigour it makes up for in appeal to our basic human intuitions, which are at least as reliable as our reason.

It’s back, then, to that basic metaphysical division going back to ancient Greece: complex stuff that is precisely organised towards interesting functions is ultimately due to an organising mind and will, or it happens by chance given sufficient opportunity (cue belief in an infinite multiverse and the weak anthropic principle). Is teleology primary to everything, or imaginary and dispensable?

One footnote: it’s good as Christians to remember why we do sciency stuff, including being impressed by the fine tuning of the constants - it’s not to prove there is a god, but to give glory to God by thinking his thoughts after him. So whatever the weaknesses of CFT as apologetics, we needn’t be apologetic about its strength as a demonstration of our Lord’s glory.


I feel that I should clarify my outlook regarding fine tuning/constants arguments. I do not use this argument as a basis for proving God or anything along these lines. I commence with the faith based statement, that God created the heavens and earth, and then ask if science can provide insights that would be consistent with this, or if science would contradict this faith based statement.

Without boring anyone with a lengthy discussion, the conclusion I have reached does not rely strictly on the anthropic principle (although that is a useful addition). Rather, this aspect of science shows us that the universe is intelligible - by this I mean we can probe and understand the physical world because we can anchor scientific enquiry in a type of certainty provided by maths and the constants of the physical world. We need to add the intelligibility of the creation to human reason, and this brings us to the exciting aspect of scientific research: we accept the certainty of science, and simultaneously ask questions, speculate, and often guess. This is what I regard as the intelligibility and accessibility granted by the Creator to His creation, and to the human spirit.

This, I suggest, exemplifies the Glory of God to the sciences, and is granted to those of us who study science (and we share in the wonder and beauty understood by all of faith). I think that agnostics and atheists also have a sense of this, but may express their feelings in a non-theological manner.


From where I’m standing, the practice of calculating probabilities is wholly misguided when it comes to supportive arguments for God’s existence. There’s no foolproof way of assigning probabilities to historical contingencies. This is the case whether it concerns “irreducible complexity” or “fine-tuning”.

That’s an interesting thought Casper - isn’t statistics all about assigning probabilities to historical contingencies? Based on what I know from past events, or from theoretical causes, I reason that this coin toss is a fair way to start a football match, or that this drug is likely to help my patient.

It seems to make little difference if the event was in an incompletely known past: I can still say that last week’s coin toss gave an unsurprising result, or that my patient’s failure to recover was anomalous (and therefore I might look for special circumstances).

One can’t, of course, assign a probability to an actual event retrospectively, as many people have pointed out: its probability is 1. But one can still comment on whether the presumed causes are a sufficient explanation (a stone falling upwards was not just a statistical aberration of gravity).

The question of things arising at the point of creation, however, is surely slightly different, in that the “causes” are not just unknown, but unknowable, and so incalculable.

@Casper_Hesp
@GJDS
@Jon_Garvey

Good discussion. My view is not that Creation is improbable, although of course it is. My point is that all the relationships needed to be just right for the universe to exist at all, and of course for humans to evolve.

The fact is that if just one aspect of the universe were out of kilter, the whole thing would fail. Now I understand that people say the universe could take many forms, but that is purely speculation. When one aspect changes, you really have to start all over again. Surely God should be able to do this, but who knows?

The Beginning is based on unique Biblical cosmology. The Big Bang is based on Einstein’s Theory of Relativity and reveals the relational nature of the universe. Both of these point to the relational nature of God’s redemption of nature and humanity through the birth, life, death, and resurrection of Jesus the Messiah.

Hi Jon,
There are a number of different aspects of statistics that make it inapplicable to discussions of God’s influence. Many of its limitations render it rather agnostic. I would say these limitations are in the end connected to the discussion you’ve been having with @Swamidass on information theory.

Firstly, one basic requirement for probabilistic statements in statistics is of course some aspect of repeatability. This does not mean we have to repeat the whole thing. Repeatability can be found in throwing multiple coins, testing multiple patients, drawing multiple samples for carbon dating, comparing multiple gene sequences, measuring multiple time points et cetera. But we cannot take multiple samples of the circumstances that led to the origin of the first cell(s) or that of the universe, leaving us without a robust basis for making probabilistic statements.

Secondly, a much more fundamental limitation of statistics is that it can only be used to compare the relative merits of multiple specific scenarios. This has to do with Bayesian statistics. Suppose that, given the data, scenario A is found to be extremely improbable, but still much more likely than scenarios B and C. That does not tell us that A is true or false, just that it looks better than B and C. For all we know, there could be another unknown but specifiable scenario D that would look a bit better. Also, showing that scenarios A, B, and C are all extremely unlikely does not make scenario D more likely if it remains unspecified.
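
To make this concrete, here is a minimal Python sketch with made-up numbers: even if every specified scenario is astronomically improbable in absolute terms, normalizing over them still crowns a “winner”, which says nothing about whether the true explanation is even on the list.

```python
# Hypothetical illustration: Bayesian model comparison only ranks the
# scenarios we explicitly specify; it cannot certify any of them absolutely.

def posterior_over(scenarios):
    """Normalize prior * likelihood over the specified scenarios only."""
    weights = {name: prior * like for name, (prior, like) in scenarios.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Equal priors; every likelihood is tiny in absolute terms.
scenarios = {
    "A": (1 / 3, 1e-12),
    "B": (1 / 3, 1e-15),
    "C": (1 / 3, 1e-16),
}

post = posterior_over(scenarios)
print(post["A"])  # ~0.999: A "wins" despite its minuscule absolute likelihood
```

An unspecified scenario D simply never enters the normalization, so no amount of improbability assigned to A, B, and C lends it any support.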

A common mistake in many psychology studies is to assume that rejecting the null hypothesis (e.g., scenario A) on the basis of some probability criterion (usually p < 0.05) automatically leads to the acceptance of the alternative hypothesis (e.g., scenario B based on some pet theory). However, that reasoning is false because there could be many other alternative scenarios much more likely than B. I know a professor from my university who built half of his career on demolishing other people’s conclusions just based on Bayesian statistics!

So here’s the connection to Intelligent Design. Assigning an extremely low probability to a certain scenario A (e.g., DNA code arising through specified natural processes) does not automatically make alternative scenarios more probable. If a proposed alternative scenario B (e.g., “design”) is not formulated in terms of specifiable predictions, the calculated probability of A does not have any meaning. Or, applied to fine-tuning, assigning an extremely low probability to the parameter settings of the universe does not mean anything without specifiable predictions to compare it with.

Thirdly, there is something deeply disingenuous about employing probabilities in arguments for God. If the natural explanations turn out to capture all of the data, someone might say “wow, look at those underlying regularities, those patterns point to a Creator!” If the natural explanations appear to be unlikely, the same person might say “wow, that low probability points to design!” I think that’s fundamentally flawed. We can’t have it both ways.

As you have concluded in your conversation with Joshua, choice and randomness lead to indistinguishable results in formal statistical terms. I wish everyone would acknowledge that and stop using probabilities as pseudo-rational arguments to bolster the case for (or against) God’s existence. I want to be fair here but I can’t think of a softer way to put this clearly.

Casper


Casper

I agree with all this, when the discussion is about “evidence for God”. That doesn’t interest me, personally, so much as the question “How does God do his work?” And in that discussion, I think we can have it both ways, in the same way that regularity and irregularity are both characteristics of, if not evidence for, human behaviour.

And so I went and sat in the same office at the same times for most of my working career, and returned to the same house - but not always, because both reliability/habit and freedom mark human choices, and from the Bible’s witness, God’s too. What doesn’t indicate human action is chance, as such (we throw dice or trip up, it is true, but in the midst of purposive activity). In human terms “chance” is actually an alternative possibility to human activity as the explanation of an event. It makes sense to ask if a pile of rocks occurred by chance, or by human intention - but little sense to ask if someone made it by chance.

If we do turn to evidential matters, the ultimate explanation of “things in the world” is (broadly speaking) after millennia of argument still either a purposeful God or Epicurean chance. As I’ve suggested, both regular and irregular events are equally explicable in terms of God’s providence, but the theory of ontological chance has, it seems to me, a greater task: that of explaining both the existence of laws governing regularity, and of the coherence of irregular events as part of a functioning “cosmos”.

In other words, “natural causes” are never an alternative to “God’s act”, any more than “habit” is an alternative to “human intention”, because God is a coherent explanation behind natural causes, and they do not explain themselves: one needs a “God alternative”, usually in the form of chance, as an explanation for nature itself.

@Casper_Hesp

Great post, Casper. I agree that we should only bring to the table things that we can hang our hat on, and not things that aren’t fully understood, in order to positively focus on what we can rightfully marvel at. I vibe with this approach.

However, I’m wondering if you may be letting multiverse-promoting atheists a little bit off the hook. New Atheists like Richard Dawkins ridicule believers for having faith in anything, when at the same time they promote the multiverse as an explanation for the existence of this universe. They need the multiverse, and therefore have faith in it, even though its underpinning framework, superstring or M-theory, is in peril and is admittedly not experimentally verifiable. So, yes, we don’t want to get bogged down in debating the science behind the multiverse or string theory, but I’m wondering if it’s worth mentioning that atheists do have faith, as believers do.

Also, is the following an attempt at concordance? “The Big Bang model already shows that the world as we know it had a definite beginning and that it arose from primordial formlessness… Some interpreters see a conception of a pre-existing chaos in Genesis 1:2: ‘Now the earth was formless and empty, darkness was over the surface of the deep, and the Spirit of God was hovering over the waters.’”

Thanks.

Richard

Absolutely.

This is not correct. What you are describing is a “frequentist” approach to probability and statistics. With a Bayesian approach, we can talk about the probability of single events. Going further, we can use Belief theory to reason about our certainty about one-off events too (see the Wikipedia article on Dempster–Shafer theory).

This, however, is true. We cannot make strong statements about these things. Not only because we cannot repeat them, but also because we do not understand them well enough to model them.

Exactly. Also, even if A is the correct answer and true, it could be the noise in the data that makes it look improbable. We never know how much of the measured improbability is due to noise, and how much is due to a failure to understand the patterns in the data.

Exactly. Rather, we have to compare the fit of the two theories to the data. But if a theory is not formulated clearly, this is not possible, so it is excluded as having low explanatory power.

Well put. Both directions are seen as evidence of design, which demonstrates that, in polemic use, design is not a specified theory.

Well put. God governs the regularities and the irregularities of the world. All of it.

Hi Joshua, thanks for your comments and affirming my words. I’m thankful that you take the time to contribute here, being an expert on the application of statistics in your field.

I am familiar with Bayesian statistics, and it also involves an aspect of repeatability. It requires specifying one’s prior belief, on the basis of which the likelihood of future events can be evaluated. In one way or another, that prior belief is based on what we know about similar events from previous occurrences. Such knowledge could be gained or implemented indirectly through computational models. This prior changes as we accumulate evidence along the way. The whole idea of Bayesian updating assumes an iterative process (that’s where the aspect of repeatability comes in).

I’m happy to be corrected on this, but to talk about the probability of a single event we need to obtain some knowledge about its general class of similar events to base our prior beliefs on. One way or another, that involves an aspect of repetition/recurrence (even if it’s via computer simulations).

The frequentist approach simply assumes a uniform prior probability (all possible outcomes are equally likely), so it’s kinda like an impoverished version of Bayesian statistics.

Not necessarily. Belief Theory demonstrates (under just 3 rational assumptions) that our degree of belief can be mapped to what we call “probability” in a Bayesian sense. Belief, however, is not defined in terms of repetitions. This means that it is valid to assign a “probability” to singleton events, and we can think of probability as the degree to which we believe explanations of that event.

To be clear here I am using “Belief” in a technical sense, which does not mean unsubstantiated or evidence free. It is closer to our use of the word “certainty”.

However, the non-repeatability of these events makes this type of reasoning descriptive more than prescriptive. Or more precisely, it is prescriptive in how to update priors, but not in how to choose them. For example, we can start with a specific definition of atheistic or theistic priors (of a sort), and then we will find that the evidence about fine tuning leads us to different beliefs about the plausibility of God existing. Basically, the evidence really makes no difference; we are all just restating our prior beliefs.

Belief theory tells us that both sides are technically valid and rational in using probability to explain their reasoning, even though this is a one-off event. There is no way to adjudicate who has the right priors, though. We can choose them however we like.

This is really the reason fine-tuning fails as an argument. Both the theist and the atheist can rationally consider the evidence and come to opposite conclusions regarding the origins of the big bang. Any probability we compute is dependent on the prior, but there is no systematic way of assessing or setting priors.* So we cannot really use probability/belief/Bayesian reasoning to adjudicate who is more “right” here.

*There is an interesting aside about why the Maximum Entropy (MaxEnt) priors commonly used in physics do not apply here. In questions like this there is no way to define the state space; a state space can be chosen such that its MaxEnt prior is equivalent to any given non-MaxEnt prior. But I digress…

Not usually. Priors are descriptive, not prescriptive. We can set them however we want.

This is almost accurate. The frequentist approach is derived without priors, as an idealization of repeated observational data.

Then, in an independent derivation, we can derive Bayesian inference, which includes this new concept of a “prior”. Now we discover an algebraic quirk: using MaxEnt priors in Bayesian inference reduces to frequentist math. So it turns out that the math of frequentism is a special case of Bayesian inference.

But this observation does not mean that Bayesian inference reduces to frequentism. They are derived from different starting points, and the Bayesian derivation does not require repeated observations. The math is the same, but the meaning is different. Remember, frequentism does not actually include a concept of priors. One of the values of Bayesian statistics is that it clarifies the implicit prior in frequentist math. Without the Bayesian framework (or an equivalent), we might not have realized frequentism was assuming a MaxEnt prior.
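
For a coin, the quirk can be checked directly: under a flat Beta(1, 1) prior, the Bayesian posterior mode coincides with the frequentist maximum-likelihood estimate heads/n. A small sketch using standard conjugate-prior results (the specific numbers are invented):

```python
# For a Bernoulli(p) coin: a flat Beta(1, 1) prior makes the Bayesian
# posterior mode equal to the frequentist maximum-likelihood estimate.

def mle(heads, n):
    """Frequentist maximum-likelihood estimate of the coin's bias."""
    return heads / n

def map_estimate(heads, n, alpha=1.0, beta=1.0):
    """Mode of the Beta(alpha + heads, beta + tails) posterior.

    Valid when both updated parameters exceed 1.
    """
    a, b = alpha + heads, beta + (n - heads)
    return (a - 1) / (a + b - 2)

heads, n = 7, 10
print(mle(heads, n), map_estimate(heads, n))     # identical under a flat prior
print(map_estimate(heads, n, alpha=10, beta=10)) # informative prior pulls toward 0.5
```

The flat prior makes the two frameworks numerically agree, but only the Bayesian derivation tells us a prior was there at all.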

And as I noted, MaxEnt is poorly defined in many domains (like qualitative hypotheses). There is no objective way of defining a state space for a hypothesis space. So saying that we will adopt the MaxEnt prior just pushes the prior-selection problem back to defining the state space. Even trying to use MaxEnt really does not solve anything.

Even then, MaxEnt makes no sense as a starting point in most domains. The only place we can show it is justified is in well-defined physical systems that admit well-defined states, where we are computing entropy or statistical distributions (as in a MaxEnt distribution given specific constraints).

Would this way of using the term “belief” be synonymous with “assumption”?

Hi Richard,

Thanks for your thoughtful comment.

It’s true that atheism requires faith statements of some sort to be a viable framework. However, I think that is the case for a single universe just as well as for a multiverse. Atheists only “need” multiverse theory if they feel obliged to explain the properties of our own universe. In principle, they could also simply admit of our single universe that it’s “just how it is”.

The multiverse scenario is by definition difficult to support observationally, but it does have theoretical justification. It is one of the weirdest fields of scientific study but it has a solid basis in quantum information theory. I am by no means an expert in this topic, but I think it’s good to acknowledge ideas that have theoretical merit even if they currently evade experimental scrutiny.

In the end, all atheistic scenarios require the ultimate disclaimer regarding nature “that’s just how it is, folks!” as a stopgap for theistic accounts. The question of whether that’s a satisfactory or sufficient ultimate explanation does not change. It appears that the more we find to marvel at, the more pressing that question becomes.

I am happy you ask this, because that was not my intention. I don’t think God revealed anything substantially new about the specifics of natural history to the author of Genesis 1, so I don’t find direct agreement necessary or even desirable. Besides, my doctor told me I am concordance intolerant, so I’m currently on a low-concordance diet :slight_smile: .

Instead, I like to use such references as a way to open up possibilities in our thinking about Creation. The original Ancient Near-Eastern audience was presented with the notion of a pre-existing chaos from which God created heaven and earth. If the author of Genesis was theologically okay with using a description like that, there seems to be no reason for modern believers to feel uneasy about it.

Another interesting example is Genesis 1:24 where God says, “Let the earth bring forth living creatures according to their kinds—livestock and creeping things and beasts of the earth according to their kinds.” Apparently, the author envisioned the earth as having the capacity to bring forth life by itself and this description was an integral part of Israel’s theological heritage. It then seems rather illogical to me that there is such theological resistance amongst modern Christians to the idea of “earthly” evolutionary processes bringing forth life.

Casper

Hi Joshua,
I am sceptical of equating belief with probability. I don’t think beliefs can be accurately represented in probabilistic terms because there is an “all or nothing” quality to them. Jesus said that having faith the size of a mustard seed can move mountains… Besides, I can also be very certain in my belief concerning a probability, as I am certain that a coin toss has a 50/50 probability of heads or tails. Probabilistic statements themselves (including their error bars) require belief concerning their validity.

It seems we’re talking past one another here. Neglecting the priors involves the implicit assumption that they are uniform. I’m thinking about this in a mathematical sense, starting with Bayes’ theorem:

P(model given the data) = P(data given the model)*P(model) / P(data)

The frequentist approach assumes that the model under which the data are the most probable is also the most probable model, or in mathematical terms:

P(model given the data) ∝ P(data given the model)

The only way to omit the prior term P(model) in Bayes’ theorem is to set the prior probability to be uniform, i.e. P(model_1) = P(model_2) = P(model_3) = … Since all models handle the same data, i.e., same P(data), that indeed leads to the simplistic proportionality shown above.
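
As a quick numerical sketch of this point (all likelihoods invented for illustration), dropping the prior term is the same as assuming it is flat:

```python
# With a uniform prior, the posterior ranking collapses onto the likelihood
# ranking (the implicit frequentist move); a non-uniform prior can reorder
# the models. Numbers are made up.

likelihood = {"m1": 0.20, "m2": 0.05, "m3": 0.02}  # P(data | model)

def posterior(likelihood, prior):
    """Bayes' theorem over a discrete set of models; the sum plays the role of P(data)."""
    weights = {m: likelihood[m] * prior[m] for m in likelihood}
    z = sum(weights.values())
    return {m: w / z for m, w in weights.items()}

uniform = {m: 1 / 3 for m in likelihood}
post_uniform = posterior(likelihood, uniform)
# Ranking matches the likelihoods: m1 > m2 > m3

skewed = {"m1": 0.05, "m2": 0.05, "m3": 0.90}  # strong prior belief in m3
post_skewed = posterior(likelihood, skewed)
# m3 now overtakes m1 despite its lower likelihood
```

So the “frequentist” shortcut is not prior-free; it quietly commits to the uniform prior.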

Casper - an interesting point, suggesting that there is some intuitive “calculus” for natural theology. If the world had turned out to be simple (like Haeckel’s protoplasm easily forming life), the marvel would be correspondingly less pressing, yes?

A couple of other points.

I’d want to qualify that, firstly in that “formlessness” is not exactly equivalent to “chaos”, but mainly because, in contrast to the surrounding cosmologies, Genesis 1 places God as separate from the material world from the first. He is not Apsu or Tiamat, becoming what we see around us by his death or by warfare. That raises an entirely new question of how the unformed waters, earth and space above them came to be at all, and also came to be utterly obedient to God as Assigner of function to them: ex nihilo creation was implicit in the account, as the later Hebrews concluded. “The earth is the Lord’s and everything in it.” That said, the Genesis account indeed does not exclude an eternal existence for the “primal matter” (as Aquinas conceded), yet still implies God as its origin. Overthrowing the Big Bang would not displace God as sole Creator.

My second point is that you appear to be breaking your low-concordance diet with your understanding of the phrase “Let the earth bring forth”. Given evolutionary thought, that sentence in isolation might imply the earth’s creative autonomy, but in fact the text immediately goes on to rephrase the event by saying “God made the wild animals…etc”.

In an ANE context (and drawing on the parallel with Gen 2.7), “Let the earth bring forth” merely refers to the rather obvious phenomenological, or maybe ritual, fact that animals, like plants on day 3, materially come from the stuff of the ground and return to it, just as fish come from the sea, and birds from the waters of the heavens, on day 5.

If Michelangelo’s biographer quoted him as saying, “Let this marble become King David!” before he set to work with a chisel, I don’t think we’d for a moment regard him as enduing the stone with the power of spontaneous sculpture.

Surely only the concordist will read back 19th century evolutionary science into a text written before there was any concept of “world” or “nature”, let alone the anachronistic concept of non-personal agency!

Marvel is difficult to quantify. Simplicity also has its own appeal :slight_smile: .

I accept your qualifiers. That’s exactly what I wanted to avoid. I see such quotes merely as food for thought and not as some kind of evidence to support concordance.

The reason we got into this is because I’m explaining that we do NOT need to observe multiple cases to talk sensibly about probabilities concerning these things.

You are using a different definition of belief. We can just define “belief” as our “degree of certainty” of a particular proposition at a given moment (either prior or posterior to viewing some evidence).

We find that under 3 basic assumptions, probability is equivalent to certainty (though technically we call it “belief”). This is just math.

Now, what you are saying about “all or nothing” you immediately connect to Jesus.

This is a larger conversation. But your point is that we do not think about Jesus in terms of probability (e.g., 80% sure He rose from the dead). In the formalism of Belief theory, we could just say that you have a 100% belief in Jesus rising from the dead, which is therefore not modified by new evidence. Yes, this is not the most helpful way to describe it. A better way is to say that this is the starting point for Christian thought and life, our foundational proposition.

This is not how it is handled in a Bayesian treatment. This is a pretty classic problem. It appears you are mixing two mathematical entities.

  1. We define a Bernoulli process (coin flip) with a single parameter, the bias of the coin, or p.
  2. We define a “belief” distribution or “prior” distribution over p that describes what we think p is. If you take a MaxEnt prior, you say it is distributed uniformly between 0 and 1. The easiest mathematical form for this would be a Beta distribution with alpha=beta=1.
  3. Next we look at data to update our “belief”, or “prior”, to a posterior. We will see a certain number of heads and tails, 1 and 0 respectively. It turns out the math works out so that we just increment alpha for every heads we see, and beta for every tails.

A few comments.

  1. With a weak prior, it really does not matter what the prior is, as long as you have a lot of data.
  2. A MaxEnt prior makes no sense. A priori, if you look at a coin, you know how it works and can check if it has both heads and tails. It is absurd to say ahead of time that “we have no idea if this is really an unbiased coin or not”. Rather, we should really start with a prior centered on 0.5, something like alpha=beta=10.
  3. The “prior” or “posterior” can really be understood as “what we think the true bias of the coin is”, which is DIFFERENT than the frequentist definition. This is just one thing (the bias of the coin), with no repetitions. Bayesian inference treats this as a separate entity than the data (which is repeated).
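
The updating scheme described above can be written out in a few lines; the flips below are invented purely to show the data swamping the prior (comment 1):

```python
# Conjugate Beta-Bernoulli updating: each heads increments alpha, each
# tails increments beta. The prior Beta(alpha, beta) encodes our belief
# about the coin's bias p; Beta(1, 1) is the flat (uniform) prior.

def update(alpha, beta, flips):
    """Update a Beta(alpha, beta) belief about the coin's bias p."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

def mean(alpha, beta):
    """Posterior mean estimate of p."""
    return alpha / (alpha + beta)

flips = [1] * 60 + [0] * 40  # 60 heads, 40 tails

a_inf, b_inf = update(10, 10, flips)  # informative prior centred on 0.5
a_flat, b_flat = update(1, 1, flips)  # flat Beta(1, 1) prior

print(mean(a_inf, b_inf))    # 70/120, pulled slightly toward 0.5
print(mean(a_flat, b_flat))  # 61/102, close to the raw frequency 0.6
```

Note that the quantity being updated is our belief about one coin’s bias, not a frequency over repeated coins, which is exactly the distinction in point 3.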

I agree, but how do you know that? You know that only because of Bayesian theory, not frequentist theory. As soon as you are doing this, you are using a framework totally different than frequentism. And this framework does not depend at all on repeated observations.

This is totally arbitrary. I already gave you one example where this failed (the biased coin), but this is even more arbitrary in philosophical discussions.

In the fine tuning argument, how would you set the prior concerning God vs not? How many theories would you place in each camp?

Yes, I would not use a definition of belief that equates it with “degree of certainty”. But that indeed taps into a larger conversation. For the purpose of the discussion I’m fine with equating belief with probabilistic priors, but that would exclude considerations of theism/atheism or other belief systems.

This pertains to something I was actually trying to get at. By what logic did you come to the conclusion that 0.5 would be the best prior to start with? To make that assessment anything more than a guess, you would need to have knowledge concerning the conditions of repeated instances of coin flipping (i.e., what varies and what is kept constant). Such knowledge allows you to get a hold of the outcome distribution over many repetitions (even if simply through mental imagery). The repetition aspect is therefore sneaked in through the backdoor when you choose your initial prior. For example, if instead you would be looking at the photon count of a certain source on the sky, you would have to implement completely different logic to produce an initial guess of its outcome distribution.

As you said before, we don’t have such precise knowledge concerning the conditions that gave rise to the first cell, which is the reason we currently can’t reliably model the outcome distribution of that process (same holds for fine-tuning).

I was describing the frequentist approach, which is indeed rather arbitrary. But given enough data it will still often converge on the right answer. It doesn’t fit with your example of the biased coin because that one implements Bayesian logic.

It’s impossible to set such a prior in any meaningful way. As I said, I don’t think such belief systems can be quantified in probabilistic terms. I think it’s better to leave probabilities out of the discussion when we consider theism/atheism.

At the very least, to formulate an initial prior, we need to be able to model/imagine/predict the outcome distribution over a number of trials (as in the case of the coin). That still involves an aspect of repetition, even if not directly observed. Does that make sense? We cannot do any of those things with Creation as a whole so we don’t have a probabilistic grip on fine-tuning.

A distribution centered on 0.5 is the best prior because it matches what we know of coins. It is our prior belief.

I can see that, but that is the technical definition of “Belief” in bayesian inference, and it does not depend on repetition.

It makes “sense” but it does not square with Bayesian inference. What you are describing is intuitive, but it is just not how the theory is formulated. It does not depend on repetition. Period.

That being said, I still entirely agree with your final conclusion, for a different reason.

We do not know the distribution of fine-tuning constants conditioned on God or not. We just do not know this, and have no foreseeable way to get it. As you say, “we don’t have a probabilistic grip on fine-tuning”. This is why it is just impossible to present a prescriptive analysis.

Though we can do a descriptive analysis of how people process the info based on different priors, to come to different conclusions. We find that the fine-tuning “argument” is basically everyone running around stating their starting assumptions.

@Casper_Hesp
Besides, my doctor told me I am concordance intolerant, so I’m currently on a low-concordance diet

It is very unfortunate that you are concordance intolerant. Maybe you should see if there is a treatment for that condition.

If the Two Books doctrine is true and I think it is, there must be some agreement or concord between the Two Books of Science and Theology. Certainly the Beginning is one of them.
