Not necessarily. Belief Theory demonstrates (under just three rational assumptions) that our degree of belief can be mapped to what we call “probability” in a Bayesian sense. Belief, however, is not defined in terms of repetitions. This means that it is valid to assign a “probability” to singleton events, and we can think of probability as the degree to which we believe explanations of that event.
To be clear, I am using “Belief” here in a technical sense, which does not mean unsubstantiated or evidence-free. It is closer to our use of the word “certainty”.
However, the non-repeatability of these events makes this type of reasoning descriptive more than prescriptive. Or more precisely, it is prescriptive in how to update priors, but not in how to choose them. For example, we can start with a specific definition of atheistic or theistic priors (of a sort), and then we will find that the same evidence about fine-tuning leads us to different beliefs about the plausibility of God existing. Basically, the evidence makes almost no difference; we are all just restating our prior beliefs.
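As a minimal sketch of that dynamic (all numbers here are hypothetical, chosen only to illustrate the mechanics of Bayes’ rule in odds form):

```python
# Sketch: the same evidence filtered through different priors.
# All numbers are hypothetical; they only illustrate the mechanics of Bayes' rule.

def posterior(prior_h: float, likelihood_ratio: float) -> float:
    """P(H | E) via the odds form of Bayes' rule,
    where likelihood_ratio = P(E | H) / P(E | not H)."""
    prior_odds = prior_h / (1.0 - prior_h)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

evidence_strength = 10.0  # hypothetical: evidence judged 10x more likely under H

theist_prior = 0.99   # already strongly believes H
atheist_prior = 0.01  # already strongly doubts H

print(posterior(theist_prior, evidence_strength))   # ~0.999 -- still convinced
print(posterior(atheist_prior, evidence_strength))  # ~0.092 -- still unconvinced
```

Both updates are perfectly rational and point in the same direction, yet each posterior mostly echoes the prior it started from.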
Belief theory tells us that both sides are technically valid and rational in using probability to explain their reasoning, even though this is a one-off event. There is no way to adjudicate who has the right priors, though. We can choose them however we like.
This is really the reason fine-tuning fails as an argument. Both the theist and the atheist can rationally consider the evidence and come to opposite conclusions regarding the origins of the big bang. Any probability we compute is dependent on the prior, but there is no systematic way of assessing or setting priors.* So we cannot really use probability/belief/Bayesian reasoning to adjudicate who is more “right” here.
*There is an interesting aside about why the Maximum Entropy (MaxEnt) priors commonly used in physics do not apply here. In questions like this there is no unique way to define the state space, and a state space can be chosen such that its MaxEnt prior is equivalent to any given non-MaxEnt prior. But I digress…
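To make that aside concrete (the parameterization here is purely illustrative): a prior that is flat over one choice of state space is not flat once the same space is re-parameterized, so “the” MaxEnt prior depends on which description of the states you picked.

```python
# Sketch: a "flat" (MaxEnt) prior is only flat relative to a chosen state space.
# Sample uniformly over a parameter p in (0, 1), then look at the same prior in
# log-odds coordinates; the implied density is no longer uniform.
import math
import random

random.seed(0)
samples_p = [random.random() for _ in range(100_000)]          # uniform over p
samples_logodds = [math.log(p / (1 - p)) for p in samples_p]   # same prior, new coordinates

# Crude histogram over log-odds in [-4, 4): mass piles up near 0 instead of staying flat.
bins = [0] * 8
for x in samples_logodds:
    if -4 <= x < 4:
        bins[int(x + 4)] += 1
print(bins)  # far from flat -> "MaxEnt" depends on which space you declared uniform
```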
Not usually. Priors are descriptive, not prescriptive. We can set them however we want.
This is almost, but not quite, accurate. The frequentist approach is derived without priors, as an idealization of repeated observational data.
Then, in an independent derivation, we can derive Bayesian inference, which includes this new concept of a “prior”. Now we discover an algebraic quirk: using MaxEnt priors in Bayesian inference reduces to frequentist math. So it turns out that the math of frequentism is a special case of Bayesian inference.
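To illustrate that quirk with a standard textbook case (a coin-flip example of my own, not anything from the discussion above): with a flat Beta(1,1) prior on a binomial success rate, the Bayesian posterior mode is numerically identical to the frequentist maximum-likelihood estimate.

```python
# Sketch: with a flat (MaxEnt) Beta(1,1) prior on a binomial success rate,
# the Bayesian MAP estimate collapses to the frequentist MLE.
heads, flips = 7, 10

# Frequentist MLE for a binomial proportion.
mle = heads / flips

# Bayesian: Beta(1,1) prior -> Beta(1 + heads, 1 + flips - heads) posterior.
alpha, beta = 1 + heads, 1 + (flips - heads)
map_estimate = (alpha - 1) / (alpha + beta - 2)  # posterior mode

print(mle, map_estimate)  # both 0.7 -- same arithmetic, different interpretation
```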
But this observation does not mean that Bayesian inference reduces to frequentism. They are derived from different starting points, and the Bayesian derivation does not require repeated observations. The math is the same, but the meaning is different. Remember, frequentism does not actually include a concept of priors. One of the values of Bayesian statistics is that it clarifies the implicit prior in frequentist math. Without the Bayesian framework (or an equivalent), we might not have realized frequentism was assuming a MaxEnt prior.
And as I noted, MaxEnt is poorly defined in many domains (like qualitative hypotheses). There is no objective way of defining a state space over hypotheses. Saying that we will adopt the MaxEnt prior just pushes the prior-selection problem back to defining the state space. So even trying to use MaxEnt really does not solve anything.
Even then, MaxEnt makes no sense as a starting point in most domains. The only place where we can show it is justified is in well-defined physical systems that admit well-defined states, where we are computing entropy or statistical distributions (as in a MaxEnt distribution given specific constraints).
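For contrast, here is that well-behaved case in toy form (the energy levels and constraint value are made up, and I lean on the known fact that the constrained MaxEnt solution takes the exponential form p_i ∝ exp(−λE_i), solving only for the multiplier):

```python
# Sketch: MaxEnt over a well-defined discrete state space with a mean-energy constraint.
# The solution has the Boltzmann-like form p_i ∝ exp(-lam * E_i); we solve for lam
# by bisection so that the constraint <E> = target_mean is satisfied.
import math

energies = [0.0, 1.0, 2.0, 3.0]   # toy, well-defined states
target_mean = 1.2                  # the constraint on the average energy

def mean_energy(lam: float) -> float:
    weights = [math.exp(-lam * e) for e in energies]
    z = sum(weights)
    return sum(e * w for e, w in zip(energies, weights)) / z

lo, hi = -10.0, 10.0               # mean_energy is decreasing in lam on this bracket
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_energy(mid) > target_mean:
        lo = mid                   # mean too high -> need a larger multiplier
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(-lam * e) for e in energies]
z = sum(weights)
print([round(w / z, 3) for w in weights])  # the MaxEnt distribution meeting the constraint
```

When the state space is this clean, “maximum entropy” is unambiguous; the earlier cases fail precisely because nothing plays the role of these well-defined states.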