You are probably right, but it is a very important point to understand.

You were missing my point earlier.

The only reason this came up is because I was explaining why MaxEnt is a bad way to set priors. There are cases where MaxEnt is obviously wrong, and this is just one of them. MaxEnt is also mathematically irregular in many cases, which can lead to divide-by-zero problems before the model ever encounters data. And using MaxEnt as a prior in important practical situations (e.g. many parameters, little data) is virtually guaranteed to produce wrong results even after looking at the data.
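A toy illustration of the singularity point (my construction, not a claim from any particular text): for category probabilities, the posterior mode under a flat (MaxEnt) Dirichlet prior is just the raw frequency n_i / N, so an unseen category gets probability exactly 0, and the first odds ratio you compute divides by zero.

```python
# Toy sketch: flat (MaxEnt) Dirichlet prior over category probabilities.
# The posterior mode then equals the raw maximum-likelihood frequency,
# so an unseen category is declared literally impossible.

counts = {"a": 3, "b": 1, "c": 0}     # category "c" never observed
total = sum(counts.values())

# Posterior mode under the flat prior == maximum-likelihood estimate n_i / N.
map_est = {k: v / total for k, v in counts.items()}
print(map_est["c"])                    # 0.0 -- "c" declared impossible

try:
    odds_a_vs_c = map_est["a"] / map_est["c"]
except ZeroDivisionError:
    print("divide by zero: odds ratio against an unseen category")
```

A non-flat prior (see the pseudocount discussion below) removes the zero and the singularity with it.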

Also, I will point out that in the bold text you are using an idiosyncratic notion of repeatability (at least as far as the theory here is concerned). Using theory to predict what a new entity will do is NOT repetition. That is a theoretical construct by which you can arrive at a prior belief. In frequentism, there is no way to incorporate this, because there is no concept of a prior. Anywhere entities are not directly comparable, frequentism starts (implicitly and always) with a MaxEnt prior, and you are not allowed to transfer knowledge between related problems in this way. It insists, essentially, on math that assumes theory-free empiricism.

And to be clear, there are other ways (distinct from problem-specific theory) of setting priors. It almost never makes sense to use MaxEnt. Almost never. When doing any sort of serious modeling you usually must impose some sort of “regularization”, which is (in almost all cases) directly mappable to a Bayesian prior that encourages simplicity over complexity in the model.
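One concrete instance of that mapping (a sketch of my own, with made-up data): ridge regression's penalized least-squares solution is identical to the MAP estimate under a zero-mean Gaussian prior on the weights. The penalty strength lam just sets the prior's variance.

```python
import numpy as np

# Illustrative sketch: ridge regression == MAP under a Gaussian prior.
rng = np.random.default_rng(0)
n, p = 20, 5
X = rng.normal(size=(n, p))
true_w = rng.normal(size=p)
y = X @ true_w + 0.1 * rng.normal(size=n)

lam = 2.0  # ridge penalty strength

# Frequentist view: minimize ||y - Xw||^2 + lam * ||w||^2.
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian view: likelihood y ~ N(Xw, sigma2 * I),
# prior w ~ N(0, (sigma2 / lam) * I).  The posterior is Gaussian with:
sigma2 = 1.0
prior_var = sigma2 / lam
post_precision = X.T @ X / sigma2 + np.eye(p) / prior_var
w_map = np.linalg.solve(post_precision, X.T @ y / sigma2)

print(np.allclose(w_ridge, w_map))  # the two estimates coincide
```

The "simplicity over complexity" part is visible in the prior: shrinking prior_var pulls the weights toward zero, exactly as raising lam does.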

Have you heard of the Bayesian information criterion, ridge regression, sparse fitting, dropout, pseudocounts, or weight decay? These map to priors totally different from MaxEnt, and demonstrably better. In the frequentist approach, these “tricks” are just ad hoc tweaks to algorithms to handle boundary conditions, singularities, and overfitting.

A Bayesian approach gives a clear theoretical framework that (1) explains all these “ad hoc” fixes, (2) can explain why they work, (3) can predict when they are unnecessary, and (4) can demonstrate their failure cases. And all this is because it explicitly rejects the construct of “repeatability” and embraces the ambiguity of priors.

Regardless, outside these specific domains, in philosophical debates there is no way to systematically set priors. And priors can dramatically affect your results. Moreover, we know that human minds do NOT work by Bayesian inference. In fact, we are hardwired to think in a non-Bayesian way. So even if you could demonstrate the “correct” way to set the priors for this problem and the right way to process the evidence to come to a conclusion, that conclusion is *almost certainly* not going to be intuitive or convincing. Even experts in Bayesian inference are biologically hardwired to think differently than the formulas, and are likely to find the result non-intuitive.