A question about the fine-tuning argument

I recently came across a counter-argument to the fine-tuning argument in a Crash Course video. To quote: “you can’t make a probability claim when you only have a sample set of one”. What can we make of this?

There is truth to that, although only if you don’t have a model to base your probability claims on. Indeed, we don’t have an established model for determining the probability of the physical constants of our particular universe. I’ve commented on this before, e.g.:

Also, I wrote a blog post on BioLogos about it:

It’s true that a sample size of 1 does technically ruin one’s attempted appeal to statistics. I think there is a lot more to say about this, though.

I don’t recall having seen the specific Crash Course video you may have heard this in, but it is easy to imagine they may have been taking aim at fine-tuning appeals by perceived religious enthusiasts seeking to enlist mathematics or statistics as an apologetic tool. What is interesting to me, however, is the lengths to which some have gone to promote a multiverse theory in order to answer this very thing! I.e. I think the soft ID enthusiasts can take it as a tacit nod in their direction that some have felt such a burning need to speculate into existence an infinity of universes … all because … “surely this one couldn’t be just as it is on chance alone!” So while the critic you quote is technically correct, and if they intend this as some sort of criticism against fine-tuning enthusiasts … my response would be: “If you want others to stop making so much of how this sample size of 1 has turned out, then maybe you should stop making so much of it yourself with desperate multiverse speculations!” Even if a multiverse did exist, all that would do is just kick this can up the road one more level (like when we discovered there were other galaxies). There would still be something like one “super reality” containing all the universes and a new set of confounding questions about it.

I suggest that the truly higher road to take, though, is (in the spirit of Lemaître and many others) to stop feeling such a need for apologetic trappings and to just celebrate and care for the one sample we are given --as awesome, dangerous, and terrible as it is all at the same time.

3 Likes

The issue of Fine Tuning is more about the aesthetic interpretation of such narrow tolerances. Some people find it amazing… others less so.

If life could only exist on planets within “plus or minus 2.5 units of whatever,” then obviously all those planets outside that range (above +2.5 or below -2.5) will have no life on them.

And if there are multiple instances of life between plus and minus 2.5 … obviously those planets that are “just within the range” [at plus or minus 2.4] are going to think how amazing it is that the universe seems to have been closely designed for their comfort…
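To make that selection effect concrete, here is a minimal toy simulation (my own sketch, not from the posts above; the ±2.5 tolerance and a ±10 spread are just numbers borrowed from the analogy): every “planet” draws a random parameter value, only those inside the tolerance host observers, and so every observer who checks necessarily finds a value inside the “special” range.

```python
import random

def toy_observer_selection(n_planets=1_000_000, tolerance=2.5, spread=10.0):
    """Draw a 'parameter' for each planet uniformly from [-spread, spread];
    only planets with |value| <= tolerance get observers. Returns the fraction
    of planets with observers and the range of values those observers measure."""
    observed_values = []
    for _ in range(n_planets):
        value = random.uniform(-spread, spread)
        if abs(value) <= tolerance:  # only these planets host anyone to do the measuring
            observed_values.append(value)
    return len(observed_values) / n_planets, (min(observed_values), max(observed_values))

fraction, seen = toy_observer_selection()
print(f"planets with observers: {fraction:.2%}")
print(f"values any observer ever measures: {seen[0]:.2f} to {seen[1]:.2f}")
```

Whether that analogy carries over from a large population of planets to a single universe is, of course, exactly what the rest of this thread goes on to dispute.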

Thank you for all the insights @Casper_Hesp, @Mervin_Bitikofer, @gbrooks9! I was exploring philosophy of religion when I stumbled upon Hank Green’s philosophy series on Crash Course. I am still searching for a good resource on this topic. Would you like to recommend any? I would appreciate it! Thanks.

Right, but we have billions of planets. Some of the fine-tuning arguments are at the universe level, where life not just as we know it on our planet but as we can conceive of it at all seems to require certain tolerances at the universe level (for atoms to even form molecules, for example), and we only have one universe, as far as we know. (Thus the motivations for either multiverse or design - in fact, if your 2.5/2.4 argument works at the planet level, it only works at the universe level if there are also billions of universes.) Planet tolerances are just another level on top of that, especially as long as we don’t find life elsewhere.

Others here will be better equipped to point you to high-caliber (or just good) sources to learn more about philosophy and the philosophy of science. But I will say that Hank Green (as much as I and my students appreciate and continue to learn from his popular YouTube videos) may tend to shoot from the hip sometimes. His science stuff is solidly good so far as I have seen, but as he edges into that territory of history and philosophy, I’m guessing that he may be a little less level-headed than his brother John Green. But my criticism in this is only fueled by one specific video of Hank’s that I encountered about the gas law. In it he really seems to have it out for Robert Boyle as one who contrived to deprive Power of any credit for his work, and to secure that credit for Towneley and mostly himself instead. I did notice, in revisiting that video now after several years, that Hank seems to have cleaned up and redone some of this. But he is still determined to get Boyle thrown under the bus. And he [Hank] mentions his own added Wikipedia paragraph under Henry Power (see the section about Robert Boyle) where he claims to have set the record straight. I would be most curious what @TedDavis thinks of that particular Wikipedia paragraph. My sense from having read Dr. Davis’ articles on Boyle here is that he probably wasn’t the conniving aristocratic snob that Hank would have us believe. Between Green and Davis --I’m betting on Davis here. But I am still curious to hear again whether there is any truth in Hank’s renewed allegation.

Added edit: I am compelled to note that I’ve not listened to Hank Green’s recent philosophy series, so I shouldn’t be prejudging him above. Maybe he does a great job with that. If it’s anything like most of his SciShow episodes, he will do a good job getting concepts out of lofty towers and down into popular discourse.

Cosmologists nowadays are able to run the relevant modelling software on supercomputers. This lets them simulate galaxy formation accurately and adjust the values of constants in these programs to see what happens.

That puts their hypothesizing about counterfactual “universes” on much firmer footing:

See:

A universe made for me? Physics, fine-tuning and life

Geraint F. Lewis’ day job involves creating synthetic universes on supercomputers. They can be overwhelmingly bizarre, unstable places. The question that compels him is: how did our universe come to be so perfectly tuned for stability and life?

As a cosmologist, I can use these immutable laws of physics to evolve synthetic universes on supercomputers, watching matter flow in the clutches of gravity, pooling into galaxies, and forming stars. Simulations such as these allow me to test ideas about the universe – particularly to try to understand the mystery of dark energy (more on this later).

Examining the huge number of potential universes, each with their own unique laws of physics, leads to a startling conclusion: most of the universes that result from fiddling with the fundamental constants would lack physical properties needed to support complex life.

And:

A More Finely Tuned Universe

Could life as we know it have developed if fundamental physics constants were different?

For all the progress physicists have made in figuring out the universe, they still don’t know some pretty basic things. Why, for example, do fundamental particles possess the specific values of mass that they have? Presently, physicists have no explanation for this and similar questions.

They do know something pretty significant, however. If the masses of particles or the values of fundamental constants were much different from what physicists have measured, carbon-based intelligent beings might not be here to measure them, because fundamental particles might not assemble into stable atoms, atoms might not form rocky planets and dying stars might not produce the chemical elements we find in our bodies.

These observations have led some physicists to describe the universe as “fine-tuned” for carbon-based life. Imagine the universe is like a machine with dials used to set the properties of each important piece – from the masses of the constituents of protons and neutrons to the rate of expansion of the universe. If many combinations of dial settings yield conditions in which complex life can evolve, physicists would say the universe is not fine-tuned. But if some of the dials have to be set very precisely to values that are not readily explained by theory, physicists would consider these parameters to be fine-tuned.

Physicists have recognized for decades that certain parameters do seem to be fine-tuned. The most fine-tuned of these parameters seems to be the cosmological constant, a concept that Albert Einstein proposed to provide an outward-pushing pressure that he thought was needed to prevent gravity from causing the universe’s matter to collapse onto itself.

For the parameters that describe forces inside the atom, physicists have few hints at how fine the tuning is. In other words, how many different dial settings would create a universe that supports life as we know it?

To try to answer such questions, nuclear physicist Ulf Meissner of the University of Bonn in Germany and colleagues ran complex computer simulations at the Juelich Supercomputing Center, home of the largest supercomputer in Europe. In their simulations, the scientists created a simplified model universe that included specific values for the masses of particles and the way they interact. The simulations were based on the Standard Model, physicists’ main theory of fundamental particles and the strong, weak and electromagnetic forces. (The other fundamental force, gravity, is described by the general theory of relativity.)

The recent development of extremely powerful computers that can crunch through a thousand trillion calculations per second has now made this possible, said Meissner. With these computers, he said, “We can explore worlds where the constants have different values.”

Meissner and his colleagues ran their simulations while varying two constants. One was the average of the masses of the up and down quarks. These fundamental particles make up protons and neutrons, which in turn make up people and the universe we see. (The quarks in protons and neutrons are held together by what is called the strong nuclear force.)

The scientists also varied the fine structure constant, which accounts for the strength of the electromagnetic force between charged particles. The strong force must overcome the electromagnetic force to bind protons and neutrons into stable nuclei that make up the familiar chemical elements: helium, carbon, oxygen and all the rest…

Meissner acknowledges that the research does not answer why the values are what they are. To explain this, some physicists invoke a concept called the “multiverse,” in which “parallel” universes with many different possible values of the constants exist, and we, unsurprisingly, find ourselves in one in which complex life can evolve…

“This paper strengthens the case for the fine-tuning of the universe,” agrees Luke Barnes, an astrophysicist at the University of Sydney. Meissner’s team’s model of the carbon atom is more advanced than previous efforts, he said, especially because they can change the mass of the quarks.
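The real lattice simulations referenced above involve vastly more physics, but the bare logic of that kind of two-parameter scan can be sketched in a few lines (everything here is invented for illustration: the “window” and the carbon criterion are placeholders, not actual physics):

```python
import numpy as np

def carbon_proxy_ok(quark_mass_shift, alpha_shift, window=0.05):
    """Toy stand-in for 'does this counterfactual universe still make carbon?'
    It only asks whether both fractional shifts stay inside an invented window;
    real lattice simulations answer the question with far more physics."""
    return abs(quark_mass_shift) < window and abs(alpha_shift) < window

# Sweep fractional shifts in the average light-quark mass and in the
# fine-structure constant, and count how many combinations still "work".
shifts = np.linspace(-0.5, 0.5, 101)
viable = sum(carbon_proxy_ok(dm, da) for dm in shifts for da in shifts)
print(f"viable combinations: {viable} of {shifts.size**2}")
```

The interesting scientific question, which only the real simulations can answer, is how narrow that viable region actually is once genuine nuclear physics is put in.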

3 Likes

You make a very germane point there, Mervin.

Scientific theories “live or die based on internal consistency and, one hopes, eventual laboratory testing”, to quote the prominent cosmologist George Ellis. In contrast, religious truths that derive from a purported divine intervention outside the known laws of physics cannot in principle be subjected to testing or found to have predictive power.

But what happens when you have an idea or theoretical framework that is elegant and exhibits all kinds of beautiful mathematics… but has no possibility of ever being “observed”, making testable predictions or, for that matter, being falsified?

The math works out, the framework is elegant and it may fill a gap in the Standard Model… but the catch is that you may never be able to test your beautiful, “explanatory” idea against actual observable physical reality.

The history of science is brimming with unfortunate examples of mathematically workable and logically elegant “ideas” that turned out to be dead wrong when empirically tested. Fred Hoyle’s “steady state theory” of the universe, which he and his colleagues formulated as an alternative to the Big Bang Theory, is a famous historical case in point. It was beautiful…and just plain wrong when its predictions failed to match up with our discovery of the Cosmic Microwave Background in the 1960s, which validated the Big Bang Theory.

Simply put, no matter how elegant or beautiful the idea or the maths involved, nature is our only guide - she doesn’t care what we think or prefer. Our only way to find out whether an idea is a scientific description of the world is to test it experimentally - or at least to demonstrate that it can produce testable consequences, if not directly testable observations.

If your idea can never hope to produce such testability and cannot be falsified, then is it a “scientific theory” like General Relativity, or does it consist essentially in “faith”?

The inflationary multiverse arising from a “string landscape” is certainly plausible and indeed compelling if viewed as a purely philosophical hypothesis. After all, it has explanatory power and makes good sense of the data - so there is a logical possibility of us living in an ever expanding megaverse of unlimited physical possibilities, which might explain why the cosmological constant has an unnaturally small, knife-edge value in our universe.

However, does it make any testable scientific predictions within the observable universe? Umm, no.

At present, the multiverse is fundamentally beyond the realm of empirical test, just like God, with no possibility of direct or indirect testability, predictive power or observation, which renders it inherently “unfalsifiable”.

On account of their particle horizons and the larger expansion rates in an inflationary multiverse, the “bubble” universes which comprise any hypothetical multiverse would be separated from each other by enormous space-like distances that preclude causal contact, making communication between them or observation impossible forever. Light could never traverse those distances, since inflation causes the universe(s) to expand at a rate exceeding the speed of light. Excepting the improbable circumstance of some kind of discrepancy being discovered in the cosmic microwave background and interpreted as evidence of a “bubble collision” between two universes… the idea is untestable.

Simply put, if your best response to a fine-tuning design argument premised on belief in the invisible, immaterial agency of a supreme being who exists outside the universe… is to posit the existence of something else outside our universe (a “multiverse”) which is equally invisible to our observation and equally unprovable as a result: then you are essentially giving ground to the theist notion that, as it stands, there is no naturalistic explanation for fine-tuning to be found within the observable universe (the only universe we know to exist and which we can study with scientific tools) and answers need to be sought in untestable metaphysical realities beyond it.

Which is…just…a bizarre position for a naturalist to rely on when rebutting a fine tuning design argument.

I would find it equally unpersuasive if one were to propose the fine-tuning design argument under the guise of it being a viable scientific answer (which it’s obviously not). Unfortunately, that is exactly the kind of move not infrequently made by those theorists who are most beholden to the inflationary-string-landscape-multiverse paradigm.

A notable exception is the leading theoretical physicist Nima Arkani-Hamed, who also happens to be a multiverse proponent (and an atheist, incidentally), yet has the requisite scientific acumen, discretion and prudential judgement (not to mention just plain honesty) to recognise that:

"…Asking if we’re part of a multiverse isn’t a theory but a caricature of what a future theory might look like.”

Compare his words with those of Professor Sean Carroll when he lectures on the inflationary-string-landscape-multiverse and it’s like night versus day. Carroll is an incredibly smart man, like Arkani-Hamed (both of whom are atheists), but unlike him Carroll does a disservice to his field when he vigorously supports the validity of “non-empirical confirmation”: the idea that because the multiverse allegedly has explanatory power and solves otherwise intractable fine-tuning problems relating to the vacuum energy and the hierarchy conundrum, it should therefore be accepted as a scientific theory even though it is incapable of making any testable predictions and is itself predicted by other frameworks, such as superstrings, that are likely to be untestable. That’s the “house of cards” approach to scientific inquiry, and by using that expression I mean “an argument built on a shaky foundation or one that will collapse if a necessary element is removed”.

One such “necessary element” has, unfortunately for the multiverse enthusiasts, been removed by the LHC (Large Hadron Collider) at CERN. This is SUSY or Supersymmetry:

“…Supersymmetry is one of the central concepts of string theory. Without supersymmetry, string theory is unable to describe the full range of particles observed in our universe. It can deal with photons, but not electrons. Supersymmetry bridges this divide. It predicts that all of the known particles possess supersymmetric partner particles, or superpartners…”

Despite running for years and colliding particles at the highest energies ever produced in a laboratory, the LHC has found absolutely no evidence for the existence of SUSY:

"…What No New Particles Means for Physics

By NATALIE WOLCHOVER

August 9, 2016

Physicists are confronting their “nightmare scenario.” What does the absence of new particles suggest about how nature works?

Physicists at the Large Hadron Collider (LHC) in Europe have explored the properties of nature at higher energies than ever before, and they have found something profound: nothing new.

It’s perhaps the one thing that no one predicted 30 years ago when the project was first conceived.

The machine’s collisions have so far conjured up no particles at all beyond those catalogued in the long-reigning but incomplete “Standard Model” of particle physics. In the collision debris, physicists have found no particles that could comprise dark matter, no siblings or cousins of the Higgs boson, no sign of extra dimensions, no leptoquarks — and above all, none of the desperately sought supersymmetry particles that would round out equations and satisfy “naturalness,” a deep principle about how the laws of nature ought to work…

Many particle theorists now acknowledge a long-looming possibility: that the mass of the Higgs boson is simply unnatural — its small value resulting from an accidental, fine-tuned cancellation in a cosmic game of tug-of-war — and that we observe such a peculiar property because our lives depend on it. In this scenario, there are many, many universes, each shaped by different chance combinations of effects. Out of all these universes, only the ones with accidentally lightweight Higgs bosons will allow atoms to form and thus give rise to living beings. But this “anthropic” argument is widely disliked for being seemingly untestable…"

The basic problem, nonetheless, is that there is currently no adequate scientific explanation for the unique parameters of our universe (or indeed for the question of why there is something rather than nothing in the first place).

The arguments in favour of naturalism and supernaturalism, respectively (multiverse or God), take us beyond the threshold of scientific inquiry and the observable universe, into the realm of speculative philosophy.

One cannot prove or disprove either hypothesis based upon the data. Both concepts explain the data and both are inherently unfalsifiable as far as science goes. They fall or rise based upon their philosophical and logical merits, or lack thereof.

And those merits are not cut-and-dried. It comes down to personal perspective, intuition, reasoning and opinion - inevitably influenced by one’s preconceived bias.

“God did it” is certainly no worse a hypothesis than “the multiverse did it”. Both are recognitions of the failure, so far, of naturalness arguments derived from first principles (critics would say an “excuse” for this failure and/or a way of giving up).

2 Likes

I’m going to get on my soap box again. “Fine-tuning” and probability should never be mentioned in the same context. They have nothing to do with each other. Fine-tuning is simply the observation that the habitability of the universe (or more precisely, the ability of the universe to manufacture heavy elements) appears to be highly sensitive to the values of the constants. End of story. No mention of probability. If the universe is fine-tuned, then it is fine-tuned whether the values of the constants are probable or not. Furthermore, if you insist on turning it into an apologetic, IDers (as is their custom) take exactly the wrong approach. They argue (with no scientific support) that the probability of the constants having their observed values is vanishingly small, ergo god. When in fact, if that probability really is vanishingly small, the more likely conclusion would be ergo multiverse. The best apologetic would be a fundamental theory that predicts the constants (unit probability) which, when coupled with the fine-tuning (sensitivity), would be the best possible prima facie evidence for design.
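A toy way to see the distinction being drawn here (my own illustration, not the poster’s; the window width and the assumed “prior” are made up): fine-tuning as sensitivity is a statement about how a habitability indicator responds to small changes in a constant, and it says nothing about likelihood until a probability distribution over that constant is added as a separate assumption.

```python
import numpy as np

def habitable(constant, width=1e-3):
    """Toy indicator: 1 inside a narrow window around the 'observed' value 1.0,
    0 outside. The width is invented purely for illustration."""
    return 1.0 if abs(constant - 1.0) < width else 0.0

# Sensitivity (the fine-tuning observation): a tiny nudge flips the outcome.
print(habitable(1.0), habitable(1.002))        # -> 1.0 0.0

# Probability is a separate, additional assumption about how the constant
# might have been distributed; change the assumed prior and the "improbability"
# changes, while the sensitivity statement above is untouched.
prior = np.random.uniform(0.0, 2.0, size=1_000_000)
print(np.mean([habitable(c) for c in prior]))  # ~0.001, entirely prior-dependent
```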

2 Likes

No, this is not an argument (about fine-tuning). It is the blasted Douglas Adams puddle argument. The fine-tuning of the universe has to do with the apparent sensitivity of the universe’s ability to produce heavy elements (anything beyond helium) to the values of the constants. If the universe does not produce such elements (if it can’t manufacture rocks, as it were) then there is no life of any kind, anywhere. Arguing about the habitability of planets is not really in the domain of fine-tuning – without the apparent cosmological fine-tuning related to the cosmological constant, the initial baryon density of the universe, etc., there would be no planets at all. The fine-tuning arguments and the “privileged planet” arguments are two altogether different things.

Thanks for your post, it is interesting!

Whenever someone has explained fine-tuning to me, it has been on the basis that it refers to physical constants (like the CC) which permit complexity (and therefore, by extension, life emerging from that complexity) only for an extremely narrow range of “theoretically possible” values, and yet these constants do, when observed, have values in that narrow range. It is technically what scientists call “unnatural” (i.e. not generic).

An example would be the lightness of the Higgs mass without supersymmetry (SUSY) to make it explicable in a natural way. In the absence of such a “natural” explanation, anthropic reasoning is appealed to in the context of a hypothetical but practically untestable multiverse (note that SUSY appears to be necessary for many variants of string theory to work, and the string landscape is important for formulating the multiverse idea in the first place).

It would seem, from your post, that I am not approaching this from the right angle?

I think what angle you approach it from is a personal choice. We are not really talking physics here, but metaphysics. My point is only that if you want fine-tuning to say something about god, then tying fine-tuning to the extreme improbability of the constants–which is what such apologists do–is as wrong an approach as I can imagine. It is the multiverse that “predicts” exceedingly improbable constants for each of its universes.

Thanks for the reply!

I don’t think it’s a case of what one “wants” fine-tuning to say, and I’m certainly no apologist. Rather, I think we should accept fine-tuning for what it appears to be indicating about nature, regardless of our personal beliefs, and then work from there.

It’s like the probability wavefunction in Quantum Mechanics. Most would rather that science backed up realism in the microcosm but the evidence (i.e. violations of Bell inequalities) is strongly indicative of indeterminacy (non-realism). And that’s just it.

For the CC (as an example), the existence of complexity of any form would seem on the face of it to require a cancellation between contributions to the vacuum energy accurate to 120 decimal places. Its observed value is knife-edge tiny. If that isn’t “narrow”… then I’m not sure what is.
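For a rough sense of where that “120 decimal places” figure comes from, here are the standard order-of-magnitude numbers (my own gloss, not taken from this thread):

$$
\rho_\Lambda^{\mathrm{obs}} \sim 10^{-47}\,\mathrm{GeV}^4,
\qquad
\rho_\Lambda^{\mathrm{naive}} \sim M_{\mathrm{Pl}}^4 \sim \left(10^{19}\,\mathrm{GeV}\right)^4 = 10^{76}\,\mathrm{GeV}^4,
\qquad
\frac{\rho_\Lambda^{\mathrm{obs}}}{\rho_\Lambda^{\mathrm{naive}}} \sim 10^{-123},
$$

so the separate contributions to the vacuum energy would have to cancel to roughly 120 decimal places to leave the tiny residue we actually observe.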

If fine-tuning in the way I have described fits the “prediction” of the multiverse (it certainly does fit Weinberg’s prediction of the nonzero value of the CC in the 1980s on the basis of anthropic reasoning), then we need to deal with that, I reckon - whether it aids our personal beliefs or not.

The Cosmological Constant Problem appears unlikely to have any conventional “natural” explanation derived from first principles (as evidenced by how hugely off, by many orders of magnitude, such attempts by very smart minds have proven thus far).

So there does seem to be a problem here with the “narrow range”.

I’m not arguing that a Creator is the explanation for this problem but I don’t think the inflationary multiverse paradigm is in any way a good enough solution.

It is a plausible philosophical response to the fine-tuning, but I don’t personally find it persuasive. It produces no testable or reproducible scientific predictions or consequences (making it indistinguishable from the God hypothesis, imho); the very ideas which make it possible (eternal inflation and string theory) have significant drawbacks of their own (namely the apparent inability to find SUSY at the LHC and, again, the untestability of mathematically worked-out higher curled-up dimensions); it violates Occam’s Razor by trying to explain our own simple universe that we can see by appealing to an infinity of unproven universes that we cannot see; and finally, “inflation” (which is necessary for the multiverse hypothesis) results in new fine-tuning problems of its own, such that it doesn’t actually escape the very dilemma it was formulated to solve (although it does address the original flatness problem).

Is that really the “best game in town”? Looks more like clutching at straws. So I think Professor Steinhardt (one of the three original pioneers of the inflationary multiverse) was perfectly justified in referring to it as the “multimess theory of anything” earlier this year.

So to my mind, we do seem to have a situation in which our universe exhibits parameters with values that are “exceedingly improbable” (to use your words), but the best “theory” to account for this (the inflationary-string-landscape-multiverse) is scientifically untenable and rests on paper-thin foundations, taking us into the realm of speculative philosophy.

That’s just how I see it on a personal level.

I appreciate the confidence you place in me, Merv. Your instincts here are on target: I don’t think Boyle threw Power under the bus, and he never presented that idea as his own–he credited it explicitly to Towneley, a friend and associate of Henry Power. The wikipedia paragraph you linked is unexceptional. The conclusion it gives, namely that “Boyle’s promotion of the idea and his significant status as an aristocratic scientist, ensured the theory would be known as ‘Boyle’s Law,’” is reasonable, and taken from an article by Charles Webster, a distinguished historian who did briefly work on Boyle back when he was teaching HS science (I think). However, many casual readers might incorrectly interpret that sentence from Green as implying that Boyle himself planned to get the credit, when Webster was simply saying that a combination of factors–none of which lay any blame on Boyle–led others later to give Boyle the credit for an idea first broached by Power and/or Towneley.

You’re right, however, that the video continues to denigrate Boyle, accusing him of stealing Power’s work without acknowledgement. A few pages before the discussion of Towneley’s work, however, Boyle did in fact make reference to Power by name and published a verbatim account of one of Power’s related experiments. (I just looked at a photocopy of that very MS from Power, which was passed on to Boyle by someone else.) Another paper containing what he called Towneley’s hypothesis (leaving out Power’s name) doesn’t indicate that Power deserved any credit. Webster pointed that out in 1963. That paper, originating with Towneley, credits Towneley and 3 others, none of whom was Power. We can’t blame Boyle here.

IMO, then, (a) it’s not reasonable to claim that Boyle sought to exclude Power from appropriate credit and (b) it’s completely ridiculous to claim that Boyle wanted to take credit for the discovery himself. Green doesn’t say (b), but he still says (a).

1 Like

I think it is much fairer to say that we don’t know how many universes there are. There is as much evidence for multiple universes as there is for one.

Planets offer an interesting analogy. Our planet appears to be finely tuned for intelligent life, but given the number of solar systems and galaxies out there, it would seem probable that there would be at least one planet like Earth.

Therefore, I don’t see a problem with hypothesizing a multiverse as a reason why we see a universe that is finely tuned for life. The tough part is figuring out how to test it.
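The intuition in the planet case can be put in one line (my gloss, with made-up numbers): if each planet independently has some tiny probability $p$ of being Earth-like, then across $N$ planets

$$
P(\text{at least one Earth-like planet}) = 1 - (1 - p)^N,
$$

which approaches 1 for astronomically large $N$ even when $p$ is tiny (e.g. $p = 10^{-9}$ with $N = 10^{22}$ gives near certainty). The open question is whether the same move is legitimate at the level of universes, where $N$ is exactly what we cannot observe or test.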

The problem with your analogy (which is a popular one) is that, historically, no one to my knowledge theorized the existence of other planets on the basis of anthropic reasoning (i.e. distance from the sun, the Goldilocks zone and all that jazz).

We simply deduced, based on the evidence and through telescopes, that there were other planets, and then exoplanets much later on. No anthropic reasoning led to these discoveries. The causation here, the argument, appears to be “topsy-turvy”.

What’s happening today with the multiverse is quite different, indeed unprecedented in the history of science.

We are actually starting, not from a position of observations lending themselves to the existence of other “patches” or bubbles in spacetime (universes, to use the lingo), but from the assumption that this is the case, in order to explain our existence and the narrow range of the constants.

Our only “evidence” for a multiverse is the fact that we are actually here to have this conversation, as opposed to being an undifferentiated cloud of hydrogen and helium in a sterile universe.

It’s odd reasoning that, as far as I’m aware, has never been utilized in the history of science.

In terms of testability, I can think only of bubble collisions - but it’s exceedingly difficult (probably impossible) to prove that some anomaly in the cosmic microwave background is indicative of that. Moreover, our universe seems to be very isotropic - so it does not appear that our “bubble” ever did collide with another bubble.

And therein is part of the problem with the multiverse: if there’s no bubble collision, then it’s because we just don’t live in that type of multiverse. There’s always an answer because it can’t be disproved, or proved.

It’s as unfalsifiable as saying “God did it” (not surprising, therefore, that those most partial to the multiverse hypothesis tend to be exceedingly anti-Karl Popper. I wonder why).

Theoretical physics is not my forte, but I was under the impression that theories like M theory and string theory predict multiple universes, or at least allow for them.

I don’t think it is beyond the realm of explanation that one reason we find ourselves in a universe capable of supporting life is that there are many universes with at least one being able to support life. This is simply acknowledging the potential for confirmation bias.

As to unfalsifiability, I don’t think anyone is claiming that these theories are proven or well supported. They are in the very early hypothesis stage, and scientists are trying to figure out how to test them. What I don’t see is people pursuing ways of testing the hypothesis of “God did it”. I think a healthy way to approach the whole question is to come up with some ideas and try and figure out ways of testing them while acknowledging our ignorance where appropriate.

You are right, M-Theory provides for a so-called “string landscape” that (if we accept “eternal cosmic inflation”) could potentially lead to the different “bubble” universes having variable, accidentally generated constants (due to quantum fluctuations creating a pocket with a set of laws different from that of the surrounding space).

Without that, it’s my understanding that you’d just get an infinite multiverse of universes all having the same physical laws and physical constants - basically extensions of our own universe.

But the problem is that String Theory is not actually a “theory”. It’s a framework and consists solely of mathematics at present. It has produced no testable predictions in 40 years, despite 90% of papers seemingly being devoted to it.

Moreover, for most varieties of string theory (and there are more than one can count), supersymmetry is a necessity. But the LHC has found no trace of any new particles apart from the Higgs boson, with its unnaturally light “fine-tuned” mass. Nature seems to be telling us that the “higher curled-up dimensions” presupposed by string theorists likely don’t exist, and if that turns out to be the case, then neither would the “string landscape” that enables eternal inflation to result in the kind of multiverse that could solve the fine-tuning problems.

The mantra has been “we go for ever higher energies and we’ll find these superstrings, these supersymmetric particles, the higher dimensions”, but it just hasn’t paid off.

40 years is not the “very early hypothesis” stage in my book. Let’s just remember the standard we hold theories in science up to: the idea is not to say “I predict xyz” and get a rough estimate. No! It’s: “I predict this exactly, to ten decimal places…” You predict exactly what’s going to happen for a theory to be classed as scientifically tested. And General Relativity does: it predicts the motion of the planets to a spectacular degree of accuracy.

There is no experiment you can point to for String Theory and say, “Aha! String theorists predicted this number and we got this number”.

Furthermore, String Theory doesn’t seem to possess any predictive power, as Peter Woit notes:

“…The possible existence of, say, 10^500 consistent different vacuum states for superstring theory probably destroys the hope of using the theory to predict anything. If one picks among this large set just those states whose properties agree with present experimental observations, it is likely there still will be such a large number of these that one can get just about whatever value one wants for the results of any new observation…”