Confronting the “Fine Tuning” Argument

This weekend I finally listened to Don Bradley’s “Pillars of My Faith” talk at Sunstone. His story is remarkable. Bradley is a Mormon historian who lost his faith decades ago. He forged a path through agnosticism to atheism and back to belief via the Baha’i faith, returned to Christianity via mainline Protestantism, and finally rediscovered his faith in Mormonism. His forthcoming book is an investigation into the 116 lost manuscript pages of the Book of Mormon.

According to Bradley’s talk, a crucial point in his reconversion was an encounter with James Gardner’s Biocosm. The book addresses the fact that the physical constants of the universe appear improbably “fine tuned” for life, a fact usually cited as evidence for God. Gardner attempts to rebut the fine-tuning argument by offering a naturalistic one — an argument so unconvincing that it shook Bradley’s faith, as it were, in atheism:

Biocosm shattered my atheistic illusions. I’d thought the chance of a universe fitted for life was something like one in a billion. The reality was more like one in 10 to the 200th power…

I now anticipated Gardner’s answer to the cosmos-sized problem he had opened for me, and here it was: The constants of the universe were shaped by our distant descendants, who engineered the collapsing universe to restart.

Somewhere along the line, he lost me… He thinks this is more likely than God?

Full disclosure: I’m an atheist, and Bradley’s talk hasn’t persuaded me to reconsider my beliefs. But neither am I interested in quibbling with his path. Belief is complex and personal, and I’m under no illusions that a coherent rebuttal to the fine-tuning argument would convince him or anyone else that there is no God.

But I am interested in the fine-tuning argument itself. It touches on a few areas that I know a little bit about, and I think it has a relatively straightforward solution that I haven’t seen articulated anywhere else.

The Argument

The fine-tuning argument begins with the Standard Model, the current synthesis of particle physics. The Standard Model contains some 25 free constants, which are set experimentally rather than derived from theory. Several of these constants are remarkably well-suited for the emergence of life. For example, if the strong nuclear force — which binds protons and neutrons together in an atom’s nucleus — were only 2 percent stronger, hydrogen atoms would fuse naturally into helium, precluding the existence of stars. Similarly, if the strong force were only 5 percent weaker, no helium would form. Similar facts hold for other constants: the electromagnetic force, the charge of the electron, and so on. Small perturbations of these constants would render a universe inhospitable to life as we know it.

Advocates of the fine-tuning argument hold that the odds that the constants would fall in such a narrow range are vanishingly small. Thus, the universe appears to be designed; what better designer than the deity of your choice? 
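The scale of the claimed improbability can be sketched with a toy calculation. Suppose — purely for illustration; these window widths are invented, not measured values — that each of the 25 free constants must independently land in a life-permitting slice covering one part in 10^8 of its plausible range. The joint probability then collapses geometrically:

```python
# Toy illustration of the fine-tuning probability argument.
# The window width is an invented assumption, NOT a measured physical value.
n_constants = 25          # free parameters in the Standard Model
window_fraction = 1e-8    # assumed life-permitting slice of each constant's range

# If each constant is an independent uniform draw, the chance that
# all of them land in their windows is the product of the fractions.
joint_probability = window_fraction ** n_constants
print(f"Joint probability: {joint_probability:.0e}")  # → 1e-200
```

The result matches the “one in 10 to the 200th power” figure Bradley cites, but note what the calculation smuggles in: it treats the constants as independent random draws over a known range — precisely the assumption questioned below.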

However, several naturalistic, disbelief-friendly explanations have been advanced:

  • The Anthropic Principle: If the universe weren’t tuned for life, we wouldn’t be here. Thus, given that we exist, it follows that the universe must be one fine-tuned for conscious life.
  • The Multiverse: There is an infinitude of “parallel” universes, each with potentially different constants. It’s therefore inevitable that there exist universes with the proper constants, and by the anthropic principle we live in one such universe. 
  • The Biocosm: Gardner argues for a stronger version of the anthropic principle. In a sort of Darwinian argument, he asserts that “the destiny of highly evolved intelligence (perhaps our distant progeny) is to… accomplish the ultimate feat of cosmic reproduction by spawning one or more ‘baby universes,’ which will themselves be endowed with life generating properties.” Universes amenable to intelligent life “reproduce,” in other words, and thus thrive in a cosmic fitness landscape.

I confess that I don’t find any of these explanations compelling. The anthropic principle is trivially true, but it doesn’t explain anything. I side with physicist Paul Davies, who argues that the multiverse is unfalsifiable woo-woo. And I’m with Bradley on the biocosm; it’s no more plausible than God — or, for that matter, Cthulhu or the Easter Bunny.

What, then, are we to do about the finely tuned universe? I think the answer lies in what we expect out of science. We expect it to be more “objective” than human endeavors have any business being.

A Pragmatic Objection

The kernel of my explanation is simple: Fine-tuning is only a “problem” if you assume there is something inherently true about the Standard Model. Those arguing for theism assume the Standard Model parameters are random, and invoke a God who tweaks them in order to overcome the improbability of life. Those arguing for the multiverse assume that each parallel universe has different Standard Model parameters. In either case, they suppose that the only possible universes are Standard Model universes.

I just don’t see any reason to believe that’s the case.

First of all, the Standard Model is known to be incomplete. If you’ve ever talked to a physics enthusiast, you’ve probably heard that quantum mechanics and general relativity are mutually contradictory; as a result, the Standard Model lacks a description of gravity. Theoretical physicists are busy constructing models — such as string theory or loop quantum gravity — that bridge the gap and unify physics. For all we know, the seemingly arbitrary constants of the Standard Model emerge naturally from a more fundamental “Theory of Everything”.

Sure, you might object, some of the details of the Standard Model might get revised. But fine-tuning involves concepts as simple as the charge of an electron. Surely our basic understanding of the electron isn’t likely to change? 

I’m not willing — or qualified — to make predictions about what bits of science will and won’t eventually be revised. But I do assert that we tend to underestimate the degree to which our scientific understanding is faulty. Philosophers of science use the overly-fancy term pessimistic meta-induction to denote a simple idea: Throughout human history, smart people have come up with smart ideas that have successfully accounted for the data at hand. Invariably, those smart ideas turn out to be flawed in a fundamental way.

In other words, history teaches us that our ideas are fundamentally human and thus fundamentally flawed. We should expect that today’s theories — no matter how intuitive or even successful they are — are wrong in important ways. To expect otherwise is to engage in rather flagrant recentism.

Taking my pragmatism one step further, is it even appropriate to suppose that scientific models should provide an intrinsic description of reality? A large chunk of philosophers say “no.” They argue for scientific instrumentalism, which is the idea that scientific theories should be viewed merely as tools for making predictions. Instead of worrying about whether theoretical entities “really” exist, it’s enough to worry about whether the resulting predictions agree with observation. A good theory might tell us something about the underlying structure of reality, but it might just as easily not. Since there’s no way to tell, why worry about it?

Given a pragmatic view of science, fine-tuning ceases to be a problem. Scientific models are remarkable, but human, efforts at reverse-engineering the universe. We can’t expect them to reflect objective reality. But fine-tuning requires that modern, late 20th-century physics say something sufficiently final that we should draw conclusions from it about unseeable beings or undetectable universes. It’s a strange position for theists, who tend to place objective truth with God, not man. It’s an even stranger position for scientists, who ought to be keenly aware of their fallibility.