Are predictions an essential part of science?

Theoretical physicist Sabine Hossenfelder recently wrote that “predictions are over-rated” and that one should instead judge the merits of scientific models “by how much data they have been able to describe well, and how many assumptions were needed for this”, finishing with the suggestion that “the world would be a better place if scientists talked less about predictions and more about explanatory power”.

Others disagreed, including philosopher of science Massimo Pigliucci, who insists that “it’s the combination of explanatory power and the power of making novel, ideally unexpected, and empirically verifiable predictions” that decides whether a scientific theory is a good one. Neither predictions nor explanatory power, he adds, is sufficient alone, and “both are necessary” for a good scientific theory.

Popper’s criterion of falsification (the possibility that a theory’s predictions can be empirically refuted) is, of course, one of the few elements of the philosophy of science to have attained widespread awareness. But is Popper’s maxim descriptive or prescriptive? If it merely describes what scientists tend to do, then, if scientists came to feel that predictions were not so crucial, would their enterprise still be “science”? Or, if the maxim is prescriptive, such that falsification is a necessary part of the scientific method, then by what authority is that prescription established?

To answer that, let’s turn to Feynman’s well-known summation of science:

The first principle is that you must not fool yourself — and you are the easiest person to fool.

Scientists do their best to construct a true model of the world. Pseudoscientists have fooled themselves into a false understanding of the world. And it’s easy to do that, because, given a set of facts, anyone can construct a story that weaves those facts into the overall picture that the story-teller wants to be true. They may need any number of ad hoc explanations, but humans are good at those. It’s easy to sustain the belief that, say, God always answers prayer, if you allow yourself wide flexibility about what the answer looks like, including “God said no this time”.

So both Hossenfelder and Pigliucci are right: a good scientific theory needs to explain the maximum amount of data with the minimum number of ad hoc features. But predictions really are the acid test. If you are making predictions about things where you don’t already know the answer, then your ad hoc explanations will prove useless and sterile. Only an accurate theory — one that correctly describes aspects of the world — will be able to reliably generate predictions that turn out true. This is why making predictions about things you don’t know, and then attempting to verify them, is the gold-standard method that all scientists should adopt when they can, to check whether they have fooled themselves. The telling give-away of the pseudoscientist is their reluctance to submit to that test.

But making predictions that can then be verified may not always be possible. Yes, the scientific method asks that this be done wherever it can be, but what if we’re talking about topics where the time or the energy required is beyond our reach? Need we conclude that such topics are beyond the domain of science?

Let’s take a concrete example. Given the finite age of the universe and the finite speed of light (and hence the finite speed at which information can travel from one region of space to another), we can never obtain information about the universe beyond an “observable horizon”, and thus can never empirically verify statements about such a region. Does that mean that any statement about such regions is outside science (perhaps being metaphysical or pseudoscientific)?

A further example concerns the physics of the Planck scale, which is relevant to finding a unified description of the forces of nature and to understanding the origins of the Big Bang, yet which is orders of magnitude beyond our ability to recreate in particle accelerators. Is any such quest pseudoscience? And if it is, how far beyond our current ability to test is it permissible for scientists to construct theories? It would be weird to suggest that a factor of 2 is acceptable, but a factor of 10 is not.

I prefer to fall back on Feynman’s dictum. If we’re discussing topics beyond our ability to test predictions then we need to be extra careful not to fool ourselves, since we won’t have the gold-standard test. But then in science we have to make do with the tools we have; neither cosmologists nor paleontologists can replicate findings in laboratory experiments, yet those fields are still sciences, despite what school textbooks might say about “the scientific method” and the requirement to replicate experiments.

We should see verifying predictions as similar to replicating experiments under laboratory conditions: yes, scientists should do these where they can, as the best ways of minimising self-fooling, but in the end science is pragmatic and often has to make do. Thus there is nothing unscientific about trying to understand aspects of the universe that we cannot directly test; one just has to be extra cautious in the claims one makes. A statement such as “As far as we know, the universe continues much the same beyond the observable horizon” would seem to be a reasonable feature of a cosmological model, even though it is beyond empirical testing.

So, overall, predictions are not over-rated, but nor are they strictly essential. I think this lands me roughly midway between Hossenfelder and Pigliucci.

6 thoughts on “Are predictions an essential part of science?”

  1. Brent Meeker

    A fair statement. Besides explanatory power and predictive power, the other factor is consilience: roughly speaking, consistency with other well-supported theories.

  2. YF

    One can even make predictions in cosmology, paleontology, and forensics, by discovering clues to events that happened in the past. But the difference between prediction and accommodation in evaluating the success of a theory is a bit murky. A theory can make successful predictions about facts that are already known or facts that are still unknown and derive much the same evidential boost.

    The difference between theories that ‘merely’ accommodate the existing data (e.g., fitting a set of known data points) versus those that make successful novel predictions (extrapolating to new data) is that the former tend to be more complex than the latter. To the extent that simplicity is a theoretical virtue (perhaps embodied in the priors), a theory which successfully predicts new data will be superior to one which ‘merely’ accommodates the existing data or that makes only vague predictions. Bayesian model selection provides a nice framework for considering these issues.
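
    To make that last point concrete, here is a minimal, purely illustrative sketch (the data and the two candidate models are hypothetical, and it assumes nothing beyond standard numpy). It uses BIC, a crude stand-in for full Bayesian model selection, to show how a complexity penalty can favour a parsimonious model even though a more flexible model accommodates the known data more closely:

        import numpy as np
        from numpy.polynomial import Polynomial

        rng = np.random.default_rng(0)

        # Hypothetical "known facts": noisy observations of an underlying linear law.
        x = np.linspace(0.0, 1.0, 20)
        y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

        def fit_and_bic(degree):
            # Least-squares polynomial fit, plus BIC under a Gaussian-noise assumption.
            p = Polynomial.fit(x, y, degree)      # fits on a rescaled domain for stability
            rss = float(np.sum((y - p(x)) ** 2))  # residual sum of squares
            n, k = x.size, degree + 1             # data points, free parameters
            bic = n * np.log(rss / n) + k * np.log(n)
            return rss, bic

        for degree in (1, 9):
            rss, bic = fit_and_bic(degree)
            print(f"degree {degree}: RSS = {rss:.4f}, BIC = {bic:.1f}")

        # Typically the degree-9 fit has the smaller RSS (it "accommodates" the known
        # points more closely), but the degree-1 fit has the lower (better) BIC: the
        # complexity penalty outweighs the improved fit, so the simpler model wins,
        # and it also extrapolates better to new data.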

    1. Coel Post author

      “But the difference between prediction and accommodation in evaluating the success of a theory is a bit murky.”

      I think there are two distinct ideas here. First, as you say, explaining a lot of known facts with a parsimonious model is indeed a big plus for the model. Second, explaining facts not then known by whoever constructed the model is an even bigger plus, since it rules out the model having been ad-hoc rigged to explain those facts.

  3. YF

    Yes, and an ad-hoc rigged theory (one which accommodates the facts) will generally be more complex (less parsimonious) than a theory that makes the prediction ahead of time, since the former will need to explicitly include the predicted facts in the statement of the theory itself.
