Applying falsifiability in science

Falsifiability, as famously espoused by Karl Popper, is accepted as a key aspect of science. When a theory is being developed, however, it can be unclear how the theory might be tested, and theoretical science must be given license to pursue ideas that cannot be tested within our current technological capabilities. String theory is an example of this, though ultimately it cannot be accepted as a physical explanation without experimental support.

Further, experimental science is fallible, and thus we do not immediately reject a theory when it is contradicted by one experimental result; rather, the process involves an interplay between experiment and theory. As Arthur Eddington quipped: “No experiment should be believed until it has been confirmed by theory”.

Sean Carroll recently called for the concept of falsifiability to be “retired”, saying that:

The falsifiability criterion gestures toward something true and important about science, but it is a blunt instrument in a situation that calls for subtlety and precision.

Meanwhile, Leonard Susskind has remarked that:

Throughout my long experience as a scientist I have heard un-falsifiability hurled at so many important ideas that I am inclined to think that no idea can have great merit unless it has drawn this criticism.

Falsification remains important for science, but it needs to be interpreted correctly. The essential point — as indeed argued by Popper — is that science is about developing models, and falsification should be applied to models, not to isolated statements or individual entities. Thus, as I argued in my recent defence of the multiverse as a scientific hypothesis, it is not correct to say — simplistically — that since we can never observe distant regions of the multiverse the idea is unscientific. This post develops that claim further to clarify how falsifiability should be applied.

Take a model that predicts consequences A, B, C and D. Suppose we verify A, B and C, while having no good alternative explanation for those three, so that we then have confidence in the model. The evidence then points to D, and we should accept D, even if we have no direct verification of D and even if D itself is unfalsifiable.

Accepting D would be the scientific thing to do, and indeed we would be obliged to accept D (in the provisional sense in which all scientific findings are accepted) unless we had an alternative model which did an equally effective job of explaining and predicting A, B and C, but which did not entail D.
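
To make the logic concrete, here is a toy Bayesian sketch, with invented numbers purely for illustration, of how verifying A, B and C raises our confidence in the model, and hence in its untested consequence D:

```python
# Toy Bayesian illustration (invented numbers): a model M that entails A, B, C
# and D, versus a vague alternative. Verifying A, B and C drives the posterior
# probability of M, and hence of its untested consequence D, close to 1.
prior_M, prior_alt = 0.5, 0.5

# Probability each model assigns to observing A, B and C (assumed figures)
p_obs_given_M = 0.95 ** 3      # M predicts each of A, B, C with ~95% confidence
p_obs_given_alt = 0.20 ** 3    # the alternative makes each only ~20% likely

posterior_M = (prior_M * p_obs_given_M) / (
    prior_M * p_obs_given_M + prior_alt * p_obs_given_alt)

# If M entails D, then P(D) is at least the posterior probability of M
print(f"P(M | A,B,C) = {posterior_M:.3f}")    # ~0.99
print(f"P(D | A,B,C) >= {posterior_M:.3f}")
```

The exact figures do not matter; the point is that once A, B and C strongly favour the model over its rivals, the model's further consequence D inherits that confidence.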

Example 1:

Is it a scientific statement to claim: “Earth has a core consisting largely of iron and nickel”? We cannot go to the Earth’s core to check, we have never returned a sample of core material to a laboratory for analysis, and we have no feasible way of doing so. Naively, one could thus regard the statement as unfalsifiable and therefore unscientific.

However, we do have a large array of models about geology and planetary formation, about the sizes and densities of planets, and about the abundances of elements in the solar system and how they are distributed; we have seismological data and theories of how seismic waves propagate through the Earth, and some understanding of how planetary magnetic fields are generated. These models are well substantiated by a large body of evidence, and many of their predictions have been verified. This web of models, and the web of evidence supporting them, all point to the implication that Earth has a core consisting largely of iron and nickel, even though we cannot directly verify this claim by analysing a sample of the stuff in a lab. Further, there is no known alternative composition that is fully in line with the web of well-verified models.

The claim is thus a “D”, and we accept it as true owing to the overwhelming abundance of evidence pointing to it. Of course better data could, in principle, come along and show that we are mistaken, but that applies throughout science.
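
As a back-of-the-envelope illustration (rounded textbook values, not the full seismological argument), Earth's bulk density alone already demands a core far denser than silicate rock:

```python
# Back-of-the-envelope check that Earth's bulk density requires a dense core.
# Figures are rounded textbook values, used only for illustration.
import math

M_earth = 5.97e24        # kg
R_earth = 6.371e6        # m
R_core = 3.48e6          # m, approximate core radius (from seismology)

rho_mean = M_earth / (4 / 3 * math.pi * R_earth**3)
print(f"Mean density of Earth: {rho_mean:.0f} kg/m^3")   # ~5500 kg/m^3

# Typical silicate rock is roughly 2700-4500 kg/m^3 (denser at depth owing to
# compression), so silicates alone cannot account for the mean density.
# Assuming an average crust-plus-mantle density of ~4400 kg/m^3 (an assumed
# figure), the core must make up the rest of the mass:
rho_mantle = 4400        # kg/m^3, rough compressed-silicate average (assumption)
V_core = 4 / 3 * math.pi * R_core**3
V_rest = 4 / 3 * math.pi * R_earth**3 - V_core
rho_core = (M_earth - rho_mantle * V_rest) / V_core
print(f"Implied core density: {rho_core:.0f} kg/m^3")
# ~11000 kg/m^3, consistent with highly compressed iron-nickel
```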


Example 2:

We have excellent models of the movement of the planets in our Solar System and can predict solar eclipses very accurately and reliably. Every year, eclipse chasers find the predictions coming true. We can also run the models backwards to predict past eclipses, and some of these can be verified by historical records. But suppose we use the models to predict an eclipse occurring before human written records and before Palaeolithic art. There is no way that such an event could have left a trace in the world that is still extant and verifiable today, and thus the claim that such an eclipse occurred is not falsifiable.
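
As an illustrative sketch of the kind of calculation involved (using the Skyfield library, a JPL ephemeris, and one arbitrarily chosen recent eclipse), one can check a predicted eclipse by computing the Sun-Moon angular separation from a point on the eclipse path:

```python
# Illustrative sketch: checking a predicted solar eclipse by computing the
# Sun-Moon angular separation with the Skyfield library and a JPL ephemeris.
# Location and time correspond to the 8 April 2024 total eclipse as seen from
# Dallas, Texas (chosen arbitrarily for illustration).
from skyfield.api import load, wgs84

ts = load.timescale()
eph = load('de421.bsp')                 # JPL ephemeris covering roughly 1900-2050
sun, moon, earth = eph['sun'], eph['moon'], eph['earth']

dallas = earth + wgs84.latlon(32.78, -96.80)
t = ts.utc(2024, 4, 8, 18, 42)          # approximate time of totality in Dallas

sun_pos = dallas.at(t).observe(sun).apparent()
moon_pos = dallas.at(t).observe(moon).apparent()
separation = sun_pos.separation_from(moon_pos)

# During totality the separation is far smaller than the ~0.27 degree angular
# radius of the Sun, i.e. the Moon's disc covers the Sun's.
print(f"Sun-Moon separation: {separation.degrees:.3f} degrees")

# For eclipses millennia in the past, the same calculation needs a
# longer-baseline ephemeris (e.g. de422.bsp) and careful treatment of Earth's
# slowing rotation (Delta T), but the principle is identical.
```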

Is the statement that such an event occurred at that time and place then unscientific? I argue that it is entirely scientific: it is a “D”, and we accept it as true owing to it being entailed by well-verified models. We could only dissent from the claim if there were alternative models that were also in accord with all known information on planetary motion, but did not produce that eclipse.

Example 3:

A third example concerns the “observable horizon” of our universe, which exists owing to the finite distance that light can have travelled given its finite age. There is no way, even in principle, that we can obtain information from beyond the observable horizon, since the speed of light is a fundamental limit on the speed of information exchange.
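
For concreteness, here is a standard textbook estimate of the distance to that horizon (a rough sketch using approximate Planck-like parameter values). Integrating the expansion history gives a comoving distance of roughly 46 billion light-years:

```python
# Rough numerical estimate of the comoving distance to the observable horizon
# (the "particle horizon") in a flat LCDM universe. Parameter values are
# approximate Planck-like figures, used purely for illustration.
from scipy.integrate import quad
import numpy as np

c = 299_792.458          # speed of light, km/s
H0 = 67.7                # Hubble constant, km/s/Mpc
Om, Or, OL = 0.31, 9e-5, 0.69   # matter, radiation, dark-energy fractions

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat LCDM."""
    return np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

# Comoving particle horizon: d_p = (c/H0) * integral from 0 to infinity of dz/E(z)
integral, _ = quad(lambda z: 1.0 / E(z), 0, np.inf)
d_p_Mpc = (c / H0) * integral
d_p_Gly = d_p_Mpc * 3.2616e6 / 1e9       # 1 Mpc ~ 3.2616 million light-years
print(f"Particle horizon ~ {d_p_Mpc:.0f} Mpc ~ {d_p_Gly:.0f} billion light-years")
```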

We can thus ask whether it is scientific to make statements about what lies beyond the observable horizon. The answer is yes, it is. Cosmologists build models of how the universe works, and develop and verify those models based on observations within our observable universe. Such a model will then make predictions of a “more of the same” nature about what is beyond our observable horizon, and if that model is tested and verified then statements about what lies beyond the observable horizon are just as much “scientific facts” as anything else.

The only way this would not be the case is if something additional to the model caused it not to be so. But we have no evidential basis for adding such a feature to the overall description, and it is the adding of information to models without empirical support that is unscientific. As I argued, the eternal-inflation multiverse hypothesis is not a matter of adding extra information content to a cosmological model; it is simply a matter of drawing out the implications of a testable model.

But, one might exclaim in horror at my suggestion that “statements about what lies beyond the observable horizon are just as much scientific facts as anything else”, wouldn’t it be far safer to make a distinction between the directly verified A, B and C, and the merely extrapolated D? Shouldn’t we regard only the A, B and C as fully “scientific”, and award some lesser status to the D, even if we consider it plausible?

The answer is that, if we did so, then we would be forced to reject everything as unscientific. There are no As, Bs and Cs that we can directly verify. Everything is really a D; everything is merely a deduction based on models, rather than something that can be verified independently of any model.

My first example above concerned iron and nickel in the Earth’s core, but how about a lump of solid iron in the laboratory? Surely we can directly verify that it is indeed iron? Well, no we can’t. All we can do is interpret outputs from various scientific instruments — indeed, all we really have is a stream of photons impinging on our senses as we look at plots or readouts from instruments. Concepts such as “atom”, “nucleon”, “metal” and “iron” are all models that we have constructed out of that stream of photons, and we can only verify them indirectly through that web of models.

Thus the universe beyond the observable horizon really does have the same status as the iron in Earth’s core or indeed iron in a laboratory. We are deducing their existence by the construction of models that contain those concepts, not by direct and model-free detection (this is in line with what philosophers call the Duhem–Quine thesis).

In this way science is a coherent web of ideas. It is incorrect to think of science as a process of adding to a construction, brick by verified brick. Rather, it is the web as a whole that is tested and probed by empirical evidence, and any part of the web can be replaced to produce a better match to reality.

The doctrine of falsification means that we should not add information to our web of models if adding that information has no real-world consequence and so cannot be verified or falsified. In that respect the idea of falsification is akin to that of parsimony, namely that we should pick the simplest model (the one with the least information content) that is compatible with the data.

But it is not the case that all implications of that model need to be testable, and indeed most of the time they will not be. It is the model that needs to be falsifiable, not every statement deriving from a model. Thus falsification remains important in science, but it is wrong to reject an idea such as the multiverse owing to an over-simplistic application of falsifiability.

3 thoughts on “Applying falsifiability in science”

  1. Pingback: The multiverse as a scientific concept | coelsblog

  2. Pingback: Scientific models are falsifiable | Science or not?

  3. John Crossett

    Interesting. Your discussion reminds me of the sequence from Carl Sagan’s original ‘Cosmos’ program where he applied the “Drake Equation” to assess the likelihood of intelligent extraterrestrial life.

    I was a kid at the time but I distinctly recall my father protesting that it seemed “unscientific” for Sagan to make pronouncements about intelligent extraterrestrial life without empirical evidence that such life actually existed.

