Tag Archives: scientific method

The cosmological multiverse and falsifiability in science

The cosmological “multiverse” model talks about regions far beyond the observable portion of our universe (set by the finite light-travel distance given the finite time since the Big Bang). Critics thus complain that it is “unfalsifiable”, and so not science. Indeed, philosopher Massimo Pigliucci states that instead: “… the notion of a multiverse should be classed as scientifically-informed metaphysics”.

Sean Carroll has recently posted an article defending the multiverse as scientific (arXiv paper; blog post). We’re discussing here the cosmological multiverse — the term “multiverse” is also used for concepts arising from string theory and from the many-worlds interpretation of quantum mechanics, but the arguments for and against those are rather different.

Any theory of cosmology supposes that the universe continues beyond our observable horizon (why wouldn't it? That horizon is an accident of our current location in time and space, and an “edge” would be far harder to explain). From there, one can adopt a “more of the same” model beyond the horizon, or one can accept arguments that physical conditions might be radically different in different, very-remote regions, thus producing a “multiverse”.

The job of science is to model the world around us, a reality that we learn about by empirical means. The “scientific method” consists of constructing models of reality, comparing them to empirical reality, and then updating them to work better, in an endless iteration.

One oft-made but untrue claim is that science rests on necessary metaphysical assumptions (one posited example being philosophical naturalism, an a priori rejection of the possibility of gods). One should thus be wary of philosophers declaring that some investigation violates a necessary condition of science, and so is “not science”.

It may well be that there actually is a multiverse (distant regions of space having very different physical conditions) and so science, which is our best attempt to describe the universe, should be able to suggest that possibility. It should not be declared unscientific by philosophical fiat. Our task is to adjust science to match nature, not put artificial limits on it.

And so to “falsifiability”, sometimes claimed by the “Popperazzi” as a necessary criterion for being scientific, after Karl Popper proposed it as a way of distinguishing science from what he saw as pseudo-sciences such as Freudian psychoanalysis and Marxist social criticism.

Falsification does not work if over-interpreted as a simplistic criterion. Cosmologist Peter Coles remarks: “I’ve never taken seriously any of the numerous critiques of the multiverse idea based on the Popperian criterion of falsifiability because … that falsifiability has very little to do with the way science operates”.

But interpreted properly there is a large measure of truth to it. If some claim has no relevance to anything observable then it is not “about” the empirical world, and so is not part of what science is trying to achieve. Where ideas are carefully designed to avoid any possibility of being refuted the practitioners are not pursuing science, in that they are not attempting to match reality, and instead they are attempting to bolster an ideology. A classic example is the idea that “God always answers prayers, but sometimes he says no”.

Thus theologians (to give one example of a pseudo-science) will continually adjust their theology to avoid saying anything definite and testable. If preservation of a prior commitment is given a higher priority than matching reality, then one is doing pseudoscience. In contrast, a scientist will (or at least should!) try hard to make falsifiable predictions, looking for ways to test and refute their models, since that iteration leads to their improvement. That difference in attitude is the demarcation that Popper was pointing out.

But that is not the same as requiring immediate potential for falsification as a criterion for doing science. Science is pragmatic, a matter of doing one's best. We will always be hitting practical limits on what we can do, and science is always limited and fallible. We can never have full confidence in any statement about what is beyond the observable horizon, but then we can never have full confidence in any part of science. The point is that, so long as we're doing our best, we're not being unscientific. That holds even where “our best” is pretty limited.

The important point is that it is not necessary to be able to test all aspects of a model; one need only test the model as best one can. For example, one can use a Solar System model to predict the occurrence of an eclipse next year, and then test it. But the same model could predict eclipses millions of years ago, and there would be no conceivable way of directly testing those predictions. Indeed all scientific models are like this, where we can, in practice, test some but not all predictions of a model.

I doubt if anyone using a Solar System model to generate predictions of eclipses thirty millennia ago would be accused of being “unscientific”. The eclipse-prediction model would be testable only within a narrow time-span, but having tested it within that time span we could legitimately and scientifically have confidence in its predictions over a much wider time span.
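The point can be made concrete with a toy sketch. The Saros cycle is a real periodicity of roughly 6585.32 days after which similar eclipses recur; the code below steps that cycle backwards and forwards from a reference eclipse, illustrating how a model tested within a narrow observable span can be extrapolated far beyond it. The function name and reference date are illustrative choices, not a real eclipse-prediction algorithm.

```python
# Toy illustration: extrapolating a tested periodicity beyond the span
# in which it can be directly checked. The Saros cycle (~6585.32 days)
# is a genuine eclipse periodicity; everything else here is a
# hypothetical simplification for the sake of the analogy.

from datetime import date, timedelta

SAROS_DAYS = 6585.32  # approximate length of one Saros cycle

def eclipses_in_series(reference: date, n_before: int, n_after: int):
    """Step the Saros cycle from a reference eclipse to predict
    earlier and later dates in the same eclipse series."""
    return [reference + timedelta(days=round(k * SAROS_DAYS))
            for k in range(-n_before, n_after + 1)]

# Having tested the model against eclipses we can observe, we can
# extrapolate it to dates we cannot directly check:
dates = eclipses_in_series(date(2017, 8, 21), n_before=2, n_after=2)
```

The extrapolated dates are no less “scientific” than the tested ones; they are outputs of the same validated model.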

A cosmological multiverse model is similar. It attempts to model all space, both within our observable universe and outside it, and we can test the predictions it makes about space within our observable horizon. If it does well there, we can then legitimately have some degree of confidence about its predictions for outside our observable horizon. Nothing about that is unscientific.

If the multiverse model had been proposed to apply only to regions beyond the observable horizon, with no relevance or implications for anything within it, then the accusation that it is unscientific would be fair, but that is not the case.

The concept of a multiverse is a prediction of our current best attempts to explain cosmology within our observable horizon. In order to make the Big Bang model work, and match empirical data about our early universe, one needs to invoke “inflation”, a rapid exponential expansion of the universe during its first tiny fraction of a second. The usual way this is modelled is by an “inflaton field” energy density which would, given the known physics of General Relativity, drive an exponential expansion.

The physics of the inflaton field are not well understood, and the whole inflationary model is unproven, but the point relevant to this post is that the scenario is constructed in order to model empirical data (the smoothness and uniformity of the microwave background; the fact that the universe’s density is so close to the critical density; the absence of particles such as magnetic monopoles; et cetera) and thus is eminently a scientific model. It makes predictions that are testable, such as a signature of gravitational waves in the microwave background.

If the inflationary scenario is correct, then our universe must have dropped out of the inflationary state by a quantum fluctuation. But quantum fluctuations are localised in space. Therefore, only a limited spatial region would drop out of the inflationary state owing to that fluctuation. The inflaton field would continue driving exponential expansion elsewhere. And, if the quantum fluctuation can happen here, then it can happen elsewhere; and thus almost inevitably one would have a large number of pocket “normal state” universes (one of which would contain us) strewn within a vastly vaster inflationary-state space (in much the same way that holes occur within an Emmental cheese).

The physical conditions in the inflationary-state regions would be very different from those in the numerous “normal state” pockets, and hence there would be a multiverse. The different pocket universes are then simply separated by vast distances.

It is very difficult to have an inflationary model of Big Bang cosmology without having a multiverse, and it is currently hard to explain the observed characteristics of our observable universe without invoking an inflationary cosmology. Thus, our current best scientific models tell us that we live in a multiverse. That is a scientific conclusion and we should place some credence in it, though only some credence since inflationary physics is still poorly understood.

That is no different, in principle, from having confidence in the prediction of eclipses in the distant past, even though we can’t directly observe them, where that confidence arises because they are predicted by a model that has been tested and validated when applied to eclipses that we can observe. Regardless of whether the inflationary/multiverse variant of Big Bang cosmology turns out to be true, it is certainly a model within the scope of science that has been adopted and developed for scientific reasons.

Thus I think that Peter Woit is missing the point when he says: “the problem with the multiverse is that it’s an empty idea, predicting nothing”. The cosmological multiverse is not invoked to try to generate novel predictions, it’s invoked as a necessary implication and consequence of models developed and tested to explain the parts of the universe that we can indeed see.

We could falsify the predictions of ancient eclipses by showing that the model that predicted them does not work when applied to eclipses that we can observe, and we can, in principle, falsify inflationary/multiverse predictions by showing that inflationary/multiverse models do a poor job when compared to data from within our observable horizon (a lack of gravitational-wave signatures in the microwave background might indeed do that).

A naive Popperazzi (going beyond anything Popper himself said) might insist that science must be strictly limited to direct observables and immediately falsifiable claims. They might then insist that only claims relating to space within our observable horizon are “scientific” and thus, if we extrapolate the model beyond that, then we are doing “metaphysics” or something.

That would imply that eclipse predictions relating to the time-frame of recorded history are “scientific”, but that predictions relating to a time before that would be metaphysical (though the boundary would in any case be fuzzy, since who knows what cave paintings relating to an ancient eclipse might be found?). That would strike me as a fairly pointless and merely semantical distinction, based on an overly narrow conception of science. You’d be forced to conclude that an eclipse occurring a decade ago in Antarctica and viewed by no-one was also “metaphysical”. And likewise for an eclipse, visible from New York, predicted for 2099 (or pick whatever time into the future makes it insufficiently falsifiable in your eyes). Some philosophers want to limit science narrowly to empirical data, and deny it the wider conceptualising about the data and what they imply, but science has always been just as much about concepts as about data; without concepts, accumulating data would be mere “stamp collecting”.

Overall, I can only agree with Sean Carroll:

I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.
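The Bayesian evaluation Carroll describes can be sketched in a few lines. This is a minimal illustration with made-up numbers, not a real cosmological analysis: two rival models are scored only on data from within our observable horizon, and the resulting credence in each follows from Bayes' theorem.

```python
# Minimal sketch of Bayesian model comparison, with illustrative
# numbers only. Models are compared solely on how well they predict
# data we can actually observe.

def posterior(priors, likelihoods):
    """Posterior probability of each model given the data:
    P(M|D) = P(D|M) P(M) / sum_i P(D|M_i) P(M_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical example: a model that better fits the data within our
# observable horizon gains credence over its rival, and that credence
# then extends to what the model says about regions beyond the horizon.
priors = [0.5, 0.5]          # equal prior credence in each model
likelihoods = [0.8, 0.2]     # P(observed data | model), made up
post = posterior(priors, likelihoods)   # -> [0.8, 0.2]
```

Nothing in this procedure requires that every consequence of a model be directly observable; the model as a whole earns its credence from the data it does confront.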


How not to defend humanistic reasoning

Sometimes the attitudes of philosophers towards science baffle me. A good example is the article Defending Humanistic Reasoning by Paul Giladi, Alexis Papazoglou and Giuseppina D’Oro, recently in Philosophy Now.

Why did Caesar cross the Rubicon? Because of his leg movements? Or because he wanted to assert his authority in Rome over his rivals? When we seek to interpret the actions of Caesar and Socrates, and ask what reasons they had for acting so, we do not usually want their actions to be explained as we might explain the rise of the tides or the motion of the planets; that is, as physical events dictated by natural laws. […]

The two varieties of explanation appear to compete, because both give rival explanations of the same action. But there is a way in which scientific explanations such as bodily movements and humanistic explanations such as motives and goals need not compete.

This treats “science” as though it stops where humans start. Science can deal with the world as it was before humans evolved, but at some point humans came along and — for unstated reasons — humans are outside the scope of science. This might be how some philosophers see things but the notion is totally alien to science. Humans are natural products of a natural world, and are just as much a part of what science can study as anything else.

Yes of course we want explanations of Caesar’s acts in terms of “motivations and goals” rather than physiology alone — is there even one person anywhere who would deny that? But nothing about human motivations and goals is outside the proper domain of science.

Science is a product of science!

The latest issue of Free Inquiry magazine contains several articles about philosophy and science, including an article by Susan Haack, a philosopher of science who “defends scientific inquiry from the moderate viewpoint”, rejecting cynical views that dismiss science as a mere social construction, but also rejecting “scientism”.

While Susan Haack talks quite a bit of sense about science, she promotes a view that is common among philosophers of science but which I see as fundamentally wrong. That is the idea that science and the scientific method depend on philosophical principles that cannot be justified by science, but instead need to be justified by philosophy.

Reductionism and Unity in Science

One problem encountered when physicists talk to philosophers of science is that we are, to quote George Bernard Shaw out of context, divided by a common language. A prime example concerns the word “reductionism”, which means different things to the two communities.

In the 20th Century the Logical Positivist philosophers were engaged in a highly normative program of specifying how they thought academic enquiry and science should be conducted. In 1961, Ernest Nagel published “The Structure of Science”, in which he discussed how high-level explanatory concepts (those applying to complex ensembles, and thus as used in biology or the social sciences) should be related to lower-level concepts (as used in physics). He proposed that theories at the different levels should be closely related and linked by explicit and tightly specified “bridge laws”. This idea is what philosophers call “inter-theoretic reductionism”, or just “reductionism”. It is a rather strong thesis about linkages between different levels of explanation in science.

To cut a long story short, Nagel’s conception does not work; nature is not like that. Amongst philosophers, Jerry Fodor has been influential in refuting Nagel’s reductionism as applied to many sciences. He called the sciences that cannot be Nagel-style reduced to lower-level descriptions the “special sciences”. This is a rather weird term to use since all sciences turn out to be “special sciences” (Nagel-style bridge-law reductionism does not always work even within fundamental particle physics, for which see below), but the term is a relic of the original presumption that a failure of Nagel-style reductionism would be the exception rather than the rule.

For the above reasons, philosophers of science generally maintain that “reductionism” (by which they mean Nagel's strong thesis) does not work, and on that they are right. They thus hold that physicists (who generally do espouse and defend a doctrine of reductionism) are naive in not realising that.

“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.”     — Paul Dirac, 1929 [1]

The problem is, the physicists’ conception of reductionism is very different. Physicists are, for the most part, blithely unaware of the above debate within philosophy, since the ethos of Nagel-style reductionism did not come from physics and was never a live issue within physics. Physicists have always been pragmatic and have adopted whatever works, whatever nature leads them to. Thus, where nature leads them to Nagel-style bridge laws physicists will readily adopt them, but on the whole nature is not like that.

The physicists’ conception of “reductionism” is instead what philosophers would call “supervenience physicalism”. This is a vastly weaker thesis than Nagel-style inter-theoretic reduction. The physicists’ thesis is ontological (about how the world is) in contrast to Nagel’s thesis which is epistemological (about how our ideas about the world should be).

Contra theologian Roger Trigg on the nature of science

Roger Trigg is a senior theologian and philosopher. His new book, “Beyond Matter”, is soon to be published by the Templeton Press, part of the wealthy Templeton Foundation whose aim is to produce a religion-friendly version of science.

An excerpt from the book promotes a view of science that is common among philosophers. Those of us with a scientistic perspective see it as erroneous, and yet, since Trigg’s account of science is widely accepted, it is instructive to rebut it.

Trigg argues that science rests on metaphysical assumptions:

What then has to be the case for genuine science as such to be possible? This is a question from outside science and is, by definition, a philosophical — even a metaphysical — question. Those who say that science can answer all questions are themselves standing outside science to make that claim. That is why naturalism — the modern version of materialism, seeing reality as defined by what is within reach of the sciences — becomes a metaphysical theory when it strays beyond methodology to talk of what can exist. Denying metaphysics and upholding materialism must itself be a move within metaphysics. It involves standing outside the practice of science and talking of its scope. The assertion that science can explain everything can never come from within science. It is always a statement about science.

This view can be summarised by the “linear” schematic:

metaphysical assumptions → scientific method → scientific knowledge
One can see why theologians like this account of science. If it were really true that science rested on metaphysical assumptions then science would be in big trouble, since no-one has ever proposed a good way of validating metaphysical assumptions.

A scientific response to the Brain in a Vat

Scientia Salon is an enjoyable webzine discussing philosophical matters, which recently addressed an old conundrum: how do we know we are not a brain in a vat? As I see it, this question is straightforwardly answered by the usual scientific method, so here I’ll summarise the argument that I advanced in the Scientia Salon discussion.

The Matrix-style scenario, which dates back to the skepticism of Descartes, supposes that we are a brain kept alive in a vat, being fed with a stream of inputs generated by an Evil Genius. Everything that we experience as sense data is not real, but is artificially simulated and fed to us. Since, ex hypothesi, our stream of experiences is identical to that in the “real world” explanation, we cannot know for sure whether or not we are such a brain in a vat.

How to respond? First, the whole point of science is to make sense of our “stream of experiences”. We do that by looking for regularities and patterns in those experiences, and we develop those into descriptions and explanations of the world (I’ll use the term “world” here for the sum of those experiences, regardless of whether they derive from our contact with a real world, or from a simulated world being fed to us).

Applying falsifiability in science

Falsifiability, as famously espoused by Karl Popper, is accepted as a key aspect of science. When a theory is being developed, however, it can be unclear how the theory might be tested, and theoretical science must be given licence to pursue ideas that cannot be tested within our current technological capabilities. String theory is an example of this, though ultimately it cannot be accepted as a physical explanation without experimental support.

Further, experimental science is fallible, and thus we do not immediately reject a theory when it is contradicted by one experimental result; rather, the process involves the interplay between experiment and theory. As Arthur Eddington quipped: “No experiment should be believed until it has been confirmed by theory”.

Sean Carroll recently called for the concept of falsifiability to be “retired”, saying that:

The falsifiability criterion gestures toward something true and important about science, but it is a blunt instrument in a situation that calls for subtlety and precision.

Meanwhile, Leonard Susskind has remarked that:

Throughout my long experience as a scientist I have heard un-falsifiability hurled at so many important ideas that I am inclined to think that no idea can have great merit unless it has drawn this criticism.
