Tag Archives: philosophy of science

Confusion over causation, both top-down and bottom-up

I’m becoming convinced that many disputes in the philosophy of science are merely manufactured, arising from people interpreting words to mean different things. A good example is the concept of “reductionism”, where the meaning intended by those defending the concept usually differs markedly from that critiqued by those who oppose it.

A similar situation arises with the terms “top down” versus “bottom up” causation, where neither concept is well defined and thus, I will argue, both terms are unhelpful. (For examples of papers using these terms, see the 2012 article “Top-down causation and emergence: some comments on mechanisms”, by George Ellis, and the 2021 article “Making sense of top-down causation: Universality and functional equivalence in physics and biology”, by Sara Green and Robert Batterman.)

The term “bottom-up” causation tends to be used when the low-level properties of particles are salient in explaining why something occurred, while the term “top-down” causation is used when the more-salient aspect of a system is the complex, large-scale pattern. But there is no clear distinction between the two, and attempts to propose one usually produce straw-man accounts that no-one actually holds. Continue reading

Science does not rest on metaphysical assumptions

It’s a commonly made claim: science depends on making metaphysical assumptions. Here the claim is being made by Kevin Mitchell, a neuroscientist and author of the book Innate: How the Wiring of Our Brains Shapes Who We Are (which I recommend).

His Twitter thread was in response to an article by Richard Dawkins in The Spectator.

Dawkins’s writing style does seem to divide opinion, though personally I liked the piece and consider Dawkins to be more astute on the nature of science than he is given credit for. Mitchell’s central criticism is that Dawkins fails to recognise that science must rest on metaphysics: Continue reading

Eddington did indeed validate Einstein at the 1919 eclipse

You’re likely aware of the story. Having developed General Relativity, a theory of gravity that improved on Newton’s account, Einstein concluded that the fabric of space is warped by the presence of mass and thus that light rays will travel on distorted paths, following the warped space. He then predicted that this could be observed during a solar eclipse, when the apparent positions of stars near the sun would be shifted by the mass of the sun. Britain’s leading astronomer, Arthur Eddington, set out to observe the 1919 solar eclipse, and triumphantly confirmed Einstein’s prediction. The story then made the front pages of the newspapers, and Einstein became a household name.

You’re also likely aware of the revisionist account: that the observations acquired by Eddington were ambiguous and inconclusive, and that he picked out the subset of measurements that agreed with Einstein’s prediction. Thus Eddington’s vindication of Einstein was not warranted by the data, but was more a “social construction”, arrived at because Eddington wanted Einstein’s theory to be true. Einstein’s fame, on this account, resulted not from having developed a superior theory, but from the approval of the high-status Eddington.

The story is often quoted in support of the thesis that science — far from giving an objective model of reality — is just another form of socially-constructed knowledge, with little claim to be superior to other “ways of knowing”. Even those who may grant that science can attain some degree of objectivity can point to such accounts and conclude that the acceptance of scientific ideas is far more about the social status of their advocates than is commonly acknowledged.

Albert Einstein and Arthur Eddington

A new paper by Gerry Gilmore and Gudrun Tausch-Pebody reports a re-analysis of the data and a re-evaluation of the whole story. Their conclusion, in short, is that Eddington’s analysis was defensible and correct. Where he placed more credence in some observations than in others, he was right to do so, and the measurements really did favour Einstein’s value for the deflection of the stars’ positions.

Thus the later revisionist account by philosophers John Earman and Clark Glymour, taken up in accounts of science such as The Golem by Harry Collins and Trevor Pinch, is unfair to Eddington.

Images of the 1919 solar eclipse. Faint stars are marked.

Gilmore and Tausch-Pebody say in their article:

Earman and Glymour conclude: “Now the eclipse expeditions confirmed the theory only if part of the observations were thrown out and the discrepancies in the remainder ignored; Dyson and Eddington, who presented the results to the scientific world, threw out a good part of the data and ignored the discrepancies. This curious sequence of reasons might be cause enough for despair on the part of those who see in science a model of objectivity and rationality.”

Our re-analysis shows that these strong claims are based entirely on methodological error. Earman and Glymour failed to understand the difference between the dispersion of a set of measurements and an uncertainty, random plus systematic, on the value of the parameter being measured. They speculated but did not calculate, and their conclusions are not supported by evidence.

Their error was left unchallenged and the strong conclusions and accusations they derived from it were used not only to question the scientific method then applied, but also to undermine the scientific integrity and reputation of an eminent scientist.

The crucial observations came from two different telescopes, a 4-inch telescope at Sobral, in Brazil, and an astrograph sent to Principe Island, off West Africa. Einstein’s theory of gravity predicted a deflection (for a star at the sun’s limb) of 1.75 arcsecs, while a calculation based on Newtonian gravity predicted half that value, 0.87 arcsecs.
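As a quick check on those two numbers: for a ray grazing the solar limb, general relativity predicts twice the deflection obtained by treating light as Newtonian corpuscles falling in the sun’s gravity:

```latex
\alpha_{\mathrm{GR}} = \frac{4GM_\odot}{c^2 R_\odot} \approx 1.75'' ,
\qquad
\alpha_{\mathrm{Newt}} = \frac{2GM_\odot}{c^2 R_\odot} \approx 0.87'' .
```

With GM⊙ ≈ 1.33 × 10²⁰ m³ s⁻² and R⊙ ≈ 6.96 × 10⁸ m, the relativistic formula gives 8.5 × 10⁻⁶ radians, which is indeed 1.75 arcsec.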

Gilmore and Tausch-Pebody present a table listing each measured deflection and how much it differs from the Einsteinian, Newtonian and zero-deflection models. The z value is the size of that difference in units of the measurement’s error bar, and P(z) is the probability of a deviation at least that large arising by chance, were the model correct. The data clearly prefer Einstein’s value for the deflection.
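As a rough illustration of that statistic (a sketch of mine, not the paper’s own code or table), the same calculation can be run on the deflection values as published in 1919: Sobral 4-inch, 1.98 ± 0.12 arcsec; Principe, 1.61 ± 0.30 arcsec. Those quoted errors were probable errors rather than the standard errors that Gilmore and Tausch-Pebody work with, so treat the output as illustrative only.

```python
# Illustrative z-scores and tail probabilities for the 1919 eclipse results.
# Measurement values are the published 1919 numbers (with probable errors);
# they stand in for the paper's table and are illustrative only.
from math import erf, sqrt

measurements = {              # deflection at the solar limb, arcsec
    "Sobral 4-inch": (1.98, 0.12),
    "Principe":      (1.61, 0.30),
}
models = {"Einstein": 1.75, "Newton": 0.87, "No deflection": 0.0}

def tail_prob(z):
    """Two-sided probability of a deviation of at least z sigma, assuming Gaussian errors."""
    return 1.0 - erf(z / sqrt(2.0))

for name, (value, sigma) in measurements.items():
    for model, prediction in models.items():
        z = abs(value - prediction) / sigma
        print(f"{name} vs {model}: z = {z:.1f}, P(z) = {tail_prob(z):.2g}")
```

On these numbers the Sobral measurement sits within about two sigma of Einstein’s prediction while rejecting the Newtonian value at around nine sigma, in line with the preference described above.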

Observations were also made with a third instrument, an astrograph taken to Sobral. However, the resulting images were “diffused and apparently out of focus”, resulting in a systematic error that was large and unquantifiable. Crucially, being unable to evaluate the systematic distortion, the observers could not arrive at a proper uncertainty estimate for these data points, without which they could not be combined with the measurements from the other two telescopes.
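The underlying point is standard statistics. Independent measurements are combined by weighting each one by the inverse of its variance, so a data point needs a trustworthy error bar before it can be given any weight at all:

```latex
\bar{x} = \frac{\sum_i x_i / \sigma_i^2}{\sum_i 1/\sigma_i^2} ,
\qquad
\sigma_{\bar{x}}^2 = \left( \sum_i \frac{1}{\sigma_i^2} \right)^{-1} .
```

A measurement with a very large or unquantifiable σ gets a weight near zero, which is why, as the authors note below, including or excluding the third data set makes essentially no difference to the combined result.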

Gilmore and Tausch-Pebody conclude:

The original 1919 analysis is statistically robust, with conclusions validly derived, supporting Einstein’s prediction. The rejected third data set is indeed of such low weight that its suppression or inclusion has no effect on the final result for the light deflection, though the very large and poorly quantified systematic errors justify its rejection.

Scientists, being human, are of course fallible and prone to bias. To a large extent they are aware of that, which is why techniques such as double-blinded controlled trials are routinely adopted. And in some areas, such as the replication crisis in psychology, scientists have certainly not been careful enough. But, overall, it does seem that science succeeds in overcoming human fallibility, and that the consensus findings arrived at are more robust than critics sometimes allow.

Are predictions an essential part of science?

Theoretical physicist Sabine Hossenfelder recently wrote that “predictions are over-rated” and that one should instead judge the merits of scientific models “by how much data they have been able to describe well, and how many assumptions were needed for this”, finishing with the suggestion that “the world would be a better place if scientists talked less about predictions and more about explanatory power”.

Others disagreed, including philosopher of science Massimo Pigliucci, who insists that “it’s the combination of explanatory power and the power of making novel, ideally unexpected, and empirically verifiable predictions” that decides whether a scientific theory is a good one. Neither predictions nor explanatory power, he adds, is sufficient alone, and “both are necessary” for a good scientific theory. Continue reading

Science Unlimited, Part One: Pseudoscience

Philosophers Maarten Boudry and Massimo Pigliucci have recently edited a volume of essays on the theme of scientism. The contributions to Science Unlimited? The Challenges of Scientism range from sympathetic to scientism to highly critical of it.

I’m aiming to write a series of blog posts reviewing the book, organised by major themes, though knowing me the “reviewing” task is likely to play second fiddle to arguing in favour of scientism.

Of course the term “scientism” was invented as a pejorative and so has been used with a range of meanings, many of them strawmen, but from the chapters of the book emerges a fairly coherent account of a “scientism” that many would adopt and defend.

This brand of scientism is a thesis about epistemology. It asserts that the ways by which we find things out form a coherent and unified whole, and it rejects the idea that knowledge is divided into distinct domains, each with a different “way of knowing”. On this view, the best knowledge and understanding is produced by combining and synthesizing different approaches and disciplines, which must mesh seamlessly. Continue reading

The cosmological multiverse and falsifiability in science

The cosmological “multiverse” model talks about regions far beyond the observable portion of our universe (set by the finite light-travel distance given the finite time since the Big Bang). Critics thus complain that it is “unfalsifiable”, and so not science. Indeed, philosopher Massimo Pigliucci states that instead: “… the notion of a multiverse should be classed as scientifically-informed metaphysics”.

Sean Carroll has recently posted an article defending the multiverse as scientific (arXiv paper; blog post). We’re discussing here the cosmological multiverse — the term “multiverse” is also used for concepts arising from string theory and from the many-worlds interpretation of quantum mechanics, but the arguments for and against those are rather different. Continue reading

How not to defend humanistic reasoning

Sometimes the attitudes of philosophers towards science baffle me. A good example is the article Defending Humanistic Reasoning by Paul Giladi, Alexis Papazoglou and Giuseppina D’Oro, recently in Philosophy Now.

Why did Caesar cross the Rubicon? Because of his leg movements? Or because he wanted to assert his authority in Rome over his rivals? When we seek to interpret the actions of Caesar and Socrates, and ask what reasons they had for acting so, we do not usually want their actions to be explained as we might explain the rise of the tides or the motion of the planets; that is, as physical events dictated by natural laws. […]

The two varieties of explanation appear to compete, because both give rival explanations of the same action. But there is a way in which scientific explanations such as bodily movements and humanistic explanations such as motives and goals need not compete.

This treats “science” as though it stops where humans start. Science can deal with the world as it was before humans evolved, but at some point humans came along and — for unstated reasons — humans are outside the scope of science. This might be how some philosophers see things, but the notion is totally alien to science. Humans are natural products of a natural world, and are just as much a part of what science can study as anything else.

Yes of course we want explanations of Caesar’s acts in terms of “motivations and goals” rather than physiology alone — is there even one person anywhere who would deny that? But nothing about human motivations and goals is outside the proper domain of science. Continue reading