Category Archives: Science

Fundamental ontology: what is the universe actually made of?

In his classic “The Feynman Lectures on Physics”, Richard Feynman starts by saying:

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another.

Of course atoms are not the basic unit: they are composed of nuclei surrounded by electrons. The nuclei are in turn composed of protons and neutrons (along with short-lived virtual particles such as pions), and the protons and neutrons are themselves composed of quarks and gluons.

But what is the ultimate level? What, when one goes down to the most fundamental level, are things made of? While there are lots of opinions, there is no accepted answer, and in mulling it over for myself I realised that none of the options are attractive in the sense of aligning with our intuition about what “physical stuff” should be made of. Here are some of the possibilities:

Particles: The concept of a particle is very useful (the protons, neutrons and electrons above are all examples of particles). Composite particles have a definite size, but the most fundamental particles, such as the electron, are conceptualised as being point-like (having no spatial extent or internal structure) despite also having properties such as “spin”. That description of something as point-like seems pretty “mathematical”, rather than a description of a physical “object”.

Wave packets: Of course particles don’t behave as point-like entities, in that their influence has spatial extent. Electrons, for example, cannot get too close to each other, and the effective “size” of such a particle is captured by its de Broglie wavelength. So perhaps we should think of the most basic “things” as being “wave packets”: spatially localised waves that behave, move around and evolve according to the equations of quantum mechanics. But surely the wave packet is a mathematical description of the physical thing, rather than being the physical thing itself?
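
To put a rough number on that “size” (a back-of-envelope illustration using standard textbook values, not figures from the original post): an electron in a hydrogen atom moves at roughly 2 × 10⁶ m/s, giving a de Broglie wavelength of

\[
\lambda \;=\; \frac{h}{p} \;=\; \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{(9.11\times10^{-31}\ \mathrm{kg})\times(2\times10^{6}\ \mathrm{m/s})} \;\approx\; 3.6\times10^{-10}\ \mathrm{m},
\]

which is comparable to the size of the atom itself; the electron’s wave-like extent, not any point-like “body”, is what sets atomic sizes.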

Strings: One problem with point-like particles is that putting lengths of zero into the maths makes quantum mechanics stop working. For such reasons theorists have speculated that the basic “things” are not points but one-dimensional strings. This also has the advantage that strings naturally support waves, just as a guitar string does, in accordance with the wave-like maths of quantum mechanics. So far, though, string theory remains speculative, since empirical confirmation is beyond current technological capability.

Wave functions: Another option is to declare the quantum wave function itself to be what actually “exists”. But it is made up of complex numbers, not “real” ones, and can an “imaginary” number be physically extant? Also, formally the wave function extends to infinity, and there is only one wave function for the whole universe. So is there only one physical thing, one infinitely large object, in a complete reversal of Feynman’s account? Surely the wave function is a calculation tool, something that describes behaviour very well, but is not itself physically real? After all, isn’t the wave function “merely” maths?

Quantum fields: In quantum field theory particles are not seen as fundamental. Rather they are excitations of fields, being ripples and disturbances of a “field”, analogous to the waves caused by throwing a stone into a pond. But what is a “field”? It, again, is a mathematical construct. A classical field is a set of numbers, one number for each location in space. A quantum field is a mathematical operator at each location in space. That makes it a hugely abstract and mathematical concept, rather than something one would intuitively regard as physically real.
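
To see just how abstract this is, here is the standard textbook expression for a free scalar quantum field (the simplest illustrative case; the fields of the Standard Model are more elaborate):

\[
\hat{\phi}(\mathbf{x}) \;=\; \int \frac{d^{3}k}{(2\pi)^{3}}\,\frac{1}{\sqrt{2\omega_{\mathbf{k}}}} \left( \hat{a}_{\mathbf{k}}\, e^{\,i\mathbf{k}\cdot\mathbf{x}} \;+\; \hat{a}^{\dagger}_{\mathbf{k}}\, e^{-i\mathbf{k}\cdot\mathbf{x}} \right),
\]

where the operators â and â† remove or add one quantum (one “particle”) of momentum k. The “field” at a point is not a quantity but an instruction for changing particle numbers, which is hard to regard as intuitive physical stuff.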

Space: So maybe we shouldn’t think of particles as being what ultimately exists, maybe we should think of space as the ultimate “thing”, with particles being ripples in space (the opposite of seeing space as an inert backdrop containing physically existing particles). But then there are proposals that “space” is itself a construct produced by the quantum entanglement of particles. So thinking of particles and space as distinct might be a mistake.

Which of the above should we go for? We already know, given quantum mechanics, that human intuition is a poor guide to reality at the micro level, so choosing based on accordance with our intuition would be unreliable.

The best option is perhaps to regard all of the above as instrumental: models that work well, allowing us to do calculations and make predictions, but that do not tell us how reality actually is. Perhaps such instrumental models are the best we can do?

One reason for doubting that any of the above is the final answer is that they are bound up with the nature of space itself, and we do not have a quantum theory of space (since we don’t yet have a theory merging quantum mechanics with gravitation). Thus we have a “known unknown” telling us that our current models are only instrumental approximations.

It does seem that the further we delve down into the most fundamental physics the more our descriptions seem mathematical rather than being about physical objects. Max Tegmark has taken this to the extreme, asserting that everything is ultimately made of mathematics. Our intuitive concept of what “physical stuff” is like may be appropriate only at the scale of ourselves and our own senses, a vast number of orders of magnitude larger than the scale of the tiniest things (strings are hypothesized as being 10³⁵ times smaller than a human). So we should surely expect any ultimate ontology to be pretty counter-intuitive. Indeed one could suggest that the fact that the character of the description changes from “physical” to “mathematical” is a sign that we’re approaching the underlying reality of what physical stuff is. Or we may just be approaching the limits of what humans can conceive.
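
The arithmetic behind that 10³⁵ figure, assuming strings are of the order of the Planck length:

\[
\frac{\ell_{\mathrm{human}}}{\ell_{\mathrm{Planck}}} \;\approx\; \frac{1.7\ \mathrm{m}}{1.6\times10^{-35}\ \mathrm{m}} \;\approx\; 10^{35}.
\]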

The Second Law of Thermodynamics made easy

The Second Law of Thermodynamics is one of the few scientific laws to have attained a status in wider culture, even featuring in rock tracks by Muse. Famously, C.P. Snow cited an understanding of the 2nd Law as something that every educated person should have.

The 2nd Law is often stated in technical language that makes its meaning hard to understand, but the basic principles are actually readily grasped. I was recently challenged to explain the 2nd Law at the level of a bright 13-year-old, and so here is my attempt.
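
For reference, the standard compact statement (the textbook form, rather than the promised 13-year-old version): the entropy S of an isolated system never decreases, with Boltzmann’s entropy counting the number W of microscopic arrangements compatible with the macroscopic state:

\[
\Delta S \;\ge\; 0, \qquad S \;=\; k_{\mathrm{B}} \ln W .
\]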

On Stephen Law on Scientism

It’s good to see philosophers taking scientism seriously, and not just using the term as a bogey word. Massimo Pigliucci and Maarten Boudry are editing a forthcoming volume on scientism (Total Science, University of Chicago Press) and some of the essays are appearing on the internet.

I’ll discuss here the draft chapter by Stephen Law (Heythrop College, University of London), who writes on the proper scope of science.

Telling science from pseudoscience and the demarcation problem

Philosophers of science have long puzzled over what they call “the” demarcation problem: how to distinguish science from pseudoscience. In the early 20th Century the Logical Positivists proposed the verification principle, that a statement was meaningful and scientific only if it could be empirically verified. Karl Popper then proposed a similar idea, that a scientific idea is one that can be falsified.

There is a lot of truth in both proposals, but neither can be interpreted too narrowly. The problem is that no statement can be verified or falsified in isolation. Science constructs whole webs of ideas, and it is the whole construct that is then compared to empirical data, to be adjusted and improved as necessary. Further, a statement such as Newton’s law of gravity can never be verified in the general sense; all we can say is that it worked well enough — as part of the wider web of ideas — in the particular instance we tested. Nor is it straightforward to falsify such a law. If our overall model is inconsistent with an observation then we could indeed alter one of the laws; but we might also overcome the inconsistency by altering some other part of the overall model; or we might doubt the reliability of the observations.

Lamenting the reburial of ancient bones

In 2015 ISIS captured the ancient city of Palmyra and proceeded to destroy ancient ruins that they regarded as pagan or polytheistic. The World Heritage Site monuments were typically 2000 years old. Did ISIS have a right to destroy them? Most of us would say no, and would lament the loss of a heritage that cannot be replaced.

In saying that we are being culture-ist. That is, we are placing the values of our culture above those of ISIS, who, after all, would regard their acts as virtuous and as mandated by the highest authority, namely their religion. I readily plead guilty to being unapologetically culturist.

This comparison might be considered inappropriate, but in Nature this week I read about a 12,600-year-old skeleton, the “Anzick Child”, that had been passed to Native American groups for reburial. The article lists 12 other skeletons, all older than 8000 years, that have either been reburied or might be. Reburial here effectively means their permanent loss, since they would decay relatively quickly under normal burial conditions.

As a scientist I am saddened by the loss of irreplaceable material that could tell us much about the history of humans. I would regard such remains as part of the common heritage of us all, and I am unhappy about one group destroying them, in the same way that I am unhappy about a group taking it upon itself to destroy Palmyra or the Bamiyan Buddha statues. This is obviously very culturist of me, but then I’ve already pleaded guilty.

The evolutionary argument against moral realism

Having abandoned Divine Command Theory around the age of 12, when I realised that I was an atheist, I then read John Stuart Mill at the impressionable age of 14 and instantly became a utilitarian. I remained so well into adulthood; it seemed obvious that morality was a matter of objective wrong and right, and that utilitarianism — the greatest good of the greatest number — was the way to determine such facts.

Of course I also became aware of the unresolved problems with utilitarianism: there is no way to assess what is “good” except by subjective judgement, and there is no way to aggregate over sentient creatures (should a mouse count as much as a human?) except, again, by subjective judgement. Both of those rather clash with the desired objectivity of the scheme.

Periodically I would try to fix these flaws, but never succeeded. Such mulling led me to the realisation that I didn’t actually know what moral language meant. “It is morally right that you do X” can be re-phrased as “you ought to do X”, but what do those mean? I realised that I didn’t know, and that I had been proceeding all this time on the basis that what they meant was intuitively obvious and so didn’t need analysis.

But that’s not good enough if we’re trying to solve meta-ethics and understand the very foundations of morality. And so I eventually arrived at the realisation that the only sensible meaning that can be attached to the moral claim “you ought to do X” is that at least one human, likely including the speaker, will dislike it if you do not do X. Similarly, “It is morally right that you do X” becomes a declaration that the speaker will approve of you doing X and disapprove of you not doing X.

Reductionism and Unity in Science

One problem encountered when physicists talk to philosophers of science is that we are, to quote George Bernard Shaw out of context, divided by a common language. A prime example concerns the word “reductionism”, which means different things to the two communities.

In the 20th Century the Logical Positivist philosophers were engaged in a highly normative program of specifying how they thought academic enquiry and science should be conducted. In 1961, Ernest Nagel published “The Structure of Science”, in which he discussed how high-level explanatory concepts (those applying to complex ensembles, and thus as used in biology or the social sciences) should be related to lower-level concepts (as used in physics). He proposed that theories at the different levels should be closely related and linked by explicit and tightly specified “bridge laws”. This idea is what philosophers call “inter-theoretic reductionism”, or just “reductionism”. It is a rather strong thesis about linkages between different levels of explanation in science.
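
Nagel’s own flagship example was the reduction of thermodynamics to statistical mechanics, where the bridge law identifies the temperature of an ideal monatomic gas with the mean kinetic energy of its molecules:

\[
\tfrac{3}{2}\, k_{\mathrm{B}} T \;=\; \left\langle \tfrac{1}{2} m v^{2} \right\rangle .
\]

Few inter-level links in science turn out to be this clean, which is the trouble with his proposal.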

To cut a long story short, Nagel’s conception does not work; nature is not like that. Amongst philosophers, Jerry Fodor has been influential in refuting Nagel’s reductionism as applied to many sciences. He called the sciences that cannot be Nagel-style reduced to lower-level descriptions the “special sciences”. This is a rather weird term to use since all sciences turn out to be “special sciences” (Nagel-style bridge-law reductionism does not always work even within fundamental particle physics, for which see below), but the term is a relic of the original presumption that a failure of Nagel-style reductionism would be the exception rather than the rule.

For the above reasons, philosophers of science generally maintain that “reductionism” (by which they mean Nagel’s strong thesis) does not work, and on that they are right. They thus hold that physicists (who generally do espouse and defend a doctrine of reductionism) are naive in not realising that.

“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.”     — Paul Dirac, 1929 [1]

The problem is that the physicists’ conception of reductionism is very different. Physicists are, for the most part, blithely unaware of the above debate within philosophy, since the ethos of Nagel-style reductionism did not come from physics and was never a live issue within physics. Physicists have always been pragmatic and have adopted whatever works, whatever nature leads them to. Thus, where nature leads them to Nagel-style bridge laws, physicists will readily adopt them; but on the whole nature is not like that.

The physicists’ conception of “reductionism” is instead what philosophers would call “supervenience physicalism”. This is a vastly weaker thesis than Nagel-style inter-theoretic reduction. The physicists’ thesis is ontological (about how the world is), in contrast to Nagel’s thesis, which is epistemological (about how our ideas about the world should be).
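
A compact way to state supervenience (a standard formulation, not the post’s own wording): two systems identical in every physical respect are identical in every respect. In symbols, for any complete physical description P and any higher-level description M (chemical, biological, mental), and any two systems x and y:

\[
P(x) = P(y) \;\Longrightarrow\; M(x) = M(y).
\]

Nothing here requires that M be derivable from P in practice, which is why this thesis is so much weaker than Nagel’s bridge-law reduction.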