Category Archives: Science

Human brains have to be deterministic (though indeterminism would not give us free will anyhow)

Are human brains deterministic? That is, are the decisions that our brain makes the product of the prior state of the system (where that includes the brain itself and the sensory input into the brain), or does quantum indeterminacy lead to some level of uncaused randomness in our behaviour? Prompted by being told that the deterministic view is clearly wrong, I’ll argue here that our brains must be largely deterministic.

First, I’ll presume that quantum mechanics is indeed indeterministic (thus ignoring hidden-variable and Everettian versions). But the fact that the underlying physics is indeterministic does not mean that devices built out of quantum-mechanical stuff must also be indeterministic. One can obtain a deterministic device simply by averaging over a sufficient number of low-level particle events. Indeed, that’s exactly what we do when we design computer chips. We build them to be deterministic because we want them to do what we program them to do. In principle, quantum fluctuations in a computer chip could affect its output, but in practice a minimum of ~50 electrons is involved in each chip-junction “event”. That is enough to average over the probabilistic behaviour, making the likelihood of a quantum fluctuation changing the output too small to matter, and so the chip is effectively deterministic. Again, we build them like that because we want to control their behaviour. The same holds for all human-built technology.
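To make the averaging concrete, here is a minimal toy sketch (my own illustration, with made-up parameters, not anything from actual chip design): if the signal separating “off” from “on” grows with the number N of low-level events, while the summed quantum-level fluctuation grows only as the square root of N, then the chance of a fluctuation flipping the output falls off very rapidly with N.

```python
# Toy model (illustrative only): the margin between "off" and "on", measured in
# standard deviations of the summed noise, grows like sqrt(N), so the probability
# of the noise flipping the thresholded output is a rapidly shrinking Gaussian tail.
import math

def flip_probability(n_electrons, margin_per_electron=0.5):
    margin_in_sigmas = margin_per_electron * math.sqrt(n_electrons)
    return 0.5 * math.erfc(margin_in_sigmas / math.sqrt(2))  # one-sided Gaussian tail

for n in (1, 10, 50, 200):
    print(n, flip_probability(n))
# By ~50 events the flip probability is already down to ~2e-4 (for these toy numbers),
# and it keeps falling roughly exponentially: effectively deterministic behaviour.
```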

There may be some instances where genuine non-deterministic randomness might be useful. An example is running a Monte-Carlo simulation (a technique, widely used in science and engineering, of computing a simulation that allows for all possibilities). But, even here, in practice, one usually uses deterministic pseudo-random-number generators, simply because our computers — despite being built on quantum mechanics — don’t actually do genuine randomness.
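As a concrete illustration of that point, here is a minimal Monte-Carlo sketch (my own example, estimating π): the pseudo-random numbers look random, but the whole computation is deterministic, so running it again with the same seed reproduces the result exactly.

```python
import random

def estimate_pi(n_samples, seed=42):
    rng = random.Random(seed)        # deterministic pseudo-random-number generator
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:     # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))          # the same value on every run, for the same seed
```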

Our brains are also built to do a job. They are the product of a genetic recipe, a recipe that is the product of evolution. In evolutionary terms the job of a brain is to make real-time decisions based on that genetic recipe and on the local circumstances, as informed through the senses. And brains are hugely expensive in evolutionary terms, consuming 20 percent of the body’s energy and (for example) forcing big compromises in female anatomy and making childbirth dangerous.

It follows that brains could not have evolved unless they were strongly selected for (they cannot just be byproduct “spandrels”), which means they must serve the interests of the genes that specify the recipe, and that means that brain behaviour (the choices brains make) must be strongly influenced by the genes. And, since those choices can be made decades after the childhood brain develops out of the genetic recipe, it follows that there must be a deterministic chain of causation that holds over generational timescales.

To head off misunderstandings, the above is not saying that behaviour is entirely specified by genes. (Whenever anyone argues that genetics is a major factor in anything, the claim is often attacked as though it were the claim that genetics is the only factor; the reality is that everything is always a product of both genes and environment.) Obviously, the brain’s job is to make decisions reflecting the local circumstances, but how to react to particular circumstances must be strongly influenced by genes, otherwise the brain could not have evolved. Nor is this saying that the environment and stochastic factors have no effect on how the genetic recipe plays out as the brain develops; of course they do. And nor is this saying that the brain’s neural network is static and unchanging. Of course it isn’t (memories, for example, are changes in the network). But this argument does show that there must be a deterministic chain between genes and brain behaviour that holds over multi-decade timescales. It must be the case that, in general, “that behavioural decision could have been different if, three decades ago, that gene had been different”. That includes behavioural decisions that are regarded as “free will” decisions — which presents no problem if one adopts a compatibilist interpretation of free will.

The above argument doesn’t fully rule out a minor role for genuine randomness based on quantum indeterminacy. I would guess that, were a quantum dice-throwing module within the brain of evolutionary benefit to an animal, such a module could have evolved. But it’s hard to see why it would be evolutionarily beneficial. Just as we make technology to do what we want it to, genes will make brains that behave in the ways they program for. That will hold especially for the large component of brain function that is simply about regulating body functions, not producing “behavioural choices”, and for primitive brains producing fairly simple behaviour, such as in worms. This means that the neural-network junctions (synapses) will have evolved to be deterministic. This is achieved by a neural signal, an “action potential”, being an on–off event (so that it is insensitive to small changes), with a sufficient number of ions needed to trigger one (such that quantum indeterminacy is averaged over). This is pretty much the same way that humans make computer chips to be deterministic. Since larger and more complex brains work in a similar way, just with vastly more neurons and neural connections, it follows that they also will be deterministic.
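A toy illustration of that averaging (my own sketch, not a biophysical model): make the output an all-or-none threshold over many probabilistic ion-level events, and the outcome is effectively fixed by the inputs even though each individual event is random.

```python
import random

def fires(n_ions, p_open=0.6, threshold_fraction=0.5, seed=None):
    rng = random.Random(seed)
    open_count = sum(rng.random() < p_open for _ in range(n_ions))
    return open_count >= threshold_fraction * n_ions   # all-or-none outcome

# With many ions the on-off outcome is effectively fixed by the inputs (p_open versus
# the threshold), even though every ion-level event is probabilistic.
print(sum(fires(10_000, seed=s) for s in range(100)))   # 100 trials, 100 identical outcomes
```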

Another point: our brains are simply too big and too complex to be about producing quantum “dice throwing” decisions. A device producing an indeterminate output would have to be small (again, averaging over anything more than a couple of dozen ions gives you a deterministic output). Yet our brains have 100 billion neurons made of 10²⁶ molecules. What is all that for, and how did it evolve, if our decisions are effectively a dice throw? The only answer is that it evolved to process the information from our senses, and make decisions based on that, and making decisions based on input information is (by definition) a deterministic process.

Lastly, twin studies tell us that behavioural traits are highly heritable, with typical heritabilities of 50%. Again, this requires a causal chain between gene expression at the stage of brain development and behavioural choices made decades later. (There’s also a large role for the environment, of course, but being affected by environment is just as much a deterministic process as being affected by genes.)
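For readers unfamiliar with where such numbers come from, the classic back-of-envelope estimator is Falconer’s formula, which compares identical and fraternal twins; the correlations below are illustrative values I’ve chosen, not figures from any particular study.

```python
def falconer_heritability(r_mz, r_dz):
    # Falconer's formula: h^2 is roughly twice the excess similarity of identical
    # twins (who share ~100% of their genes) over fraternal twins (who share ~50%).
    return 2.0 * (r_mz - r_dz)

# Illustrative correlations only: 0.50 for identical twins, 0.25 for fraternal twins
print(falconer_heritability(0.50, 0.25))   # 0.5, i.e. a heritability of about 50%
```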

Anyhow, I was told that I was wrong, and that quantum indeterminacy plays a role in our brains, especially when it comes to “free will”, and was pointed to a review paper arguing this by neurobiologist Björn Brembs of the Universität Regensburg.

Obviously a professor of neurobiology knows more about brains than I do, but, for now at least, I’m sticking to the above arguments. So what counter-arguments does Professor Brembs give? The paper first points out that the underlying physics is indeterministic. I agree, though, as argued above, indeterministic physics does not necessitate that brains are also indeterministic. The main argument presented, however, is the need for an animal to be unpredictable. Clearly, a gazelle running from a cheetah will benefit if the cheetah cannot predict which way it will dart. This will hold for any animal vulnerable to predation.

I agree on the need for unpredictability, but this does not require quantum indeterminacy. The world is simply way too complex for any of us to be able to predict it, even if that were “in principle” possible given enough information and intelligence. All that matters is that, in practice, predators stand no chance of making such predictions. The nematode worm (Caenorhabditis elegans) has only 302 cells in its nervous system. But even there, different individuals have those 302 cells wired up differently, owing to inevitable differences as the embryo developed. And if I gave you a complete map of the neural network of a mere 302 cells, could you look at it and predict how it would respond to various stimuli? I certainly couldn’t. The cheetah hasn’t a hope in hell of predicting the exact behaviour of the gazelle’s brain, even if that brain is entirely deterministic, and even if it had a complete and accurate neural-level map of that gazelle’s brain (which of course it doesn’t), and even if it had complete knowledge of the gazelle’s sensory experiences (which it also doesn’t).

So you don’t need indeterminacy to have in-practice unpredictability; the world is just too complex. And, while a prey animal wants unpredictability, it does not want to make dice-throwing decisions. There are some directions to dart — straight into the paws of the cheetah — that would be a rather bad choice. The gazelle still wants to make best use of all the information from its senses, and that requires a deterministic neural network.

And that’s about it: the argument that “predictability is one thing that will make sure that a competitor will be out of business soon”, and thus that “deterministic behaviour can never be evolutionarily stable”, is pretty much the only argument that Professor Brembs presents for brains being indeterministic. He does argue at length that brains need to produce “behavioural variability”, and that they need to be flexible and adaptable and responsive to their environments. I agree entirely. But this is a separate issue from them being non-deterministic. Indeed, being responsive to their environments is itself a deterministic concept. The whole point of quantum indeterminacy is that it is not the result of anything, and so is independent of local conditions.

As an example, Brembs argues that:

“… the temporal structure of the variability in spontaneous turning manoeuvres both in tethered and in free-flying fruitflies could not be explained by random system noise. Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions and amplifying them exponentially. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres.”

But sensitivity to small differences and a non-linear response are not the same thing as being non-deterministic. Deterministic systems often behave like that. A roulette wheel is non-linear and unpredictable in practice, even though it is deterministic. As another example, modern fighter jets are designed to be aerodynamically unstable, in order to be highly manoeuvrable — just like the fruit fly — and it would be straightforward to write a black-box computer program that produces a flight plan that another human (having no sight of the code) could not then predict.
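To illustrate how easy that would be, here is a minimal sketch of a deterministic “black box” (my own example, using the logistic map in its chaotic regime): every step is fully determined, yet two runs whose starting values differ by 10⁻⁹ diverge completely within a few dozen steps.

```python
def logistic_trajectory(x0, r=3.99, steps=50):
    # x_{n+1} = r * x_n * (1 - x_n): fully deterministic, chaotic for r near 4
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # starting value shifted by a billionth
print(a[-1], b[-1])                    # completely different after 50 deterministic steps
```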

I do wonder whether a rejection of determinism might be motivated by the false presumption that a “deterministic” response must inevitably be a simple, linear and inflexible response, one that is not responsive to the local circumstances. But this is not so. Responding to local circumstances is (by definition) a deterministic process, and behavioural variation and flexibility are exactly what you’d get from a complex but deterministic neural network.

But Professor Brembs wants a role for indeterminacy as an ingredient for his conception of free will:

While some argue that unpredictable (or random) choice does not qualify for their definition of free will, it is precisely the freedom from the chains of causality that most scholars see as a crucial prerequisite for free will.

I consider this to be misguided. As a compatibilist, I assert that the only “free will” that we have is entirely compatible with determinism. We have a “will” and often we are “free” to act on it. And yes, that will is indeed a product of the prior state of the system. The sort of “will” that arises independently of the prior state of the system does not exist (unless one wants to argue for dualistic souls that tell matter how to move).

But many people dislike that conclusion; they reject dualism but want to rescue a “will” that is “free from the chains of causality”. They hope that some mixture of causation and indeterminism might do that. Thus Brembs argues (as have others) for a “two-stage model of free will”:

One stage generates behavioural options and the other one decides which of those actions will be initiated. Put simply, the first stage is ‘free’ and the second stage is ‘willed’. […] freedom arises from the creative and indeterministic generation of alternative possibilities, which present themselves to the will for evaluation and selection.

This may be a failure of my imagination, but I don’t see how this helps. Yes, it gives freedom (from the causal chain) and it gives will, but the part that is free is not willed, and the part that is willed is not free from causation. So it doesn’t give a “free will”, if that means a will that is free from the causal chain.

The “free” part is simply generating a list of possibilities. The “will” part, the bit that does the choosing from the list, is still a computational process. And, as argued above, non-deterministic choice-making could not have evolved.

Invoking the first stage serves only to salve an intuitive dislike of the idea that we are bound to the same processes of cause-and-effect that govern the rest of nature. People consider this to be beneath human dignity. But human intuition is unreliable, and a mechanism invoked purely to salve human dignity is unlikely to be how things actually are. If you think that a dice-throwing component is needed to generate options that could not otherwise be available, then you’re likely underestimating the degree to which the world is so complex that a deterministic neural network would already produce ample behavioural flexibility.

In short, I suggest that appealing to quantum indeterminacy in any discussion of free will is a red herring that has intuitive appeal but that cannot be made into a coherent account of a will that is uncaused. We should, instead, accept that there is nothing wrong with our wills being caused.

But it seems that Brembs is motivated by a dislike of the idea that we are at the end of a causal chain:

I hope to at least start a thought process that abandoning the metaphysical concept of free will does not automatically entail that we are slaves of our genes and our environment, forced to always choose the same option when faced with the same situation.

So what would be the problem if that were indeed the case?

In fact, I am confident I have argued successfully that we would not exist if our brains were not able to make a different choice even in the face of identical circumstances and history.

I don’t agree that the argument has been successfully made. And, anyhow, in practice, “identical circumstances and history” never recur. Walk around outside and look around you. The local environment that you are looking at is made up of something like 10³² individual atoms. That is so many that, if you were to point to them individually at a rate of one every second, it would take you 100,000,000,000,000 times the age of the universe before you’d finished. The idea of “identical circumstances”, with every one of those atoms being in the same place and behaving the same way, is a valid philosophical thought experiment but is not of practical significance. The world already contains enough complexity and variation that there is no need to invoke quantum indeterminacy in order for a cheetah to be unable to predict the darting of a gazelle (if the cheetah counted the neurons in the gazelle’s brain at a rate of one a second, it would take it a mere 3000 years).
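The arithmetic in that paragraph can be checked in a few lines (order-of-magnitude only; the neuron count I use for the gazelle is an assumed round number, not a figure from the post):

```python
atoms = 1e32                      # atoms in the local environment, as quoted above
seconds_per_year = 3.156e7
age_of_universe_years = 1.38e10

years_needed = atoms / seconds_per_year
print(years_needed / age_of_universe_years)   # ~2e14 times the age of the universe

neurons = 1e11                    # assumed round number for a gazelle's brain
print(neurons / seconds_per_year)             # ~3000 years of counting, one per second
```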

It makes no practical difference at all whether a different decision arises because of a quantum dice throw, or because the circumstances were slightly different. I don’t see why the former is preferable, or has more “moral” salience, or is more in accord with our human dignity (not that that actually matters, since the universe is not obliged to accord with our preferences).

Our Earth is not at the centre of the universe. The universe as a whole does not have a purpose. It was not created for us; we are just one product of an evolutionary process. And, being a product of the universe, we are material beings produced by, and governed by, the same laws of cause and effect that describe everything else.

Confusion about free will, reductionism and emergence

Psychology Today has just published: “Finding the Freedom in Free Will”, with the subtitle: “New theoretical work suggests that human agency and physics are compatible”. The author is Bobby Azarian, a science writer with a PhD in neuroscience. The piece is not so much wrong — I actually agree with the main conclusions — but is, perhaps, rather confused. Too often, discussion in this area is bedevilled by people meaning different things by the same terms. Here is my attempt to clarify the concepts. Azarian starts:

Some famous (and brilliant) physicists, particularly those clearly in the reductionist camp, have gone out of their way to ensure that the public believes there is no place for free will in a scientific worldview.

He names Sabine Hossenfelder and Brian Greene. The “free will” that such physicists deny is “dualistic soul” free will, the idea that a decision is made by something other than the computational playing out of the material processes in the brain. And they are right, there is no place for that sort of “free will” in a scientific worldview.

Azarian then says:

Other famous scientists—physicists, neuroscientists, and complexity scientists among them—have a position virtually opposite of Greene’s and Hossenfelder’s …

He names, among others, David Deutsch and the philosopher Dan Dennett. But the conception of “free will” that they espouse is indeed just the computational playing out of the material brain. Such brain activity generates a desire to do something (a “will”) and one can reasonably talk about a person’s “freedom” to act on their will. Philosophers call this a “compatibilist” account of free will.

Importantly, and contrary to Azarian’s statement, this position is not the opposite to Greene’s and Hossenfelder’s. They are not disagreeing on what the brain is doing nor about how the brain’s “choices” are arrived at. Rather, the main difference is in whether they describe such processes with the label “free will”, and that is largely an issue of semantics.

All of the above-named agree that a decision is arrived at by the material processes in the brain, resulting from the prior state of the system. Any discussion of “free will” needs to distinguish between the dualistic, physics-violating conception of free will and the physics-compliant, compatibilist conception. They are very different, and conflating them just creates confusion.

Do anti-agency reductionists believe that the theories of these scientists are not worth serious consideration?

No, they don’t! What they do think is that, when talking to a public who might interpret the term “free will” in terms of a dualistic soul operating independently of material causes, it might be better to avoid the term “free will”. People can reasonably disagree on that but, again, this issue of semantics is distinct from whether they agree on what the brain is doing.

Origins-of-life researcher Sara Walker, who is also a physicist, explains why mainstream physicists in the reductionist camp often take what most people would consider a nonsensical position: “It is often argued the idea of information with causal power is in conflict with our current understanding of physical reality, which is described in terms of fixed laws of motions and initial conditions.”

Is that often argued? Who? Where? This is more confusion, this time about “reductionism”. The only form of “reductionism” that reductionists actually hold to can be summed up as follows:

Imagine a Star-Trek-style transporter device that knows only about low-level entities, atoms and molecules, and about how they are arranged relative to their neighbouring atoms. This device knows nothing at all about high-level concepts such as “thoughts” and “intentions”.

If such a device made a complete and accurate replica of an animal — with every molecular-level and cellular-level aspect being identical — would the replica then manifest the same behaviour? And in manifesting the same behaviour, would it manifest the same high-level “thoughts” and “intentions”? [At least for a short time; this qualification being necessary because, owing to deterministic chaos, two such systems could then diverge in behaviour.]

If you reply “yes” then you’re a reductionist. [Whereas someone who believed that human decisions are made by a dualistic soul would likely answer “no”.]

Note that the pattern, the arrangement of the low-level entities is absolutely crucial to this thought experiment and is central to how a reductionist thinks. A Star Trek transporter does not just deliver the atoms and molecules in a disordered heap, and then expect the heap to jump up and fight Klingons.

The caricature of a reductionist is someone who would see no difference between a living human brain and a human brain put through a food blender. It shouldn’t need saying that such a view is so obviously wrong that no-one thinks like that.

What is the difference between a tree and an elephant? It is not the component parts. Trees and elephants are made of the same atoms (carbon, oxygen, nitrogen and hydrogen make up 96% of an organism’s weight, with the rest being a few other types of atoms). After all, elephants are made up of what they eat, which is trees. Indeed, by carefully controlling the water supply to a tree, one could, in principle, make its constituent atoms identical to those of a same-weight elephant.

So the difference between a tree and an elephant — and the cause of their different behaviour — is solely in the medium-scale and large-scale arrangement of the component parts. No reductionist (other than those constructed out of straw) disagrees.

Azarian then spends time contrasting a fully deterministic view of physics with the possibility of quantum-mechanical indeterminacy. This is more confusion. That topic is irrelevant here. Whether the state of the system at time t+1 is entirely specified by the state at time t, or whether there is also quantum dice throwing, is utterly irrelevant to concepts of “will”, “intention”, “agency” and “freedom” because quantum dice throwing doesn’t give you any of those.

Indeed, Azarian accepts this, saying: “quantum indeterminism alone does not help the notion of free will much since a reality with some randomness is not the same as one where an agent has control over that randomness”. But he then argues (quoting George Ellis) that: “While indeterminacy does not provide a mechanism for free will, it provides the “wiggle room” not afforded by Laplace’s model of reality; the necessary “causal slack” in the chain of cause-and-effect that could allow room for agency, …”.

I agree that we can sensibly talk about “agency”, and indeed we need that concept, but I don’t see any way in which quantum indeterminacy helps, not even in providing “wiggle room”. [And note that, after invoking this idea, Azarian does not then use it in what follows.]

If we want to talk about agency — which we do — let’s talk about the agency of a fully deterministic system, such as that of a chess-playing computer program that can easily outplay any human.

What else “chooses” the move if not the computer program? Yes, we can go on to explain how the computer and its program came to be, just as we can explain how an elephant came to be, but if we want to ascribe agency and “will” to an elephant (and, yes, we indeed do), then we can just as well ascribe the “choice” of move to the “intelligent agent” that is the chess-playing computer program. What else do you think an elephant’s brain is doing, if not something akin to the chess-playing computer, that is, assessing input information and computing a choice?
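If it helps, here is the kind of thing I mean by “assessing input information and computing a choice”, stripped to a few lines (my own toy sketch, nothing to do with any real chess engine): score each available option against the current input and pick the best. Same input, same choice, every time; it is deterministic, and it is still choosing.

```python
def choose(options, sensory_input, score):
    # Deterministic "agency": evaluate each option against the input and pick the best.
    return max(options, key=lambda option: score(option, sensory_input))

# Toy usage: a prey animal choosing a darting direction, given where the predator is.
directions = ["left", "right", "straight"]
predator_position = "straight"
print(choose(directions, predator_position,
             lambda d, p: 0 if d == p else 1))   # never darts straight at the predator
```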

But, Azarian asserts:

The combination of the probabilistic nature of reality and the mechanism known as top-down causation explains how living systems can affect physical reality through intentional action.

Top-down causation is another phrase that means different things to different people. As I see it, “top-down causation” is the assertion that the above Star-Trek-style replication of an animal would not work. I’ve never seen an explanation of why it wouldn’t work, or what would happen instead, but surely “top-down causation” has to mean more than “the pattern, the arrangement of components is important in how the system behaves”, because of course it is!

Top-down causation is the opposite of bottom-up causation, and it refers to the ability of a high-level controller (like an organism or a brain) to influence the trajectories of its low-level components (the molecules that make up the organism).

Yes, the high-level pattern does indeed affect how the low-level components move. An elephant’s desire for leaves affects where the molecules in its trunk are. But equally, the low-level components and their arrangement give rise to the high-level behaviour. So this is just another, equally valid, way of looking at the system. But it is not “the opposite of bottom-up causation”. Both views have to be fully valid at the same time. Unless, that is, you’re asserting that the Star-Trek-style replication would not work. And if you’re asserting that, why wouldn’t it work?

This means that biological agents are not slaves to the laws of physics and fundamental forces the way non-living systems are.

Well, no, this is wrong in two ways. First, the view that the low-level description gives rise to the high-level behaviour is still equally valid and correct, so the biological agents are still “slaves to the laws of physics and fundamental forces”. That is, unless you’re advocating something way more novel and weird than you’ve said so far. And, second, this would apply just as much to non-living systems that have purpose such as the chess-playing computer.

One might reply that such “purpose” arises only as a product of the Darwinian evolution of life. I readily agree, but that is a somewhat different distinction.

With the transition to life, macroscopic physical systems became freed from the fixed trajectories that we see with the movement of inanimate systems, which are predictable from simple laws of motion.

If that is saying only that life evolved to be more complex than non-living things, and thus their behaviour is more complex (and can be sensibly described as “purposeful”), then yes, sure, agreed.

The emergence of top-down causation is an example of what philosophers call a “strong emergence” because there is a fundamental change in the kind of causality we see in the world.

Well, no. Provided you agree that the Star-Trek-style replication would work, then this counts as weak emergence. “Strong” emergence has to be something more than that, entailing the replication not working.

Again, words can mean different things to different people, but the whole point of “strong” emergence is that “reductionism” fails, and reductionism (the sort that anyone actually holds to, that is) is summed up by the Star-Trek-style replication thought experiment.

[I’m aware that philosophers might regard the view summed up by that thought experiment as being “supervenience physicalism”, and assert that “reductionism” entails something more, but that’s not what “reductionism” means to physicists.]

And if by “a fundamental change in the kind of causality we see in the world” one means that Darwinian evolution leads to hyper-complex entities to which we can usefully ascribe purpose, intention and agency, then ok, yes, that is indeed an important way of understanding the world.

But it is not really “a fundamental change in the kind of causality we see in the world” because the bottom-up view, the view that these are collections of atoms behaving in accordance with the laws of physics, remains fully valid at the same time!

I’m actually fully on board with viewing living creatures as entities that have intentions and which exhibit purpose, and fully agree that such high-level modes of analysis are, for many purposes, the best and only ways of understanding their behaviour.

But it is wrong to see this as a rejection of reductionism or of a bottom-up way of seeing things. Both perspectives are true simultaneously and are fully compatible. I agree with the view that Azarian is arriving at here, but regard his explanation of how it is arrived at as confused.

… agency emerges when the information encoded in the patterned interactions of molecules begins telling the molecules what to do.

True, but it always did. Even within physics, the pattern, the arrangement of matter determines how it behaves, and hence the arrangement will “tell molecules what to do”. Even in a system as simple as a hydrogen atom, the arrangement of having the electron spin aligned with the proton spin will lead to different behaviour from the arrangement where the spins are anti-aligned.

Towards the end of the piece, Azarian’s and my views are perhaps converging:

The bottom-up flow of causation is never disrupted — it is just harnessed and directed toward a goal. We still inhabit a cause-and-effect cosmos, but now the picture is more nuanced, with high-level causes being every bit as real as the low-level ones.

Agreed. Azarian then explains that, when it comes to biological systems, what matters for behaviour is the macrostate of the system (the high-level “pattern”) rather than the microstate (all the low-level details).

Yes, agreed. But that is also just as true in physics. Analysing a physical system in terms of its macrostate (where many different microstates could constitute the same macrostate) is exactly the mode of analysis of “statistical mechanics”, which is at the heart of modern physics.
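A two-minute illustration of that macrostate/microstate point (my own example, not Azarian’s): with ten coin flips there are 1024 distinct microstates (exact sequences), but 252 of them correspond to the single macrostate “five heads”.

```python
from itertools import product
from collections import Counter

n = 10
microstates = list(product("HT", repeat=n))                 # every exact sequence of flips
macrostates = Counter(m.count("H") for m in microstates)    # grouped by number of heads

print(len(microstates))    # 1024 microstates
print(macrostates[5])      # 252 of them share the macrostate "5 heads"
```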

Azarian quotes neurogeneticist Kevin Mitchell in (correctly) saying that:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate …

But he goes slightly wrong in quoting neuroscientist Erik Hoel saying:

Recent research has argued exactly this by demonstrating the possibility of causal emergence: when a macroscale contains more information and does more causal work than its underlying microscale.

The macrostate cannot contain more information (and cannot “do more causal work”) than the underlying microstate, since one can reconstruct the macrostate from the microstate. Again, that is the point of the Star Trek thought experiment, and if that is wrong then we’ll need to overturn a lot of science. (Though no-one has ever given a coherent account of why it would be wrong, or of how things would work instead.)

So, where are we? I basically agree with the view that Azarian arrives at. So maybe we’re just disagreeing about what terms such as “reductionism”, “strong/weak emergence” and “top-down” causation actually mean. It wouldn’t be the first time! As I see it, though, this view is not a new and novel way of thinking, but is pretty much what the hidebound, reductionist physicists (like myself) have been advocating all along. None of us ever thought that a living human brain behaves the same as that same brain put through a food blender.

Here’s GJ 367b, an iron planet smaller and denser than Earth

This is an article I wrote for The Conversation about a new exoplanet, for which I was a co-author on the discovery paper. One reason for reproducing it here is that I can reverse any edit that I didn’t like!

As our Solar System formed, 4.6 billion years ago, small grains of dust and ice swirled around, left over from the formation of our Sun. Through time they collided and stuck to each other. As they grew in size, gravity helped them clump together. One such rock grew into the Earth on which we live. We now think that most of the stars in the night sky are also orbited by their own rocky planets. And teams of astronomers worldwide are trying to find them.

The latest discovery, given the catalogue designation GJ 367b, has just been announced in the journal Science by a team led by Dr Kristine Lam of the Institute of Planetary Research at the German Aerospace Center.

The first signs of it were seen in data from NASA’s Transiting Exoplanet Survey Satellite (TESS). Among the millions of stars being monitored by TESS, one showed a tiny but recurrent dip in its brightness. This is the tell-tale signature of a planet passing in front of its star every orbit (called a “transit”), blocking some of the light. The dip is only 0.03 percent deep, so shallow that it is near the limit of detection. That means that the planet must be small, comparable to Earth.
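The reasoning from dip depth to planet size is simple geometry: to first order the transit depth is the ratio of the planet’s disc area to the star’s, so depth ≈ (R_planet/R_star)². A quick check (my own arithmetic, not taken from the paper):

```python
import math

depth = 0.0003                        # a 0.03 percent dip
radius_ratio = math.sqrt(depth)       # R_planet / R_star
print(radius_ratio)                   # ~0.017: the planet's radius is ~1.7% of the star's
```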

But Dr Lam also wanted to know the planet’s mass. To do that her team set about observing the host star at every opportunity with HARPS, an instrument attached to a 3.6-metre telescope at the European Southern Observatory in Chile, that was specially designed to find planets. It does this by detecting a slight shift in the wavelength of the host star’s light, caused by the gravitational pull of the planet. It took over 100 observations to detect that shift, meaning that the planet, in addition to being small, must also have a low mass.
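To get a feel for why over 100 observations were needed, here is a rough sketch using the standard radial-velocity semi-amplitude formula for a circular, edge-on orbit; the stellar mass of roughly 0.45 solar masses for the red-dwarf host is my own assumed value for illustration, not a number from this article.

```python
import math

G = 6.674e-11                       # m^3 kg^-1 s^-2
M_sun, M_earth = 1.989e30, 5.972e24
P = 8 * 3600.0                      # ~8-hour orbit, in seconds
M_star = 0.45 * M_sun               # assumed red-dwarf mass (illustrative)
M_planet = 0.55 * M_earth           # the quoted 55 percent of Earth's mass

# K = (2*pi*G/P)^(1/3) * M_planet / (M_star + M_planet)^(2/3), for e = 0 and sin(i) = 1
K = (2 * math.pi * G / P) ** (1 / 3) * M_planet / (M_star + M_planet) ** (2 / 3)
print(K)    # of order 1 m/s: a tiny wobble, hence the need for so many HARPS measurements
```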

Artist’s impression of GJ 367b (Credit: Patricia Klein)

Eventually, as observations accumulated, the numbers were tied down: GJ 367b has a radius of 72 percent of Earth’s radius (to a precision of 7 percent), and a mass of 55 percent of Earth’s mass (to a precision of 14 percent). That demonstrates that astronomers can both find Earth-sized planets around other stars, and then measure their properties. The measurements tell us that this planet is denser than Earth. Whereas Earth has a core of iron surrounded by a rocky mantle, this planet must be nearly all iron, making it similar to our Solar System’s Mercury.
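The “denser than Earth” statement follows directly from the two quoted numbers, since density scales as mass over radius cubed (working in Earth units):

```python
mass_ratio = 0.55        # mass, in Earth masses
radius_ratio = 0.72      # radius, in Earth radii

print(mass_ratio / radius_ratio**3)   # ~1.5 times Earth's mean density: iron-rich territory
```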

Mercury orbits our Sun every 88 days. Blasted by fierce sunlight, the “daytime” side is bare rock heated to 430 degrees Celsius. GJ 367b is even more extreme. The recurrent transits tell us that it orbits its star in only 8 hours. Being so close, the daytime side will be a furnace heated to 1400 degrees Celsius, such that even rock would be molten. Perhaps GJ 367b was once a giant planet with a vast gaseous envelope, like Neptune. Over time, that gaseous envelope would have boiled off, leaving only the bare core that we see today. Or perhaps, as it formed, collisions with other proto-planets stripped off a mantle of rock, leaving only the iron core.

GJ 367b is, of course, way too hot to be habitable. But it shows that we can find and characterise rocky, Earth-sized planets. The task now is to find them further from their star, in the “habitable zone”, where the surface temperature would allow water to exist as a liquid. That is harder. The further a planet is from its star, the less likely it is to transit, and the longer the recurrence time between transits, making the transits harder to detect. And, for a planet orbiting further out, the gravitational tug on the host star is weaker, making that signal harder to detect too.

But GJ 367b’s host star is a red dwarf, a star much dimmer than our Sun. And, with less heating from starlight, the habitable zone around red dwarfs is much closer in. NASA’s Kepler spacecraft has already found planets in the habitable zone of red-dwarf stars, and the TESS survey promises to find many more.

The next step is to ask whether such planets have atmospheres, what those atmospheres are made of, and whether they contain water vapour. Even there, answers may soon be forthcoming, given the imminent launch of the James Webb Space Telescope. If JWST is pointed at a star when a planet is in transit, it can detect the starlight shining through the thin smear of atmosphere surrounding the planet, and that might show subtle spectral features caused by molecules in the planetary atmosphere. We’ve already found water vapour in the atmospheres of gas-giant exoplanets. As planet discoveries continue apace, it is becoming feasible that we could, before long, prove the existence of a planet that has an atmosphere and a rocky surface, on which water is running freely.

A rocky, Mercury-like exoplanet.

Science does not rest on metaphysical assumptions

It’s a commonly made claim: science depends on making metaphysical assumptions. Here the claim is being made by Kevin Mitchell, a neuroscientist and author of the book Innate: How the Wiring of Our Brains Shapes Who We Are (which I recommend).

His Twitter thread was in response to an article by Richard Dawkins in The Spectator:

Dawkins’s writing style does seem to divide opinion, though personally I liked the piece and consider Dawkins to be more astute on the nature of science than he is given credit for. Mitchell’s central criticism is that Dawkins fails to recognise that science must rest on metaphysics: Continue reading

Eddington did indeed validate Einstein at the 1919 eclipse

You’re likely aware of the story. Having developed General Relativity, a theory of gravity that improved on Newton’s account, Einstein concluded that the fabric of space is warped by the presence of mass, and thus that light rays will travel on distorted paths, following the warped space. He then predicted that this could be observed during a solar eclipse, when the apparent positions of stars near the Sun would be shifted by the mass of the Sun. Britain’s leading astronomer, Arthur Eddington, set out to observe the 1919 solar eclipse, and triumphantly confirmed Einstein’s prediction. The story then made the front pages of the newspapers, and Einstein became a household name.

You’re also likely aware of the revisionist account: that the observations acquired by Eddington were ambiguous and inconclusive, and that he picked out the subset of measurements that agreed with Einstein’s prediction. On this telling, Eddington’s vindication of Einstein was not warranted by the data, but was more a “social construction”, arrived at because Eddington wanted Einstein’s theory to be true. Thus Einstein’s fame resulted not from having developed a superior theory, but from the approval of the high-status Eddington.

The story is often quoted in support of the thesis that science — far from giving an objective model of reality — is just another form of socially-constructed knowledge, with little claim to be superior to other “ways of knowing”. Even those who grant that science can attain some degree of objectivity can point to such accounts and conclude that the acceptance of scientific ideas depends far more on the social status of their advocates than is commonly acknowledged.

Albert Einstein and Arthur Eddington

A new paper by Gerry Gilmore and Gudrun Tausch-Pebody reports a re-analysis of the data and a re-evaluation of the whole story. Their conclusion, in short, is that Eddington’s analysis was defensible and correct. Where he placed more credence on some observations than others, he was right to do so, and the measurements really did favour Einstein’s value for the deflection of the stars’ positions.

Thus the later revisionist account by philosophers John Earman and Clark Glymour, taken up in accounts of science such as The Golem by Harry Collins and Trevor Pinch, is unfair to Eddington.

Images of the 1919 solar eclipse. Faint stars are marked.

Gilmore and Tausch-Pebody say in their article:

Earman and Glymour conclude: “Now the eclipse expeditions confirmed the theory only if part of the observations were thrown out and the discrepancies in the remainder ignored; Dyson and Eddington, who presented the results to the scientific world, threw out a good part of the data and ignored the discrepancies. This curious sequence of reasons might be cause enough for despair on the part of those who see in science a model of objectivity and rationality.”

Our re-analysis shows that these strong claims are based entirely on methodological error. Earman and Glymour failed to understand the difference between the dispersion of a set of measurements and an uncertainty, random plus systematic, on the value of the parameter being measured. They speculated but did not calculate, and their conclusions are not supported by evidence.

Their error was left unchallenged and the strong conclusions and accusations they derived from it were used not only to question the scientific method then applied, but also to undermine the scientific integrity and reputation of an eminent scientist.

The crucial observations came from two different telescopes, a 4-inch telescope at Sobral, in Brazil, and an astrograph sent to Principe Island, off West Africa. Einstein’s theory of gravity predicted a deflection (for a star at the sun’s limb) of 1.75 arcsecs, while a calculation based on Newtonian gravity predicted half that value, 0.87 arcsecs.
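That factor of two can be checked from the standard formulae: general relativity gives a limb deflection of 4GM/(c²R), while the Newtonian-style calculation gives 2GM/(c²R). A quick back-of-envelope check (mine, not from the paper):

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8
rad_to_arcsec = 180.0 / math.pi * 3600.0

gr_deflection = 4 * G * M_sun / (c**2 * R_sun) * rad_to_arcsec
newtonian_deflection = gr_deflection / 2.0
print(gr_deflection, newtonian_deflection)   # ~1.75 and ~0.87 arcseconds
```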

Gilmore and Tausch-Pebody tabulate the measured deflections, and how much each differed from the Einsteinian, Newtonian and zero-deflection models. The z value is the difference, in units of the measurement’s error bar, and P(z) is the probability of obtaining that measurement were the model correct. The data clearly prefer Einstein’s value for the deflection.
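For readers unused to this way of presenting results, here is what the z and P(z) columns mean, with purely hypothetical numbers of my own (the real values are in the paper): z is how many error bars a measurement lies from a model’s prediction, and P(z) is the chance of a deviation at least that large if the model were correct.

```python
import math

def z_and_p(measured, sigma, predicted):
    z = abs(measured - predicted) / sigma
    p = math.erfc(z / math.sqrt(2))      # two-sided Gaussian tail probability
    return z, p

# Hypothetical measurement purely for illustration: 1.70 +/- 0.12 arcsec
print(z_and_p(1.70, 0.12, 1.75))   # small z, high P(z): consistent with Einstein's 1.75
print(z_and_p(1.70, 0.12, 0.87))   # ~7 sigma, vanishing P(z): rules out the Newtonian 0.87
```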

Observations were also made with a third instrument, an astrograph taken to Sobral. However, the resulting images were “diffused and apparently out of focus”, resulting in a systematic error that was large and unquantifiable. Crucially, being unable to evaluate the systematic distortion, the observers could not arrive at a proper uncertainty estimate for these data points, without which they could not be combined with the measurements from the other two telescopes.

Gilmore and Tausch-Pebody conclude:

The original 1919 analysis is statistically robust, with conclusions validly derived, supporting Einstein’s prediction. The rejected third data set is indeed of such low weight that its suppression or inclusion has no effect on the final result for the light deflection, though the very large and poorly quantified systematic errors justify its rejection.

Scientists, being human, are of course fallible and prone to bias. To a large extent they are aware of that, which is why techniques such as double-blinded controlled trials are routinely adopted. And in some areas, such as the replication crisis in psychology, scientists have certainly not been careful enough. But, overall, it does seem that science succeeds in overcoming human fallibility, and that the consensus findings arrived at are more robust than critics sometimes allow.

Are predictions an essential part of science?

Theoretical physicist Sabine Hossenfelder recently wrote that “predictions are over-rated” and that one should instead judge the merits of scientific models “by how much data they have been able to describe well, and how many assumptions were needed for this”, finishing with the suggestion that “the world would be a better place if scientists talked less about predictions and more about explanatory power”.

Others disagreed, including philosopher-of-science Massimo Pigliucci, who insists that “it’s the combination of explanatory power and the power of making novel, ideally unexpected, and empirically verifiable predictions” that decides whether a scientific theory is a good one. Neither predictions nor explanatory power, he adds, is sufficient alone, and “both are necessary” for a good scientific theory. Continue reading

Science Unlimited, Part Three: Philosophy

This is the Third Part of a review of Science Unlimited? The Challenges of Scientism, edited by Maarten Boudry and Massimo Pigliucci. See also Part 1, focusing on pseudoscience, and Part 2, focusing on the humanities.

Science started out as “natural philosophy” until Whewell coined the newer name “science”. As a scientist I have a PhD and am thus a “Doctor of Philosophy”. And yet many philosophers assert that today “philosophy” is an enterprise that is distinct from “science”.

The argument runs that philosophy is about exploration of concepts, and what can be deduced purely by thinking about concepts, whereas science is heavily empirical, rooted in observation of the world. Thus philosophy (exploration of concepts) and science (empirical observation) are fundamentally different beasts. And both are necessary for a proper understanding. Continue reading

Science Unlimited, Part One: Pseudoscience

Philosophers Maarten Boudry and Massimo Pigliucci have recently edited a volume of essays on the theme of scientism. The contributions to Science Unlimited? The Challenges of Scientism range from sympathetic to scientism to highly critical.

I’m aiming to write a series of blog posts reviewing the book, organised by major themes, though knowing me the “reviewing” task is likely to play second fiddle to arguing in favour of scientism.

Of course the term “scientism” was invented as a pejorative and so has been used with a range of meanings, many of them strawmen, but from the chapters of the book emerges a fairly coherent account of a “scientism” that many would adopt and defend.

This brand of scientism is a thesis about epistemology, asserting that the ways by which we find things out form a coherent and unified whole, and rejecting the idea that knowledge is divided into distinct domains, each with its own different “way of knowing”. The best knowledge and understanding is produced by combining and synthesising different approaches and disciplines, which must mesh seamlessly. Continue reading

The cosmological multiverse and falsifiability in science

The cosmological “multiverse” model talks about regions far beyond the observable portion of our universe (set by the finite light-travel distance given the finite time since the Big Bang). Critics thus complain that it is “unfalsifiable”, and so not science. Indeed, philosopher Massimo Pigliucci states that instead: “… the notion of a multiverse should be classed as scientifically-informed metaphysics”.

Sean Carroll has recently posted an article defending the multiverse as scientific (arXiv paper; blog post). We’re discussing here the cosmological multiverse — the term “multiverse” is also used for concepts arising from string theory and from the many-worlds interpretation of quantum mechanics, but the arguments for and against those are rather different. Continue reading

How not to defend humanistic reasoning

Sometimes the attitudes of philosophers towards science baffle me. A good example is the article Defending Humanistic Reasoning by Paul Giladi, Alexis Papazoglou and Giuseppina D’Oro, recently in Philosophy Now.

Why did Caesar cross the Rubicon? Because of his leg movements? Or because he wanted to assert his authority in Rome over his rivals? When we seek to interpret the actions of Caesar and Socrates, and ask what reasons they had for acting so, we do not usually want their actions to be explained as we might explain the rise of the tides or the motion of the planets; that is, as physical events dictated by natural laws. […]

The two varieties of explanation appear to compete, because both give rival explanations of the same action. But there is a way in which scientific explanations such as bodily movements and humanistic explanations such as motives and goals need not compete.

This treats “science” as though it stops where humans start. Science can deal with the world as it was before humans evolved, but at some point humans came along and — for unstated reasons — humans are outside the scope of science. This might be how some philosophers see things but the notion is totally alien to science. Humans are natural products of a natural world, and are just as much a part of what science can study as anything else.

Yes of course we want explanations of Caesar’s acts in terms of “motivations and goals” rather than physiology alone — is there even one person anywhere who would deny that? But nothing about human motivations and goals is outside the proper domain of science. Continue reading