
Neutral standards for advertisers are not a blasphemy law

The right to offend religious sentiment is a necessary part of a free society where we are not subject to religion. So, when Humanists UK protest about a recent ruling by the UK’s Advertising Standards Authority, I might be expected to join in. But I’m not.

“Demi Lovato must have the right to blaspheme”, declares Stephen Knight, a writer I usually agree with. But a right to blaspheme is not the same as the right to display a blasphemous poster on an advertising billboard in a public place. I will strongly defend the publication of Charlie Hebdo cartoons, but that doesn’t mean they would be appropriate on billboards in Piccadilly Circus.

The standard for acceptable advertisements is tighter than for publication in one’s own magazine or on one’s own website. That’s because the public encounters adverts when going about their daily life. They aren’t opting in to seeing them, as they would be when picking up a copy of Charlie Hebdo. As a result, the Advertising Standards Authority requires that adverts not cause “serious or widespread” offence, taking into account “prevailing standards in society”. The threshold will, of course, be subjective, but the ASA take their role seriously, including by polling public opinion.

The advert in question, for an album by Demi Lovato, depicts the singer in a sexualised pose while wearing bondage gear. That alone makes the advert questionable. According to ASA polling, objections to sexualised images are the most common form of objection to adverts. Then there is the album’s name, “Holy Fvck”, alluding to a swear word that many would regard as inappropriate for a public billboard. Further, the combination of these items with a crucifix, a symbol strongly associated with Christianity, would indeed offend people.

This perfume advert was banned by the ASA owing to the perceived sexualisation of a young model.

Perhaps that offence is the whole point of the album’s title and artwork, which would be absolutely fine in itself. “Corruption of youth” is, after all, the raison d’être of rock music. But the ASA ruling is not an imposition of a blasphemy law: no-one is objecting to the album or to its artwork, only to its display on an advertising billboard. And in that the ASA is just upholding a generally applicable standard.

I reject the idea that religious groups have any special right not to be offended, but, equally, offence to religious sentiment should not be discounted as less important than offence for any other reason. If the ASA would ban an advert because it is offensive to Manchester United football fans, then it is appropriate to ban an advert that is offensive to Christians.

Adverts by the fashion group French Connection UK were widely accepted but did arouse comment.

Humanists UK are calling this a “de facto blasphemy law”, and are petitioning the ASA that “religious offence should never be grounds to ban an advert”.

But do they think that offensiveness should never be a consideration in banning an advert, or are they asking for religious offence to be a type of offence that is specially disregarded?

I can’t agree with either. Yes, there needs to be a threshold of offensiveness for adverts, and, since religious people are our fellow citizens, their sentiments should count equally with those of others. Traditionally, of course, religion has had way too much special privilege (and still does), but we shouldn’t now advocate for religious beliefs to be treated worse than other beliefs.

Provided that the ASA try to apply the rules neutrally — so that religious and non-religious forms of offence are weighed equally — I’m comfortable with their ruling that this advert fell on the wrong side of the line.

Human brains have to be deterministic (though indeterminism would not give us free will anyhow)

Are human brains deterministic? That is, are the decisions that our brain makes the product of the prior state of the system (where that includes the brain itself and the sensory input into the brain), or does quantum indeterminacy lead to some level of uncaused randomness in our behaviour? Prompted by being told that this view is clearly wrong, I’ll argue here that our brains must be largely deterministic.

First, I’ll presume that quantum mechanics is indeed indeterministic (thus ignoring hidden-variable and Everettian versions). But the fact that the underlying physics is indeterministic does not mean that devices built out of quantum-mechanical stuff must also be indeterministic. One can obtain a deterministic device simply by averaging over a sufficient number of low-level particle events. Indeed, that’s exactly what we do when we design computer chips. We build them to be deterministic because we want them to do what we program them to do. In principle, quantum fluctuations in a computer chip could affect its output behaviour, but in practice a minimum of ~50 electrons are involved in each chip-junction “event”, which is sufficient to average over probabilistic behaviour such that the likelihood of a quantum fluctuation changing the output is too small to be an issue, and thus the chip is effectively deterministic. Again, we build them like that because we want to control their behaviour. The same holds for all human-built technology.
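As a toy illustration of that averaging (my own sketch in Python, not a model of real chip physics), suppose each “junction event” takes a majority vote over some number of noisy carriers, each of which flips with a 30 percent probability. The chance of the output being wrong collapses as the number of carriers grows:

```python
import random

def junction_output(intended_bit, n_carriers, flip_prob=0.3):
    """Toy 'junction event': each carrier reports the intended bit but is
    flipped with probability flip_prob; the output is the majority vote."""
    votes_for_intended = sum(random.random() >= flip_prob for _ in range(n_carriers))
    return intended_bit if votes_for_intended > n_carriers / 2 else 1 - intended_bit

def error_rate(n_carriers, trials=200_000):
    """Fraction of junction events whose output differs from the intended bit."""
    return sum(junction_output(1, n_carriers) != 1 for _ in range(trials)) / trials

for n in (1, 5, 15, 51):
    print(f"{n:2d} carriers per event -> error rate ~ {error_rate(n):.4f}")
```

Even with that deliberately exaggerated noise, fifty-odd carriers per event bring the error rate down to around the 0.1 percent level; real devices are engineered so that the equivalent figure is negligible.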

There may be some instances where genuine non-deterministic randomness might be useful. An example is running a Monte-Carlo simulation (a technique, widely used in science and engineering, of exploring the range of possible outcomes of a model by repeated random sampling). But, even here, in practice, one usually uses deterministic pseudo-random-number generators, simply because our computers — despite being built on quantum mechanics — don’t actually do genuine randomness.
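As a concrete illustration (a minimal sketch of my own, not tied to any particular application): a Monte-Carlo estimate of pi driven by a seeded pseudo-random-number generator is fully reproducible, because the “randomness” is itself deterministic:

```python
import random

def estimate_pi(n_samples, seed):
    """Monte-Carlo estimate of pi from the fraction of random points in the
    unit square that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)   # same seed -> same 'random' sequence
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n_samples))
    return 4 * inside / n_samples

# Two runs with the same seed give bit-identical answers.
print(estimate_pi(100_000, seed=42))
print(estimate_pi(100_000, seed=42))
```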

Our brains are also built to do a job. They are the product of a genetic recipe, a recipe that is the product of evolution. In evolutionary terms the job of a brain is to make real-time decisions based on that genetic recipe and on the local circumstances, as informed through the senses. And brains are hugely expensive in evolutionary terms, consuming 20 percent of the body’s energy and (for example) forcing big compromises in female anatomy and making childbirth dangerous.

It follows that brains could not have evolved unless they were strongly selected for (they cannot just be byproduct “spandrels”), which means they must serve the interests of the genes that specify the recipe, and that means that brain behaviour (the choices they make) must be strongly influenced by the genes. And, since those choices can be made decades after the childhood brain develops out of the genetic recipe, it follows that there must be a deterministic chain of causation that holds over generational timescales.

To head off misunderstandings, the above is not saying that behaviour is entirely specified by genes. (Whenever anyone argues for genetics being a major factor in anything, it is often attacked as being the claim that genetics is the only factor; the reality is that everything is always a product of both genes and environment.) Obviously, the brain’s job is to make decisions reflecting the local circumstances, but how to react to particular circumstances must be strongly influenced by genes, otherwise the brain could not have evolved. Nor is this saying that the environment and stochastic factors have no effect on how the genetic recipe plays out as the brain develops; of course they do. And nor is this saying that the brain’s neural network is static and unchanging. Of course it isn’t (memories, for example, are changes in the network). But this argument does show that there must be a deterministic chain between genes and brain behaviour that holds over multi-decade timescales. It must be the case that, in general, “that behavioural decision could have been different if, three decades ago, that gene had been different”. That includes behavioural decisions that are regarded as “free will” decisions — which presents no problem if one adopts a compatibilist interpretation of free will.

The above argument doesn’t fully rule out a minor role for genuine randomness based on quantum indeterminacy. I would guess that, were a quantum dice-throwing module within the brain of evolutionary benefit to an animal, then such a module could have evolved. But it’s hard to see why it would be evolutionarily beneficial. Just as we make technology to do what we want it to, genes will make brains that behave in ways they program for. That will hold especially for the large component of brain function that is simply about regulating body functions, not producing “behavioural choices”, and for the primitive brains producing fairly simple behaviour, such as in worms. This means that the neural-network junctions (synapses) will have evolved to be deterministic. This is achieved by a neural signal, an “action potential”, being an on–off event (so that it is insensitive to small changes), with a sufficient number of ions needed to trigger one (such that quantum indeterminacy is averaged over). This is pretty much the same way that humans make computer chips to be deterministic. Since larger and more complex brains work in a similar way, just with vastly more neurons and neural connections, it follows that they also will be deterministic.

Another point: our brains are simply too big and too complex to be about producing quantum “dice throwing” decisions. A device producing an indeterminate output would have to be small (again, averaging over anything more than a couple of dozen ions gives you a deterministic output). Yet our brains have 100 billion neurons made of 10^26 molecules. What is all that for, and how did it evolve, if our decisions are effectively a dice throw? The only answer is that it evolved to process the information from our senses, and make decisions based on that, and making decisions based on input information is (by definition) a deterministic process.

Lastly, twin studies tell us that behavioural traits are highly heritable, with typical heritabilities of 50%. Again, this requires a causal chain between gene expression at the stage of brain development and behavioural choices made decades later. (There’s also a large role for the environment, of course, but being affected by environment is just as much a deterministic process as being affected by genes.)

Anyhow, I was told that I was wrong, and that quantum indeterminacy plays a role in our brains, especially when it comes to “free will”, and was pointed to a review paper arguing this by neurobiologist Björn Brembs of the Universität Regensburg.

Obviously a professor of neurobiology knows more about brains than I do, but, for now at least, I’m sticking to the above arguments. So what counter-arguments does Professor Brembs give? The paper first points out that the underlying physics is indeterministic. I agree, though, as argued above, that does not necessitate that brains are. The main argument presented, however, is the need for an animal to be unpredictable. Clearly, a gazelle running from a cheetah will benefit if the cheetah cannot predict which way it will dart. This will hold for any animal vulnerable to predation.

I agree on the need for unpredictability, but this does not require quantum indeterminacy. The world is simply way too complex for any of us to be able to predict it, even if that were “in principle” possible given enough information and intelligence. All that matters is that, in practice, predators stand no chance of making such predictions. The nematode worm (Caenorhabditis elegans) has only 302 cells in its nervous system. But even there, different individuals have those 302 cells wired up differently, owing to inevitable differences as the embryo developed. And if I gave you a complete map of the neural network of a mere 302 cells, could you look at it and predict how it would respond to various stimuli? I certainly couldn’t. The cheetah hasn’t a hope in hell of predicting the exact behaviour of the gazelle’s brain, even if that brain is entirely deterministic, and even if it had a complete and accurate neural-level map of that gazelle’s brain (which of course it doesn’t), and even if it had complete knowledge of the gazelle’s sensory experiences (which it also doesn’t).

So you don’t need indeterminacy to have in-practice unpredictability; the world is just too complex. And, while a prey animal wants unpredictability, it does not want to make dice-throwing decisions. There are some directions to dart — straight into the paws of the cheetah — that would be a rather bad choice. The gazelle still wants to make best use of all the information from its senses, and that requires a deterministic neural network.

And that’s about it: the above argument, that “predictability is one thing that will make sure that a competitor will be out of business soon”, and thus that “deterministic behaviour can never be evolutionarily stable”, is pretty much the only argument that Professor Brembs presents for brains being indeterministic. He does argue at length for brains needing to produce “behavioural variability”, and for them needing to be flexible and adaptable and responsive to their environments. I agree entirely. But this is a separate issue from them being non-deterministic. Indeed, being responsive to their environments is itself a deterministic concept. The whole point of quantum indeterminacy is that it is not the result of anything, and so is independent of local conditions.

As an example, Brembs argues that:

“… the temporal structure of the variability in spontaneous turning manoeuvres both in tethered and in free-flying fruitflies could not be explained by random system noise. Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions and amplifying them exponentially. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres.”

But sensitivity to small differences and a non-linear response is not the same thing as being non-deterministic. Deterministic systems often behave like that. A roulette wheel is non-linear and unpredictable in practice, even if it is deterministic. As another example, modern fighter jets are designed to be aerodynamically unstable, in order to be highly manoeuvrable — just like the fruit fly — and it would be straightforward to write a black-box computer code to produce a flight plan that another human (having no sight of the code) could not then predict.
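To make the distinction concrete (a generic sketch of my own, not taken from Brembs’s paper): the logistic map below is fully deterministic, yet two starting points differing by one part in a million diverge to completely different trajectories within a few dozen steps, while identical starting points always give identical trajectories:

```python
def logistic_orbit(x0, steps, r=3.9):
    """Iterate the (fully deterministic, chaotic) logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000, 40)
b = logistic_orbit(0.200001, 40)   # initial condition differs by 1e-6
c = logistic_orbit(0.200000, 40)   # identical initial condition

print(abs(a[-1] - b[-1]))   # order-one difference: unpredictable in practice
print(abs(a[-1] - c[-1]))   # exactly 0.0: the system is still deterministic
```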

I do wonder whether a rejection of determinism might be motivated by the false presumption that a “deterministic” response must inevitably be a simple, linear and inflexible response that is not responsive to the local circumstances. But this is not so. Responding to local circumstances is (by definition) a deterministic process, and behavioural variation and flexibility are exactly what you’d get from a complex but deterministic neural network.

But Professor Brembs wants a role for indeterminacy as an ingredient for his conception of free will.

While some argue that unpredictable (or random) choice does not qualify for their definition of free will, it is precisely the freedom from the chains of causality that most scholars see as a crucial prerequisite for free will.

I consider this to be misguided. As a compatibilist, I assert that the only “free will” that we have is entirely compatible with determinism. We have a “will” and often we are “free” to act on it. And yes, that will is indeed a product of the prior state of the system. The sort of “will” that arises independently of the prior state of the system does not exist (unless one wants to argue for dualistic souls that tell matter how to move).

But many people dislike that conclusion; they reject dualism but want to rescue a “will” that is “free from the chains of causality”. They hope that some mixture of causation and indeterminism might do that. Thus Brembs argues (as have others) for a “two-stage model of free will”:

One stage generates behavioural options and the other one decides which of those actions will be initiated. Put simply, the first stage is ‘free’ and the second stage is ‘willed’. […] freedom arises from the creative and indeterministic generation of alternative possibilities, which present themselves to the will for evaluation and selection.

This may be a failure of my imagination but I don’t see how this helps. Yes it gives freedom (from the causal chain) and it gives will, but the part that is free is not willed and the part that is willed is not free from causation. So it doesn’t give a “free will” if that is a will that is free from the causal chain.

The “free” part is simply generating a list of possibilities. The “will” part, the bit that is doing the choosing from the list, is still a computational process. Non-deterministic choice making could not have evolved.

Invoking the first stage serves only to salve an intuitive dislike of the idea that we are bound to the same processes of cause-and-effect that govern the rest of nature. People consider this to be beneath human dignity. But human intuition is unreliable, and a mechanism invoked purely to salve human dignity is unlikely to be how things actually are. If you think that a dice-throwing component is needed to generate options that could not otherwise be available, then you’re likely underestimating the degree to which the world is so complex that a deterministic neural network would already produce ample behavioural flexibility.

In short, I suggest that appealing to quantum indeterminacy in any discussion of free will is a red herring that has intuitive appeal but that cannot be made into a coherent account of a will that is uncaused. We should, instead, accept that there is nothing wrong with our wills being caused.

But it seems that Brembs is motivated by a dislike of the idea that we are at the end of a causal chain:

I hope to at least start a thought process that abandoning the metaphysical concept of free will does not automatically entail that we are slaves of our genes and our environment, forced to always choose the same option when faced with the same situation.

So what would be the problem if that were indeed the case?

In fact, I am confident I have argued successfully that we would not exist if our brains were not able to make a different choice even in the face of identical circumstances and history.

I don’t agree that the argument has been successfully made. And, anyhow, in practice, “identical circumstances and history” never recur. Walk around outside and look around you. The local environment that you are looking at is made up of something like 10^32 individual atoms. That is so many that, if you were to point to them individually at a rate of one every second, it would take you 100,000,000,000,000 times the age of the universe before you’d finished. The idea of “identical circumstances”, with every one of those atoms being in the same place and behaving the same way, is a valid philosophical thought experiment but is not of practical significance. The world already contains enough complexity and variation that there is no need to invoke quantum indeterminacy in order for a cheetah to be unable to predict the darting of a gazelle (if the cheetah counted neurons in the gazelle’s brain at a rate of one a second it would take it a mere 3000 years).
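A quick back-of-envelope check of those numbers (the figure of roughly 10^11 neurons for a gazelle-sized brain is my assumption; the text quotes only the result):

```python
SECONDS_PER_YEAR = 3.16e7
AGE_OF_UNIVERSE_YEARS = 1.38e10

atoms_in_view = 1e32         # rough figure used above for the local environment
neurons_in_brain = 1e11      # assumed, roughly human-like neuron count

years_to_count_atoms = atoms_in_view / SECONDS_PER_YEAR
print(years_to_count_atoms / AGE_OF_UNIVERSE_YEARS)   # ~2e14 ages of the universe

print(neurons_in_brain / SECONDS_PER_YEAR)            # ~3000 years
```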

It makes no practical difference at all whether a different decision arises because of a quantum dice throw, or because the circumstances were slightly different. I don’t see why the former is preferable, or has more “moral” salience, or is more in accord with our human dignity (not that that actually matters, since the universe is not obliged to accord with our preferences).

Our Earth is not at the centre of the universe. The universe as a whole does not have a purpose. It was not created for us; we are just one product of an evolutionary process. And, being a product of the universe, we are material beings produced by and governed by the same laws of cause and effect that describe everything else.

Confusion about free will, reductionism and emergence

Psychology Today has just published “Finding the Freedom in Free Will”, with the subtitle: “New theoretical work suggests that human agency and physics are compatible”. The author is Bobby Azarian, a science writer with a PhD in neuroscience. The piece is not so much wrong — I actually agree with the main conclusions — as, perhaps, rather confused. Too often, discussion in this area is bedevilled by people meaning different things by the same terms. Here is my attempt to clarify the concepts. Azarian starts:

Some famous (and brilliant) physicists, particularly those clearly in the reductionist camp, have gone out of their way to ensure that the public believes there is no place for free will in a scientific worldview.

He names Sabine Hossenfelder and Brian Greene. The “free will” that such physicists deny is “dualistic soul” free will, the idea that a decision is made by something other than the computational playing out of the material processes in the brain. And they are right, there is no place for that sort of “free will” in a scientific worldview.

Azarian then says:

Other famous scientists—physicists, neuroscientists, and complexity scientists among them—have a position virtually opposite of Greene’s and Hossenfelder’s …

He names, among others, David Deutsch and the philosopher Dan Dennett. But the conception of “free will” that they espouse is indeed just the computational playing out of the material brain. Such brain activity generates a desire to do something (a “will”) and one can reasonably talk about a person’s “freedom” to act on their will. Philosophers call this a “compatibilist” account of free will.

Importantly, and contrary to Azarian’s statement, this position is not the opposite of Greene’s and Hossenfelder’s. They are not disagreeing about what the brain is doing nor about how the brain’s “choices” are arrived at. Rather, the main difference is in whether they describe such processes with the label “free will”, and that is largely an issue of semantics.

All the above named agree that a decision is arrived at by the material processes in the brain, resulting from the prior state of the system. Any discussion of “free will” needs to distinguish between the dualistic, physics-violating conception of free will and the physics-compliant, compatibilist conception of free will. They are very different, and conflating them is just confused.

Do anti-agency reductionists believe that the theories of these scientists are not worth serious consideration?

No, they don’t! What they do think is that, when talking to a public who might interpret the term “free will” in terms of a dualistic soul operating independently of material causes, it might be better to avoid the term “free will”. People can reasonably disagree on that, but, again, this issue of semantics is distinct from whether they agree on what the brain is doing.

Origins-of-life researcher Sara Walker, who is also a physicist, explains why mainstream physicists in the reductionist camp often take what most people would consider a nonsensical position: “It is often argued the idea of information with causal power is in conflict with our current understanding of physical reality, which is described in terms of fixed laws of motions and initial conditions.”

Is that often argued? Who? Where? This is more confusion, this time about “reductionism”. The only form of “reductionism” that reductionists actually hold to can be summed up as follows:

Imagine a Star-Trek-style transporter device that knows only about low-level entities, atoms and molecules, and about how they are arranged relative to their neighbouring atoms. This device knows nothing at all about high-level concepts such as “thoughts” and “intentions”.

If such a device made a complete and accurate replica of an animal — with every molecular-level and cellular-level aspect being identical — would the replica then manifest the same behaviour? And in manifesting the same behaviour, would it manifest the same high-level “thoughts” and “intentions”? [At least for a short time; this qualification being necessary because, owing to deterministic chaos, two such systems could then diverge in behaviour.]

If you reply “yes” then you’re a reductionist. [Whereas someone who believed that human decisions are made by a dualistic soul would likely answer “no”.]

Note that the pattern, the arrangement of the low-level entities, is absolutely crucial to this thought experiment and is central to how a reductionist thinks. A Star Trek transporter does not just deliver the atoms and molecules in a disordered heap, and then expect the heap to jump up and fight Klingons.

The caricature of a reductionist is someone who would see no difference between a living human brain and a human brain put through a food blender. It shouldn’t need saying that such a view is so obviously wrong that no-one thinks like that.

What is the difference between a tree and an elephant? It is not the component parts. Trees and elephants are made of the same atoms (carbon, oxygen, nitrogen and hydrogen make up 96% of an organism’s weight, with the rest being a few other types of atoms). After all, elephants are made up of what they eat, which is trees. Indeed, by carefully controlling the water supply to a tree, one could, in principle, make its constituent atoms identical to those of a same-weight elephant.

So the difference between a tree and an elephant — and the cause of their different behaviour — is solely in the medium-scale and large-scale arrangement of the component parts. No reductionist (other than those constructed out of straw) disagrees.

Azarian then spends time contrasting a fully deterministic view of physics with the possibility of quantum-mechanical indeterminacy. This is more confusion. That topic is irrelevant here. Whether the state of the system at time t+1 is entirely specified by the state at time t, or whether there is also quantum dice throwing, is utterly irrelevant to concepts of “will”, “intention”, “agency” and “freedom” because quantum dice throwing doesn’t give you any of those.

Indeed, Azarian accepts this, saying: “quantum indeterminism alone does not help the notion of free will much since a reality with some randomness is not the same as one where an agent has control over that randomness”. But he then argues (quoting George Ellis) that: “While indeterminacy does not provide a mechanism for free will, it provides the “wiggle room” not afforded by Laplace’s model of reality; the necessary “causal slack” in the chain of cause-and-effect that could allow room for agency, …”.

I agree that we can sensibly talk about “agency”, and indeed we need that concept, but I don’t see any way in which quantum indeterminacy helps, not even in providing “wiggle room”. [And note that, after invoking this idea, Azarian does not then use it in what follows.]

If we want to talk about agency — which we do — let’s talk about the agency of a fully deterministic system, such as that of a chess-playing computer program that can easily outplay any human.

What else “chooses” the move if not the computer program? Yes, we can go on to explain how the computer and its program came to be, just as we can explain how an elephant came to be, but if we want to ascribe agency and “will” to an elephant (and, yes, we indeed do), then we can just as well ascribe the “choice” of move to the “intelligent agent” that is the chess-playing computer program. What else do you think an elephant’s brain is doing, if not something akin to the chess-playing computer, that is, assessing input information and computing a choice?

But, Azarian asserts:

The combination of the probabilistic nature of reality and the mechanism known as top-down causation explains how living systems can affect physical reality through intentional action.

Top-down causation is another phrase that means different things to different people. As I see it, “top-down causation” is the assertion that the above Star-Trek-style replication of an animal would not work. I’ve never seen an explanation of why it wouldn’t work, or what would happen instead, but surely “top-down causation” has to mean more than “the pattern, the arrangement of components is important in how the system behaves”, because of course it is!

Top-down causation is the opposite of bottom-up causation, and it refers to the ability of a high-level controller (like an organism or a brain) to influence the trajectories of its low-level components (the molecules that make up the organism).

Yes, the high-level pattern does indeed affect how the low-level components move. An elephant’s desire for leaves affects where the molecules in its trunk are. But equally, the low-level components and their arrangement give rise to the high-level behaviour. So this is just another way of looking at the system, an equally valid way of looking at the system that is. But it is not “the opposite of bottom-up causation”. Both views have to be fully valid at the same time. Unless you’re asserting that the Star-Trek-style replication would not work. And if you’re doing that, why wouldn’t it?

This means that biological agents are not slaves to the laws of physics and fundamental forces the way non-living systems are.

Well, no, this is wrong in two ways. First, the view that the low-level description gives rise to the high-level behaviour is still equally valid and correct, so the biological agents are still “slaves to the laws of physics and fundamental forces”. That is, unless you’re advocating something way more novel and weird than you’ve said so far. And, second, this would apply just as much to non-living systems that have purpose, such as the chess-playing computer.

One might reply that such “purpose” arises only as a product of the Darwinian evolution of life. I readily agree, but that is a somewhat different distinction.

With the transition to life, macroscopic physical systems became freed from the fixed trajectories that we see with the movement of inanimate systems, which are predictable from simple laws of motion.

If that is saying only that life evolved to be more complex than non-living things, and thus their behaviour is more complex (and can be sensibly described as “purposeful”), then yes, sure, agreed.

The emergence of top-down causation is an example of what philosophers call a “strong emergence” because there is a fundamental change in the kind of causality we see in the world.

Well, no. Provided you agree that the Star-Trek-style replication would work, then this counts as weak emergence. “Strong” emergence has to be something more than that, entailing the replication not working.

Again, words can mean different things to different people, but the whole point of “strong” emergence is that “reductionism” fails, and reductionism (the sort that anyone actually holds to, that is) is summed up by the Star-Trek-style replication thought experiment.

[I’m aware that philosophers might regard the view summed up by that thought experiment as being “supervenience physicalism”, and assert that “reductionism” entails something more, but that’s not what “reductionism” means to physicists.]

And if by “a fundamental change in the kind of causality we see in the world” one means that Darwinian evolution leads to hyper-complex entities to which we can usefully ascribe purpose, intention and agency, then ok, yes, that is indeed an important way of understanding the world.

But it is not really “a fundamental change in the kind of causality we see in the world” because the bottom-up view, the view that these are collections of atoms behaving in accordance with the laws of physics, remains fully valid at the same time!

I’m actually fully on board with viewing living creatures as entities that have intentions and which exhibit purpose, and fully agree that such high-level modes of analysis are, for many purposes, the best and only ways of understanding their behaviour.

But it is wrong to see this as a rejection of reductionism or of a bottom-up way of seeing things. Both perspectives are true simultaneously and are fully compatible. I agree with the view that Azarian is arriving at here, but regard his explanation of how it is arrived at as confused.

… agency emerges when the information encoded in the patterned interactions of molecules begins telling the molecules what to do.

True, but it always did. Even within physics, the pattern, the arrangement of matter, determines how it behaves, and hence the arrangement will “tell molecules what to do”. Even in a system as simple as a hydrogen atom, the arrangement of having the electron spin aligned with the proton spin will lead to different behaviour from the arrangement where the spins are anti-aligned.

Towards the end of the piece, Azarian’s and my views are perhaps converging:

The bottom-up flow of causation is never disrupted — it is just harnessed and directed toward a goal. We still inhabit a cause-and-effect cosmos, but now the picture is more nuanced, with high-level causes being every bit as real as the low-level ones.

Agreed. Azarian then explains that, when it comes to biological systems, what matters for behaviour is the macrostate of the system (the high-level “pattern”) rather than the microstate (all the low-level details).

Yes, agreed. But that is also just as true in physics. Analysing a physical system in terms of its macrostate (where many different microstates could constitute the same macrostate) is exactly the mode of analysis of “statistical mechanics”, which is at the heart of modern physics.
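A toy illustration of that many-to-one mapping (my own sketch, nothing specific to the authors quoted here): treat the exact sequence of N coin flips as the microstate and the total number of heads as the macrostate, and count how many microstates realise each macrostate:

```python
from collections import Counter
from itertools import product

N = 10  # number of coins

# Microstate: the exact sequence of outcomes. Macrostate: the total number of heads.
macrostate_counts = Counter(sum(micro) for micro in product((0, 1), repeat=N))

for heads, n_micro in sorted(macrostate_counts.items()):
    print(f"macrostate {heads:2d} heads  <-  {n_micro:4d} microstates")
```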

Azarian quotes neurogeneticist Kevin Mitchell in (correctly) saying that:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate …

But he goes slightly wrong in quoting neuroscientist Erik Hoel saying:

Recent research has argued exactly this by demonstrating the possibility of causal emergence: when a macroscale contains more information and does more causal work than its underlying microscale.

The macrostate cannot contain more information (and cannot “do more causal work”) than the underlying microstate, since one can reconstruct the macrostate from the microstate. Again, that is the point of the Star Trek thought experiment, and if that is wrong then we’ll need to overturn a lot of science. (Though no-one has ever given a coherent account of why it would be wrong, or of how things would work instead.)

So, where are we? I basically agree with the view that Azarian arrives at. So maybe we’re just disagreeing about what terms such as “reductionism”, “strong/weak emergence” and “top-down” causation actually mean. It wouldn’t be the first time! As I see it, though, this view is not a new and novel way of thinking, but is pretty much what the hidebound, reductionist physicists (like myself) have been advocating all along. None of us ever thought that a living human brain behaves the same as that same brain put through a food blender.

Why did Psychology Today publish woo?

Psychology Today sets itself high standards. “We are proud to be a trusted source for clinical and scientific information … we hold this content to the highest standards”, it says. “All expert author content is reviewed, edited and fact-checked for accuracy, objectivity and to ascertain that the author has relevant domain expertise”.

So why did it publish a recent interview of Jeffrey Kripal by Dinesh Sharma, a piece filled with what can fairly be summed up as “woo”?

Dinesh Sharma’s background is in the social sciences and Jeffrey Kripal is a theologian. The editor, whose job it is to uphold the above standards, was Tyler Woods, whose degree was in politics and English. None of them has any standing in the physical sciences, which might explain why the article goes badly wrong when it starts talking about physics. Let’s take it bit by bit.

Materialism has been waning in influence in the scientific community, …

Well, no, that’s not true. Materialism is the dominant paradigm in the physical sciences.

The decline of materialist philosophy has been rooted in 1) the belief in “intelligent design,” that God exists, …

Well that’s a bit of a give-away, right from the start. But no, that idea has very little traction in modern science.

… 2) unsatisfactory explanations for mental and conscious phenomena and the “mind-body problem”;

Granted, materialist science has not properly explained consciousness. But non-materialist conceptions have not done any better; they can’t explain consciousness either.

… and 3) recent developments in 20th century quantum physics.

Here comes the woo. We don’t fully understand quantum mechanics … therefore whatever woo idea the author wants to promote. The argument really is no better than that.

Thomas Nagle’s Mind and Cosmos is a recent example of the waning of the materialist paradigm.

That’s “Nagel” not “Nagle”, and it’s a book by a philosopher that was roundly panned by scientists who regarded it as showing that Nagel did not understand the science he was commenting on. The ideas he presented have pretty much no traction in science (though they are popular among theologians touting intelligent design).

The recent studies of psychedelic substances have shown that mind is irreducible to matter. The “mystical experiences” at the heart of individual transformations have led to an acceptance of the mind-altering power of psychoactive medicinal plants …

Eh? So plants — material things made up of chemicals — have the ability to alter the processes going on in the brain, and that’s an argument against materialism? Really? Yes, physical stuff (for example alcohol) can alter the state of a physical thing (our brain), and so affect how it functions. So?

To justify this, Sharma quotes Michael Pollan (a journalist with no scientific background), saying that:

… psychedelic therapy … depends for its success not strictly on the action of a chemical but on the powerful psychological experience that the chemical can occasion.

So a chemical can cause changes in the brain that have lasting effects. Again, where is the argument against materialism?

I [Sharma] reached out to a colleague, Jeffrey Kripal, an expert in the history of religion, to enlighten us on the connection between science and spirituality, mind and matter, …

I’m not convinced that that’s the right choice of expertise! But, anyhow, to the interview:

Sharma: You say Western knowledge systems are at a precipice of making a ‘flip’. This is actually the case in physics. But the new physics is being constrained within the domain of the hard sciences, not permeating the larger culture, due to the politics of knowledge.

This talk of a “flip” would be news to most actual physicists. And note that, despite the editorial boasts, neither of them has “relevant domain expertise” to discuss physics.

Kripal: I think we are at a crossroads. Our social and spiritual imaginations have not caught up with the quantum reality our mathematics, our physics, and frankly our technologies all use and suppose.

I’m not sure what that’s trying to get at. But it contains the phrase “quantum reality” to give an impression of profundity.

Sharma: Are you looking to “flip” the “materialistic paradigm” dominant in the academy since the enlightenment period?

Kripal: Well, yes, of course, but the book is not about me doing anything. It’s about a larger cultural, philosophical, and scientific shift that is happening all around us. I am just reporting.

No, there is no shift away from the materialist paradigm in science.

Sharma: I like your phrase, “science only studies the things it can study.” Thus, it can be defined by what is selectively excluded from the sciences?

OK, so what topics are selectively excluded from scientific study? No specifics are given, no justification for this claim.

Kripal: Science works so well because it gets to say what it will study, and what it will not. We are not so fortunate, or we are more fortunate, in the humanities. We study human beings, who never really fit into our paradigms or our models, …

But science also studies human beings! And they fit just fine into our paradigms of biology and evolution.

Kripal: What I am trying to say in the book is that human beings have all kinds of strange, quantum-like experiences, and we should not ignore or discount them just because they do not play by the rules of our scientific or humanistic games.

Again, using the word “quantum” to make it sound all sciency. And what actually is a “quantum-like experience”? And in what way is science supposed to be ignoring these “quantum-like experiences”?

Sharma: What are precognitive dreams that you think are prophetic or tapping into another realm of time?

Any actual evidence that dreams are “tapping into another realm of time”? Kripal gives only a reference to woo-meister Eric Wargo: “Eric basically argues that there is no such thing as the unconscious; that the unconscious is consciousness transposed in time”. That is unevidenced woo, not science.

Kripal: … the biological sciences have a long way to go. They have real hang-ups around vitalism and teleology, for example. I think both of those are real mistakes—they might be pragmatic and useful mistakes, but they are still wrong.

This is a rejection of science by a theologian, who is rejecting science because he does not like it theologically.

Kripal: Life is not reducible to chemistry. Evolution evolves itself over and over again toward obvious goals (like the eye).

Eyes evolve multiple times because they are useful, not because they are a “goal”. This sort of theologically-motivated rejection of the scientific account of evolution started in Darwin’s day and is still rumbling on. It will likely do so as long as there are theologians. None of these critiques are ever found to have substance. Is this really what Psychology Today wants to be publishing?

Is science “plagued” by “rife” harassment and discrimination?

Given society-wide soul-searching over issues of race and gender today, many scientific institutions are conducting surveys to assess the prevalence of harassment, bullying and discrimination within their fields. In principle this is a good thing, since scientific institutions should, of course, be open to all, and the work environment should feel welcoming. But we also need a degree of rigour when designing and interpreting such surveys.

Nature is the world’s leading scientific magazine (though being a commercial product it also has a predilection for click-bait). A recent headline claims that science is “plagued” by discrimination.

The survey that led to this conclusion is summarised in this table.

So two thirds of scientists respond that they have not experienced nor seen bullying, nor harassment, nor discrimination — not even one incident — in the whole of their current job, a length of typically two to eight years. Does that amount to their institutions being “plagued” by such behaviour?

The survey does not distinguish between observing one such incident in, say, three years, and bullying being a weekly occurrence, and yet that would make a vast difference to the experience of the working environment (note, also, that the survey is worldwide, with most respondents being scientists from Western countries, but many being from other countries with different cultures, which complicates any interpretation).

OK, one might reply, but one-third of scientists have experienced at least one such incident, and surely that’s too many?

Yes, but we can then ask, how do we define “bullying” and “harassment”, what is the threshold? Where is the line between a fair-enough critical remark from a line manager, and “bullying”? Where is the line between a line manager reminding someone about a task, and “harassment”?

In the above survey the threshold is entirely up to the respondent. And that’s a problem since it makes the reporting very subjective. In the same way that the number of drivers exceeding a speed limit by 3 mph will vastly outnumber those exceeding it by 30 mph, the number of incidents that only just cross the threshold will vastly outnumber the very serious incidents. So the rate of “bullying and harassment” will depend a lot on the adopted threshold. Without any attempt to define and standardise a threshold, such surveys lack value.
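To see how strongly the reported rate depends on where that line is drawn, here is a toy calculation (my own illustration; the exponential “severity” distribution is an assumption made purely for the sake of example):

```python
import math

MEAN_SEVERITY = 1.0   # assumed scale of the (hypothetical) severity distribution

def fraction_reported(threshold):
    """Fraction of incidents whose severity exceeds the reporting threshold,
    assuming severity is exponentially distributed."""
    return math.exp(-threshold / MEAN_SEVERITY)

for t in (1.0, 1.5, 2.0, 3.0):
    print(f"threshold {t:.1f}: {fraction_reported(t):.1%} of incidents reported")
```

Under that assumption, a modest shift in the threshold roughly halves or doubles the reported rate, which is the point of the speeding analogy.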

That is not to deny that, in some institutions, there have been very serious cultures of bullying and harassment that have gone on for years. But, that is different from occasional minor clashes that are inevitable when bringing humans together in a work environment, and we need to be clear which of these we are discussing.

One might reply that, if someone perceives an incident as bullying and harassment, then that means that it is. That’s a fair point, but in making policies about such matters we also need to be concerned with what is reasonable, and so cannot just take personal perception as the only criterion.

How people perceive interactions in day-to-day life can vary a lot from person to person, and — importantly — can tend to vary across different groups and different cultures.

The Royal Astronomical Society (the UK society for professional astronomers, of which I am a member) produced such a survey earlier this year. In reporting that bullying and harassment are “rife” in UK astronomy, Nature summarised the survey with this table:

Worryingly, this suggests that minority groups have it much worse, and that was certainly the tenor of the RAS’s own interpretation.

And yet, again, this survey is based on perceptions, and might there be systematic differences in how members of the different groups perceive things? Could members of the different groups tend to adopt a different threshold as to what counts as “harassment”? As the above speeding analogy illustrates, even a small difference in threshold would have a large effect on the reported rate.

Given the current “mood music” regarding matters of race and sex in STEM and in wider society, it would not be surprising if, to some extent, narrative-based expectations then fed into perceptions.

Yet such questions are not being asked, either in the design or the interpretation of such surveys. It’s just taken as given that all perceptions and reporting are accurate and unbiased, such that the above tables are faithful representations of how things actually are.

The assumption is that if an incident was perceived as “bullying” then it was bullying, and if one group reports a higher rate then they are indeed being bullied more often. Such assumptions accord with the primacy nowadays granted to “lived experience”, and yet we know that human perception is often hugely unreliable and biased. That’s why scientific trials adopt, for example, control samples and double-blind procedures in order to minimise subjective human evaluations.

No-one would take someone’s self-report of their own likability, agreeableness, leadership capabilities, sense of humour, alcohol intake, or charitable giving, as being reliable guides. And, in a personality clash or a minor dispute at work, both parties will regard themselves as being the one in the right, with the other party being the unreasonable one. This is just an inevitable feature of human interactions.

One of the few to begin querying the assumptions behind such surveys is Wilfred Reilly, a Professor of Political Science at Kentucky State University. He first asks his students at what rate they experience mildly-negative interactions with other people. Interestingly, he finds that white students and black students (this is the American South) report much the same rate. He then asks them what fraction of such incidents they think resulted from racial bias on the part of the other person. As expected the white students say few, but the black students say about half. Which means that black students are perceiving a racial element where — given that the overall rate is the same — there cannot be any. In short, black students are perceiving as racial “micro-aggressions” incidents that are just normal human interactions that happen just as much to whites.

The suggestion is backed up by personal testimony. For example, a (black) South African who lived in the US reflects on her past attitudes:

The worldview that I had assumed was awfully cynical, as I filtered all my daily interactions through the lens of racial power dynamics. Any mistreatment that I perceived from strangers or friends I often interpreted as a diluted form of racism called a microaggression.

Thus we cannot assume, in surveys such as the above table from the Royal Astronomical Society, that “people of colour” tend — on average — to adopt the same threshold for labelling an incident as “bullying, harassment or discrimination” as white people. Nor can we assume that men in general are adopting the same threshold as women, or LGBT+ people (again on average) as straight people. And if we can’t assume that then we can’t treat the reported rates as being comparable. Though I’m sure that even raising this issue will be treated by some as heresy, an improper questioning of people’s “lived experience”.

Professor Reilly’s studies concerned race; a recent study from the University of California, San Diego, concerns sexual harassment. The authors (Rupa Jose, James H Fowler & Anita Raj) find that the rate at which women report sexual harassment depends on whether they are politically conservative or politically liberal. But do conservative women actually suffer less harassment, or is that because they tend to adopt a different threshold for what constitutes “harassment”? We don’t know. The study concludes: “Research is needed to determine if political differences are due to reporting biases or differential vulnerabilities”.

All of which means that we need a lot more thought and academic rigour in surveys of harassment and bullying. One can fairly reply that such surveys are relatively new and are a good first step. Yes, that’s true, but if we treat the outcome of such surveys as mattering — which we should — then we need to do them well.

Replying to Adam Frank and defending scientism

“I am a passionate scientist who is passionate about science, but I also think scientism is a huge mistake”, writes Adam Frank, an astrophysicist at the University of Rochester, in an article in Big Think. As another astrophysicist, and one who subtitles this blog “defending scientism”, I am inspired to reply.

Adam Frank, Professor of Astrophysics and advocate of science.

Such disputes can boil down to what one means by the word “scientism”. Professor Frank quotes one definition as “the view that science is the best or only objective means by which society should determine normative and epistemological values”. On that definition I also would reject scientism (indeed, I don’t think that anyone does advocate that position). Science cannot prescribe values or aims. Science is descriptive, not prescriptive: it gives you knowledge, not normativity (instead, values, aims and normativity can only come from humans).

But Frank also expounds:

In the philosophy that would come to underpin scientism, “objective” came to mean something more like “the world without us.” In this view, science was a means of gaining access to a perfectly objective world that had nothing to do with humans. It gave us a “God’s eye view” or a “perspective-less perspective.” Science, according to this philosophy, revealed to us the “real world,” which was the world independent of us. Therefore, its truths were “deeper” than others, and all aspects of our experience must, eventually, reduce down to the truths that science reveals. This is scientism.

I’m close to agreement with this version of scientism. Science does indeed attempt to reveal a real, objective world that is independent of us, and to a large measure it succeeds (though we can never attain any absolute certainty). And yes, science does give us truths about the world (as best as we humans can access them) that are more reliable and more extensive and thus “deeper” than other attempts to describe the world. But no, it is not true that “all aspects of our experience must, eventually, reduce down to the truths that science reveals”. Science is solely about attaining the best and truest description of the world (which includes ourselves) that we can. It doesn’t even pretend to encompass “aspects of our experience” other than that (indeed I’m not even sure what this claim would mean, and, again, I don’t think this is a view that anyone actually holds).

Professor Frank’s objection to scientism is that:

[Scientism] is just metaphysics, and there are lots and lots of metaphysical beliefs […] that you can adopt about reality and science depending on your inclinations. […] Scientism claims to be the only philosophy that can speak for science, but that is simply not the case. There are lots of philosophies of science out there.

So, according to Frank, scientism is just metaphysics, there is no evidence for it, and so adopting it comes down to personal choice, the very opposite of science. Effectively, science does not point to scientism.

I don’t find this critique convincing. In essence, “scientism” is a unity-of-knowledge thesis that the real world is a seamless, self-consistent whole, and thus that the best description of it will also be (or should, at least, aim towards being) a seamless, self-consistent whole. That is, there should be a “consilience”, or self-consistent meshing of different areas of knowledge and of ways of attaining that knowledge. Scientism is a rejection of the idea that there are distinct and different “ways of knowing”, each appropriate to different and distinct “domains” of knowledge.

But is that claim only a metaphysical belief, whose adoption is merely a “faith”? I submit that, no it is not. Instead, it’s the product of doing science and seeing what works best. Science rejects the supernatural, not as an a priori commitment, but because models of reality work better without it. In medieval times “the heavens” and the earthly world were regarded as different domains to which different rules applied, but then Newton invented a law of gravity that applied both to the Earth-bound fall of an apple and to the orbit of the Moon, unifying the two domains. Even then, the worldview of a scientist could involve a God (invoked by Newton, for example, to keep planetary orbits stable), but, as science progressed, it was found that we “had no need of that hypothesis“. And it had been thought that living animals were utterly distinct from inanimate matter, but nowadays the disciplines of physics and chemistry transition seamlessly through “biochemistry” into the disciplines of biology and ecology. And any proper account of sociology needs to mesh with evolutionary psychology.

Through that progression we have found no sharp divides, no deep epistemological ravines that science cannot cross. The strategy of unifying different areas of knowledge has always proven the more successful.

Thus science does indeed point to scientism, and the unity-of-knowledge view is a product of science. It is not simply one of a number of un-evidenced metaphysical possibilities, it is the one to which the history and current trajectory of science points.

And the idea is refutable. If you want to reject scientism then put forward an alternative “way of knowing” that gives reliable knowledge about the world and yet is clearly distinct from the methods of science. Be sure to include a demonstration that such knowledge is indeed valid and reliable, and thus comparable to scientific knowledge, but without using science in that demonstration.

Here’s GJ 367b, an iron planet smaller and denser than Earth

This is an article I wrote for The Conversation about a new exoplanet, for which I was a co-author on the discovery paper. One reason for reproducing it here is that I can reverse any edit that I didn’t like!

As our Solar System formed, 4.6 billion years ago, small grains of dust and ice swirled around, left over from the formation of our Sun. Through time they collided and stuck to each other. As they grew in size, gravity helped them clump together. One such rock grew into the Earth on which we live. We now think that most of the stars in the night sky are also orbited by their own rocky planets. And teams of astronomers worldwide are trying to find them.

The latest discovery, given the catalogue designation GJ 367b, has just been announced in the journal Science by a team led by Dr Kristine Lam of the Institute of Planetary Research at the German Aerospace Center.

The first signs of it were seen in data from NASA’s Transiting Exoplanet Survey Satellite (TESS). Among the millions of stars being monitored by TESS, one showed a tiny but recurrent dip in its brightness. This is the tell-tale signature of a planet passing in front of its star every orbit (called a “transit”), blocking some of the light. The dip is only 0.03 percent deep, so shallow that it is near the limit of detection. That means that the planet must be small, comparable to Earth.
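To see why such a shallow dip implies a small planet, note that the fractional dimming is roughly the planet’s disc area divided by the star’s disc area, so the planet’s radius scales as the square root of the depth. Here is a minimal back-of-the-envelope sketch; the host star’s radius of roughly half the Sun’s is an assumed round number for illustration, not a value taken from the paper:

```python
import math

# Transit depth ~ (R_planet / R_star)^2, so R_planet ~ R_star * sqrt(depth)
depth = 0.0003                  # the 0.03 per cent dip in brightness
r_star_in_suns = 0.5            # assumed red-dwarf radius (illustrative round number)
sun_radius_in_earths = 109.2    # the Sun's radius expressed in Earth radii

r_planet_in_earths = r_star_in_suns * sun_radius_in_earths * math.sqrt(depth)
print(f"Implied planet radius ~ {r_planet_in_earths:.2f} Earth radii")
# Roughly Earth-sized, which is all this simple argument can tell us
```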

But Dr Lam also wanted to know the planet’s mass. To do that, her team set about observing the host star at every opportunity with HARPS, an instrument attached to a 3.6-metre telescope at the European Southern Observatory in Chile that was specially designed to find planets. It does this by detecting a slight shift in the wavelength of the host star’s light, caused by the gravitational pull of the planet. It took over 100 observations to detect that shift, meaning that the planet, in addition to being small, must also have a low mass.
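To give a feel for how demanding that measurement is, the wavelength shift corresponds to the star’s reflex velocity, which for a planet of roughly half an Earth mass is of order one metre per second. A rough sketch, assuming a circular, edge-on orbit, the 8-hour period mentioned below, and a red-dwarf mass of about 0.45 solar masses (an illustrative round number, not taken from the paper):

```python
import math

# Radial-velocity semi-amplitude for a circular, edge-on orbit with M_planet << M_star:
#   K = (2*pi*G / P)^(1/3) * M_planet / M_star^(2/3)
G = 6.674e-11                   # gravitational constant (SI units)
P = 8 * 3600                    # orbital period: ~8 hours, in seconds
M_earth = 5.972e24              # kg
M_sun = 1.989e30                # kg

M_planet = 0.55 * M_earth       # the mass reported below for GJ 367b
M_star = 0.45 * M_sun           # assumed red-dwarf mass (illustrative round number)

K = (2 * math.pi * G / P) ** (1 / 3) * M_planet / M_star ** (2 / 3)
print(f"Stellar reflex velocity ~ {K:.2f} m/s")
# Of order 1 m/s: a tiny wobble, hence the need for over 100 HARPS spectra
```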

Artist’s impression of GJ 367b (Credit: Patricia Klein)

Eventually, as observations accumulated, the numbers were tied down: GJ 367b has a radius of 72 percent of Earth’s radius (to a precision of 7 percent), and a mass of 55 percent of Earth’s mass (to a precision of 14 percent). That demonstrates that astronomers can both find Earth-sized planets around other stars, and then measure their properties. The measurements tell us that this planet is denser than Earth. Whereas Earth has a core of iron surrounded by a rocky mantle, this planet must be nearly all iron, making it similar to our Solar System’s Mercury.
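The density claim follows directly from those two numbers: relative to Earth, the density is the mass ratio divided by the cube of the radius ratio, which is 0.55 / 0.72³, or about 1.5 times Earth’s density. A one-line check (arithmetic only; the uncertainties are ignored here):

```python
# Bulk density relative to Earth: (M / M_earth) / (R / R_earth)^3
mass_ratio = 0.55          # mass in Earth masses
radius_ratio = 0.72        # radius in Earth radii
earth_density = 5.51       # Earth's mean density in g/cm^3

density_ratio = mass_ratio / radius_ratio ** 3
print(f"Density ~ {density_ratio:.2f} times Earth's")     # ~1.5
print(f"That is ~ {density_ratio * earth_density:.1f} g/cm^3, similar to iron")
```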

Mercury orbits our Sun every 88 days. Blasted by fierce sunlight, the “daytime” side is bare rock heated to 430 degrees Celsius. GJ 367b is even more extreme. The recurrent transits tell us that it orbits its star in only 8 hours. Being so close, the daytime side will be a furnace heated to 1400 Celsius, such that even rock would be molten. Perhaps GJ 367b was once a giant planet with a vast gaseous envelope, like Neptune. Over time, that gaseous envelope would have boiled off, leaving only the bare core that we see today. Or perhaps, as it formed, collisions with other proto-planets stripped off a mantle of rock, leaving only the iron core.

GJ 367b is, of course, way too hot to be habitable. But it shows that we can find and characterise rocky, Earth-sized planets. The task now is to find them further from their star, in the “habitable zone”, where the surface temperature would allow water to exist as a liquid. That is harder. The further a planet is from its star, the less likely it is to transit, and the longer the recurrence time between transits, making them harder to detect. And, for a planet orbiting further out, the gravitational tug on the host star is weaker, making the signal harder to detect.
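Both of those penalties can be put into rough numbers: the geometric chance that a planet transits is approximately the stellar radius divided by the orbital distance, and (for fixed masses) the reflex-velocity signal falls off as one over the square root of the orbital distance. A toy illustration, taking a Sun-sized star purely for the sake of round numbers:

```python
import math

R_sun = 6.957e8    # metres
AU = 1.496e11      # metres

# 1) geometric transit probability ~ R_star / a
# 2) radial-velocity signal K ~ 1 / sqrt(a), for fixed planet and star masses
for a_in_au in (0.01, 0.1, 1.0):           # illustrative orbital distances
    p_transit = R_sun / (a_in_au * AU)
    k_relative = 1 / math.sqrt(a_in_au)    # K relative to its value at 1 au
    print(f"a = {a_in_au:4.2f} au: transit chance ~ {100 * p_transit:5.2f}%, "
          f"RV signal ~ {k_relative:4.1f}x the 1-au value")
```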

But GJ 367b’s host star is a red dwarf, a star much dimmer than our Sun. And, with less heating from starlight, the habitable zone around red dwarfs is much closer in. NASA’s Kepler spacecraft has already found planets in the habitable zone of red-dwarf stars, and the TESS survey promises to find many more.

The next step is to ask whether such planets have atmospheres, what those atmospheres are made of, and whether they contain water vapour. Even there, answers may soon be forthcoming, given the imminent launch of the James Webb Space Telescope. If JWST is pointed at a star when a planet is in transit, it can detect the starlight shining through the thin smear of atmosphere surrounding the planet, and that might show subtle spectral features caused by molecules in the planetary atmosphere. We’ve already found water vapour in the atmospheres of gas-giant exoplanets. As planet discoveries continue apace, it is becoming feasible that we could, before long, prove the existence of a planet that has an atmosphere and a rocky surface, on which water is running freely.

A rocky, Mercury-like exoplanet.

Everything arises bottom-up

It occurs to me that, as we’ve come to understand things better, often a “top down” conception of how something arises has been replaced by a “bottom up” account.

An obvious example is political authority.  The Medieval concept of a God-appointed ruler issuing commands by divine right has been replaced by agreement that legitimate political authority arises bottom-upwards, from the consent of “we the people”.

Similarly, human rights are sometimes supposed to be absolute principles with which people are “endowed by their Creator”. But, in reality, they are collective agreements, deriving from human advocacy about how we want people to be treated, and thus resting only on their widespread acceptance. Does that make them more insecure, more alienable? Maybe (and perhaps that’s why some attempt to treat them as absolute and objective), but that’s all there is to it.

It’s the same with the wider concept of morality. Many have sought to anchor morality in the solid foundation of either a divinity or objective reason.  But neither works: morality derives from human nature and human values. It bubbles up from each of us, leading to wider societal norms and expectations, rather than being imposed on us from outside. Some see that as producing only a second-rate morality, but wanting there to be an objective morality to which a supra-human authority will hold us doesn’t make such a scheme tenable. 

Likewise, principles of fairness and justice can only be rooted in human evaluations of what is fair or just.  There isn’t anything else, no objective scale against which we can read off a quantification of “justness” or “fairness”, any more than there is for moral “oughtness”. What we call “natural justice” is justice rooted in our human feelings of what is fair.  Beyond human society, nature is literally incapable of knowing or caring about concepts of “fairness”, “justice” or “morality”. These are human concepts arising from ourselves. 

And then there are concepts of meaning and purpose. Some argue that, without a God, there can be no meaning or purpose to life. They tell us that, unless there is an afterlife, our lives are ultimately pointless. But the only forms of meaning and purpose that exist are the purposes that we create for ourselves and the meanings that we find in our lives. As thinking, feeling, sentient creatures we create purposes and we find things meaningful. That they are local and time-limited doesn’t make them less real.

But then sentience and consciousness also bubble up from below, forming out of patterns of non-sentient matter. These local and temporary patterns of material stuff arise as a product of evolution, which creates such patterns (“brains”) to do the job of facilitating survival and reproduction.

It’s the same with intelligence. The top-down conception that the universe starts with intelligence, which dribbles down from there, is wrong.  Rather, intelligence bubbles up from non-intelligent precursors. Over evolutionary time, successive generations of animals developed greater capabilities to sense their environment, to process the information, and then compute a response.

Of course life itself is the same, arising out of non-life. We’ve long ditched the dualistic notion of an élan vital giving spark to inanimate matter. Simple molecules can replicate because atoms of the same type behave alike, so that, in simple circumstances, simple collections of matter behave in the same ways. Complexity builds from there as simple structures aggregate into more complex ones. And when replicators get sufficiently complicated we call them “life”.

The above traces the social sciences down into biology, and then into biochemistry and simple chemistry. But maybe the same bottom-up approach also applies to physics.

Richard Feynman starts his Lectures by saying:

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis, that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another.

And everything builds from there. 

But even atoms are built of particles, and as for what “particles” are, well, we are still pretty unclear on what the ultimate ontology is.

How about causation? It’s a fundamental concept on the macroscopic scale that things happen at time t+1 because of how things were at time t. But even that may be an emergent property, since causation gets less clear at the microscopic scale. Quantum indeterminacy holds that things occur for no discernible reason. A virtual particle pair can just arise, with no proximate cause.

Maybe the concept of time is similar. Special Relativity has long destroyed the idea that there is a time that is absolute and the same for everyone.  Maybe time bubbles up and emerges so that we can only talk sensibly about “time” at a macroscopic level.  Such speculations are beyond established physics, but are being advocated by Carlo Rovelli and others.

And lastly there is space. Again, the conception of space as an inert, static backdrop in which everything else plays out has long been overturned. Relativity tells us that space is distorted and warped by matter, such that it can no longer be thought of as separate from the matter it interacts with. Speculative theories suggest that space itself may be created at the local, particle level from the quantum entanglement of adjacent particles.

All of which leaves me wondering whether there is anything left for which a top-down conception is still tenable.  And, further, does the bottom-up nature of physics at the particle level necessitate that all higher-level properties are emergent bottom-up creations? 

Edinburgh University should value academic enquiry above ideology

Jonathan Haidt declared that universities could either be about seeking truth or about seeking social justice, but not both. Nowadays, with academics in the humanities and social sciences skewing heavily left, many have adopted Marx’s dictum: “Philosophers have hitherto only interpreted the world in various ways; the point is to change it”.

And so it is that universities are increasingly declaring contentious and ideological notions as orthodoxy, and then demanding assent. This will only get worse unless people speak up against it, so, as a university academic, here goes:

Like many universities, the University of Edinburgh now has a unit dedicated to promoting “Equality, Diversity and Inclusion”. And who could be against any of those? Note, though, that while declaring that it “promotes a culture of inclusivity”, Edinburgh’s pages make no mention of diversity of ideas or about including those who challenge orthodoxy. And yet surely both of those are necessary in a truth-seeking university?

Edinburgh then seeks to instruct us in a set of catechisms under the heading “Educate yourself”. And note that these are not just the webpages of an advocacy group composed of students at the university: these are official Edinburgh University webpages carrying its imprimatur.

In bold type we are told that: “We need to understand that transphobia is every bit as unacceptable as racism and homophobia”. [Update: since this post was written the page has been taken down.]

So what is “transphobia”? Well: “Transphobia is the hatred, fear, disbelief, or mistrust of trans and gender non-conforming people.” Hatred or fear of trans people? Yes, that is transphobia, and yes that should be deplored. Everyone, on all sides of such debates, agrees that trans people should be treated with respect and enabled to live their lives in dignity and safety.

But “disbelief”? That suggests that you’re not allowed to disagree with claims made by trans people. So, if a trans activist asserts that “trans women are women”, and that they are “just as much women as any other woman”, then you are not allowed to reply: “Well actually, I consider trans women to be biological males who (in line with their genuine, innate nature) wish to live a female gender role”.

Asserting such things could well get you banned from Twitter, but surely they should be allowed in a truth-seeking university? After all, biological sex is real and important. When they transition, a trans person does not change sex, they change gender roles. And the fact of their biological sex can still matter in some areas of life (such as sport, where men are generally better than women so that the performance of elite women can roughly equate to that of the best 14- or 15-yr-old boys). And it really would be bad if a university had so lost sight of its truth-seeking role that it did not allow its members to say such things.

A university, as an employer, can reasonably request that — in work situations — members refer to each other using preferred pronouns. But it is quite wrong to demand adherence to anti-scientific ideology.

The orthodoxy-prescribing web pages continue:

In recent years, however, there has been a resurgence of transphobia in the mainstream and social media, which has fuelled increased transphobic hate incidents in society.

Notice how they give no citations or links to support this claim, as might be expected of university-imprimatur pages collected under the rubric “educate yourself”.

This has largely been linked to proposals to reform the 2004 Gender Recognition Act …

Again, this is assertion backed by no evidence. Have “transphobic hate incidents” really been linked to suggested changes to that Act?

Many people refer to these changes as giving trans people the right to ‘self-ID’, but it is, more correctly, a legally-binding ‘self-declaration’.

OK, though the distinction between “self-identification” and “self-declaration” is not clarified, and note that under self-ID the declaration is only “legally binding” until the next self-declaration.

This increased transphobia has been particularly severe for trans women, …

Again, no statistics, no evidence. All of this would be fine as the advocacy pages of a pressure group, but is such ideological contention really appropriate as the official voice of Edinburgh University?

… who have been the target of high-profile, celebrity campaigns that deny the trans experience …

The mention of “celebrity” is presumably a reference to J. K. Rowling, who Tweeted in support of Maya Forstater, who was sacked for maintaining that biological sex is real. (And by the way, have Twitter really capped the number of “likes” on that Tweet at about 220,000? It seems so, and many Twitter users have reported that their “likes” mysteriously undo themselves.)

And no, no-one “denies the trans experience” in the sense of denying that some people do, as a very real part of their nature, identify with the gender role opposite to their biological sex, and may even feel themselves to be of the opposite sex. All that is being “denied” is that that actually makes them the other sex, and that a trans woman actually is “just as much a woman as any other woman”.

… and deliberately suggest trans women pose a threat to cis women by distorting statistics of male violence to imply it is a characteristic of trans women.

Note the accusation of “deliberate distortion” of statistics, an accusation not in any way substantiated. This webpage is a propaganda piece, not something that a university should put its name to.

And given mention of statistics one might expect some citations and links to what the statistics actually are. After all, some self-IDd trans women do pose a threat of sexual assault to women. And as far as I can tell (though I am open to correction on this) the statistics do seem to suggest that the propensity to sexual assault among trans women (that is, biological males) is more in line with that of men generally than that of women. And the rates of sexual assault by men are, of course, twenty times higher than those by women, so that matters.

For clarity, that is not suggesting that trans women are more likely to commit assault than men, but that their propensity is roughly in line with biological males generally, and thus much higher than that of women.

(And again, if Edinburgh have good-quality evidence that this is not the case then it would be helpful if their “educate yourself” pages linked to it. Because, you see, “educating” oneself is about familiarising oneself with actual evidence, not about uncritically imbibing ideology.)

A particular strand of feminism (gender-critical) voices concerns that recognising the rights of trans women will negatively impact the ‘sex-based’ rights of cis women …

It’s rather notable that they put “sex-based” in quotes. This is in line with radical “queer theory” that says that biological sex is not actually real, and that what is real is one’s inner “gender identity”. That is a highly contentious claim that flies in the face of science. Note also the loaded phrasing “recognising the rights of”, as though the rights being claimed were already established and agreed.

And those gender-critical feminists do seem to have a fair point, don’t they? A doctrine that all that matters is a self-report of one’s “gender identity” would indeed “impact the sex-based rights of women”, in those situations (such as women’s sports and women’s prisons) where women have a legitimate and reasonable expectation of sex-based segregation.

… and that predatory men will exploit the proposed right to self-declaration to access women-only spaces or to gain advantage in sports and the workplace. This effectively makes trans women the focus of blame for the actions of predatory men.

Well no, it makes the rules, the self-ID doctrine (not “trans women”) the focus of blame for the actions of predatory men. And again, it’s a fair point, isn’t it? Under self-ID, what is there to stop a predatory man adopting a trans persona in order to obtain ready access to women’s spaces? For that matter, what is there to stop a narcissistic man of mediocre sporting ability adopting the persona of a trans woman in order to play against rather easier competition, and so indulge his self-image as a winner?

Gender-critical feminists have also criticised trans women for perpetuating stereotypes of femininity, another example of harmful gate-keeping of another person’s gender presentation. […] none of us should be commenting on other people’s dress choices and external features or assuming gender identity on that basis …

But, once again, the gender-critical feminists have raised a fair point. If we are to deplore feminine stereotypes, so that we don’t associate “female gender” with styles of dress or mannerisms or appearance, and certainly not with activities or job roles, then what is “gender” actually about?

The whole point of trans ideology, of course, is that one’s gender is not associated with biological sex or reproductive anatomy — and nor is it associated with any feminine stereotypes (perish the thought!) — so what is left? The trans activists can only answer that it is associated with an inner “gender identity” that we just know or “experience”.

But this makes no sense: if one does not anchor the terms “man” and “woman” in objectively real biological sex, then how can one even define the terms “man” and “woman”? The trans activists try the feeble: “a woman is anyone who experiences themselves as a woman”, but that just gives an endless recursion and answers nothing.

Again, this whole web page would be fair enough if it were the advocacy claims of a pressure group, but the role of a university and its academics would then be to carefully and dispassionately scrutinise each claim for truth and consistency — and yet here the claims are being presented as orthodoxy to which university members are expected to assent.

While concerns for women’s safety are valid, there is no evidence that trans women pose any more danger than other women.

Well, some studies claim such evidence — though I recognise that this link is to an advocacy group, but if Edinburgh have better evidence then perhaps they could present it? Isn’t that how proper discussion proceeds?

This type of ‘reasonable concern’ is used frequently by trans-hostile groups, such as ultra-right wing campaigners and certain feminists.

One notes the quotation marks around “reasonable concern”, implying that this is just a cover story. One notes the smear tactic of grouping feminists with the “ultra-right wing”. And yet, I’m willing to bet that the majority of moderate, centrist-minded people would accept such concerns as reasonable.

Another ‘reasonable concern’ is alarm at the increase in gender identity services for children, despite evidence that early support for individuals reduces psychological problems and suicide in later life.

And again, there are no citations or links to support that claim. And evidence dragged reluctantly out of the Tavistock clinic owing to the Keira Bell case is that puberty blockers make no overall difference to psychological well-being. This is not settled science, with the only studies having small numbers of patients and lacking controls, so it should not be presented as though assent is mandatory.

There is considerable misinformation about what happens in gender identity clinics, deliberately circulated to create fear and moral panic.

Come on, there is no way that a university should have this sort of stuff on its official webpages. The UK high court has recently ruled that concerns about gender-identity clinics are well-justified and has halted puberty-blocking drug treatment in under-16s, calling it “experimental”.

Having said that, it is reasonable, in the face of so much misinformation and hostility, for people to have concerns and to seek information and reassurance. This is different to the use of ‘reasonable concerns’ by transphobic campaigners where accurate information is rejected or distorted in a similar way to the strategy of Islamophobes and anti-Semites.

Oh come on! This reads like an undergraduate activist having a tantrum. Sure, let the undergrads spout activist speak, but the official University of Edinburgh pages should be written by grown-ups, especially if they are supposed to be statements of university policy.

Does quantum indeterminism defeat reductionism?

After I wrote a piece on the role of metaphysics in science, which was a reply to neuroscientist Kevin Mitchell, he pointed me to several of his articles, including one on reductionism and determinism. I found this interesting, since I hadn’t really thought about the interplay of the two concepts. Mitchell argues that if the world is intrinsically indeterministic (which I think it is), then that defeats reductionism. We likely agree on much of the science, and how the world is, but nevertheless I largely disagree with his article.

Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level status of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that a reproduction would exhibit the same high-level behaviour as the original system. Thus, suppose we had a Star-Trek-style transporter device that knew only about (but everything about) low-level atoms and molecules and their positions. We could use it to duplicate a leopard, and the duplicated leopard would manifest the same high-level behaviour (“stalking an antelope”) as the original, even though the transporter device knows nothing about high-level concepts such as “stalking” or “antelope”.

As an aside, philosophers might label the concept I’ve just defined as “supervenience”, and might regard “reductionism” as a stronger thesis about translations between the high-level concepts such as “stalking” and the language of physics at the atomic level. But that type of reductionism generally doesn’t work, whereas reductionism as I’ve just defined it does seem to be how the world is, and much of science proceeds by assuming that it holds. While this version of reductionism does not imply that explanations at different levels can be translated into each other, it does imply that explanations at different levels need to be mutually consistent, and ensuring that is one of the most powerful tools of science.

Our second concept, determinism, then asserts that if we knew the entire and exact low-level description of a system at time t  then we could — in principle — compute the exact state of the system at time t + 1. I don’t think the world is fully deterministic. I think that quantum mechanics tells us that there is indeterminism at the microscopic level. Thus, while we can compute, from the prior state, the probability of an atom decaying in a given time interval, we cannot (even in principle) compute the actual time of the decay. Some leading physicists disagree, and advocate for interpretations in which quantum mechanics is deterministic, so the issue is still an open question, but I suggest that indeterminism is the current majority opinion among physicists and I’ll assume it here.
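As a concrete illustration of that limit: for a single atom with mean lifetime τ, quantum mechanics yields only the probability of decay within a chosen interval, 1 − exp(−Δt/τ); the actual decay time can only be sampled, never computed. A minimal sketch (the lifetime here is an arbitrary illustrative number):

```python
import math
import random

tau = 10.0     # illustrative mean lifetime (arbitrary units)
dt = 1.0       # the time interval we ask about

# What quantum mechanics does give us: a probability for the interval
p_decay = 1 - math.exp(-dt / tau)
print(f"Probability of decay within the interval: {p_decay:.3f}")

# What it does not give us: the actual decay time.  Three identically
# prepared atoms decay at three different, unpredictable times.
print([round(random.expovariate(1 / tau), 2) for _ in range(3)])
```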

This raises the question of whether indeterminism at the microscopic level propagates to indeterminism at the macroscopic level of the behaviour of leopards. The answer is: likely yes, to some extent. A thought experiment of coupling a microscopic trigger to a macroscopic device (such as the decay of an atom triggering a device that kills Schrodinger’s cat) shows that this is in-principle possible. On the other hand, using thermodynamics to compute the behaviour of steam engines (and totally ignoring quantum indeterminism) works just fine, because in such scenarios one is averaging over an Avogadro’s number of particles and, given that Avogadro’s number is very large, that averages over all the quantum indeterminacy.
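The averaging works for a purely statistical reason: the relative size of fluctuations in the sum of N independent random contributions shrinks as 1/√N, so for N of order 10²³ the quantum dice-throwing is invisible at the macroscopic level. A toy simulation of that scaling, using uniform random numbers as stand-ins for microscopic quantum outcomes:

```python
import math
import random

def relative_fluctuation(n, trials=2000):
    """Scatter of the mean of n random microscopic outcomes, relative to the mean."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    var = sum((m - mu) ** 2 for m in means) / trials
    return math.sqrt(var) / mu

for n in (10, 100, 1000):
    print(f"N = {n:5d}: relative fluctuation ~ {relative_fluctuation(n):.4f}")

# Each factor of 100 in N shrinks the fluctuations by a factor of 10.
# Extrapolating the 1/sqrt(N) scaling to an Avogadro's number of particles:
print(f"N = 6e23: relative fluctuation ~ {1 / math.sqrt(6e23):.1e}")
```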

What about leopards? The leopard’s behaviour is of course the product of the state of its brain, acting on sensory information. Likely, quantum indeterminism is playing little or no role in the minute-by-minute responses of the leopard. That’s because, in order for the leopard to have evolved, its behaviour, its “leopardness”, must have been sufficiently under the control of genes, and genes influence brain structures on the developmental timescale of years. On the other hand, leopards are all individuals. While variation in leopard brains derives partially from differences in that individual’s genes, Kevin Mitchell tells us in his book Innate that development is a process involving much chance variation. Thus quantum indeterminacy at a biochemical level might be propagating into differences in how a mammal brain develops, and thence into the behaviour of individual leopards.

That’s all by way of introduction. So far I’ve just defined and expounded on the concepts “reductionism” and “determinism” (but it’s well worth doing that, since discussion of these topics is bedevilled by people interpreting words differently). So let’s proceed to why I disagree with Mitchell’s account.

He writes:

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain.

There is never only one explanation of anything. We can always give multiple different explanations of a phenomenon — certainly for anything at the macroscopic level — and lots of different explanations can be true at the same time, so long as they are all mutually consistent. Thus one explanation of a leopard’s stalking behaviour will be in terms of the firings of neurons and electrical signals sent to muscles. An equally true explanation would be that the leopard is hungry.

Reductionism does indeed say that you could (in principle) reproduce the behaviour from a molecular-level calculation, and that would be one explanation. But there would also be other equally true explanations. Nothing in reductionism says that the other explanations don’t exist or are invalid or unimportant. We look for explanations because they are useful in that they enable us to understand a system, and as a practical matter the explanation that the leopard is hungry could well be the most useful. The molecular-level explanation of “stalking” is actually pretty useless, first because it can’t be done in practice, and second because it would be so voluminous and unwieldy that no-one could assimilate or understand it.

As a comparison, chess-playing AI bots are now vastly better than the best humans and can make moves that grandmasters struggle to understand. But no amount of listing of low-level computer code would “explain” why sacrificing a rook for a pawn was strategically sound — even given that, you’d still have all the explanation and understanding left to achieve.

So reductionism does not do away with high-level analysis. But — crucially — it does insist that high-level explanations need to be consistent with and compatible with explanations at one level lower, and that is why the concept is central to science.

Mitchell continues:

In a deterministic system, whatever its current organisation (or “initial conditions” at time t) you solve Newton’s equations or the Schrodinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing) and that gives the next state of the system. There’s no why involved. It doesn’t matter what any of the states mean or why they are that way – in fact, there can never be a why because the functionality of the system’s behaviour can never have any influence on anything.

I don’t see why that follows. Again, understanding, explanations and “why?” questions can apply just as much to a fully reductionist and deterministic system. Let’s suppose that our chess-playing AI bot is fully reductionist and deterministic. Indeed they generally are, since we build computers and other devices sufficiently macroscopically that they average over quantum indeterminacy. That’s because determinism serves the purpose: we want the machine to make moves based on an evaluation of the position and the rules of chess, not to make random moves based on quantum dice throwing.

But, in reply to “why did the (deterministic) machine sacrifice a rook for a pawn” we can still answer “in order to clear space to enable the queen to invade”. Yes, you can also give other explanations, in terms of low-level machine code and a long string of 011001100 computer bits, if you really want to, but nothing has invalidated the high-level answer. The high-level analysis, the why? question, and the explanation in terms of clearing space for the queen, all still make entire sense.

I would go even further and say you can never get a system that does things under strict determinism. (Things would happen in it or to it or near it, but you wouldn’t identify the system itself as the cause of any of those things).

Mitchell’s thesis is that you only have “causes” or an entity “doing” something if there is indeterminism involved. I don’t see why that makes any difference. Suppose we built our chess-playing machine to be sensitive to quantum indeterminacy, so that there was added randomness in its moves. The answer to “why did it sacrifice a rook for a pawn?” could then be “because of a chance quantum fluctuation”. Which would be a good answer, but Mitchell is suggesting that only un-caused causes actually qualify as “causes”. I don’t see why this is so. The deterministic AI bot is still the “cause” of the move it computes, even if it itself is entirely the product of prior causation, and back along a deterministic chain. As with explanations, there is generally more than one “cause”.

Nothing about either determinism or reductionism has invalidated the statements that the chess-playing device “chose” (computed) a move, causing that move to be played, and that the reason for sacrificing the rook was to create space for the queen. All of this holds in a deterministic world.

Mitchell pushes further the argument that indeterminism negates reductionism:

For that averaging out to happen [so that indeterminism is averaged over] it means that the low-level details of every particle in a system are not all-important – what is important is the average of all their states. That describes an inherently statistical mechanism. It is, of course, the basis of the laws of thermodynamics and explains the statistical basis of macroscopic properties, like temperature. But its use here implies something deeper. It’s not just a convenient mechanism that we can use – it implies that that’s what the system is doing, from one level to the next. Once you admit that, you’ve left Flatland. You’re allowing, first, that levels of reality exist.

I agree entirely, though I don’t see that as a refutation of reductionism. At least, it doesn’t refute forms of reductionism that anyone holds or defends. Reductionism is a thesis about how levels of reality mesh together, not an assertion that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels.

Indeterminism does mean that we could not fully compute the exact future high-level state of a system from the prior, low-level state. But then, under indeterminism, we also could not always predict the exact future high-level state from the prior high-level state. So, “reductionism” would not be breaking down: it would still be the case that a low-level explanation has to mesh fully and consistently with a high-level explanation. If indeterminacy were causing the high-level behaviour to diverge, it would have to feature in both the low-level and high-level explanations.

Mitchell then makes a stronger claim:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate. And a different set of microstates that corresponds to a different macrostate. If the evolution of the system depends on those coarse-grained macrostates (rather than on the precise details at the lower level), then this raises something truly interesting – the idea that information can have causal power in a hierarchical system …

But there cannot be a difference in the macrostate without a difference in the microstate. Thus there cannot be indeterminism that depends on the macrostate but not on the microstate. At least, we have no evidence that that form of indeterminism actually exists. If it did, that would indeed defeat reductionism and would be a radical change to how we think the world works.

It would be a form of indeterminism under which, if we knew everything about the microstate (but not the macrostate), then we would have less ability to predict the state at time t + 1 than if we knew the macrostate (but not the microstate). But how could that be? How could we not know the macrostate? The idea that we could know the exact microstate at time t but not be able to compute (even in principle) the macrostate at the same time t (so before any non-deterministic events could have happened) would indeed defeat reductionism, but is surely a radical departure from how we think the world works, and is not supported by any evidence.

But Mitchell does indeed suggest this:

The low level details alone are not sufficient to predict the next state of the system. Because of random events, many next states are possible. What determines the next state (in the types of complex, hierarchical systems we’re interested in) is what macrostate the particular microstate corresponds to. The system does not just evolve from its current state by solving classical or quantum equations over all its constituent particles. It evolves based on whether the current arrangement of those particles corresponds to macrostate A or macrostate B.

But this seems to conflate two ideas:

1) In-principle computing/reproducing the state at time t + 1 from the state at time t (determinism).

2) In-principle computing/reproducing the macrostate at time t from the microstate at time t (reductionism).

Mitchell’s suggestion is that we cannot compute: {microstate at time t } ⇒ {macrostate at time t + 1 }, but can compute: {macrostate at time t } ⇒ {macrostate at time t + 1 }. (The latter follows from: “What determines the next state … is [the] macrostate …”.)

And that can (surely?) only be the case if one cannot compute: {microstate at time t } ⇒ {macrostate at time t }, and if we are denying that then we’re denying reductionism as an input to the argument, not as a consequence of indeterminism.

Mitchell draws the conclusion:

In complex, dynamical systems that are far from equilibrium, some small differences due to random fluctuations may thus indeed percolate up to the macroscopic level, creating multiple trajectories along which the system could evolve. […]

I agree, but consider that to be a consequence of indeterminism, not a rejection of reductionism.

This brings into existence something necessary (but not by itself sufficient) for things like agency and free will: possibilities.

As someone who takes a compatibilist account of “agency” and “free will” I am likely to disagree with attempts to rescue “stronger” versions of those concepts. But that is perhaps a topic for a later post.