
Human brains have to be deterministic (though indeterminism would not give us free will anyhow)

Are human brains deterministic? That is, are the decisions that our brain makes the product of the prior state of the system (where that includes the brain itself and the sensory input into the brain), or does quantum indeterminacy lead to some level of uncaused randomness in our behaviour? I’ll argue here that our brains must be largely deterministic, prompted by being told that this view is clearly wrong.

First, I’ll presume that quantum mechanics is indeed indeterministic (thus ignoring hidden-variable and Everettian versions). But the fact that the underlying physics is indeterministic does not mean that devices built out of quantum-mechanical stuff must also be indeterministic. One can obtain a deterministic device simply by averaging over a sufficient number of low-level particle events. Indeed, that’s exactly what we do when we design computer chips. We build them to be deterministic because we want them to do what we program them to do. In principle, quantum fluctuations in a computer chip could affect its output behaviour, but in practice a minimum of ~50 electrons is involved in each chip-junction “event”. That is sufficient to average over the probabilistic behaviour, making the likelihood of a quantum fluctuation changing the output too small to be an issue, and thus the chip is effectively deterministic. Again, we build them like that because we want to control their behaviour. The same holds for all human-built technology.
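
To illustrate the averaging (with a toy calculation, not real chip physics): if an output depends on many independent micro-events and flips only when a majority of them misbehave, the flip probability collapses as the number of events grows. A minimal sketch, with purely illustrative numbers:

```python
# Toy illustration (not real chip physics): treat a logic "event" as N carriers,
# each independently misbehaving with probability p, with the output flipping
# only if a majority misbehave. The flip probability falls off rapidly with N,
# which is the sense in which averaging yields effectively deterministic output.
from math import comb

def flip_probability(n_carriers, p_error=0.1):
    """Probability that a majority of n_carriers misbehave (illustrative numbers)."""
    k_min = n_carriers // 2 + 1
    return sum(comb(n_carriers, k) * p_error**k * (1 - p_error)**(n_carriers - k)
               for k in range(k_min, n_carriers + 1))

for n in (1, 10, 50, 200):
    print(f"N = {n:3d}: P(output flips) = {flip_probability(n):.2e}")
```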

There may be some instances where genuine non-deterministic randomness might be useful. An example is running a Monte-Carlo simulation (a technique, widely used in science and engineering, in which repeated random sampling is used to explore the range of possible outcomes). But, even here, in practice, one usually uses deterministic pseudo-random-number generators, simply because our computers — despite being built on quantum mechanics — don’t actually produce genuine randomness.
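
For instance, here is a Monte-Carlo estimate of π using a seeded pseudo-random-number generator: the sampling looks random, but with the seed fixed every run of the calculation is exactly repeatable, which is to say deterministic. (A sketch for illustration only.)

```python
# Monte-Carlo estimate of pi using a *deterministic* pseudo-random-number
# generator: the same seed always produces the same stream of "random" numbers,
# so the whole computation is repeatable despite looking random.
import random

def estimate_pi(n_samples=1_000_000, seed=42):
    rng = random.Random(seed)                  # seeded PRNG: same seed, same stream
    inside = sum(rng.random()**2 + rng.random()**2 <= 1.0
                 for _ in range(n_samples))    # points inside the unit quarter-circle
    return 4 * inside / n_samples

print(estimate_pi())   # prints the same estimate on every run
```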

Our brains are also built to do a job. They are the product of a genetic recipe, a recipe that is the product of evolution. In evolutionary terms the job of a brain is to make real-time decisions based on that genetic recipe and on the local circumstances, as informed through the senses. And brains are hugely expensive in evolutionary terms, consuming 20 percent of the body’s energy and, for example, forcing big compromises in female anatomy that make childbirth dangerous.

It follows that brains could not have evolved unless they were strongly selected for (they cannot just be byproduct “spandrels”), which means they must serve the interests of the genes that specify the recipe, and that means that brain behaviour (the choices they make) must be strongly influenced by the genes. And, since those choices may be made decades after the childhood brain develops out of the genetic recipe, it follows that there must be a deterministic chain of causation that holds over generational timescales.

To head off misunderstandings, the above is not saying that behaviour is entirely specified by genes. (Whenever anyone argues that genetics is a major factor in anything, the claim tends to be attacked as if it were the claim that genetics is the only factor; the reality is that everything is always a product of both genes and environment.) Obviously, the brain’s job is to make decisions reflecting the local circumstances, but how to react to particular circumstances must be strongly influenced by genes, otherwise the brain could not have evolved. Nor is this saying that the environment and stochastic factors have no effect on how the genetic recipe plays out as the brain develops; of course they do. And nor is this saying that the brain’s neural network is static and unchanging. Of course it isn’t (memories, for example, are stored as changes in the network). But this argument does show that there must be a deterministic chain between genes and brain behaviour that holds over multi-decade timescales. It must be the case that, in general, “that behavioural decision could have been different if, three decades ago, that gene had been different”. That includes behavioural decisions that are regarded as “free will” decisions — which presents no problem if one adopts a compatibilist interpretation of free will.

The above argument doesn’t fully rule out a minor role for genuine randomness based on quantum indeterminacy. I would guess that, were a quantum dice-throwing module within the brain of evolutionary benefit to an animal, then such a module could have evolved. But it’s hard to see why it would be evolutionarily beneficial. Just as we make technology to do what we want it to, genes will make brains that behave in ways they program for. That will hold especially for the large component of brain function that is simply about regulating body functions, not producing “behavioural choices”, and for the primitive brains producing fairly simple behaviour, such as in worms. This means that the neural-network junctions (synapses) will have evolved to be deterministic. This is achieved by a neural signal, an “action potential”, being an on–off event (so that it is insensitive to small changes), with a sufficient number of ions needed to trigger one (such that quantum indeterminacy is averaged over). This is pretty much the same way that humans make computer chips to be deterministic. Since larger and more complex brains work in a similar way, just with vastly more neurons and neural connections, it follows that they also will be deterministic.
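
To see why an on–off threshold buys that insensitivity, here is a toy sketch (not a biophysical model; the numbers are purely illustrative): the unit fires only when a large number of small probabilistic input events sum past a threshold, so a fluctuation in any single event essentially never changes whether it fires.

```python
# Toy threshold unit (not a biophysical model): the output is all-or-nothing and
# fires only when many small probabilistic events sum past a threshold, so the
# outcome is insensitive to fluctuations in any individual event.
import random

def fires(n_inputs=1000, p_open=0.6, threshold=500, rng=None):
    rng = rng or random.Random()
    events = sum(rng.random() < p_open for _ in range(n_inputs))  # summed micro-events
    return events >= threshold                                    # on-off output

# With 1000 inputs at 60% and a threshold of 500, the unit fires essentially
# every time, even though each individual event is probabilistic.
print(sum(fires() for _ in range(1000)), "fires out of 1000 trials")
```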

Another point: our brains are simply too big and too complex to be about producing quantum “dice throwing” decisions. A device producing an indeterminate output would have to be small (again, averaging over anything more than a couple of dozen ions gives you a deterministic output). Yet our brains have 100 billion neurons made of 10²⁶ molecules. What is all that for, and how did it evolve, if our decisions are effectively a dice throw? The only answer is that it evolved to process the information from our senses, and make decisions based on that, and making decisions based on input information is (by definition) a deterministic process.

Lastly, twin studies tell us that behavioural traits are highly heritable, with typical heritabilities of 50%. Again, this requires a causal chain between gene expression at the stage of brain development and behavioural choices made decades later. (There’s also a large role for the environment, of course, but being affected by environment is just as much a deterministic process as being affected by genes.)

Anyhow, I was told that I was wrong, and that quantum indeterminacy plays a role in our brains, especially when it comes to “free will”, and was pointed to a review paper by neurobiologist Björn Brembs of the Universität Regensburg that argues this.

Obviously a professor of neurobiology knows more about brains than I do, but, for now at least, I’m sticking to the above arguments. So what counters does Professor Brembs give? The paper first points out that the underlying physics is indeterministic. I agree, though, as above, that does not necessitate that brains are. The main argument presented, however, is the need for an animal to be unpredictable. Clearly, a gazelle running from a cheetah will benefit if the cheetah cannot predict which way it will dart. This will hold for any animal vulnerable to predation.

I agree on the need for unpredictability, but this does not require quantum indeterminacy. The world is simply way too complex for any of us to be able to predict it, even if that were “in principle” possible given enough information and intelligence. All that matters is that, in practice, predators stand no chance of making such predictions. The nematode worm (Caenorhabditis elegans) has only 302 neurons in its nervous system. But even there, different individuals have those 302 neurons wired up differently, owing to inevitable variations in how each embryo develops. And if I gave you a complete map of the neural network of a mere 302 neurons, could you look at it and predict how it would respond to various stimuli? I certainly couldn’t. The cheetah hasn’t a hope in hell of predicting the exact behaviour of the gazelle’s brain, even if that brain is entirely deterministic, and even if it had a complete and accurate neural-level map of that gazelle’s brain (which of course it doesn’t), and even if it had complete knowledge of the gazelle’s sensory experiences (which it also doesn’t).

So you don’t need indeterminacy to have in-practice unpredictability; the world is just too complex. And, while a prey animal wants unpredictability, it does not want to make dice-throwing decisions. There are some directions to dart — straight into the paws of the cheetah — that would be a rather bad choice. The gazelle still wants to make best use of all the information from its senses, and that requires a deterministic neural network.

And that’s about it: the above argument, that “predictability is one thing that will make sure that a competitor will be out of business soon”, and thus that “deterministic behaviour can never be evolutionarily stable”, is pretty much the only argument that Professor Brembs presents for brains being indeterministic. He does argue at length that brains need to produce “behavioural variability”, and that they need to be flexible, adaptable and responsive to their environments. I agree entirely. But this is a separate issue from whether they are non-deterministic. Indeed, being responsive to their environments is itself a deterministic concept. The whole point of quantum indeterminacy is that it is not the result of anything, and so is independent of local conditions.

As an example, Brembs argues that:

“… the temporal structure of the variability in spontaneous turning manoeuvres both in tethered and in free-flying fruitflies could not be explained by random system noise. Instead, a nonlinear signature was found, suggesting that fly brains operate at criticality, meaning that they are mathematically unstable, which, in turn, implies an evolved mechanism rendering brains highly susceptible to the smallest differences in initial conditions and amplifying them exponentially. Put differently, fly brains have evolved to generate unpredictable turning manoeuvres.”

But sensitivity to small differences and a non-linear response are not the same thing as being non-deterministic. Deterministic systems often behave like that. A roulette wheel is non-linear and unpredictable in practice, even though it is deterministic. As another example, modern fighter jets are designed to be aerodynamically unstable, in order to be highly manoeuvrable — just like the fruit fly — and it would be straightforward to write black-box computer code to produce a flight plan that another human (having no sight of the code) could not then predict.
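
A textbook example of that distinction is the logistic map in its chaotic regime: every step is fully determined by the previous one, yet two trajectories that start a hair’s breadth apart soon bear no resemblance to each other, so the output is unpredictable in practice to anyone lacking the exact starting value. A quick sketch:

```python
# Deterministic but practically unpredictable: the logistic map at r = 4.
# Each step is fully determined by the previous one, yet a difference in the
# ninth decimal place of the starting value eventually dominates the outcome.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))   # deterministic update rule
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)           # differs in the 9th decimal place
for step in (0, 10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```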

I do wonder whether a rejection of determinism might be motivated by the false presumption that a “deterministic” response must inevitably be a simple, linear and inflexible response that is not responsive to the local circumstances. But this is not so. Responding to local circumstances is (by definition) a deterministic process, and behavioural variation and flexibility are exactly what you’d get from a complex but deterministic neural network.

But Professor Brembs wants a role for indeterminacy as an ingredient for his conception of free will.

While some argue that unpredictable (or random) choice does not qualify for their definition of free will, it is precisely the freedom from the chains of causality that most scholars see as a crucial prerequisite for free will.

I consider this to be misguided. As a compatibilist, I assert that the only “free will” that we have is entirely compatible with determinism. We have a “will” and often we are “free” to act on it. And yes, that will is indeed a product of the prior state of the system. The sort of “will” that arises independently of the prior state of the system does not exist (unless one wants to argue for dualistic souls that tell matter how to move).

But many people dislike that conclusion; they reject dualism but want to rescue a “will” that is “free from the chains of causality”. They hope that some mixture of causation and indeterminism might do that. Thus Brembs argues (as have others) for a “two-stage model of free will”:

One stage generates behavioural options and the other one decides which of those actions will be initiated. Put simply, the first stage is ‘free’ and the second stage is ‘willed’. […] freedom arises from the creative and indeterministic generation of alternative possibilities, which present themselves to the will for evaluation and selection.

This may be a failure of my imagination, but I don’t see how this helps. Yes, it gives freedom (from the causal chain) and it gives will, but the part that is free is not willed and the part that is willed is not free from causation. So it doesn’t give a “free will”, if by that one means a will that is free from the causal chain.

The “free” part is simply generating a list of possibilities. The “will” part, the bit that does the choosing from the list, is still a computational process; and, as argued above, non-deterministic choice-making could not have evolved.

Invoking the first stage serves only to salve an intuitive dislike of the idea that we are bound to the same processes of cause-and-effect that govern the rest of nature. People consider this to be beneath human dignity. But human intuition is unreliable, and a mechanism invoked purely to salve human dignity is unlikely to be how things actually are. If you think that a dice-throwing component is needed to generate options that could not otherwise be available, then you’re likely underestimating how complex the world is: a deterministic neural network would already produce ample behavioural flexibility.

In short, I suggest that appealing to quantum indeterminacy in any discussion of free will is a red herring that has intuitive appeal but that cannot be made into a coherent account of a will that is uncaused. We should, instead, accept that there is nothing wrong with our wills being caused.

But it seems that Brembs is motivated by a dislike of the idea that we are at the end of a causal chain:

I hope to at least start a thought process that abandoning the metaphysical concept of free will does not automatically entail that we are slaves of our genes and our environment, forced to always choose the same option when faced with the same situation.

So what would be the problem if that were indeed the case?

In fact, I am confident I have argued successfully that we would not exist if our brains were not able to make a different choice even in the face of identical circumstances and history.

I don’t agree that the argument has been successfully made. And, anyhow, in practice, “identical circumstances and history” never recur. Walk around outside and look around you. The local environment that you are looking at is made up of something like 10³² individual atoms. That is so many that, if you were to point to them individually at a rate of one every second, it would take you 100,000,000,000,000 times the age of the universe before you’d finished. The idea of “identical circumstances”, with every one of those atoms being in the same place and behaving the same way, is a valid philosophical thought experiment but is not of practical significance. The world already contains enough complexity and variation that there is no need to invoke quantum indeterminacy in order for a cheetah to be unable to predict the darting of a gazelle (if the cheetah counted neurons in the gazelle’s brain at a rate of one a second it would take it a mere 3000 years).
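
A quick back-of-envelope check of those figures (taking the numbers as quoted above: 10³² atoms in the scene, and the roughly 10¹¹ neurons implied by the 3000-year figure):

```python
# Back-of-envelope check of the figures quoted in the text.
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR        # ~4.3e17 seconds

atoms = 1e32                                          # atoms in the local scene (text's figure)
print(f"1e32 atoms at one per second: {atoms / AGE_OF_UNIVERSE_S:.0e} ages of the universe")

neurons = 1e11                                        # neuron count implied by the 3000-year figure
print(f"1e11 neurons at one per second: {neurons / SECONDS_PER_YEAR:.0f} years")
```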

It makes no practical difference at all whether a different decision arises because of a quantum dice throw, or because the circumstances were slightly different. I don’t see why the former is preferable, or has more “moral” salience, or is more in accord with our human dignity (not that that actually matters, since the universe is not obliged to accord with our preferences).

Our Earth is not at the centre of the universe. The universe as a whole does not have a purpose. It was not created for us; we are just one product of an evolutionary process. And, being a product of the universe, we are material beings produced by and governed by the same laws of cause and effect that describe everything else.

Does quantum indeterminism defeat reductionism?

After I wrote a piece on the role of metaphysics in science, which was a reply to neuroscientist Kevin Mitchell, he pointed me to several of his articles, including one on reductionism and determinism. I found this interesting since I hadn’t really thought about the interplay of the two concepts. Mitchell argues that if the world is intrinsically indeterministic (which I think it is), then that defeats reductionism. We likely agree on much of the science, and on how the world is, but nevertheless I largely disagree with his article.

Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level state of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that the reproduction would exhibit the same high-level behaviour as the original system. Thus, suppose we had a Star-Trek-style transporter device that knew only about (but everything about) low-level atoms and molecules and their positions. We could use it to duplicate a leopard, and the duplicated leopard would manifest the same high-level behaviour (“stalking an antelope”) as the original, even though the transporter device knows nothing about high-level concepts such as “stalking” or “antelope”.

As an aside, philosophers might label the concept I’ve just defined as “supervenience”, and might regard “reductionism” as a stronger thesis about translations between the high-level concepts such as “stalking” and the language of physics at the atomic level. But that type of reductionism generally doesn’t work, whereas reductionism as I’ve just defined it does seem to be how the world is, and much of science proceeds by assuming that it holds. While this version of reductionism does not imply that explanations at different levels can be translated into each other, it does imply that explanations at different levels need to be mutually consistent, and ensuring that is one of the most powerful tools of science.

Our second concept, determinism, then asserts that if we knew the entire and exact low-level description of a system at time t, then we could — in principle — compute the exact state of the system at time t + 1. I don’t think the world is fully deterministic. I think that quantum mechanics tells us that there is indeterminism at the microscopic level. Thus, while we can compute, from the prior state, the probability of an atom decaying in a given time interval, we cannot (even in principle) compute the actual time of the decay. Some leading physicists disagree, and advocate for interpretations in which quantum mechanics is deterministic, so the issue is still an open question, but I suggest that indeterminism is the current majority opinion among physicists and I’ll assume it here.

This raises the question of whether indeterminism at the microscopic level propagates to indeterminism at the macroscopic level of the behaviour of leopards. The answer is likely, yes, to some extent. A thought experiment of coupling a microscopic trigger to a macroscopic device (such as the decay of an atom triggering a device that kills Schrödinger’s cat) shows that this is in-principle possible. On the other hand, using thermodynamics to compute the behaviour of steam engines (and totally ignoring quantum indeterminism) works just fine, because in such scenarios one is averaging over of order Avogadro’s number of particles and, given that Avogadro’s number is very large, that averages over all the quantum indeterminacy.
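
The standard statistical reason for that last point: relative fluctuations in a quantity averaged over N independent particles scale as 1/√N, so for a mole of particles they are of order 10⁻¹², whereas for a handful of ions they are of order one. A quick illustration:

```python
# Relative fluctuations in an average over N independent particles scale as
# 1/sqrt(N): negligible for a mole of gas, order-one for a handful of ions.
from math import sqrt

AVOGADRO = 6.022e23
for n in (10, 1e6, AVOGADRO):
    print(f"N = {n:.0e}: relative fluctuation ~ 1/sqrt(N) = {1 / sqrt(n):.1e}")
```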

What about leopards? The leopard’s behaviour is of course the product of the state of its brain, acting on sensory information. Likely, quantum indeterminism plays little or no role in the minute-by-minute responses of the leopard. That’s because, in order for the leopard to have evolved, its behaviour, its “leopardness”, must have been sufficiently under the control of genes, and genes influence brain structures on the developmental timescale of years. On the other hand, leopards are all individuals. While variation among leopard brains derives partly from differences in their genes, Kevin Mitchell tells us in his book Innate that development is a process involving much chance variation. Thus quantum indeterminacy at a biochemical level might be propagating into differences in how a mammal brain develops, and thence into the behaviour of individual leopards.

That’s all by way of introduction. So far I’ve just defined and expounded on the concepts “reductionism” and “determinism” (but it’s well worth doing that, since discussion of these topics is bedevilled by people interpreting words differently). So let’s proceed to why I disagree with Mitchell’s account.

He writes:

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain.

There is never only one explanation of anything. We can always give multiple different explanations of a phenomenon — certainly for anything at the macroscopic level — and lots of different explanations can be true at the same time, so long as they are all mutually consistent. Thus one explanation of a leopard’s stalking behaviour will be in terms of the firings of neurons and electrical signals sent to muscles. An equally true explanation would be that the leopard is hungry.

Reductionism does indeed say that you could (in principle) reproduce the behaviour from a molecular-level calculation, and that would be one explanation. But there would also be other equally true explanations. Nothing in reductionism says that the other explanations don’t exist or are invalid or unimportant. We look for explanations because they are useful in that they enable us to understand a system, and as a practical matter the explanation that the leopard is hungry could well be the most useful. The molecular-level explanation of “stalking” is actually pretty useless, first because it can’t be done in practice, and second because it would be so voluminous and unwieldy that no-one could assimilate or understand it.

As a comparison, chess-playing AI bots are now vastly better than the best humans and can make moves that grandmasters struggle to understand. But no amount of listing of low-level computer code would “explain” why sacrificing a rook for a pawn was strategically sound — even given the full code listing, you’d still have all the explanation and understanding left to achieve.

So reductionism does not do away with high-level analysis. But — crucially — it does insist that high-level explanations need to be consistent with and compatible with explanations at one level lower, and that is why the concept is central to science.

Mitchell continues:

In a deterministic system, whatever its current organisation (or “initial conditions” at time t) you solve Newton’s equations or the Schrodinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing) and that gives the next state of the system. There’s no why involved. It doesn’t matter what any of the states mean or why they are that way – in fact, there can never be a why because the functionality of the system’s behaviour can never have any influence on anything.

I don’t see why that follows. Again, understanding, and explanations and “why?” questions can apply just as much to a fully reductionist and deterministic system. Let’s suppose that our chess-playing AI bot is fully reductionist and deterministic. Indeed they generally are, since we build computers and other devices sufficiently macroscopically that they average over quantum indeterminacy. That’s because determinism helps the purpose: we want the machine to make moves based on an evaluation of the position and the rules of chess, not to make random moves based on quantum dice throwing.

But, in reply to “why did the (deterministic) machine sacrifice a rook for a pawn” we can still answer “in order to clear space to enable the queen to invade”. Yes, you can also give other explanations, in terms of low-level machine code and a long string of 011001100 computer bits, if you really want to, but nothing has invalidated the high-level answer. The high-level analysis, the why? question, and the explanation in terms of clearing space for the queen, all still make entire sense.

I would go even further and say you can never get a system that does things under strict determinism. (Things would happen in it or to it or near it, but you wouldn’t identify the system itself as the cause of any of those things).

Mitchell’s thesis is that you only have “causes” or an entity “doing” something if there is indeterminism involved. I don’t see why that makes any difference. Suppose we built our chess-playing machine to be sensitive to quantum indeterminacy, so that there was added randomness in its moves. The answer to “why did it sacrifice a rook for a pawn?” could then be “because of a chance quantum fluctuation”. Which would be a good answer, but Mitchell is suggesting that only un-caused causes actually qualify as “causes”. I don’t see why this is so. The deterministic AI bot is still the “cause” of the move it computes, even if it itself is entirely the product of prior causation, and back along a deterministic chain. As with explanations, there is generally more than one “cause”.

Nothing about either determinism or reductionism has invalidated the statements that the chess-playing device “chose” (computed) a move, causing that move to be played, and that the reason for sacrificing the rook was to create space for the queen. All of this holds in a deterministic world.

Mitchell pushes further the argument that indeterminism negates reductionism:

For that averaging out to happen [so that indeterminism is averaged over] it means that the low-level details of every particle in a system are not all-important – what is important is the average of all their states. That describes an inherently statistical mechanism. It is, of course, the basis of the laws of thermodynamics and explains the statistical basis of macroscopic properties, like temperature. But its use here implies something deeper. It’s not just a convenient mechanism that we can use – it implies that that’s what the system is doing, from one level to the next. Once you admit that, you’ve left Flatland. You’re allowing, first, that levels of reality exist.

I agree entirely, though I don’t see that as a refutation of reductionism. At least, it doesn’t refute forms of reductionism that anyone holds or defends. Reductionism is a thesis about how levels of reality mesh together, not an assertion that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels.

Indeterminism does mean that we could not fully compute the exact future high-level state of a system from the prior, low-level state. But then, under indeterminism, we also could not always predict the exact future high-level state from the prior high-level state. So, “reductionism” would not be breaking down: it would still be the case that a low-level explanation has to mesh fully and consistently with a high-level explanation. If indeterminacy were causing the high-level behaviour to diverge, it would have to feature in both the low-level and high-level explanations.

Mitchell then makes a stronger claim:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate. And a different set of microstates that corresponds to a different macrostate. If the evolution of the system depends on those coarse-grained macrostates (rather than on the precise details at the lower level), then this raises something truly interesting – the idea that information can have causal power in a hierarchical system …

But there cannot be a difference in the macrostate without a difference in the microstate. Thus there cannot be indeterminism that depends on the macrostate but not on the microstate. At least, we have no evidence that that form of indeterminism actually exists. If it did, that would indeed defeat reductionism and would be a radical change to how we think the world works.

It would be a form of indeterminism under which, if we knew everything about the microstate (but not the macrostate), then we would have less ability to predict the state at time t + 1 than if we knew the macrostate (but not the microstate). But how could that be? How could we not know the macrostate? The idea that we could know the exact microstate at time t, but not be able to compute (even in principle) the macrostate at the same time t (so before any non-deterministic events could have happened), would indeed defeat reductionism, but it is surely a radical departure from how we think the world works, and is not supported by any evidence.

But Mitchell does indeed suggest this:

The low level details alone are not sufficient to predict the next state of the system. Because of random events, many next states are possible. What determines the next state (in the types of complex, hierarchical systems we’re interested in) is what macrostate the particular microstate corresponds to. The system does not just evolve from its current state by solving classical or quantum equations over all its constituent particles. It evolves based on whether the current arrangement of those particles corresponds to macrostate A or macrostate B.

But this seems to conflate two ideas:

1) In-principle computing/reproducing the state at time t + 1 from the state at time t (determinism).

2) In-principle computing/reproducing the macrostate at time t from the microstate at time t (reductionism).

Mitchell’s suggestion is that we cannot compute: {microstate at time t} ⇒ {macrostate at time t + 1}, but can compute: {macrostate at time t} ⇒ {macrostate at time t + 1}. (The latter follows from: “What determines the next state … is [the] macrostate …”.)

And that can (surely?) only be the case if one cannot compute: {microstate at time t} ⇒ {macrostate at time t}; and if we are denying that, then we’re denying reductionism as an input to the argument, not as a consequence of indeterminism.

Mitchell draws the conclusion:

In complex, dynamical systems that are far from equilibrium, some small differences due to random fluctuations may thus indeed percolate up to the macroscopic level, creating multiple trajectories along which the system could evolve. […]

I agree, but consider that to be a consequence of indeterminism, not a rejection of reductionism.

This brings into existence something necessary (but not by itself sufficient) for things like agency and free will: possibilities.

As someone who takes a compatibilist account of “agency” and “free will” I am likely to disagree with attempts to rescue “stronger” versions of those concepts. But that is perhaps a topic for a later post.