Does quantum indeterminism defeat reductionism?

After I wrote a piece on the role of metaphysics in science, in reply to neuroscientist Kevin Mitchell, he pointed me to several of his articles, including one on reductionism and determinism. I found this interesting, since I hadn’t really thought about the interplay of the two concepts. Mitchell argues that if the world is intrinsically indeterministic (which I think it is), then that defeats reductionism. We likely agree on much of the science, and on how the world is, but nevertheless I largely disagree with his article.

Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level status of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that a reproduction would exhibit the same high-level behaviour as the original system. Thus, suppose we had a Star-Trek-style transporter device that knew only about (but everything about) low-level atoms and molecules and their positions. We could use it to duplicate a leopard, and the duplicated leopard would manifest the same high-level behaviour (“stalking an antelope”) as the original, even though the transporter device knows nothing about high-level concepts such as “stalking” or “antelope”.

As an aside, philosophers might label the concept I’ve just defined as “supervenience”, and might regard “reductionism” as a stronger thesis about translations between the high-level concepts such as “stalking” and the language of physics at the atomic level. But that type of reductionism generally doesn’t work, whereas reductionism as I’ve just defined it does seem to be how the world is, and much of science proceeds by assuming that it holds. While this version of reductionism does not imply that explanations at different levels can be translated into each other, it does imply that explanations at different levels need to be mutually consistent, and ensuring that is one of the most powerful tools of science.

Our second concept, determinism, then asserts that if we knew the entire and exact low-level description of a system at time t, then we could — in principle — compute the exact state of the system at time t + 1. I don’t think the world is fully deterministic. I think that quantum mechanics tells us that there is indeterminism at the microscopic level. Thus, while we can compute, from the prior state, the probability of an atom decaying in a given time interval, we cannot (even in principle) compute the actual time of the decay. Some leading physicists disagree, and advocate for interpretations in which quantum mechanics is deterministic, so the issue is still an open question, but I suggest that indeterminism is the current majority opinion among physicists and I’ll assume it here.
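To make that distinction concrete, here is a toy sketch in Python (entirely my own illustration, with an arbitrary made-up half-life): the probability of decay within a given interval is fully computable from the prior state, but the actual decay time can only be sampled at random.

```python
import math
import random

half_life = 10.0                   # arbitrary illustrative half-life, in seconds
lam = math.log(2) / half_life      # decay constant

# The deterministic part: the *probability* of decay within an interval
# is fully computable from the prior state.
p_decay_in_5s = 1 - math.exp(-lam * 5.0)

# The indeterministic part: the *actual* decay time can only be sampled;
# no computation from the prior state yields it, even in principle.
random.seed(0)
sample_times = [random.expovariate(lam) for _ in range(5)]

print(f"P(decay within 5 s) = {p_decay_in_5s:.3f}")
print("Sampled decay times:", [round(t, 1) for t in sample_times])
```

Running this twice with different seeds gives the same probability but different decay times, which is exactly the asymmetry in question.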

This raises the question of whether indeterminism at the microscopic level propagates to indeterminism at the macroscopic level of the behaviour of leopards. The answer is: likely yes, to some extent. A thought experiment of coupling a microscopic trigger to a macroscopic device (such as the decay of an atom triggering a device that kills Schrodinger’s cat) shows that this is in-principle possible. On the other hand, using thermodynamics to compute the behaviour of steam engines (while totally ignoring quantum indeterminism) works just fine, because in such scenarios one is averaging over an Avogadro’s number of particles and, given that Avogadro’s number is very large, that averaging washes out the quantum indeterminacy.
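How that averaging works can be sketched with a toy simulation (my own illustration; pure statistics, not a physical model): the relative fluctuation in the sum of N independent random “particle states” shrinks roughly as 1/√N, so by the time N approaches Avogadro’s number (~6 × 10²³) the randomness is utterly negligible at the macroscopic level.

```python
import random

def relative_fluctuation(n_particles, trials=2000, seed=1):
    """Std/mean of the sum of n fair coin-flip 'particle states' (0 or 1)."""
    rng = random.Random(seed)
    sums = [sum(rng.randint(0, 1) for _ in range(n_particles))
            for _ in range(trials)]
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    return var ** 0.5 / mean

# The relative randomness shrinks roughly as 1/sqrt(N):
for n in (10, 100, 1000):
    print(n, round(relative_fluctuation(n), 3))
```

Even at a mere N = 1000 the fluctuations are already down to a couple of percent; extrapolating to 10²³ particles explains why steam-engine thermodynamics can ignore quantum indeterminism entirely.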

What about leopards? The leopard’s behaviour is of course the product of the state of its brain, acting on sensory information. Likely, quantum indeterminism is playing little or no role in the minute-by-minute responses of the leopard. That’s because, in order for the leopard to have evolved, its behaviour, its “leopardness”, must have been sufficiently under the control of genes, and genes influence brain structures on the developmental timescale of years. On the other hand, leopards are all individuals. While variation in leopard brains derives partially from differences in each individual’s genes, Kevin Mitchell tells us in his book Innate that development is a process involving much chance variation. Thus quantum indeterminacy at the biochemical level might be propagating into differences in how a mammal’s brain develops, and thence into the behaviour of individual leopards.

That’s all by way of introduction. So far I’ve just defined and expounded on the concepts “reductionism” and “determinism” (but it’s well worth doing that since discussion on these topics is bedeviled by people interpreting words differently). So let’s proceed to why I disagree with Mitchell’s account.

He writes:

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain.

There is never only one explanation of anything. We can always give multiple different explanations of a phenomenon — certainly for anything at the macroscopic level — and lots of different explanations can be true at the same time, so long as they are all mutually consistent. Thus one explanation of a leopard’s stalking behaviour will be in terms of the firings of neurons and electrical signals sent to muscles. An equally true explanation would be that the leopard is hungry.

Reductionism does indeed say that you could (in principle) reproduce the behaviour from a molecular-level calculation, and that would be one explanation. But there would also be other equally true explanations. Nothing in reductionism says that the other explanations don’t exist or are invalid or unimportant. We look for explanations because they are useful in that they enable us to understand a system, and as a practical matter the explanation that the leopard is hungry could well be the most useful. The molecular-level explanation of “stalking” is actually pretty useless, first because it can’t be done in practice, and second because it would be so voluminous and unwieldy that no-one could assimilate or understand it.

As a comparison, chess-playing AI bots are now vastly better than the best humans and can make moves that grandmasters struggle to understand. But no amount of listing of low-level computer code would “explain” why sacrificing a rook for a pawn was strategically sound — even given that, you’d still have all the explanation and understanding left to achieve.

So reductionism does not do away with high-level analysis. But — crucially — it does insist that high-level explanations need to be consistent with and compatible with explanations at one level lower, and that is why the concept is central to science.

Mitchell continues:

In a deterministic system, whatever its current organisation (or “initial conditions” at time t) you solve Newton’s equations or the Schrodinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing) and that gives the next state of the system. There’s no why involved. It doesn’t matter what any of the states mean or why they are that way – in fact, there can never be a why because the functionality of the system’s behaviour can never have any influence on anything.

I don’t see why that follows. Again, understanding, explanations and “why?” questions can apply just as much to a fully reductionist and deterministic system. Let’s suppose that our chess-playing AI bot is fully reductionist and deterministic. Indeed such bots generally are, since we build computers and other devices sufficiently macroscopically that they average over quantum indeterminacy. That’s because determinism serves the purpose: we want the machine to make moves based on an evaluation of the position and the rules of chess, not to make random moves based on quantum dice-throwing.

But, in reply to “why did the (deterministic) machine sacrifice a rook for a pawn” we can still answer “in order to clear space to enable the queen to invade”. Yes, you can also give other explanations, in terms of low-level machine code and a long string of 011001100 computer bits, if you really want to, but nothing has invalidated the high-level answer. The high-level analysis, the why? question, and the explanation in terms of clearing space for the queen, all still make entire sense.

I would go even further and say you can never get a system that does things under strict determinism. (Things would happen in it or to it or near it, but you wouldn’t identify the system itself as the cause of any of those things).

Mitchell’s thesis is that you only have “causes” or an entity “doing” something if there is indeterminism involved. I don’t see why that makes any difference. Suppose we built our chess-playing machine to be sensitive to quantum indeterminacy, so that there was added randomness in its moves. The answer to “why did it sacrifice a rook for a pawn?” could then be “because of a chance quantum fluctuation”. Which would be a good answer, but Mitchell is suggesting that only un-caused causes actually qualify as “causes”. I don’t see why this is so. The deterministic AI bot is still the “cause” of the move it computes, even if it itself is entirely the product of prior causation, and back along a deterministic chain. As with explanations, there is generally more than one “cause”.

Nothing about either determinism or reductionism has invalidated the statements that the chess-playing device “chose” (computed) a move, causing that move to be played, and that the reason for sacrificing the rook was to create space for the queen. All of this holds in a deterministic world.

Mitchell pushes further the argument that indeterminism negates reductionism:

For that averaging out to happen [so that indeterminism is averaged over] it means that the low-level details of every particle in a system are not all-important – what is important is the average of all their states. That describes an inherently statistical mechanism. It is, of course, the basis of the laws of thermodynamics and explains the statistical basis of macroscopic properties, like temperature. But its use here implies something deeper. It’s not just a convenient mechanism that we can use – it implies that that’s what the system is doing, from one level to the next. Once you admit that, you’ve left Flatland. You’re allowing, first, that levels of reality exist.

I agree entirely, though I don’t see that as a refutation of reductionism. At least, it doesn’t refute forms of reductionism that anyone holds or defends. Reductionism is a thesis about how levels of reality mesh together, not an assertion that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels.

Indeterminism does mean that we could not fully compute the exact future high-level state of a system from the prior, low-level state. But then, under indeterminism, we also could not always predict the exact future high-level state from the prior high-level state. So, “reductionism” would not be breaking down: it would still be the case that a low-level explanation has to mesh fully and consistently with a high-level explanation. If indeterminacy were causing the high-level behaviour to diverge, it would have to feature in both the low-level and high-level explanations.

Mitchell then makes a stronger claim:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate. And a different set of microstates that corresponds to a different macrostate. If the evolution of the system depends on those coarse-grained macrostates (rather than on the precise details at the lower level), then this raises something truly interesting – the idea that information can have causal power in a hierarchical system …

But there cannot be a difference in the macrostate without a difference in the microstate. Thus there cannot be indeterminism that depends on the macrostate but not on the microstate. At least, we have no evidence that that form of indeterminism actually exists. If it did, that would indeed defeat reductionism and would be a radical change to how we think the world works.

It would be a form of indeterminism under which, if we knew everything about the microstate (but not the macrostate), then we would have less ability to predict the state at time t + 1 than if we knew the macrostate (but not the microstate). But how could that be? How could we not know the macrostate? The idea that we could know the exact microstate at time t but not be able to compute (even in principle) the macrostate at the same time t (that is, before any non-deterministic events could have happened) would indeed defeat reductionism, but it is surely a radical departure from how we think the world works, and is not supported by any evidence.

But Mitchell does indeed suggest this:

The low level details alone are not sufficient to predict the next state of the system. Because of random events, many next states are possible. What determines the next state (in the types of complex, hierarchical systems we’re interested in) is what macrostate the particular microstate corresponds to. The system does not just evolve from its current state by solving classical or quantum equations over all its constituent particles. It evolves based on whether the current arrangement of those particles corresponds to macrostate A or macrostate B.

But this seems to conflate two ideas:

1) In-principle computing/reproducing the state at time t + 1 from the state at time t (determinism).

2) In-principle computing/reproducing the macrostate at time t from the microstate at time t (reductionism).

Mitchell’s suggestion is that we cannot compute: {microstate at time t} ⇒ {macrostate at time t + 1}, but can compute: {macrostate at time t} ⇒ {macrostate at time t + 1}. (The latter follows from: “What determines the next state … is [the] macrostate …”.)

And that can (surely?) only be the case if one cannot compute: {microstate at time t } ⇒ {macrostate at time t }, and if we are denying that then we’re denying reductionism as an input to the argument, not as a consequence of indeterminism.
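A toy model makes the point (this is my own illustration, not Mitchell’s): let a “microstate” be a tuple of bits and the corresponding “macrostate” its majority value. The macrostate is then a plain function of the microstate, so any rule that “depends on the macrostate” thereby depends on the microstate too, and knowing the microstate can never leave you worse off.

```python
from itertools import product

def macrostate(microstate):
    """Coarse-grain a bit-tuple microstate to a majority-vote macrostate."""
    return "A" if sum(microstate) * 2 > len(microstate) else "B"

# Many distinct microstates map to each macrostate...
micro_per_macro = {"A": 0, "B": 0}
for micro in product((0, 1), repeat=5):
    micro_per_macro[macrostate(micro)] += 1

# ...but the macrostate is computable from the microstate alone, so a
# dynamics that "depends on macrostate A vs B" is thereby a dynamics
# that depends on the microstate.
print(micro_per_macro)  # 16 microstates per macrostate
```

The many-to-one mapping is real (that is what “levels” means), but nothing about it gives the macrostate predictive information that the microstate lacks.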

Mitchell draws the conclusion:

In complex, dynamical systems that are far from equilibrium, some small differences due to random fluctuations may thus indeed percolate up to the macroscopic level, creating multiple trajectories along which the system could evolve. […]

I agree, but consider that to be a consequence of indeterminism, not a rejection of reductionism.

This brings into existence something necessary (but not by itself sufficient) for things like agency and free will: possibilities.

As someone who takes a compatibilist account of “agency” and “free will” I am likely to disagree with attempts to rescue “stronger” versions of those concepts. But that is perhaps a topic for a later post.

15 thoughts on “Does quantum indeterminism defeat reductionism?”

  1. Schlafly

    I have trouble with your definition of reductionism. You say if you knew everything about a system, you could reproduce it. Knowing everything includes location, but not behavior at time t+1.

    Knowing location is not even meaningful, depending on your quantum interpretation. And if you don’t know what the system is going to do, then what makes you think you know everything?

    You later say “there cannot be a difference in the macrostate without a difference in the microstate.” That is really the heart of reductionism. It is central to all of modern science, but is really just an assumption that could turn out to be false.

    Reply
    1. Coel Post author

      Hi Schlafly,

      You later say “there cannot be a difference in the macrostate without a difference in the microstate.” That is really the heart of reductionism. It is central to all of modern science, but is really just an assumption that could turn out to be false.

      Agreed, it could turn out to be false, and that would then overturn reductionism. So far, though, it seems to be the way the world is.

  2. Brent Meeker

    I agree with your analysis of Mitchell’s idea. But I think you pose too strong a definition of reductionism: “Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level status of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that a reproduction would exhibit the same high-level behaviour as the original system.” The problem with this is that even if the macrostate can be computed from the microstate, indeterminism means the reproduced system may not exhibit the same high-level behavior. Inherently random microscopic events, like K40 decay in your bloodstream, may become amplified to change high-level behavior (e.g. Schroedinger’s cat).

    Another problem, which Mitchell raises but from which, I agree, he draws inconsistent conclusions, is that complete knowledge of microstates is impossible given quantum mechanics. Holevo’s theorem and the no-cloning theorem imply that you can never know more than half the information needed to reproduce a microstate. A modern version of metaphysics needs to recognize this.

    Reply
    1. Coel Post author

      Hi Brent,

      The problem with this is that even if the macrostate can be computed from the microstate, indeterminism means the reproduced system may not exhibit the same high-level behavior.

      Yes, agreed, indeterminism means that two identical systems would diverge in their behaviour. But that’s just as much the case if we start from the macroscopic description as if we start from the microscopic description. What I’m arguing is that, so long as we can replicate the macroscopic state from the microstate (and if we can’t then, well, that’s rather weird), then we can never be in-principle worse off knowing the microstate but not the macrostate.

  3. Marvin Edwards

    I haven’t read Mitchell, but I agree with you that the notion that “a cause that has prior causes is not really a cause” does not work. It is a test that none of the prior causes can pass. Therefore it is an invalid requirement.

    Reductionism does not work because there are at least 3 distinct classes of causal mechanisms: physical, biological, and rational. Each operates at a different level of organization and each has its own rules. It is quite possible that quantum mechanics is a fourth level, operating deterministically by a set of rules that we simply do not yet understand. But, in any case, the fact that each level operates by its own rules means that we cannot predict the behavior of a higher level from the behavior at a lower level, as we can demonstrate here.

    Inanimate objects behave “passively” in response to physical forces. Place a bowling ball on the side of a hill and it will always roll down hill. Its behavior is entirely governed by the law of gravity.

    Living organisms behave “purposefully”: they are driven biologically to survive, thrive, and reproduce. Place a squirrel on the same hill and he may go up, down, or any other direction where he expects to find his next acorn, or perhaps a mate. The squirrel’s behavior is not governed by gravity, but by its biological needs.

    Intelligent species behave “deliberately”. Their evolved brain is able to imagine alternate possibilities, evaluate their options, and choose what they will do next. This “choosing what they will do” is called “free will”, because it is literally a freely chosen “I will”.

    How do we know that we cannot derive deliberate behavior from physical laws? Well, the simplest demonstration is that we cannot derive the laws of traffic from the laws of physics. The laws of traffic clearly govern the behavior of intelligent species, like us, but in America everyone drives on the right side of the road, while in England everyone drives on the left. Yet the laws of physics are identical in both places.

    Reductionism does not work. While physics is able to explain why a cup of water flows down hill, it has no clue as to why a similar cup of water, heated, and mixed with a little coffee, hops into a car and goes grocery shopping.

    Reply
    1. Coel Post author

      Hi Marvin,

      … the fact that each level operates by its own rules means that we cannot predict the behavior of a higher level from the behavior at a lower level,…

      Are you then disagreeing with my thought experiment about a Star-Trek-style transporter device and a leopard? Suppose the transporter device knows everything about low-level atoms and molecules, and uses that information to replicate a leopard. Would not the replicated leopard be a leopard, manifesting leopard behaviour? If so, then we can — in principle — predict leopard behaviour from the low-level state.

      Reductionism does not work. While physics is able to explain why a cup of water flows down hill, it has no clue as to why a similar cup of water, heated, and mixed with a little coffee, hops into a car and goes grocery shopping.

      But, as I argued in the article, no-one proposes a version of reductionism that demands that the high-level behaviour must be translatable into the language of the low-level behaviour. That’s not what anyone who argues for reductionism means by the term.

    2. Marvin Edwards

      We assume the transporter would copy the material as currently arranged, and then start up the processes wherever they left off. This would copy all three levels of causal mechanisms.

      I think we exist in the processing. When the processing stops, our life stops, and all we have then is an inanimate lump of matter. So, it’s not just the atoms, but what they happen to be doing at that time and place, that must be transported to the new location.

      All three causal mechanisms may be presumed to be deterministic, such that any event might be fully explained by some combination of physical, biological, and rational mechanisms.

      The importance of the rational causal mechanism (where we find free will) is that our behavior is calculated using our beliefs. Michael Gazzaniga pointed out the importance of beliefs this way:

      “Sure, we are vastly more complicated than a bee. Although we both have automatic responses, we humans have cognition and beliefs of all kinds, and the possession of a belief trumps all the automatic biological process and hardware, honed by evolution, that got us to this place. Possession of a belief, though a false one, drove Othello to kill his beloved wife, and Sidney Carton to declare, as he voluntarily took his friend’s place at the guillotine, that it was a far, far better thing he did than he had ever done.”

      Gazzaniga, Michael S., Who’s in Charge?: Free Will and the Science of the Brain (pp. 2–3), HarperCollins, Kindle Edition.

    3. Coel Post author

      Let’s presume that the transporter copies — in an instant — every atom and molecule, and their positions and how each atom is moving (or copies all the wavefunctions, if one prefers). [Obviously this is impossible, but this is a thought experiment.]

      If the replicated system would then go on to manifest all the high-level behaviour, then that’s all that is needed for reductionism to hold — at least, the forms of reductionism that people defend.

      You’re right, a lot of what is important is in the patterns and processes, but if one copies all the atoms, including what each atom is doing (how it is moving), then the patterns and processes — including Othello’s thoughts — are also replicated.

    4. Marvin Edwards

      “but if one copies all the atoms, including what each atom is doing (how it is moving), then the patterns and processes — including Othello’s thoughts — are also replicated.”

      That seems indisputable. But you’ve explicitly discarded the most significant claim of reductionism, that everything can be explained using only the laws of physics, which remains a false claim.

    5. Coel Post author

      But you’ve explicitly discarded the most significant claim of reductionism, that everything can be explained using only the laws of physics, which remains a false claim.

      But reductionism only claims that the low-level account is AN explanation. It does not say that that is the only explanation, nor that any and all explanations can be arrived at using only physics.

      The fact that, if one entirely replicates the low-level state, then the high-level behaviour is manifest, is indeed ONE explanation. That is, it provides one “if a, b, c … , therefore …” statement.

      Reductionism most definitely does not say that, if you use only the laws of physics and the language of physics, then you can arrive at all the high-level-language explanations of the “the leopard was hungry” form. That form of reductionism is so obviously and trivially wrong that it’s not what anyone means by reductionism.

    6. Brent Meeker

      “Reductionism most definitely does not say that, if you use only the laws of physics and the language of physics, then you can arrive at all the high-level-language explanations of the “the leopard was hungry” form. That form of reductionism is so obviously and trivially wrong that it’s not what anyone means by reductionism.”

      I’m not so sure about that. I think you could, in principle, arrive at the prediction, “This bunch of atoms is going to act like a leopard that is hungry.” Since the laws of physics include quantum indeterminacy you can’t predict exactly what the leopard will do, but the empty stomach and nerve signals and hormone levels will tell you it’s hungry.

    7. Brent Meeker

      “Suppose the transporter device knows everything about low-level atoms and molecules, and uses that information to replicate a leopard. Would not the replicated leopard be a leopard, manifesting leopard behaviour? If so, then we can — in principle — predict leopard behaviour from the low-level state.”

      There’s a difference between predicting that the duplicate will behave like a leopard and being able to predict its actual, specific behavior. I think the former is true: being a leopard and having leopard behavior is a classical thing. But not the latter. A leopard has K40 atoms in his blood and, even if you duplicated all of his environment, the duplicate’s behavior would quickly diverge from the original’s. That’s the sense in which QM breaks determinism. I don’t think that breaks reductionism, because it just adds randomness, and nobody thinks randomness is that extra magic that produces emergence.

    8. Coel Post author

      Hi Brent,

      That’s the sense in which QM breaks determinism. I don’t think that breaks reductionism, because it just adds randomness, and nobody thinks randomness is that extra magic that produces emergence.

      Yes, I think we’re pretty much agreed here — QM breaks determinism, but does not break reductionism.
