Tag Archives: reductionism

Confusion about free will, reductionism and emergence

Psychology Today has just published “Finding the Freedom in Free Will”, with the subtitle: “New theoretical work suggests that human agency and physics are compatible”. The author is Bobby Azarian, a science writer with a PhD in neuroscience. The piece is not so much wrong — I actually agree with the main conclusions — but is, perhaps, rather confused. Too often discussion in this area is bedevilled by people meaning different things by the same terms. Here is my attempt to clarify the concepts. Azarian starts:

Some famous (and brilliant) physicists, particularly those clearly in the reductionist camp, have gone out of their way to ensure that the public believes there is no place for free will in a scientific worldview.

He names Sabine Hossenfelder and Brian Greene. The “free will” that such physicists deny is “dualistic soul” free will, the idea that a decision is made by something other than the computational playing out of the material processes in the brain. And they are right, there is no place for that sort of “free will” in a scientific worldview.

Azarian then says:

Other famous scientists—physicists, neuroscientists, and complexity scientists among them—have a position virtually opposite of Greene’s and Hossenfelder’s …

He names, among others, David Deutsch and the philosopher Dan Dennett. But the conception of “free will” that they espouse is indeed just the computational playing out of the material brain. Such brain activity generates a desire to do something (a “will”) and one can reasonably talk about a person’s “freedom” to act on their will. Philosophers call this a “compatibilist” account of free will.

Importantly, and contrary to Azarian’s statement, this position is not the opposite to Greene’s and Hossenfelder’s. They are not disagreeing on what the brain is doing nor about how the brain’s “choices” are arrived at. Rather, the main difference is in whether they describe such processes with the label “free will”, and that is largely an issue of semantics.

All the above named agree that a decision is arrived at by the material processes in the brain, resulting from the prior state of the system. Any discussion of “free will” needs to distinguish between the dualistic, physics-violating conception of free will and the physics-compliant, compatibilist conception. They are very different, and conflating them is just confused.

Do anti-agency reductionists believe that the theories of these scientists are not worth serious consideration?

No, they don’t! What they do think is that, when talking to a public who might interpret the term “free will” in terms of a dualistic soul operating independently of material causes, it might be better to avoid the term. People can reasonably disagree on that, but, again, this issue of semantics is distinct from whether they agree on what the brain is doing.

Origins-of-life researcher Sara Walker, who is also a physicist, explains why mainstream physicists in the reductionist camp often take what most people would consider a nonsensical position: “It is often argued the idea of information with causal power is in conflict with our current understanding of physical reality, which is described in terms of fixed laws of motions and initial conditions.”

Is that often argued? Who? Where? This is more confusion, this time about “reductionism”. The only form of “reductionism” that reductionists actually hold to can be summed up as follows:

Imagine a Star-Trek-style transporter device that knows only about low-level entities, atoms and molecules, and about how they are arranged relative to their neighbouring atoms. This device knows nothing at all about high-level concepts such as “thoughts” and “intentions”.

If such a device made a complete and accurate replica of an animal — with every molecular-level and cellular-level aspect being identical — would the replica then manifest the same behaviour? And in manifesting the same behaviour, would it manifest the same high-level “thoughts” and “intentions”? [At least for a short time; this qualification being necessary because, owing to deterministic chaos, two such systems could then diverge in behaviour.]

If you reply “yes” then you’re a reductionist. [Whereas someone who believed that human decisions are made by a dualistic soul would likely answer “no”.]
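
[As a quantitative aside on that bracketed chaos qualification (a standard illustration of my own, not something from Azarian's piece): in a chaotic system any tiny difference δ(0) between two otherwise identical copies grows roughly exponentially,

δ(t) ≈ δ(0) · e^(λt),

where λ is the largest Lyapunov exponent. A replica identical down to the molecular level would thus behave the same at first, but any subsequent microscopic perturbation would eventually be amplified into divergent behaviour.]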

Note that the pattern, the arrangement of the low-level entities is absolutely crucial to this thought experiment and is central to how a reductionist thinks. A Star Trek transporter does not just deliver the atoms and molecules in a disordered heap, and then expect the heap to jump up and fight Klingons.

The caricature of a reductionist is someone who would see no difference between a living human brain and a human brain put through a food blender. It shouldn’t need saying that such a view is so obviously wrong that no-one thinks like that.

What is the difference between a tree and an elephant? It is not the component parts. Trees and elephants are made of the same atoms (carbon, oxygen, nitrogen and hydrogen make up 96% of an organism’s weight, with the rest being a few other types of atoms). After all, elephants are made up of what they eat, which is trees. Indeed, by carefully controlling the water supply to a tree, one could, in principle, make its constituent atoms identical to those of a same-weight elephant.

So the difference between a tree and an elephant — and the cause of their different behaviour — is solely in the medium-scale and large-scale arrangement of the component parts. No reductionist (other than those constructed out of straw) disagrees.

Azarian then spends time contrasting a fully deterministic view of physics with the possibility of quantum-mechanical indeterminacy. This is more confusion. That topic is irrelevant here. Whether the state of the system at time t+1 is entirely specified by the state at time t, or whether there is also quantum dice throwing, is utterly irrelevant to concepts of “will”, “intention”, “agency” and “freedom” because quantum dice throwing doesn’t give you any of those.

Indeed, Azarian accepts this, saying: “quantum indeterminism alone does not help the notion of free will much since a reality with some randomness is not the same as one where an agent has control over that randomness”. But he then argues (quoting George Ellis) that: “While indeterminacy does not provide a mechanism for free will, it provides the “wiggle room” not afforded by Laplace’s model of reality; the necessary “causal slack” in the chain of cause-and-effect that could allow room for agency, …”.

I agree that we can sensibly talk about “agency”, and indeed we need that concept, but I don’t see any way in which quantum indeterminacy helps, not even in providing “wiggle room”. [And note that, after invoking this idea, Azarian does not then use it in what follows.]

If we want to talk about agency — which we do — let’s talk about the agency of a fully deterministic system, such as that of a chess-playing computer program that can easily outplay any human.

What else “chooses” the move if not the computer program? Yes, we can go on to explain how the computer and its program came to be, just as we can explain how an elephant came to be, but if we want to ascribe agency and “will” to an elephant (and, yes, we indeed do), then we can just as well ascribe the “choice” of move to the “intelligent agent” that is the chess-playing computer program. What else do you think an elephant’s brain is doing, if not something akin to the chess-playing computer, that is, assessing input information and computing a choice?

But, Azarian asserts:

The combination of the probabilistic nature of reality and the mechanism known as top-down causation explains how living systems can affect physical reality through intentional action.

Top-down causation is another phrase that means different things to different people. As I see it, “top-down causation” is the assertion that the above Star-Trek-style replication of an animal would not work. I’ve never seen an explanation of why it wouldn’t work, or what would happen instead, but surely “top-down causation” has to mean more than “the pattern, the arrangement of components is important in how the system behaves”, because of course it is!

Top-down causation is the opposite of bottom-up causation, and it refers to the ability of a high-level controller (like an organism or a brain) to influence the trajectories of its low-level components (the molecules that make up the organism).

Yes, the high-level pattern does indeed affect how the low-level components move. An elephant’s desire for leaves affects where the molecules in its trunk are. But equally, the low-level components and their arrangement give rise to the high-level behaviour. So this is just another way of looking at the system, and an equally valid one. But it is not “the opposite of bottom-up causation”. Both views have to be fully valid at the same time. Unless you’re asserting that the Star-Trek-style replication would not work. And if you’re doing that, why wouldn’t it?

This means that biological agents are not slaves to the laws of physics and fundamental forces the way non-living systems are.

Well, no, this is wrong in two ways. First, the view that the low-level description gives rise to the high-level behaviour is still equally valid and correct, so the biological agents are still “slaves to the laws of physics and fundamental forces”. That is, unless you’re advocating something way more novel and weird than you’ve said so far. And, second, this would apply just as much to non-living systems that have purpose such as the chess-playing computer.

One might reply that such “purpose” only arises as a product of the Darwinian evolution of life, and I would readily agree with that, but it is a somewhat different distinction.

With the transition to life, macroscopic physical systems became freed from the fixed trajectories that we see with the movement of inanimate systems, which are predictable from simple laws of motion.

If that is saying only that living things evolved to be more complex than non-living things, and thus that their behaviour is more complex (and can be sensibly described as “purposeful”), then yes, sure, agreed.

The emergence of top-down causation is an example of what philosophers call a “strong emergence” because there is a fundamental change in the kind of causality we see in the world.

Well, no. Provided you agree that the Star-Trek-style replication would work, then this counts as weak emergence. “Strong” emergence has to be something more than that, entailing the replication not working.

Again, words can mean different things to different people, but the whole point of “strong” emergence is that “reductionism” fails, and reductionism (the sort that anyone actually holds to, that is) is summed up by the Star-Trek-style replication thought experiment.

[I’m aware that philosophers might regard the view summed up by that thought experiment as being “supervenience physicalism”, and assert that “reductionism” entails something more, but that’s not what “reductionism” means to physicists.]

And if by “a fundamental change in the kind of causality we see in the world” one means that Darwinian evolution leads to hyper-complex entities to which we can usefully ascribe purpose, intention and agency, then ok, yes, that is indeed an important way of understanding the world.

But it is not really “a fundamental change in the kind of causality we see in the world” because the bottom-up view, the view that these are collections of atoms behaving in accordance with the laws of physics, remains fully valid at the same time!

I’m actually fully on board with viewing living creatures as entities that have intentions and which exhibit purpose, and fully agree that such high-level modes of analysis are, for many purposes, the best and only ways of understanding their behaviour.

But it is wrong to see this as a rejection of reductionism or of a bottom-up way of seeing things. Both perspectives are true simultaneously and are fully compatible. I agree with the view that Azarian is arriving at here, but regard his explanation of how it is arrived at as confused.

… agency emerges when the information encoded in the patterned interactions of molecules begins telling the molecules what to do.

True, but it always did. Even within physics, the pattern, the arrangement of matter determines how it behaves, and hence the arrangement will “tell molecules what to do”. Even in a system as simple as a hydrogen atom, the arrangement of having the electron spin aligned with the proton spin will lead to different behaviour from the arrangement where the spins are anti-aligned.
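
To put numbers on that hydrogen example (a worked illustration of my own, not something in Azarian's piece): the two spin arrangements differ in energy by the hyperfine splitting,

ΔE = hν ≈ (4.14 × 10^-15 eV s) × (1.42 × 10^9 Hz) ≈ 5.9 × 10^-6 eV,

and an atom flipping from the aligned to the anti-aligned arrangement emits a photon of wavelength c/ν ≈ 21 cm, the famous 21-cm line that radio astronomers use to map hydrogen gas in galaxies. Even for a single atom, the arrangement alone changes what the system does.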

Towards the end of the piece, Azarian’s and my views are perhaps converging:

The bottom-up flow of causation is never disrupted — it is just harnessed and directed toward a goal. We still inhabit a cause-and-effect cosmos, but now the picture is more nuanced, with high-level causes being every bit as real as the low-level ones.

Agreed. Azarian then explains that, when it comes to biological systems, what matters for behaviour is the macrostate of the system (the high-level “pattern”) rather than the microstate (all the low-level details).

Yes, agreed. But that is also just as true in physics. Analysing a physical system in terms of its macrostate (where many different microstates could constitute the same macrostate) is exactly the mode of analysis of “statistical mechanics”, which is at the heart of modern physics.
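
A minimal worked example of that counting (my illustration, standard textbook fare rather than anything in Azarian's piece): for N two-state spins, the macrostate “n spins up” corresponds to

Ω(N, n) = N! / (n!(N − n)!)

distinct microstates, and Boltzmann’s entropy is S = k_B ln Ω. For just N = 100 spins, the macrostate “50 up” is realised by roughly 10^29 different microstates, every one of which gives the same macroscopic description.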

Azarian quotes neurogeneticist Kevin Mitchell in (correctly) saying that:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate …

But he goes slightly wrong in quoting neuroscientist Erik Hoel saying:

Recent research has argued exactly this by demonstrating the possibility of causal emergence: when a macroscale contains more information and does more causal work than its underlying microscale.

The macrostate cannot contain more information (and cannot “do more causal work”) than the underlying microstate, since one can reconstruct the macrostate from the microstate. Again, that is the point of the Star Trek thought experiment, and if that is wrong then we’ll need to overturn a lot of science. (Though no-one has ever given a coherent account of why it would be wrong, or of how things would work instead.)

So, where are we? I basically agree with the view that Azarian arrives at. So maybe we’re just disagreeing about what terms such as “reductionism”, “strong/weak emergence” and “top-down” causation actually mean. It wouldn’t be the first time! As I see it, though, this view is not a new and novel way of thinking, but is pretty much what the hidebound, reductionist physicists (like myself) have been advocating all along. None of us ever thought that a living human brain behaves the same as that same brain put through a food blender.

Does quantum indeterminism defeat reductionism?

After I wrote a piece on the role of metaphysics in science, as a reply to neuroscientist Kevin Mitchell, he pointed me to several of his articles, including one on reductionism and determinism. I found this interesting since I hadn’t really thought about the interplay of the two concepts. Mitchell argues that if the world is intrinsically indeterministic (which I think it is), then that defeats reductionism. We likely agree on much of the science, and on how the world is, but nevertheless I largely disagree with his article.

Let’s start by clarifying the concepts. Reductionism asserts that, if we knew everything about the low-level status of a system (that is, everything about the component atoms and molecules and their locations), then we would have enough information to — in principle — completely reproduce the system, such that a reproduction would exhibit the same high-level behaviour as the original system. Thus, suppose we had a Star-Trek-style transporter device that knew only about (but everything about) low-level atoms and molecules and their positions. We could use it to duplicate a leopard, and the duplicated leopard would manifest the same high-level behaviour (“stalking an antelope”) as the original, even though the transporter device knows nothing about high-level concepts such as “stalking” or “antelope”.

As an aside, philosophers might label the concept I’ve just defined as “supervenience”, and might regard “reductionism” as a stronger thesis about translations between the high-level concepts such as “stalking” and the language of physics at the atomic level. But that type of reductionism generally doesn’t work, whereas reductionism as I’ve just defined it does seem to be how the world is, and much of science proceeds by assuming that it holds. While this version of reductionism does not imply that explanations at different levels can be translated into each other, it does imply that explanations at different levels need to be mutually consistent, and ensuring that is one of the most powerful tools of science.

Our second concept, determinism, then asserts that if we knew the entire and exact low-level description of a system at time t  then we could — in principle — compute the exact state of the system at time t + 1. I don’t think the world is fully deterministic. I think that quantum mechanics tells us that there is indeterminism at the microscopic level. Thus, while we can compute, from the prior state, the probability of an atom decaying in a given time interval, we cannot (even in principle) compute the actual time of the decay. Some leading physicists disagree, and advocate for interpretations in which quantum mechanics is deterministic, so the issue is still an open question, but I suggest that indeterminism is the current majority opinion among physicists and I’ll assume it here.
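
In equation form (a standard textbook statement, added here only for concreteness): for a nucleus with mean lifetime τ, quantum mechanics gives the probability of decay within a time t as

P(decay by t) = 1 − e^(−t/τ),

which fixes the statistics of a large sample exactly, while saying nothing (even in principle) about when any individual nucleus will actually decay.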

This raises the question of whether indeterminism at the microscopic level propagates to indeterminism at the macroscopic level of the behaviour of leopards. The answer is likely yes, to some extent. A thought experiment of coupling a microscopic trigger to a macroscopic device (such as the decay of an atom triggering a device that kills Schrodinger’s cat) shows that this is in-principle possible. On the other hand, using thermodynamics to compute the behaviour of steam engines (and totally ignoring quantum indeterminism) works just fine, because in such scenarios one is averaging over an Avogadro’s number of particles and, given that Avogadro’s number is very large, that averages over all the quantum indeterminacy.
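
A rough estimate shows why (my back-of-envelope sketch): for N independently fluctuating particles, relative fluctuations in bulk quantities scale as 1/√N, so with

N ≈ 6 × 10^23  ⇒  1/√N ≈ 1.3 × 10^-12,

the quantum dice-throwing shows up only at the parts-per-trillion level, far below anything a steam-engine calculation cares about.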

What about leopards? The leopard’s behaviour is of course the product of the state of its brain, acting on sensory information. Likely, quantum indeterminism is playing little or no role in the minute-by-minute responses of the leopard. That’s because, in order for the leopard to have evolved, its behaviour, its “leopardness”, must have been sufficiently under the control of genes, and genes influence brain structures on the developmental timescale of years. On the other hand, leopards are all individuals. While variation in leopard brains derives partially from differences in each individual’s genes, Kevin Mitchell tells us in his book Innate that development is a process involving much chance variation. Thus quantum indeterminacy at a biochemical level might be propagating into differences in how a mammal brain develops, and thence into the behaviour of individual leopards.

That’s all by way of introduction. So far I’ve just defined and expounded on the concepts “reductionism” and “determinism” (but it’s well worth doing that since discussion on these topics is bedeviled by people interpreting words differently). So let’s proceed to why I disagree with Mitchell’s account.

He writes:

For the reductionist, reality is flat. It may seem to comprise things in some kind of hierarchy of levels – atoms, molecules, cells, organs, organisms, populations, societies, economies, nations, worlds – but actually everything that happens at all those levels really derives from the interactions at the bottom. If you could calculate the outcome of all the low-level interactions in any system, you could predict its behaviour perfectly and there would be nothing left to explain.

There is never only one explanation of anything. We can always give multiple different explanations of a phenomenon — certainly for anything at the macroscopic level — and lots of different explanations can be true at the same time, so long as they are all mutually consistent. Thus one explanation of a leopard’s stalking behaviour will be in terms of the firings of neurons and electrical signals sent to muscles. An equally true explanation would be that the leopard is hungry.

Reductionism does indeed say that you could (in principle) reproduce the behaviour from a molecular-level calculation, and that would be one explanation. But there would also be other equally true explanations. Nothing in reductionism says that the other explanations don’t exist or are invalid or unimportant. We look for explanations because they are useful in that they enable us to understand a system, and as a practical matter the explanation that the leopard is hungry could well be the most useful. The molecular-level explanation of “stalking” is actually pretty useless, first because it can’t be done in practice, and second because it would be so voluminous and unwieldy that no-one could assimilate or understand it.

As a comparison, chess-playing AI bots are now vastly better than the best humans and can make moves that grandmasters struggle to understand. But no amount of listing of low-level computer code would “explain” why sacrificing a rook for a pawn was strategically sound — even given that, you’d still have all the explanation and understanding left to achieve.

So reductionism does not do away with high-level analysis. But — crucially — it does insist that high-level explanations need to be consistent with and compatible with explanations at one level lower, and that is why the concept is central to science.

Mitchell continues:

In a deterministic system, whatever its current organisation (or “initial conditions” at time t) you solve Newton’s equations or the Schrodinger equation or compute the wave function or whatever physicists do (which is in fact what the system is doing) and that gives the next state of the system. There’s no why involved. It doesn’t matter what any of the states mean or why they are that way – in fact, there can never be a why because the functionality of the system’s behaviour can never have any influence on anything.

I don’t see why that follows. Again, understanding, explanations and “why?” questions can apply just as much to a fully reductionist and deterministic system. Let’s suppose that our chess-playing AI bot is fully reductionist and deterministic. Indeed, they generally are, since we build computers and other devices sufficiently macroscopically that they average over quantum indeterminacy. That’s because determinism serves the purpose: we want the machine to make moves based on an evaluation of the position and the rules of chess, not to make random moves based on quantum dice throwing.

But, in reply to “why did the (deterministic) machine sacrifice a rook for a pawn” we can still answer “in order to clear space to enable the queen to invade”. Yes, you can also give other explanations, in terms of low-level machine code and a long string of 011001100 computer bits, if you really want to, but nothing has invalidated the high-level answer. The high-level analysis, the why? question, and the explanation in terms of clearing space for the queen, all still make entire sense.

I would go even further and say you can never get a system that does things under strict determinism. (Things would happen in it or to it or near it, but you wouldn’t identify the system itself as the cause of any of those things).

Mitchell’s thesis is that you only have “causes” or an entity “doing” something if there is indeterminism involved. I don’t see why that makes any difference. Suppose we built our chess-playing machine to be sensitive to quantum indeterminacy, so that there was added randomness in its moves. The answer to “why did it sacrifice a rook for a pawn?” could then be “because of a chance quantum fluctuation”. Which would be a good answer, but Mitchell is suggesting that only un-caused causes actually qualify as “causes”. I don’t see why this is so. The deterministic AI bot is still the “cause” of the move it computes, even if it itself is entirely the product of prior causation, and back along a deterministic chain. As with explanations, there is generally more than one “cause”.

Nothing about either determinism or reductionism has invalidated the statements that the chess-playing device “chose” (computed) a move, causing that move to be played, and that the reason for sacrificing the rook was to create space for the queen. All of this holds in a deterministic world.

Mitchell pushes further the argument that indeterminism negates reductionism:

For that averaging out to happen [so that indeterminism is averaged over] it means that the low-level details of every particle in a system are not all-important – what is important is the average of all their states. That describes an inherently statistical mechanism. It is, of course, the basis of the laws of thermodynamics and explains the statistical basis of macroscopic properties, like temperature. But its use here implies something deeper. It’s not just a convenient mechanism that we can use – it implies that that’s what the system is doing, from one level to the next. Once you admit that, you’ve left Flatland. You’re allowing, first, that levels of reality exist.

I agree entirely, though I don’t see that as a refutation of reductionism. At least, it doesn’t refute forms of reductionism that anyone holds or defends. Reductionism is a thesis about how levels of reality mesh together, not an assertion that all science, all explanations, should be about the lowest levels of description, and only about the lowest levels.

Indeterminism does mean that we could not fully compute the exact future high-level state of a system from the prior, low-level state. But then, under indeterminism, we also could not always predict the exact future high-level state from the prior high-level state. So, “reductionism” would not be breaking down: it would still be the case that a low-level explanation has to mesh fully and consistently with a high-level explanation. If indeterminacy were causing the high-level behaviour to diverge, it would have to feature in both the low-level and high-level explanations.

Mitchell then makes a stronger claim:

The macroscopic state as a whole does depend on some particular microstate, of course, but there may be a set of such microstates that corresponds to the same macrostate. And a different set of microstates that corresponds to a different macrostate. If the evolution of the system depends on those coarse-grained macrostates (rather than on the precise details at the lower level), then this raises something truly interesting – the idea that information can have causal power in a hierarchical system …

But there cannot be a difference in the macrostate without a difference in the microstate. Thus there cannot be indeterminism that depends on the macrostate but not on the microstate. At least, we have no evidence that that form of indeterminism actually exists. If it did, that would indeed defeat reductionism and would be a radical change to how we think the world works.

It would be a form of indeterminism under which, if we knew everything about the microstate (but not the macrostate), then we would have less ability to predict the state at time t + 1 than if we knew the macrostate (but not the microstate). But how could that be? How could we not know the macrostate? The idea that we could know the exact microstate at time t but not be able to compute (even in principle) the macrostate at the same time t (so before any non-deterministic events could have happened) would indeed defeat reductionism, but is surely a radical departure from how we think the world works, and is not supported by any evidence.

But Mitchell does indeed suggest this:

The low level details alone are not sufficient to predict the next state of the system. Because of random events, many next states are possible. What determines the next state (in the types of complex, hierarchical systems we’re interested in) is what macrostate the particular microstate corresponds to. The system does not just evolve from its current state by solving classical or quantum equations over all its constituent particles. It evolves based on whether the current arrangement of those particles corresponds to macrostate A or macrostate B.

But this seems to conflate two ideas:

1) In-principle computing/reproducing the state at time t + 1 from the state at time t (determinism).

2) In-principle computing/reproducing the macrostate at time t from the microstate at time t (reductionism).

Mitchell’s suggestion is that we cannot compute: {microstate at time t } ⇒ {macrostate at time t + 1 }, but can compute: {macrostate at time t } ⇒ {macrostate at time t + 1 }. (The latter follows from: “What determines the next state … is [the] macrostate …”.)

And that can (surely?) only be the case if one cannot compute: {microstate at time t } ⇒ {macrostate at time t }, and if we are denying that then we’re denying reductionism as an input to the argument, not as a consequence of indeterminism.

Mitchell draws the conclusion:

In complex, dynamical systems that are far from equilibrium, some small differences due to random fluctuations may thus indeed percolate up to the macroscopic level, creating multiple trajectories along which the system could evolve. […]

I agree, but consider that to be a consequence of indeterminism, not a rejection of reductionism.

This brings into existence something necessary (but not by itself sufficient) for things like agency and free will: possibilities.

As someone who takes a compatibilist account of “agency” and “free will” I am likely to disagree with attempts to rescue “stronger” versions of those concepts. But that is perhaps a topic for a later post.

Scientism: Part 4: Reductionism

This is the Fourth Part of a review of Science Unlimited? The Challenges of Scientism, edited by Maarten Boudry and Massimo Pigliucci. See also Part 1: Pseudoscience, Part 2: The Humanities, and Part 3: Philosophy.

Reductionism is a big, bad, bogey word, usually uttered by those accusing others of holding naive and simplistic notions. The dominant opinion among philosophers is that reductionism does not work, whereas scientists use reductionist methods all the time and see nothing wrong with doing so.

That paradox is resolved by realising that “reductionism” means very different things to different people. To scientists it is an ontological thesis. It says that if one exactly replicates all the low-level ontology of a complex system, then all of the high-level behaviour would be entailed. Thus there cannot be a difference in high-level behaviour without there being a low-level difference (if someone is thinking “I fancy coffee” instead of “I fancy tea”, then there must be a difference in patterns of electrical signals swirling around their neurons). Continue reading

How not to defend humanistic reasoning

Sometimes the attitudes of philosophers towards science baffle me. A good example is the article Defending Humanistic Reasoning by Paul Giladi, Alexis Papazoglou and Giuseppina D’Oro, recently in Philosophy Now.

Why did Caesar cross the Rubicon? Because of his leg movements? Or because he wanted to assert his authority in Rome over his rivals? When we seek to interpret the actions of Caesar and Socrates, and ask what reasons they had for acting so, we do not usually want their actions to be explained as we might explain the rise of the tides or the motion of the planets; that is, as physical events dictated by natural laws. […]

The two varieties of explanation appear to compete, because both give rival explanations of the same action. But there is a way in which scientific explanations such as bodily movements and humanistic explanations such as motives and goals need not compete.

This treats “science” as though it stops where humans start. Science can deal with the world as it was before humans evolved, but at some point humans came along and — for unstated reasons — humans are outside the scope of science. This might be how some philosophers see things but the notion is totally alien to science. Humans are natural products of a natural world, and are just as much a part of what science can study as anything else.

Yes of course we want explanations of Caesar’s acts in terms of “motivations and goals” rather than physiology alone — is there even one person anywhere who would deny that? But nothing about human motivations and goals is outside the proper domain of science. Continue reading

Reductionism and Unity in Science

One problem encountered when physicists talk to philosophers of science is that we are, to quote George Bernard Shaw out of context, divided by a common language. A prime example concerns the word “reductionism”, which means different things to the two communities.

In the 20th Century the Logical Positivist philosophers were engaged in a highly normative program of specifying how they thought academic enquiry and science should be conducted. In 1961, Ernest Nagel published “The Structure of Science”, in which he discussed how high-level explanatory concepts (those applying to complex ensembles, and thus as used in biology or the social sciences) should be related to lower-level concepts (as used in physics). He proposed that theories at the different levels should be closely related and linked by explicit and tightly specified “bridge laws”. This idea is what philosophers call “inter-theoretic reductionism”, or just “reductionism”. It is a rather strong thesis about linkages between different levels of explanation in science.

To cut a long story short, Nagel’s conception does not work; nature is not like that. Amongst philosophers, Jerry Fodor has been influential in refuting Nagel’s reductionism as applied to many sciences. He called the sciences that cannot be Nagel-style reduced to lower-level descriptions the “special sciences”. This is a rather weird term to use since all sciences turn out to be “special sciences” (Nagel-style bridge-law reductionism does not always work even within fundamental particle physics, for which see below), but the term is a relic of the original presumption that a failure of Nagel-style reductionism would be the exception rather than the rule.

For the above reasons, philosophers of science generally maintain that “reductionism” (by which they mean Nagel’s strong thesis) does not work, and on that they are right. They thus hold that physicists (who generally do espouse and defend a doctrine of reductionism) are naive in not realising that.

“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.”     — Paul Dirac, 1929 [1]

The problem is, the physicists’ conception of reductionism is very different. Physicists are, for the most part, blithely unaware of the above debate within philosophy, since the ethos of Nagel-style reductionism did not come from physics and was never a live issue within physics. Physicists have always been pragmatic and have adopted whatever works, whatever nature leads them to. Thus, where nature leads them to Nagel-style bridge laws physicists will readily adopt them, but on the whole nature is not like that.

The physicists’ conception of “reductionism” is instead what philosophers would call “supervenience physicalism”. This is a vastly weaker thesis than Nagel-style inter-theoretic reduction. The physicists’ thesis is ontological (about how the world is) in contrast to Nagel’s thesis which is epistemological (about how our ideas about the world should be). Continue reading

Basics of scientism: the web of knowledge

A common criticism of science is that it must make foundational assumptions that have to be taken on faith. It is, the critic asserts, just one world view among other, equally “valid”, world views that are based on different starting assumptions. Thus, the critic declares, science adopts naturalism as an axiom of faith, whereas a religious view is more complete in that it also allows for supernaturalism.

This argument assumes a linear view of knowledge, in which one starts with basic assumptions and builds on them using reason and evidence. The fundamentals of logic, for example, are part of the basic assumptions, and these cannot be further justified, but are simply the starting points of the system.

Under scientism this view is wrong. Instead, all knowledge should be regarded as a web of inter-related ideas that are adopted in order that the overall web best models the world that we experience through sense data.

Any part of this web of ideas can be examined and replaced, if replacing it improves the overall match to reality. Even basic axioms of maths and logic can be evaluated, and thus they are ultimately accepted for empirical reasons, namely that they model the real world.

This view of knowledge was promoted by the Vienna Circle philosophers such as Otto Neurath, who gave the metaphor of knowledge being a raft floating at sea, where any part of it may be replaced. As worded by Quine: Continue reading