Reductionism and Unity in Science

One problem encountered when physicists talk to philosophers of science is that we are, to quote George Bernard Shaw out of context, divided by a common language. A prime example concerns the word “reductionism”, which means different things to the two communities.

In the 20th Century the Logical Positivist philosophers were engaged in a highly normative program of specifying how they thought academic enquiry and science should be conducted. In 1961, Ernest Nagel published “The Structure of Science”, in which he discussed how high-level explanatory concepts (those applying to complex ensembles, and thus as used in biology or the social sciences) should be related to lower-level concepts (as used in physics). He proposed that theories at the different levels should be closely related and linked by explicit and tightly specified “bridge laws”. This idea is what philosophers call “inter-theoretic reductionism”, or just “reductionism”. It is a rather strong thesis about linkages between different levels of explanation in science.

To cut a long story short, Nagel’s conception does not work; nature is not like that. Amongst philosophers, Jerry Fodor has been influential in refuting Nagel’s reductionism as applied to many sciences. He called the sciences that cannot be Nagel-style reduced to lower-level descriptions the “special sciences”. This is a rather weird term to use since all sciences turn out to be “special sciences” (Nagel-style bridge-law reductionism does not always work even within fundamental particle physics, for which see below), but the term is a relic of the original presumption that a failure of Nagel-style reductionism would be the exception rather than the rule.

For the above reasons, philosophers of science generally maintain that “reductionism” (by which they mean Nagel’s strong thesis) does not work, and on that they are right. They thus hold that physicists (who generally do espouse and defend a doctrine of reductionism) are naive in not realising this.

“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.”     — Paul Dirac, 1929 [1]

The problem is, the physicists’ conception of reductionism is very different. Physicists are, for the most part, blithely unaware of the above debate within philosophy, since the ethos of Nagel-style reductionism did not come from physics and was never a live issue within physics. Physicists have always been pragmatic and have adopted whatever works, whatever nature leads them to. Thus, where nature leads them to Nagel-style bridge laws physicists will readily adopt them, but on the whole nature is not like that.

The physicists’ conception of “reductionism” is instead what philosophers would call “supervenience physicalism”. This is a vastly weaker thesis than Nagel-style inter-theoretic reduction. The physicists’ thesis is ontological (about how the world is) in contrast to Nagel’s thesis which is epistemological (about how our ideas about the world should be).

The ontological doctrine of supervenience physicalism does have one epistemological implication, however, namely that explanations at different levels need to be fully consistent with each other. That requirement for consistency (the simple absence of anything actually contradictory) is a vastly weaker doctrine than Nagel’s quest for “bridge laws”, yet it is still sufficiently profound that it underpins all of science and indeed constitutes a powerful tool for doing science.

Physicists regard supervenience-physicalism reductionism as an obviously true account of how nature is. Thus physicists often regard philosophers as being pretty weird for disputing and rejecting “reductionism”, not realising that a philosopher is likely interpreting the term rather differently. The two camps could thus be in pretty good agreement, while simultaneously regarding each other as wrong.

With that introduction, what I want to do here is present an account of reductionism within science, and explain why the concept underpins science and why it is such an important tool of science. For clarity, what I am talking about below is “reductionism” as physicists understand the term (so none of the below is about Nagel-style reductionism).

“It is quite true that, to the best of my judgment, the argumentation which applies to brutes holds equally good of men; and, therefore, that all states of consciousness in us, as in them, are immediately caused by molecular changes of the brain-substance.”     — T. H. Huxley, 1874 [2]

“Physicalism” is the idea that everything is made up of the sort of stuff studied by physics, namely particles such as electrons, protons and neutrinos. Everything else is aggregations and patterns of the low-level physical stuff. Physicalism thus denies the existence of a non-material “soul” or any stuff unlike that studied by physics.

“Supervenience” then says that the properties of the ensemble result from the properties of the low-level particles and the ways in which those particles interact with each other. Thus, if one were to replicate (or perfectly simulate) all of the low-level aspects of a system, then the ensemble-level behaviour would be entailed. Indeed, simulating the low-level aspects of a system in order to understand the behaviour of the ensemble is a common way of doing science, particularly nowadays when one can throw computing power at a problem.
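
As a toy illustration of that last point (my own sketch, in Python, with arbitrary numbers; nothing hangs on the details): give many simulated “molecules” a simple low-level rule, and a high-level regularity emerges that is nowhere stated in the rule itself.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                # number of simulated molecules (arbitrary)
positions = np.zeros(n)    # all molecules start at the origin

# Low-level rule: each molecule takes an independent random step of +/-1
# per tick. High-level behaviour: the cloud spreads diffusively, with an
# RMS width growing as sqrt(time) -- a macroscopic law entailed by, but
# never mentioned in, the microscopic rule.
t = 0
for checkpoint in (100, 400, 1600):
    while t < checkpoint:
        positions += rng.choice([-1.0, 1.0], size=n)
        t += 1
    rms = np.sqrt(np.mean(positions ** 2))
    print(f"t={checkpoint:5d}  rms spread={rms:7.1f}  rms/sqrt(t)={rms / np.sqrt(t):.2f}")
```

The final column stays close to 1: the ensemble obeys a square-root-of-time diffusion law that is a property of the whole cloud, not of any one molecule.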

The combination of “physicalism” and “supervenience” is generally referred to by scientists as “reductionism”, and is the notion that I discuss here. [3]

One can regard a particle-by-particle account of everything in an ensemble — a “low-level account” — as a complete description of the system in the sense that any ensemble-level behaviour would be reproducible from the low-level description. Thus, if one were to produce an exact atom-by-atom replication of a human being, then that replicated human would behave the same way, given the exact same environment.

However, such accounts would be so large and unwieldy that they are impossible to achieve and impossible to work with. That’s because there are of order 1,000,000,000,000,000,000,000,000 atoms in any object that you might pick up in your hand. Nobody could, or would want to, work with an account that laboriously listed and specified each of those atoms. Thus, we mostly work with “higher level” accounts, which regard patterns of particles as objects in their own right, and so describe the ensemble in terms of the properties and behaviour of the composite objects.

For example, we would describe a metal in terms such as “density”, “hardness”, “ductility”, and “conductivity”, rather than particle by particle. Such properties are properties of the ensemble, not of the constituents; indeed terms such as “ductility” and “conductivity” are not even defined as applied to a single particle.

A higher-level account is vastly more concise than a lower-level account. It achieves that by ignoring most of the information about the ensemble, and focusing only on the salient features, namely those features that the account is constructed to address. For example we ask about a metal’s conductivity when we want to build a device that will conduct electricity.

The reduced-information, higher-level accounts are inevitably not the whole truth — that’s the whole point of them — and thus they can at best be only approximately true. For example, it might be useful, and work well enough, to treat a fluid as a constant-density, continuous substance, and to simply ignore the atomic-level structure that such an account averages over. It isn’t fully true, but it can be sufficiently true to be useful.

Since the lower-level accounts and the higher-level accounts are about the same ensemble, all accounts at the different levels of description need to be fully consistent with each other. Further, if we are to fully understand the ensemble, then we need to be able to see how the different levels link to each other.

Science is often a matter of understanding how an account at one level relates to accounts one level higher and one level lower. When understanding a given level, it will rarely be the case that accounts from several levels higher or several levels lower will be useful, owing to the unwieldiness issue. But, if we can understand how a phenomenon arises from an account one level down, and then understand that level in terms of the level below that, then we have a “hierarchical reduction” of levels that binds the entire picture together.

The idea that the world consists of patterns of patterns of patterns of low-level particles is a profound statement about nature, and is central to modern science. Indeed, Richard Feynman began his classic Feynman Lectures on Physics by saying: [4]

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.

Having said all the above, let’s have some caveats and clarifications:

(1) “Reductionism” seems to be correct in describing how material is made up of molecules, which are made up of atoms, which are made up of nuclei and electrons, with the nuclei being made up of neutrons and protons, which are themselves made of quarks. But, it is not obvious that one can regard “space” as being composed of particles. This is the perennial problem that our theory of particles (quantum mechanics) is inconsistent with our theory of space and time (general relativity). I don’t know the answer to this one so I’ll just ignore it here.

(2) Given quantum indeterminism, a given low-level description can give rise to more than one high-level outcome, especially when differences are amplified by deterministic chaos. Thus, two exactly identical replicated systems will gradually diverge in their macroscopic behaviour. [5]

Nevertheless, the fact that reductionist tools do work very well in science tells us that, for many purposes, determinism is a good-enough approximation. Anyhow, there is nothing to stop one including some quantum dice-throwing in the low-level description, and seeing how that gives rise to higher-level phenomena. [6]
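
In that spirit, here is a minimal dice-throwing sketch (a toy along the lines of the simulation mentioned in note [6]; particle and step counts are arbitrary). Start with all the “gas molecules” in the left half of a box and let randomly chosen molecules wander between the halves; the one-way, second-law-like approach to the 50/50 macrostate emerges from pure chance at the low level.

```python
import random

random.seed(1)
N = 10_000                   # number of molecules (arbitrary)
in_left = [True] * N         # all molecules start in the left half

for step in range(1, 100_001):
    i = random.randrange(N)          # throw the dice: pick a molecule...
    in_left[i] = not in_left[i]      # ...and move it to the other half
    if step % 20_000 == 0:
        print(f"step {step:6d}: fraction on left = {sum(in_left) / N:.3f}")
```

The fraction drifts from 1.0 down to 0.5 and then merely fluctuates around it; the “arrow” is statistical, not built into the rule.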

(3) If one were to take a strictly literal interpretation of quantum mechanics, one might claim that the only valid way of proceeding is to write down a single wavefunction for the entire universe, with that wavefunction containing information about every particle in the universe in one go. For example, Bell’s theorem tells us that entangled particles exhibit correlations that no purely local account can reproduce. Thus, one might claim that reductionism is in-principle impossible, and that one can only ever consider the entire ensemble.

Of course no-one ever does write down the wavefunction for every particle in the universe. De facto, wavefunctions “decohere” sufficiently that one can ignore quantum entanglement and treat parts of an ensemble in isolation. If that were not true then no-one could ever calculate anything (and yet quantum mechanics is adopted expressly because one can make calculations that match reality to excellent precision). Thus scientific reductionism is very much a pragmatic doctrine, adopted because it works.

(4) Since the whole point of high-level accounts is that they are reduced-information accounts, it follows that any number of different low-level states (microstates) can map to the same high-level behaviour (macrostate). This is often called “multiple realisability”.

For example, when considering a gas in a container, the concepts “temperature” and “pressure” are useful summaries of the macrostate of the gas, but there are a vast number of different low-level lists of the positions and velocities of each gas molecule that would map to the same macrostate (more precisely, the concept “entropy” gives a careful accounting of how many microstates map to a given macrostate, and for everyday-world objects that number is very large).
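
To make that accounting concrete (a toy count, my own illustration; a real gas has vastly more molecules), label each molecule only by which half of the container it occupies. The macrostate “n molecules on the left” is realised by C(N, n) microstates, and the Boltzmann entropy is S = k ln W:

```python
from math import comb, log

N = 100                          # number of molecules (tiny, for illustration)
for n_left in (0, 25, 50):
    W = comb(N, n_left)          # microstates realising this macrostate
    S = log(W)                   # Boltzmann entropy S = k ln W, with k = 1
    print(f"n_left={n_left:3d}  W={W:.3e}  S={S:.1f}")
```

Even for a mere 100 molecules, the 50/50 macrostate is realised by about 10^29 microstates while the all-on-the-left macrostate is realised by exactly one; for everyday objects the disproportion is astronomically greater.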

(5) Since the whole point of high-level accounts is to be simplified and concise (and therefore useful), high-level accounts can at best be only approximately true. Thus, an account of, say, “the causes of the First World War” cannot be both concise and the whole truth; if it is concise then it can only be an approximation to the truth.

(6) In order for a low-level description to be complete we need a list of constituent particles and accounts of how such particles behave (the “dynamical rules”, often called “laws of physics”). But, we also need all the “starting conditions”, an account of where each particle is and what it is doing. In other words our complete low-level account entails a vast amount of local and historical contingency, beyond the basic laws of physics, and it is that which makes low-level accounts so unwieldy.

(7) Of course we never can have a complete low-level description (even leaving aside the limitations from quantum mechanics). Our knowledge will always be seriously incomplete, and anyhow a complete description would be too large to use. Thus in practice we will always work with a low-level description that is seriously incomplete and only an approximation to the truth.

In linking levels together, science is therefore linking approximately true and incomplete low-level descriptions to partially true high-level descriptions. But then science always is a matter of pragmatically doing the best one can.

(8) We do not know, a priori, whether the high-level descriptions will be naturally simple or naturally complex, and thus we cannot say, a priori, whether there will be concise explanatory statements linking behaviour at the different levels. It is up to nature to tell us how it behaves, and we can’t impose preconceptions about what sort of explanations we will accept. [7]

An example of complexity at the high level is the aforementioned “causes of the First World War”, which cannot be reduced to a simple and concise lower-level explanation (what it could be reduced to, in principle, is a vast number of historically contingent statements).

In contrast, an example of relative simplicity at the ensemble level, even for a huge ensemble of ∼10^53 or more particles, is the statement that planets and stars are approximately spherical.

[An aside: One can argue that there can be great complexity only at the mid-size level dominated by the electromagnetic force, since both positive and negative charges are necessary for constructing highly complex structures. All of biology is essentially sculpted by the electromagnetic force. In contrast, gravity dominates at larger ranges, and all large-scale patterns are essentially sculpted by gravity. Since gravity only adds (there is no negative mass) such patterns are necessarily simpler.]

(9) There is a standing joke about academics who say “… and ninthly …”, but this ninth clarification is the very important point that this form of reductionism is about ontology, about how nature actually is. In that sense, the lowest level quarks, leptons and photons of particle physics can be regarded as primary or fundamental, with everything else being a pattern of those particles, or a pattern of patterns of patterns of those particles.

But that is not a statement about epistemology, about how we find things out, and thus it is not a statement that, in order to understand something, we should begin at the particle level. As already stated, for most purposes the particle-level description is unusably unwieldy.

More fundamentally, as empiricists, we humans start finding things out by observing the world around us, and the things we see — other people, trees, hills — are themselves high-level patterns (remembering all those noughts above), and thus, epistemologically, our starting point is with high-level accounts of complex patterns of stuff.

From there we have to work downwards, epistemologically, to the levels of molecules and atoms and particles. We can also work upwards in levels of description, to concepts such as “ecosystem” or “continental drift”. Thus, in human epistemological terms, there is nothing “fundamental” or privileged about the low-level account; it is only “fundamental” in ontological terms.

“But phenomena like mind and life do emerge. The rules they obey are not independent truths, but follow from scientific principles at a deeper level; apart from historical accidents that by definition cannot be explained, the nervous systems of George and his friends have evolved to what they are entirely because of the principles of macroscopic physics and chemistry, which in turn are what they are entirely because of principles of the standard model of elementary particles. It is not so much that the reductionist world view helps us to understand George himself as that it rules out other sorts of understanding.”     — Steven Weinberg [7]

The job of science is to seek the explanations which give the most explanatory and predictive power about the world around us, starting with the human-scale information that we record with our senses, and moving outwards from there, both to the small scale and the ontologically fundamental, and to the larger scale of the whole universe and to overarching concepts.

When describing ensembles the job of science is to find the higher-level concepts that are most “natural” and thus most useful and explanatory. Thus emergent, higher-level concepts such as “temperature” and “density” are useful to physicists, while concepts such as “animal” and “species” are natural and useful to biologists.

The ontological doctrine of reductionism has one major consequence for empiricism, namely that the higher-level concepts must be in full concordance and consilience with lower levels of description. Thus, we achieve the best understanding when we can see why a description at one level entails the phenomena described at one level higher. Since reductionism says that nature is a unified whole, in which a small number of types of physical particle form nested hierarchical patterns upon patterns upon patterns, we should end up with a unified account of nature.

What one cannot do is have one form of description for one aspect of nature, and then a different and inconsistent description of another aspect of nature (a different but fully-consistent description is of course fine). Where we do have such inconsistency, as with quantum mechanics and general relativity, it is a problem that needs fixing.

Thus science achieves understanding when it links levels together. As an example, Darwin developed his Origin of Species using animal-level and species-level concepts, at a time when nothing was known of the underlying genetics. Today’s molecular-genetic accounts of evolution need to, and in fact do, fully accord with the higher-level accounts, which is one reason why we now have great confidence in today’s accounts of evolution.

The linking together of different levels of thinking, and different ways of thinking, and then trying to make them different aspects of the same thing, is a hugely powerful tool of science. Thus, if one wants to know how a star will evolve over time, one codes a lot of lower-level physics (properties of gases and nuclear reactions) into a computer, and then simply watches what the ensemble does. By simulating on a computer, speeded up hugely compared to real time, one can “observe” what in nature takes vastly longer than human lifetimes, or what could not fit into a laboratory.

Similarly, if one wants to understand hurricanes or weather systems or indeed climate change, one codes into a computer all the lower-level physics, including all the local-contingency stuff, and then watches what happens as the simulated ensemble evolves. While our low-level description will always be incomplete, it can be sufficiently accurate for the task in hand, such that the high-level phenomenon is adequately modelled. The art of such science is in judging how good the low-level simulation needs to be and how much such limitations are affecting the output.

This is how the physical sciences treat complex systems nowadays, ever since the advent of fast computers, and the technique relies entirely on the inter-consistency of different levels of description. To pick an example, for decades physicists calculating the large-scale properties of the Sun found results inconsistent with our knowledge of small-scale particle physics deduced on Earth. The long-running “solar neutrino problem” was eventually solved when it was realised that neutrinos, previously thought to be massless, actually had a small mass. The requirement that different parts of physics, and indeed science, must mesh seamlessly, thus becomes a powerful tool for improving our understanding. There our knowledge of the high-level account pointed to a flaw in our understanding of the low-level particle physics.

Feynman remarked that any good theoretical physicist knows six different ways of thinking about the same thing, and can thus address a problem from multiple angles. The approaches must be mutually consistent and complementary, in essence being different aspects of the same ensemble. Any good physics student, for example, knows that a dynamics problem might be answered by calculating the accelerations of an object, or by bypassing that calculation and applying the principle that the overall energy or momentum must be conserved.
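
A standard textbook instance of those two routes (my example, not anything specific to Feynman): a ball dropped from rest through a height h reaches the same final speed whichever way one computes it,

\[
\text{dynamics:}\;\; a = g,\;\; v = gt,\;\; h = \tfrac{1}{2}gt^{2} \;\Rightarrow\; v = \sqrt{2gh};
\qquad
\text{energy:}\;\; mgh = \tfrac{1}{2}mv^{2} \;\Rightarrow\; v = \sqrt{2gh}.
\]

The second route never mentions forces or accelerations at all, yet must (and does) agree with the first.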

Indeed, finding fundamental equivalences between different ways of thinking about the same thing is one of the glories of physics. A celebrated example was Emmy Noether’s proof that a conservation law is equivalent to the laws of physics being invariant under a given continuous transformation (thus conservation of energy is equivalent to physics being the same at different times, whereas conservation of momentum is equivalent to physics being the same in different places).
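
A sketch of the time-translation case (the standard classical-mechanics argument, included for illustration): for a Lagrangian L(q, q̇, t), define the energy function

\[
E \;=\; \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} \;-\; L,
\qquad\text{so that}\qquad
\frac{dE}{dt} \;=\; \sum_i \dot q_i\!\left[\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i}\right] \;-\; \frac{\partial L}{\partial t}.
\]

The bracket vanishes by the equations of motion, so if L has no explicit time dependence (physics is the same at all times) then dE/dt = 0: energy is conserved.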

Another example comes from quantum electrodynamics, where three theorists, Sin-Itiro Tomonaga, Julian Schwinger and Feynman, independently produced apparently different mathematical models for the behaviour of the electron. It was then shown that the three different models were, at a deep level, mathematically equivalent (and the three gentlemen then shared the 1965 Nobel Prize for Physics).

The binding together of different levels of description is one example of thinking about an issue in multiple different ways, and it is only when we can do that, and see how the different descriptions relate to each other, that we properly understand.

As further examples, physicists felt that they had understood the macroscopic concepts “temperature” and “heat” once they could explain them in terms of the random thermal motions of the individual particles. Similarly, other high-level concepts of macroscopic ensembles (collectively known as “thermodynamics”) were shown to arise from low-level concepts applied to particles (called “statistical mechanics”), to the extent that both are now often taught in the same undergraduate lecture course. The second law of thermodynamics was originally an observed fact about steam engines, but has now been given a precise formulation in terms of the statistics of individual particles.
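
The canonical link (standard kinetic theory, quoted for illustration) is that, for a monatomic ideal gas, the macroscopic temperature is fixed by the average kinetic energy of the molecules,

\[
\left\langle \tfrac{1}{2} m v^{2} \right\rangle \;=\; \tfrac{3}{2}\, k_{\mathrm{B}} T ,
\]

where the average runs over the particles of the ensemble: the “temperature” on the right summarises motion that belongs to no single particle.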

My last example of reductionism in physics concerns the mass of the proton. This is a good illustration of how reductionism works, even at the relatively fundamental level of particle physics.

If we think that we understand quantum chromodynamics, the theory of the quarks and gluons that make up a proton, then we should be able to calculate the mass of the proton, should we not? Yet the calculations are rather hard to do. Just because we know the low-level constituents, and know how they behave, doesn’t mean that it is easy to calculate the outcome. The problem is that gluons can create virtual particles, including other gluons and virtual quark–anti-quark pairs, such that the inside of a proton is a whole mess of virtual particles, and the interactions between that lot dominate the mass of the resulting nucleon.

The only way of calculating that is a laborious accounting of many millions of interactions, all of which affect each other, involving a large amount of supercomputer time. When a whole team of particle physicists eventually calculated the proton mass from lower-level theory it was both a notable triumph and a check of our understanding of quantum chromodynamics. [8]

Even in particle physics, even when dealing with the relatively simple system of a lone proton, there are no simple “bridge laws” relating its mass to the masses of its component parts. Even for that, one needs a fiendishly complex calculation. The only way to attain the answer is to brute-simulate it, throwing in everything one knows about the lower level and then asking a supercomputer to keep track of all the complications. Humans might desire neat and simple explanations and rules that would allow us to calculate the mass of a proton from the lower-level QCD in a few lines of algebra, but nature has its own ideas about whether to oblige.
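
To give a feel for what such brute simulation looks like, here is a toy analogue (emphatically not lattice QCD; a small two-dimensional Ising magnet with illustrative parameters). As with the proton mass, there is no short formula for the ensemble quantity; one encodes the local low-level rule and lets sheer sampling reveal the macroscopic answer.

```python
import math, random

random.seed(0)
L = 16                                   # lattice size (illustrative)
T = 1.8                                  # temperature, below T_c ~ 2.27
spins = [[1] * L for _ in range(L)]      # start fully magnetised

def neighbour_sum(i, j):
    # Sum of the four nearest-neighbour spins, with periodic boundaries.
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
            spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

for sweep in range(400):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2 * spins[i][j] * neighbour_sum(i, j)   # energy cost of a flip
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1                        # Metropolis acceptance

m = sum(map(sum, spins)) / (L * L)
print(f"mean magnetisation per spin at T={T}: {m:+.3f}")
```

The result lands near the exactly known value (about 0.96 at this temperature), but only by tracking every local interaction, update by update; the same moral as the proton-mass calculation, writ small.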

“The astonishing fact is that similar mathematics applies so well to planets and to clocks. It needn’t have been this way. We didn’t impose it on the Universe. That’s the way the Universe is. If this is reductionism, so be it.”     — Carl Sagan [9]

In conclusion, the concept of reductionism is one of the most powerful tools in science. Indeed the Medawars called it “the most successful research stratagem ever devised”. [10]

The requirement to bind together different layers of description is a powerful guide towards better theories. The need for full consistency between the levels probes our understanding and highlights any flaws or inadequacies. But, further, one can only claim to have understood a phenomenon if one can explain how it arises from the lower-level aspects of the system.

In short, science is successful because it has adopted reductionism. If we want to understand any aspect of our world, the best approach is to attempt to explain it in terms of lower-level behaviour. When we can’t do that we can’t claim to understand. For example, we currently don’t know how the “collapse of the wavefunction” arises in quantum mechanics in terms of a level below that (if there is one!), and thus we don’t yet properly understand quantum mechanics.

A vastly inferior approach is to regard any topic as compartmentalised and to attempt to understand it in isolation. Whether we are dealing with the proton’s mass or the human mind, the reductionist approach is an essential part of making progress. But bear in mind that scientific reductionism is purely an ontological thesis, and it gives no guarantees that the epistemology will be simple or easy, as the example of calculating the mass of the proton shows.

Notes:

[1] Paul Dirac, 1929, Proceedings of the Royal Society of London A, vol. 123, p. 714.

[2] T. H. Huxley, On the Hypothesis that Animals Are Automata, and Its History, 1874 (online here).

[3] As an example of how physicists understand the term, in two blog posts Sean Carroll associates “reductionism” with what is clearly supervenience physicalism (see Physicalist Anti-Reductionism, and Avignon Day 3: Reductionism).

[4] The Feynman Lectures on Physics are regarded as the most influential physics textbook ever. They are described here and are online here.

[5] This is a caveat regarding the common definition of supervenience as “no high-level difference without a low-level difference”. Given quantum indeterminacy, identical low-level states can diverge in behaviour, leading to a high-level difference.

[6] As an example of reproducing high-level behaviour by invoking dice-throwing, this simulation demonstrates the second law of thermodynamics using a computer “rand” function.

[7] Steven Weinberg labelled as “petty” reductionism the idea that there should always be simple and concise statements linking behaviour at one level to behaviour at another level (i.e., Nagel-style bridge laws). He stated that “petty reductionism is not worth a fierce defence” and remarked that sometimes it works, sometimes it doesn’t. He contrasts “petty” reductionism with “grand” reductionism, which he defends, and which is the thesis being discussed here. (Weinberg, Steven, Reductionism Redux, The New York Review of Books, October 5, 1995.)

[8] See: At Long Last, Physicists Calculate the Proton’s Mass, and Nuclear masses calculated from scratch. See also this article on the proton–neutron mass difference.

[9] Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark, 1995, Random House.

[10] Medawar, P. B. & Medawar, J. S., 1983, Aristotle to Zoos: A Philosophical Dictionary of Biology, Cambridge, MA: Harvard University Press.

9 thoughts on “Reductionism and Unity in Science”

  1. strongforce

    Very well stated. Excellent clarity and form. I wish our friends in philosophy would emulate it. I will use your essay as future reference on the topic.

  2. richardwein

    Hi Coel. Very good post. I pretty much agree with you, though I would express some things a bit differently.

    One point I’d like to pick up on…

    (3) If one were to take a strictly literal interpretation of quantum mechanics, one might claim that the only valid way of proceeding is to write down a single wavefunction for the entire universe, with that wavefunction containing information about every particle in the universe in one go. For example, Bell’s theorem tells us that entangled particles exhibit correlations that no purely local account can reproduce. Thus, one might claim that reductionism is in-principle impossible, and that one can only ever consider the entire ensemble.

    My response would be to say that “reductionism” (in any sense of that word) need not be committed to the idea that reality is fundamentally particulate. I for one think that it isn’t. I admit I’m pretty ignorant about fundamental physics, but FWIW I think that our particulate models are approximations, and not fundamental.

    1. Coel Post author

      Hi Richard,

      My response would be to say that “reductionism” (in any sense of that word) need not be committed to the idea that reality is fundamentally particulate.

      That’s an interesting comment. Certainly science has long proceeded on the basis of nature being particulate — with the one counter-example of the nature of gravity. I’d be interested in how one could conceive of reductionism or supervenience in a scenario where nature was not particulate.

    2. richardwein

      Hi Coel,

      I’d be interested in how one could conceive of reductionism or supervenience in a scenario where nature was not particulate.

      Sorry I haven’t got around to replying sooner. It’s a little difficult to respond without knowing why you think that supervenience does require particles to be fundamental.

      I would say that we model reality at various levels of abstraction, and roughly speaking the way things are at higher levels (i.e. as described by appropriate higher level models) is determined by the way things are at lower levels (i.e. as described by appropriate lower level models). I say “roughly speaking”, because our models are only approximations. If there is a perfectly precise “final theory” to be discovered (about which I’m undecided), then I suppose that the way things are at that final, fundamental level determines the way things are at all the higher levels. That’s what supervenience and physicalism mean to me.

      Reality could be fundamentally continuous, but in a way that makes particle-based models a very good approximation. Isn’t the quantum wave function a continuous model below the level of particles? (I admit to knowing very little about quantum theory, so I could easily be mistaken about that.)

    3. Coel Post author

      Hi Richard,

      It’s a little difficult to respond without knowing why you think that supervenience does require particles to be fundamental.

      As I see it, a “low level” description is one that describes a system piece by piece, particle by particle, and a “high level” description is one that describes the ensemble. This would, it seems to me, only work if macroscopic objects were ensembles of microscopic objects.

      If a macroscopic object had no component parts, and could only be described as a whole, then I’m not sure what it would mean to apply supervenience physicalism to it.

      Reality could be fundamentally continuous, but in a way that makes particle-based models a very good approximation.

      Yes, ok, though if the particle-based models are a very good approximation then, in some sense, reality would be particulate. To be clear here, one doesn’t need to regard the particles as “ultimate”, or point-like (they could still have finite extent), but, for supervenience physicalism to work, it would need to make sense to describe large objects in terms of a large number of much-smaller objects.

      Isn’t the quantum wave function a continuous model below the level of particles?

      Yes, that’s true. Formally wavefunctions and quantum fields are spatially extended and stretch to infinity. In practice, though, their effects are fairly localised when doing a calculation. (One can argue about the ontological status of wavefunctions and quantum fields, and ask whether they are “entities” or just mathematical constructs that can be used to calculate results that match observations.)

  3. Philosopher Eric

    I agree entirely Coel — technically this great dispute must simply concern separate definitions for the term “reduction.” But then why has something with such a simple solution, been permitted to get so out of hand? Well perhaps because separate groups do naturally tend to challenge each other. Observe that modern philosophy should carry quite a chip on its shoulder, given its dearth of generally accepted understandings — this should make for quick provocation from dismissive scientists. But these blanket dismissals should be reprimanded no less, I think, since philosophers have at least been asking such questions (though yes, failing to reach generally accepted answers).

    The only hope I see to resolve this standoff should come through earnest diplomacy. Otherwise I suspect that the defensiveness of philosophers will bar them from thinking in innovative enough ways to reach various generally accepted understandings. And if it’s true that all of reality remains ontologically connected, then this should bode ill for “special” sciences like psychology, which seem quite in need of philosophical breakthroughs.

  4. basicrulesoflife

    “Thus, if one were to produce an exact atom-by-atom replication of a human being, then that replicated human would behave the same way, given the exact same environment.”
    “(2) Given quantum indeterminism, a given low-level description can give rise to more than one high-level outcome, especially when differences are amplified by deterministic chaos. Thus, two exactly identical replicated systems will gradually diverge in their macroscopic behaviour. [5]”
    The first sentence seems to be dubious. The second gives an excellent answer for discussions about the behaviour of copies in AI.
    Imants Vilks
