Scientia Salon recently published my article advocating that mathematics is best regarded as a part of science. In reply to “scientism week”, Massimo Pigliucci wrote an article criticising “the return of radical empiricism”. The collision of “scientism week” with “anti-scientism week” generated a lot of energy and comments!

Massimo Pigliucci’s article is well worth reading, being a clear exposition of the relevant ideas. He traces the issues back to Hume’s famous fork, in which Hume declares that:

All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas, and Matters of fact and real existence.

The “relations of ideas” category is taken to include mathematics and logic, where knowledge is “discoverable by the mere operation of thought”, while the “matters of fact” category contains science, where knowledge derives from empirical data.

Kant rejected Hume’s empiricism and sought to establish the primacy of reason. He adopted the term “*a priori*” for knowledge that does not derive from experience, in contrast to “*a posteriori*” knowledge which does. A related concept is that of “analytic” statements, which follow from the definitions of the terms, contrasting with “synthetic” statements that describe how the world is.

This notion of a fundamental epistemological divide holds today, and is at the heart of resistance to the idea that mathematics, logic and science are a unified whole.

In reading Pigliucci’s article I agree with much of what he says, but, to me, he seems to miss the main arguments for the essential unity of the different domains of knowledge. I will thus outline how I see the roots of empiricism, and then consider the supposed divide between knowledge “from reasoning” versus knowledge “from observation”.

Let’s start with a primitive animal evolving by natural selection. Such an animal might develop senses to capture information about the world around it. That could help it in many ways, by learning about food or predators or potential mates. The sensory information enters the animal, but what to do with it? Simply feeding the signal to a motor organ might suffice for the simplest of behaviours (moving towards light, say), but as the animal gets more complex it is advantageous to develop more complicated responses to sensory input.

Thus, the animal evolves a neural network between the sensory inputs and the motor outputs. This is a complex wiring pattern that allows multiple input signals to be processed in complex ways, enabling a complex set of responses. The neural network is said to make “a decision” about “what to do” given any set of inputs. Indeed, the whole point of brains is to make such decisions.
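As a toy illustration (nothing more than a sketch, with invented weights), here is the simplest possible version of such a decision-maker in code:

```python
# A minimal "sense -> process -> act" unit with made-up weights.
# The weights stand in for the evolved/learned wiring of the network.
def decide(light_level, predator_scent, weights=(1.5, -4.0)):
    # Weighted combination of the sensory inputs drives the action.
    activation = weights[0] * light_level + weights[1] * predator_scent
    return "approach" if activation > 0 else "flee"

print(decide(0.8, 0.0))  # bright, no predator: "approach"
print(decide(0.8, 0.9))  # predator scent dominates: "flee"
```

A real brain is vastly more complicated, but the shape of the computation (inputs weighted, combined and turned into a decision) is the same.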

In order for highly complex animals to make good decisions, that neural-network decision-making brain must contain knowledge about how the world works. It needs to have internal information that, for example, amounts to knowing that the prey animal bounding away is good to eat, that the lion with sharp teeth would be best avoided, and that the big man with a spear might harm you if he gets angry.

That information is encoded in our neural network in the pattern of synaptic connections between neurons, all hundred thousand billion of them. It gets there by a process with two entwined aspects: a genetic recipe, and a development program. During childhood and throughout our life the brain is bombarded with sensory information from the surrounding world, information which trains the neural network about the world.

But that alone would get you nowhere (as is obvious if you consider streaming the same sensory information at an inert rock). You also need a capability and recipe for what to do with the information, and that is provided by the genes. The genes get that capability from natural selection. Consider random mutations that affect the wiring of the brain: whenever a mutation comes along that causes the network to make better decisions (“better” in terms of survival and reproduction) it will tend to become fixed in the population.

It is reasonably obvious that in order to make good decisions the “world model” contained in the neural network must be a good match to the real world. It needn’t be a perfect match, and it could be a poor match about aspects of the world that had no relevance to survival and reproduction over our evolutionary history (which is likely why we find quantum mechanics very counter-intuitive), but it would have to be a good-enough match to the real world as we experience it in everyday life.

Thus we have a “world model” encoded in a complex neural network, with that network taking in sensory information and making decisions that are then passed to our muscles to act upon.

That world model is derived entirely from contact with the real world, from information attained by sensory organs, and by the natural selection of ideas that best match the real world. All of our intuitive knowledge is encoded in that neural network, and those are the only processes by which it got there.

Two points then follow from how neural networks work. First, in a neural network, knowledge cannot be rigidly compartmentalised. Changes to one connection of a neural network have an influence over a wide area of the network. No idea is localised; rather, all ideas are encoded in a distributed web. Thus the ideas all interact and cannot be regarded as independent.

Second, any aspect of the wiring pattern of a neural network can in principle be changed, and thus any of the ideas encoded in it can be changed. It is not the case that knowledge must be hierarchical, starting with a foundation of basic truths, and then with other truths being built on those, in multiple layers where the earlier layers become set and unchangeable. A neural network does not work by layers, it works as a highly interconnected web. Thus there are no “basic” versus “non-basic” truths in the architecture of a neural network.
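A toy example makes the point (illustrative numbers only): in even a tiny weighted unit, perturbing a single connection shifts the response to many different inputs at once, so no single idea lives in one place.

```python
# Distributed encoding in miniature: change ONE weight and the
# responses to MANY different input patterns change together.
def unit(inputs, weights):
    # one output: weighted sum over all inputs
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0.5, -0.2, 0.8]
patterns = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]

before = [unit(p, weights) for p in patterns]
weights[2] += 0.3                      # perturb a single connection
after = [unit(p, weights) for p in patterns]

changed = sum(1 for b, a in zip(before, after) if b != a)
print(changed)  # 2 of the 3 patterns now get a different response
```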

The ideas of a “world model” thus act as a team, and any part of the team can be changed if the overall result is better. As an analogy, a football coach can swap out any player of his team to produce a better overall performance. “Better performance” here means better modelling of the world.

How is this swapping out achieved? Partially, it is achieved by evolution. If a mutation changes the recipe such that the wiring is changed such that the “world model” performs better, then the mutation will become fixed in the population. But it is also achieved over our childhood and through lifetime experience.

One more concept: The representation of the real world inside the brain will not, of course, be a duplicate of the real world, it will be a model of it. That means it is a set of abstracted ideas that, put together, give a sufficiently good simulation of the world.

What do we mean by an abstracted concept? One can regard the raw sensory input from our visual sensors as a pixel-by-pixel list of photon-arrival events. Obviously this information is unwieldy and useless in its raw form, so it is processed into “concepts”, which are condensed versions of the full information stream. Ideas such as “earth”, “sky”, “tree”, “person”, “distance”, “time”, “weight” are all concepts that compress the full richness of the external world into manageable and usable abstractions.

One can only do that if the natural world contains regularities. Examples include the day/night and yearly cycles. Another example is that of species: if we have a concept of “sheep” then we don’t need a representation of each individual sheep. Further examples are what we call “laws of nature” (if you walk off a cliff you’ll fall and go splat) and ideas such as mathematics (enabling you to count sheep) and logic (basic ideas that are necessary to understand the world, even if some of them are so obvious that you don’t consciously think of them).

How do we gain knowledge in our day-to-day life? The method, in its most formal version, is called the scientific method, though this really is just a refinement of the sort of thing we always do to learn things.

If we want to “know” something we simply consult our “world model”, our neural network. That is what it is for, to tell us about how the world works, to enable us to make decisions. So we simply “think about it” and we know the answer. But how do we test whether our “world model” is correct, and that our “thinking about it” is giving a good answer? Easy, we compare the output of “thinking about it” to what does actually happen! If it doesn’t match we update our model, swapping out some aspect and replacing it.

The most rigorous test of the model is by predicting things we don’t already know. So, we predict a property of the world. We obtain whatever information about the world that we need, process that through our “world model”, and then predict, say, that a solar eclipse will happen next week. We can then test whether the prediction works. If it doesn’t there is something wrong with our model (or, possibly with our observed information, so we do it a few times to try to check that it really is the model at fault). So we change the model — swap out a player — and try again.
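The loop of predict, compare and revise can be sketched in a few lines (a purely illustrative one-parameter "world model" of free fall, with invented numbers):

```python
# Toy predict-test-revise loop. The "world model" is a single
# parameter g in a free-fall model; mismatched predictions drive
# updates until prediction matches observation.
observed_fall_time = 1.43      # seconds to fall 10 m (invented datum)
height = 10.0

def predict_fall_time(g):
    # model: t = sqrt(2h / g)
    return (2 * height / g) ** 0.5

g = 5.0                        # initial (wrong) model parameter
for _ in range(200):
    error = predict_fall_time(g) - observed_fall_time
    if abs(error) < 1e-3:      # prediction matches observation
        break
    g += error * 2.0           # crude correction: swap out the "player"

print(round(g, 1))
```

When the prediction disagrees with the observation, some part of the model gets adjusted and we try again; here the adjustment recovers a value close to the familiar g of about 9.8 m/s².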

There is a bit more to it than that, but, in essence, that is the process of science. (One might object that in predicting eclipses we don’t just “think about it”, but also use external aids to thinking such as pen and paper and calculators, and memory stores such as books, but these devices only give us logical consequences of what humans put into them, the external aids are not originating the ideas).

In order to predict that eclipse time we need a good understanding of physics. But to compute anything at all in physics we need maths. And we also need logic, since both maths and physics would fall apart without logic.

So we combine our logic and maths and physics (including all the stuff that we regard as too obvious to need stating explicitly) and combine them all together in our world model and then calculate the prediction.

This process tests the physics, but it also tests the maths and the logic. If the maths and the logic did not hold in the real world then the prediction would turn out false. By that process we arrive at correct maths and logic (where by “correct” I mean best modelling the real world).

That is how we came to the systems of maths and logic that we now have, from the fact that they work in the real world. Of course mathematicians and logicians have since formalised those systems and distilled the mathematical and logical knowledge down into sets of axioms — axioms that capture deep regularities in the way that the world works.

Anyone who doubts that maths and logic are part of the above process, and thus are empirically tested by the above process, is invited to take the negation of axioms of maths and logic (say, the negation of modus ponens, or the negation of Peano’s axioms) and work from there, fully self-consistently, to produce predictions of eclipse timings. It is obvious that that process would not work (you wouldn’t even have counting numbers) and thus that axioms of maths and logic are adopted exactly because they match real-world behaviour.
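To make vivid how much is packed into those axioms: modus ponens and Peano-style arithmetic can be written down formally (shown here in Lean, purely as an illustration). Negate either and you cannot even derive that 15 + 17 = 32, let alone an eclipse timing.

```lean
-- Modus ponens is just function application on propositions.
theorem mp {p q : Prop} (hpq : p → q) (hp : p) : q := hpq hp

-- Addition is defined from the successor axioms, so this
-- follows by pure computation.
example : 15 + 17 = 32 := rfl
```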

At this point, I anticipate an objection along the lines that, yes, we swap *physical* laws in and out, in order to improve the overall model, but we never do that with axioms of *maths*, do we? We take those as a given.

My reply is that, yes, that is what we do *now*. But that is only because we’ve already completed the task of arriving at the correct axioms of basic maths (whereas we have not completed the task of arriving at the correct laws of physics). Which just means that the axioms of maths — at least all of those necessary to model day-to-day physical phenomena such as eclipses — were easier to get right! We humans, collectively, have previously been through that process of arriving at correct maths, by some combination of natural selection of genetic recipes for correct maths, experience of the world, and communication of what other humans have worked out.

It follows from the above that axioms of maths and logic and laws of physics all have the same epistemological status. They were all adopted as concepts that model real-world behaviour. There is no other source for any of them (or indeed any other type of knowledge) other than from our contact with the real world, and thus from our experience of how the world behaves. Thus there is no basis for asserting any big epistemological distinction between them.

### Going the whole hog

Quine’s famous paper on the synthetic/analytic divide starts by considering the statement that “No unmarried man is married”, which he declares to be “logically true”.

But is this genuinely *a priori* knowledge, true by logic alone, and entirely distinct from empiricism? In essence it is the basic logical law of non-contradiction, that something cannot be both itself and not-itself. But consider the following:

(1) No unmarried man is married.

(2) No dead cat is alive.

(3) No spin-up electron is spin-down.

At least some interpretations of quantum mechanics hold that (2) can be false, and all interpretations of quantum mechanics hold that (3) can be false. Indeed, the negation of (3) — that an electron can be in a quantum superposition of spin-up and spin-down states — appears to be an empirical fact about how our world actually is.

At the very least, this shows that none of those three statements can simply be assumed as a necessary property of all possible universes, and thus if any are true about our universe then that is an empirical fact about our universe. Indeed, the evidence is that (3) is false and that (1) is true only because unmarried men are large enough (compared to the Planck’s-constant scale of quantum mechanics) that quantum decoherence must always be complete.

This train of thought suggests that radical empiricism does hold and that there are no genuinely *a priori* truths that are independent of empirical facts. What other justification do we have for declaring the law of non-contradiction, other than that it appears to hold in our world?

### The divide between “by reason” and “by observation”

Let’s return to Hume’s fork, and the divide between knowledge arrived at by reasoning, and knowledge arrived at by observation. If we accept what I’ve argued above, does anything remain of this distinction?

Knowledge “by reasoning” would then mean knowledge arrived at by consulting our neural-network model of how things work, complete with its encoded logic and mathematics. That is indeed a very good way of knowing; indeed having the world-model available to consult, in order to use the knowledge as an input to decision making, is exactly why evolution has equipped us with brains.

Thus, in order to know how many sheep we would have if we combine our own flock of 15 sheep with our brother’s flock of 17, we do not have to merge the flocks and then count them, we can simply reason it.

The brain is the product of a long history of contact with the world (over evolutionary time and over our own development from an embryo) and has distilled all of that observational knowledge into a handy ready-reckoner that we can consult to get an answer without actually having to do the observation.

Thus the knowledge that the combined flock will be 32 sheep is then not empirical, since it was attained by reasoning rather than observation. Is that, then, *a priori* knowledge?

Now let’s consider that we are standing on the edge of a cliff. We know that if we step forward we will fall and go splat. Yet, we have not observed that event. Our knowledge of it again comes from reasoning, from consulting our ready-reckoner brain. The knowledge here is again about how the world works, and is again a product of a long history of experience of the world.

I don’t see any epistemological difference between this knowledge about the effect of gravity, and the above knowledge that 15 + 17 = 32. The first is from laws of physics and the second from axioms of maths. Yet, traditionally, maths would go in the “*a priori*” bin and our knowledge of gravity in the “*a posteriori*” bin.

Now let’s suppose that we have encountered the big man with the spear. We know that if we go and thump him on the nose he might be angry and thus that thumping him may well be a bad idea. Yet, we have not observed that man before, and have not observed what happens when he is thumped on the nose. Therefore we have no empirical knowledge of him. What we’ve done is consult our ready-reckoner brain, and the fact that people get angry when thumped on the nose is one of those regularities of nature that has been encoded, by long experience of nature, in the brain.

So, our knowledge that thumping the guy is a bad idea is “by reasoning” and not by observation. It is “by reasoning” in exactly the same way that knowing how to add the numbers 15 and 17 is “by reasoning”, namely it is done by thinking with our world-model brain — though that world-model brain is of course entirely a product of contact with the world.

Now let’s suppose we encounter a new type of animal that we didn’t know about and which behaves weirdly: perhaps it hops, whereas we Europeans are used to animals that walk. The first time you see it you learn — empirically — that it hops. From then on, though, the knowledge that this animal hops is incorporated into the world model. So if you later see another animal that looks to be of the same species you know that it hops, even before you have seen that animal hopping. The knowledge of the first animal hopping was observational; your knowledge that other animals of the same species also hop is from reason.

Now let’s turn to observational physics, the very epitome of empirical science. Physics has empirically observed the Higgs boson. Err, has it? Have you ever seen a Higgs boson? Have you ever seen a picture of one?

What you may have seen is a piece of paper with a plot on it, a plot that shows a line computed from a model and compares it with data points with error bars. Those data points are records of detections, not of the Higgs boson directly, but of various decay products that we theorise to occur if the Higgs does exist. This involves some empiricism, but it also involves a huge amount of theorising and analytic maths-like deduction from our overall world model.

Indeed, the train of deductive reasoning is so long, and involves so much reference to other parts of the world model, that anyone not trained in physics will struggle to assimilate it. Thus the discovery of the Higgs is roughly 5% new observation and 95% deductive reasoning from the world model. Of course all of that world model is fashioned, ultimately, out of the contact with the empirical world — but that is just as much true of our mathematics.

As a last example, take the temperature at the centre of the sun. That is surely an empirical matter, and we can state as a fact that the temperature is 16 million Kelvin (subject to the usual scientific caveats about error bars and provisionality). Yet we don’t know that directly. What we do is measure lots of other aspects of the sun, such as its mass (which is also not known directly, but deduced from other information that ultimately traces back to empirical data) and then deduce the temperature of the sun by feeding everything into a model.
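A standard back-of-envelope version of that deduction (my sketch of the usual hydrostatic-equilibrium estimate, not the detailed solar model) already lands within a factor of a few of the quoted figure:

```python
# Order-of-magnitude estimate of the Sun's central temperature,
# T ~ G * M * m_p / (k_B * R), from hydrostatic equilibrium plus
# the ideal-gas law. Detailed solar models refine this to ~1.6e7 K.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M   = 1.989e30    # solar mass, kg (itself deduced, not seen directly)
R   = 6.957e8     # solar radius, m
m_p = 1.673e-27   # proton mass, kg
k_B = 1.381e-23   # Boltzmann constant, J/K

T_estimate = G * M * m_p / (k_B * R)
print(f"{T_estimate:.1e} K")  # a few times 1e7 K
```

Everything fed into this estimate (the mass, the radius, the constants) is itself the product of long chains of inference from empirical data.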

The lesson I take from the above discussion is that all knowledge includes both observation and reason, both *a priori* and *a posteriori* elements, and that any distinction between the two is unimportant.

Yes, you can make a distinction between the animal that you have personally observed to hop and its conspecific that you merely presume to hop. But, really, since all of our knowledge is such an entwined mixture of both reason and observation, insisting on such a distinction seems perverse.

One can distinguish between proximate empirical knowledge, where you have recently observed something, and distant empirical knowledge that is now just stored abstractly in your ready-reckoner device, ready to be consulted when needed, but I don’t see why that distinction matters much from the point of view of fundamental epistemology, especially since we know nothing at all except as processed through the world model in our ready reckoner. Without that we would not even “see” a kangaroo; we’d just have a list of photon-arrival event times from each photo-sensitive cell in our eyes.

Everything we know comes ultimately from contact with the empirical world, but is so entwined with our modelling and theorising about the world that the distinction between knowledge “by reasoning” and knowledge “by observation” simply dissolves.

Trying to establish the primacy of either empiricism or rationality as the basis of knowledge will not work. At the basis of knowledge is, instead, Darwin’s dangerous idea of natural selection. Darwinian evolution fashioned our reasoning, and did so by contact with the empirical world. Reasoning always was the product of the world we experience, and the two aspects of knowledge “by reasoning” and “by observation” always were inseparably entwined right from their origins in our evolutionary past.

**Anton Szautner:** Excellent post, as usual!

“The representation of the real world inside the brain will not, of course, be a duplicate of the real world, it will be a model of it. That means it is a set of abstracted ideas that, put together, give a sufficiently good simulation of the world.”

The conceptual model is also much more easily changed (modified or upgraded) than the real world it is ostensibly meant to represent. Fortunately, it’s much harder to change the world to comply with the preferences and prejudices inside the head than it is to ‘change one’s mind’ – which most often just means tweaking a particular detail in the conceptual model of the world one adopts as the totality of one’s understanding of the real world. People who appreciate the distinction acknowledge the responsibility of constantly testing their model against the evidence that comes from the real thing; the process of increasing our knowledge and improving our understanding is performed by refinements and polishes to our conceptual model of the world.

Unfortunately, a persistent if not growing proportion of the population does not recognize the distinction: they readily identify their impressions of the world AS the real world. It’s a very stubborn but common conceit. Another sector has declared that the real world is a product of their own mental creativity, something they can shape at will. They often scorn the very idea of an empirical world; their contact with it is not to acquire information from it, but to ‘fix it’: world-shaping inevitably becomes an exercise in shaping OTHER people’s models to conform to their own little worldviews.

Yet humans are compelled to inhabit a real world, whatever humans think. Humans have survived at all only because they are able to adjust their conceptual model of how the world works, not because anybody anywhere has ‘changed the world’. Though the laws of nature are discoverable and exploitable, they are certainly not subject to modification, invention or termination.

It seems to me that these conditions may hold important clues to understanding the persistence of superstition, supernaturalism and denialism, and how those attitudes are so easily cultivated by a variety of vested interests.

The distinction between the conceptual model of the world inside of our heads and the real empirical world outside of our heads carries a significance that cannot be overstated. There certainly is no longer any excuse to overlook its importance to any question that examines how we deal with an empirical reality that exists independently of – and quite indifferently to – our conceptual models of it.

**bulkington27:** Thanks for this excellent summary of Hume’s argument Scalise! You might enjoy watching this video.

**Yonatan Fishman:** I agree with much of what you write in this excellent post. I just wanted to raise a minor point regarding the following sentence:

“What other justification do we have for declaring the law of non-contradiction, other than that it appears to hold in our world?”

One can agree that we acquire our belief in the law of non-contradiction (LNC) from experience (perhaps it is also genetically hard-wired into us, which means it comes from evolutionary ‘experience’). And there are non-classical (paraconsistent/dialetheic) logics which tolerate violations of the LNC. Others, however, have argued that the LNC must hold in all possible worlds. For to deny the LNC means that the LNC may be both true and false. However, if the LNC can be both true and false, then it cannot be exclusively false (contrary to what is assumed in the denial of the LNC). Thus, arguably, denying the LNC is self-refuting.

There is a great deal of philosophical literature on this issue, which is far from settled. My point does not conflict in any way with your general argument, but I thought that it should be considered given your sentence above. It seems to me that whether or not the LNC holds in all possible worlds is largely independent of the question of where our knowledge of the LNC comes from.

**Coel (post author):** Hi Yonatan,

I’m not sure I follow the logic here:

Your first line says that denying LNC means that LNC may be both true and false. Your next line says that denying LNC means that LNC must be exclusively false.

Can’t we start from not knowing whether LNC is true or false (or indeed both!), and arrive at a position about its truth based on empirical experience?

**Yonatan Fishman:** Also please consider the following papers which critique the idea that quantum mechanics demonstrates the existence of genuine contradictions:

*Contradiction, Quantum Mechanics, and the Square of Opposition*
Jonas R. Becker Arenhart and Décio Krause, March 27, 2014

Abstract: We discuss the idea that superpositions in quantum mechanics may involve contradictions or contradictory properties. A state of superposition such as the one comprised in the famous Schrödinger’s cat, for instance, is sometimes said to attribute contradictory properties to the cat: being dead and alive at the same time. If that were the case, we would be facing a revolution in logic and science, since we would have one of our greatest scientific achievements showing that real contradictions exist. We analyze that claim by employing the traditional square of opposition. We suggest that it is difficult to make sense of the idea of contradiction in the case of quantum superpositions. From a metaphysical point of view the suggestion also faces obstacles, and we present some of them.

*Potentiality and Contradiction in Quantum Mechanics*
Jonas R. B. Arenhart and Décio Krause, submitted 7 Jun 2014

Abstract: Following J.-Y. Béziau in his pioneer work on non-standard interpretations of the traditional square of opposition, we have applied the abstract structure of the square to study the relation of opposition between states in superposition in orthodox quantum mechanics in an earlier paper. Our conclusion was that such states are *contraries* (i.e. both can be false, but both cannot be true), contradicting previous analyses that have led to different results, such as those claiming that those states represent *contradictory* properties (i.e. they must have opposite truth values). In this chapter we bring the issue once again into the center of the stage, but now discussing the metaphysical presuppositions which underlie each kind of analysis and which lead to each kind of result, discussing in particular the idea that superpositions represent potential contradictions. We shall argue that the analysis according to which states in superposition are contrary rather than contradictory is still more plausible.

http://arxiv.org/abs/1406.1835

**Coel (post author):** Thanks for your comments, Yonatan. I’ll have a look at the papers you point to.

**Yonatan Fishman:** Coel, thanks for pointing out the need for clarification:

To deny the LNC means that you assert it to be exclusively false. However, if it is exclusively false (across the board), then it is possible for the LNC itself to be both true and false at the same time. However, this contradicts the initial assertion that it is exclusively false. Thus, if it is exclusively false, then it can be both true and false. This situation verges upon incoherence. Arguably, if a person asserts that ‘A and/is not-A’ they have simply failed to understand the meaning of the term ‘A’ (e.g., ‘a bachelor is not an unmarried man’).

I certainly agree that the LNC can be confirmed by experience (indeed it is confirmed at every instant of our lives). However, if the LNC is true in all possible worlds then there is no possible experience which can refute it, i.e., it is indefeasible. I know that some have interpreted quantum mechanical phenomena as providing evidence for violations of the LNC, However, the papers I cited above are skeptical of this interpretation. And in general, the viability and coherence of dialetheism has been questioned. For example, see part V in:

http://www.amazon.com/The-Law-Non-Contradiction-Graham-Priest/dp/0199204195/ref=sr_1_1?ie=UTF8&qid=1414684662&sr=8-1&keywords=law+of+noncontradiction

and

http://www.academia.edu/796044/Meaning_metaphysics_and_contradiction_-_American_Philosophical_Quarterly

**Coel (post author):** Hi Yonatan,

I forgot to say thanks for these links. I agree with your argument that asserting that the LNC is exclusively false leads to a contradiction. However, how about starting from a non-committal stance of not making assertions either way? From there, if one then adopts the LNC, one does so because it seems to be how the world is.

Pingback: The unity of maths and physics revisited | coelsblog

**caveat1ector:** Hi Coel,

In response to the following: “This process tests the physics, but it also tests the maths and the logic. If the maths and the logic did not hold in the real world then the prediction would turn out false. By that process we arrive at correct maths and logic (where by ‘correct’ I mean best modelling the real world). … It follows from the above that axioms of maths and logic and laws of physics all have the same epistemological status. They were all adopted as concepts that model real-world behaviour. There is no other source for any of them (or indeed any other type of knowledge) other than from our contact with the real world, and thus from our experience of how the world behaves. Thus there is no basis for asserting any big epistemological distinction between them.”

Essentially, your point is that you see no meaningful epistemological distinction between Logic/Maths and Physics. But I do think a distinction can be made. I’d describe the relationship between Logic/Maths and Physics this way: Logic/Maths is like the set of all Lego bricks ever made (this set of Lego bricks being the product of Logicians/Mathematicians). Physicists use Lego bricks to build models that correspond to the empirically observable world. When a model is found not to correspond correctly to the real world, at least one of the following is true:

I: The Lego bricks used were not appropriate for the desired model.

II: The arrangement of bricks was not appropriate.

III: No Lego bricks exist that allow the desired model to be built.

The Physicist works on I and II, while the Logician/Mathematician works on III.

Hence, one can say that the wrong bricks were used (I), or that the bricks were arranged wrongly (II). When III is true, the set of all Lego bricks ever made is said to be insufficient, which may motivate Mathematicians to develop new areas in Mathematics (i.e. create new kinds of Lego bricks) that can fulfill the needs of I and II.

A key point is that Mathematics need not work in the real world. There are areas of Mathematics that have yet to find any real-world application (https://www.quora.com/Is-there-a-modern-branch-or-area-of-pure-mathematics-with-no-presently-known-practical-application). Just because a large body of Mathematics has real-world applications doesn’t prove that _all_ Mathematics will have real-world applications. It could simply mean that the bulk of Mathematics was developed with real-world applications in mind.

Also, I’d add that Logic, on a fundamental level, could be thought of as what has been hard-wired into our brains through evolution: a fixed feature of the human mind (and in this sense “analytic”) that enables us to perform logical operations on logical values. Mathematics then consists of all possible permutations of logical operations and logical values that human minds are capable of conceiving. Our logical minds (the Lego company) are continually inventing new ways of thinking, in the form of Logic/Maths (Lego bricks). Physicists then make use of the available Mathematics to try to model the real world.
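As an illustrative aside (a sketch of my own, not part of the original comment), the idea that Mathematics explores the combinatorial space of logical operations on logical values can be made concrete by enumerating every possible binary truth-functional operation:

```python
from itertools import product

# The four possible pairs of input truth values: (F,F), (F,T), (T,F), (T,T).
inputs = list(product([False, True], repeat=2))

# A binary logical operation is one assignment of an output truth value to
# each of the four input pairs, so there are 2**4 = 16 such operations.
operations = list(product([False, True], repeat=len(inputs)))
print(len(operations))  # 16

# The familiar AND and OR appear among them as particular truth tables.
and_table = tuple(a and b for a, b in inputs)
or_table = tuple(a or b for a, b in inputs)
assert and_table in operations
assert or_table in operations
```

The sixteen truth tables exhaust what is conceivable at this level, independently of whether any of them usefully models the physical world.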

Furthermore, regarding the following:

Quine’s famous paper on the synthetic/analytic divide starts by considering the statement that “No unmarried man is married”, which he declares to be “logically true”.

But is this genuinely a priori knowledge, true by logic alone, and entirely distinct from empiricism? In essence it is the basic logical law of non-contradiction, that something cannot be both itself and not-itself. But consider the following:

(1) No unmarried man is married.

(2) No dead cat is alive.

(3) No spin-up electron is spin-down.

The truth of the statement “no unmarried man is married” relies on the assumption that “unmarried” is the logical negation of “married”. If in Physics a cat could be simultaneously “dead” and “alive”, then “the cat is dead” would not be the negation of “the cat is alive”, and the law of non-contradiction would have been applied to a case that violates a crucial underlying assumption. That is, by assigning “the cat is dead” the logical value FALSE and “the cat is alive” the logical value TRUE, you are assuming that the two statements are logical negations (hence mutually exclusive). When Physics shows that both statements can simultaneously be true, you should stop assuming that they are logical negations (stop assuming that they are mutually exclusive), and you are then no longer entitled to use the law of non-contradiction to produce the statement “no dead cat is alive”.
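To make this concrete (a hedged numerical sketch of my own; the variable names are illustrative, not standard physics notation), one can model an equal superposition of the two states and see that both “dead” and “alive” carry non-zero probability, so the two statements cannot be treated as logical negations of one another:

```python
import math

# Equal superposition of two states, represented by real amplitudes.
# (Names such as amp_dead are assumptions made for this example.)
amp_dead = 1 / math.sqrt(2)
amp_alive = 1 / math.sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
p_dead = amp_dead ** 2
p_alive = amp_alive ** 2
print(round(p_dead, 2), round(p_alive, 2))  # 0.5 0.5

# Both outcomes have non-zero probability, so assigning one statement
# TRUE does not force the other to be FALSE: they are not logical
# negations in this state, even though the probabilities still sum to 1.
assert p_dead > 0 and p_alive > 0
assert abs(p_dead + p_alive - 1) < 1e-9
```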

Also, it’s worth saying that as long as there are meaningful differences between the fields of Logic/Maths and Physics, then a distinction can reasonably be made, so that some people may rightly choose to first affirm an analytic-synthetic distinction, then seek to define what they mean by that. This could even mean that people define “Mathematics” differently! (For example, my examples/explanations above provide one possible definition of Mathematics that allows for an analytic-synthetic distinction.) As a matter of fact, there is no universally accepted definition of “Mathematics” …

Finally, consider the “continuum hypothesis” in set theory. The continuum hypothesis was not the first statement shown to be independent of ZFC. An immediate consequence of Gödel’s second incompleteness theorem, published in 1931, is that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC that is independent of ZFC, assuming that ZFC is consistent. The continuum hypothesis and the axiom of choice were among the first mathematical statements shown to be independent of ZF set theory. These proofs of independence were not completed until Paul Cohen developed forcing in the 1960s. They all rely on the assumption that ZF is consistent. These proofs are called proofs of relative consistency (see Forcing (mathematics)). … … In a related vein, Saharon Shelah wrote that he does “not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC.” (Shelah 2003).

Simply put, to many Mathematicians, it is perfectly fine to have one version of “Mathematics” in which a particular axiom is true, and another version in which that same axiom is assumed to be false. This logically means that at most one version of “Mathematics” will perfectly correspond to the real world. By your definition the other versions would be “wrong”, but the fact remains that the other versions of “Mathematics” are possible in the sense that human minds are able to conceive such alternative versions. To declare that those versions of “Mathematics” that do not correspond perfectly with the real world should not be called “Mathematics” would simply be a matter of choice.
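As a concrete illustration of coexisting “versions” of mathematics (a sketch of my own, using the standard example of Euclidean versus spherical geometry rather than anything from the original comment), two internally consistent axiom systems can disagree on a basic fact such as the angle sum of a triangle:

```python
import math

# Euclidean geometry: every triangle's angles sum to pi.
euclidean_angle_sum = math.pi

# Spherical geometry on the unit sphere: by Girard's theorem, the angle
# sum of a triangle is pi plus the triangle's area. A triangle covering
# one octant of the sphere (three right angles) has area 4*pi / 8.
octant_area = 4 * math.pi / 8
spherical_angle_sum = math.pi + octant_area  # = 3*pi/2

print(spherical_angle_sum > euclidean_angle_sum)  # True

# Both geometries are consistent; at most one can describe physical
# space exactly, yet neither is thereby disqualified as mathematics.
assert abs(spherical_angle_sum - 3 * math.pi / 2) < 1e-9
```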

Of course, there is the open question of whether the human mind will somehow continue to evolve and somehow adopt a more advanced logical system that does not allow such existence of multiple versions of “Mathematics”. But for now, there continues to exist very real and meaningful differences between Logic/Mathematics and Physics.

caveat1ector: Some minor formatting did not survive the commenting, so it’d be slightly better to read my comment reproduced on my blog: https://caveat1ector.wordpress.com/2016/05/31/analytic-synthetic-distinction-between-maths-and-science/

caveat1ector: Having read your follow-up post (https://coelsblog.wordpress.com/2014/11/18/the-unity-of-maths-and-physics-revisited/), I have something to add:

You justify your “radical empiricism” by asserting that the mind is ultimately a product of experience (evolution, etc.), as seen in the following quote from one of your comments: “My counter would be to claim that nothing is known entirely independently of experience, and that all knowledge ultimately derives from experience.”

But the statement that “the mind is a product of real-world experience” doesn’t automatically imply that “all logically-consistent thought conceived by the mind must correspond to the real world”. The mind can easily conceive logically consistent but unrealistic worlds, e.g. fairy tales. We can imagine all kinds of crazy things that completely contradict real-world physics, and describe them by mathematical formulae, even though in practice nobody has any incentive to do so.

Coel (Post author): Hi caveat,

Yes, I think that’s right, we can conceive of fairy tales and illogical counter-factual worlds. However, we tend to know that these are not real, and our default way of thinking, the one that we do think of as “real”, does correspond to reality to quite an extent. I think the reason for that is our evolutionary heritage, since in evolutionary terms the whole point of a brain is to model the world in order to make real-time decisions.

Coel (Post author): Hi caveat,

True, but then some physics doesn’t have immediate real-world correspondence. Quite a bit of theoretical physics is wider exploration of the concepts, since better understanding the conceptual frameworks helps understand physics as a whole.

You seem to be rescuing the law of non-contradiction by insisting that anything appearing to violate it cannot be mutually exclusive. But “dead” is generally regarded as incompatible with “alive”, and it is in the classical world. A quantum superposition of mutually exclusive states does seem to be a weird feature of reality that violates classical logic.