“Scientism” is often taken as the claim that science can answer all questions. Of course there are plenty of things that scientists don’t currently know, so the suggestion is, instead, that science could potentially answer all questions, or at least all meaningful questions.
For example, the philosopher Julian Baggini says that
“What is disparagingly called scientism insists that, if a question isn’t amenable to scientific solution, it is not a serious question at all.”
Another noted philosopher, Massimo Pigliucci, writes in his book Nonsense on Stilts:
“The term ‘scientism’ encapsulates the intellectual arrogance of some scientists who think that, given enough time and especially financial resources, science will be able to answer whatever meaningful question we may wish to pose …”
I disagree with these definitions (both of course by people critical of scientism), and suggest that scientism is instead the claim that science can answer all questions to which we can know the answer. The point is that there are many questions that are “meaningful”, yet we can never, even in principle, answer them. First let’s distinguish between meaningful and meaningless questions.
Meaningful questions that science can answer
Critics of scientism often assert that historical questions are outside the realm of science, as are questions within axiom-based systems such as mathematics and logic.
Defining science broadly as enquiry based on empirical evidence (which I defend more extensively here), history is just as much an evidence-based subject as any science, and there is no basis for asserting a fundamental demarcation between them. Indeed paleontology, geology and cosmology are all sciences that are largely about the past, and which deduce the past from the evidence that the past has left in the present. And that, surely, is what historians do. Any boundaries between human paleontology, archaeology, and history are arbitrary, and there is no basis for asserting that somewhere along the historical record such study changes from a science to a non-science.
As for mathematics, it is often asserted that mathematical truths derive from reasoning from axioms, and thus are fundamentally different from truths derived from empirical evidence. But where do these axioms come from? They are not arbitrary, nor are they arrived at ex nihilo; instead, the axioms of mathematics are products of our observation of the universe; they are distilled empirical enquiry. Why else have mathematicians chosen their particular axioms, other than that they work? By “work” I mean produce results, such as 2 + 2 = 4, that accord with observations of our empirical world.
Philosophers have puzzled over (in Eugene Wigner’s phrase) the “unreasonable effectiveness of mathematics”, yet much of the puzzle disappears when one realises that mathematics is a distillate of our empirical experience of the universe; why is it surprising if it then works well when applied to the empirical universe?
In his essay Geometry and Experience, Einstein asks:
“At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things?”
He then answers with a “no”, and argues that the premise of the above question is false, that mathematics is derived from human experience, and that:
“Geometry thus completed is evidently a natural science; we may in fact regard it as the most ancient branch of physics. Its affirmations rest essentially on induction from experience, but not on logical inferences only”.
We should also remember that “human thought” is a process occurring in our brains, brains that have been molded and programmed by eons of empirical brute facts — namely the operation of evolution by natural selection. Our brains think as they do precisely because, over evolutionary time, that way of thinking has proven to be in accord with the empirical universe and thus useful to our survival and procreation.
Mathematicians arrive at axioms by distilling empirical observation and by using brains that are themselves a record of past empiricism; they then reason from the axioms (using logic that is also a distillation of our experience of the universe, deduced by those same brains) and arrive at new mathematical truths. It is misconceived to then claim that the resulting truths are in no way empirical.
Questions that are not meaningful
A common criticism of scientism is that science cannot arrive at moral truths: it cannot tell you what you ought to do, or what is morally right or morally wrong. You cannot get an “ought” from an “is”, and science tells you only the “is”. This claim is correct. Does that refute scientism? No, it doesn’t, because there are no such things as abstract moral truths.
Science can’t tell you what is morally wrong or morally right, because the very concept of an abstract moral truth has no meaning. Moral sentiments and moral claims are opinions, they are feelings, and they cannot be divorced from the sentient being having the opinion or feeling. Moral rightness and moral wrongness are not fundamental properties of the universe that can be established in the abstract, they are feelings that we have, that evolution has programmed into us as a means of enabling our highly social and cooperative way of life.
If you disagree with this stance, if you insist that morality must have more foundation than this, if you feel this in your gut, then that shows how good a job evolution has done of programming you to be a moral being. And evolution has programmed you, not because morals are a fundamental truth of the universe, but for the entirely pragmatic reason that morals help us get along with our fellows. Morals work better if you feel that they are absolute, with a higher status than mere opinion, and so evolution has programmed us to think that.
Once you properly understand what morals are you realise that the realm of morals is entirely within the domain of science, because the only proper moral questions are of the form “Does Jane think that that act is moral?” or “Do most people regard this act as moral?” or “Why do people think that such and such is immoral?”, and those are questions about highly evolved mammals that science has the correct tools to answer (I defend this argument more extensively here).
Meaningful questions that science cannot answer
Despite being a defender of scientism I accept that there are many questions, quite meaningful ones, that science cannot answer, even in principle, even with the most advanced technology conceivable. This is compatible with scientism provided that no other form of human enquiry can answer them either. Here are some examples:
Loss of information over time:
What are the names of everyone who ever fought in one of Alexander the Great’s armies? Who fired the arrow that hit King Harold in the eye at Hastings (if that did indeed happen)? What did Julius Caesar eat three days before his fifth birthday? Did he stroke a dog that day? All of these seem to me to be meaningful questions, in the sense that they would have proper answers, but almost certainly we can never know those answers because the information needed to know them no longer exists.
Information is a pattern, and patterns tend to degenerate over time and be destroyed (in accordance with the second law of thermodynamics). Thus, even if we knew everything about our current universe, we likely could not answer the above questions and vast numbers like them.
Finite light-travel time:
Relativity tells us that information cannot be conveyed faster than the speed of light. There is therefore an observable horizon, at a distance given by the speed of light multiplied by a time t, and we can obtain no information from beyond that horizon about anything more recent than a time t ago. So many questions of the form “What is happening now in distant place Zog?” are unanswerable, even in principle. They are still meaningful, however, and in principle we could, later on, get information about what had happened at that time and place. [A possible get-out here is quantum “spooky action at a distance”, though the no-communication theorem of quantum mechanics implies that entanglement cannot be used to transmit information faster than light, so the argument likely stands.]
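In symbols, the horizon is simply distance equals speed times time:

```latex
d_{\mathrm{horizon}} = c \, t
```

so, for example, a region 1000 light-years away can be known to us only as it was 1000 or more years ago.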
Quantum indeterminacy:
The Heisenberg uncertainty principle of quantum mechanics tells us that there are limits to the amount of information we can obtain; in particular, it says that pinning down one quantity precisely inevitably means that a conjugate quantity is less accurately knowable. [A caveat: I suspect that this one properly belongs in the category of not-meaningful questions, in that knowledge violating the uncertainty principle would be incompatible with the basic nature of a quantum system.]
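In its standard position and momentum form, the principle reads:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

so the more precisely the position is pinned down (smaller \(\Delta x\)), the larger the irreducible spread \(\Delta p\) in momentum, and vice versa.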
The observer effect:
Suppose we want to know the state of an electron. How can we find that out? We can’t just “know” it, we have to prod it somehow, for example by bouncing a photon off it. Yet that photon (and any other prod) has energy, and so will disturb the state of the electron. We can therefore only learn about the prodded state, not the unobserved, un-prodded state of the electron. Most of the time we can use a sufficiently minimal prod that this doesn’t matter, but we have no zero-effect prods, and thus there are fundamental limits on what we can know.
[Philosophers might want to declare that questions about the unobserved, un-prodded state of an electron are meaningless, precisely because we can never know them. Physicists usually prefer to regard the questions as meaningful but unanswerable. However, this topic can get into deep questions about the “collapse of the wave-function” in quantum mechanics that are not fully understood.]
Lack of information-storage capacity:
What are the current locations of every particle in the observable universe? This is unanswerable, since the answer would have to be assimilated, stored and presented somehow, and that would take storage capacity, which would have to be constructed out of particles. It would take many particles of storage to store the answer for each particle, and therefore we could never obtain an answer for “every” particle.
Deterministic chaos:
It is meaningful to ask what the weather will be like this time next year, or in a thousand years’ time. However, our ability to answer is severely limited. Deterministic chaos means that to extrapolate further and further into the future you need more and more information to higher and higher accuracy. Yet we are limited, as above, by quantum indeterminacy, by the observer effect, and by limitations on information storage. The finite light-travel time also comes into play, in that the future can be affected by effects propagating from regions currently beyond our observable horizon, from which we can obtain no current information. The sum of these effects severely limits our ability to answer questions about the future, even in principle.
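The sensitivity of chaotic systems to initial conditions can be illustrated with a toy model. The sketch below uses the logistic map, a standard textbook example of deterministic chaos (nothing here is specific to weather), to show two trajectories that start almost identically and end up completely different:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), which is chaotic at r = 4."""
    return r * x * (1 - x)

# Two initial conditions differing by one part in a million.
x, y = 0.400000, 0.400001
diff_max = 0.0
for step in range(50):
    x, y = logistic(x), logistic(y)
    diff_max = max(diff_max, abs(x - y))

# The tiny initial difference is amplified roughly twofold per step,
# so after a few dozen steps the two trajectories bear no resemblance.
print(diff_max)
```

With the discrepancy roughly doubling each step, even a measurement accurate to one part in a million buys only a few dozen steps of predictability; pushing the forecast further requires exponentially more initial accuracy, which is exactly the limitation described above.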
So, to summarise, and contrary to Massimo Pigliucci, “scientism” is not the claim that science can answer “whatever meaningful question we may wish to pose”, it is the claim that no non-scientific method of enquiry can obtain an answer that science cannot. Scientism is the view that knowledge of our world is a unified whole, with no fundamental divisions into incompatible domains, and that empirical enquiry is our best and only method of obtaining such knowledge.
Interesting post. There are three things I want to say at this point:
(1) In general: It’s important to remember that science alone can’t answer any questions, if “answer” means ‘give us knowledge of what the answer is.’ The only questions you might expect it to answer would be empirical questions, but science can’t justify trusting empirical observation except circularly. (Why are our sensory organs reliable? Because our sensory organs say they’re reliable. And so on.) And if you admit circular arguments, you let in pretty much anything, so we don’t want that. We need some way independent of science to justify scientific methods and instruments, on pain of circularity.
(2) On math: Science can answer mathematical questions about actuality, but not about modality. Yes, 5 > 3. But most of us would say that five is necessarily greater than three. This doesn’t mean ‘according to our axioms, five is necessarily greater than three’; it means ‘necessarily, if there are five xs, and three ys, there are more xs,’ for any value of x and y. Now, you might deny that we know that, that for all we know, there’s a place in the universe where three is greater than five. But I doubt it. In any case, science can’t answer the question of whether it’s necessarily true that 5 > 3. Similarly for other questions in modality, for example, that unicorns are possible (but non-actual), that they could have existed but didn’t. No scientist has ever observed a merely possible unicorn. So once again, you could deny that we have knowledge of modality; I don’t think that’s an attractive position, but you might bite the bullet. However, even then, that would be a place where science can’t answer a certain question.
(3) On ethics: In your linked post, you talk about an Absolute Shouldness Scale. I detect what seem to me to be three distinct criticisms: “[A] no-one has ever found [such a scale], [B] nor has anyone produced a coherent account of how such a scale could have arisen or [C] even what it would mean.” Now, I could point you to a host of recent work in metaethics that attempts to answer all of these criticisms, and you might do well to consult some of it, because many philosophers think your objections are straightforwardly answerable. (See, e.g., Shafer-Landau, Moral Realism (2004), Huemer, Ethical Intuitionism (2005), Oddie, Value, Reality, and Desire (2005), Cuneo, The Normative Web (2007), and Enoch, Taking Morality Seriously (2011).) I know we’re all busy people, but before being so confident in your position it would be better to have considered some of their efforts. In any case, I’ll adopt some of their insights as follows.
(3A) We learn ethical knowledge through self-evidence (Shafer-Landau 2004), intuition (Huemer 2005), desire (Oddie 2005), somehow but it’s clear that we do (Cuneo 2007), or indispensability (cf. mathematics as evidence for numbers) (Enoch 2011). Why should we trust any of those things? Well, without intuition, we can’t really justify anything else; it provides the way out of the circularity mentioned above. (How do you know whether a belief is justified, that is, the right or appropriate or reasonable belief for you to have? I can’t see any way other than a priori.) So rejecting intuition, I think, leads to global skepticism. Something very similar is basically Cuneo’s point: that our reasons for rejecting an Absolute Shouldness Scale in ethics apply just as well to rejecting an Absolute Shouldness Scale about our beliefs. So you’re left with the position that no belief is any more justified than any other. In turn, your opponent’s position is exactly as justified as yours, which is not a nice position to be in.
(3B) Ethical truths are often thought to be brute facts; they have no explanation. I see why this would be a problem if they were physical, contingent facts, but they probably aren’t. (Why think they need explanations? Do you think all facts need explanations? That leads to an infinite regress.) I don’t know about you, but it’s just not clear to me that ethical truths would need any kind of explanation. (If you have an argument that they would, you should present it.)
(3C) I think most people have an intuitive idea of what the moral ‘should’ and ‘shouldn’t’ and the axiological ‘good’ and ‘bad’ mean. Babies learn the terms at a very young age. In terms of the robust ethical realists, ‘S morally should X’ is intended to be objective: it’s true independently of who and where S is, and independently of S’s opinions or attitudes. Perhaps you don’t have a conception of the moral ‘should,’ but many people (and most philosophers) would claim to have such a conception, and you bear a pretty heavy burden of proof to argue that you know better than they do what’s in their minds.
Hi Tom, thanks for taking the time to comment.
Yes, this is a common criticism of scientism, and someday soon I should write a whole blog post on it. In brief my reply is this: What do we mean by “trusting empirical observation”? Do we mean “trust empirical observation to tell us about empirical reality”? If so, then, yes, we can trust that, because “observation” here is the same thing as “reality”, or putting this another way I can define “reality” to mean “what we observe”. Or, rather, the only sensible definition of “empirical reality” is what we observe.
So the question then becomes “can we trust empirical reality?” Err, trust it in what way? Can we trust it to be empirical reality? Tautologically, yes. Can we trust it to be “metaphysical truth”? No we can’t, but we don’t need to. We can simply stop at empirical reality, and not make any further claim about it. So the claim of scientism is that science (aka empirical observation) tells us about empirical reality. That is not circular; indeed it is more a tautology (and therefore true).
If we are happy to stop there (and not make any further claims about anything beyond empirical reality, no claims about any meta-reality), then we have fully justified ourselves. And I see no problem with stopping there, after all empirical reality is what matters to us, being (by definition) what we experience/observe. So, yes, science can answer questions about the empirical world.
Of course, the whole “empirical reality” might be some Matrix-style simulation embedded in some meta-reality, and there may be no empirical traces of that meta-reality, and thus science could tell us nothing about that meta-reality. But that’s ok, so long as I’m limiting any claims to our “observed reality” (i.e. the simulation), I’m still justified in what I am claiming.
What do you mean by “necessarily” here? Do you mean, this always occurs in our universe? If so, the only way of knowing that would be empirical observation of our universe. Or do you mean “I have a gut feeling that this must be the case in our universe, beyond what is empirically established”? If so, then you don’t know that, you merely opine it. This is only a problem for me if you can first establish (by non-empirical methods) that you know this to be true. Or do you mean that it must be the case in all possible universes? Again, this claim is only a problem for me if you can establish (non-empirically) that you know this to be the case. Can you?
Can any non-scientific method do that?
This one I’m not sure I understand. Perhaps I’m unsure what you mean by “could have existed”. Do you mean, is it possible that, had ecological conditions been such and such, and had certain mutations arisen, that a sufficiently unicorn-like creature could have evolved? If so, then I’d suggest that science is exactly the right method to answer that question. Or are you asking, does the definition “unicorn” contain features that are incompatible with known laws of nature in our universe? If so, then, again, I’d assert that science is exactly the right tool to answer it. Or are you asking whether some possible universe could contain unicorns, and thus whether our universe could have been such? If so, then, I’m not sure how to answer it, but again this is only a problem for me if you can show that some non-scientific method can answer it better than science can.
The ethics stuff I’ll reply to shortly …
I’ll try to condense some stuff, since this is getting really long, probably my fault for trying to address an entire post in one comment.
(1) On reality and observable reality: I’m not happy with defining ‘reality’ as ‘observable reality.’ I (and most other people) define it as ‘that which exists’ or ‘that which is’ or ‘that which is real’ or ‘that which can instantiate properties’ or something. It’s difficult to define. In any case, if you think ‘reality’ means ‘that which is observable,’ then I’m asking about reality*, which is ‘that which is.’ There will, then, be large parts of reality* that science cannot answer questions about.
If you’re happy with science’s discoveries of propositions p all taking the conditional form “if our senses and instruments are reliable, then p”, then that’s okay. But some people will want discoveries of the form “p” outright: we want to know that p, not merely that p follows if observation can be trusted. Don’t we need to know that we can trust our senses, that we’re not in a computer simulation? Are you saying that science just can’t tell us that?
(2) On modality: Consider the claim that it could never have been the case that three was greater than five. (I guess it’s closest to your “in all possible universes.”) That claim seems extremely obviously true, to me. And as noted, science can really only tell us about the actual world, not about merely possible worlds; at best, it can tell us that so far, three has never been greater than five. (Right?) So now I want to know: what is your argument that we can’t at least prima facie trust “extreme obviousness”, and does it have premises that are, themselves, overall more plausible or justified than the claim that it could never have been the case that three was greater than five?
Similar remarks apply about unicorns. I can imagine unicorns, which I think is at least prima facie evidence that they could have existed. (I don’t just mean ‘compatible with the laws of physics,’ since many things compatible with the laws of physics are impossible. For example, a person being a prime number does not violate any laws of thermodynamics, Newtonian motion, etc.)
(3) On ethics: There’s far too much here to address in a comment, of course. I think we actually agree that most people have some sort of conception of absolute morality, even though you would argue that the conception doesn’t hold up to critical scrutiny. And the stuff about explanation is interesting, but I think it would take us far afield. In any case, we may be able to subsume it into the epistemological questions, which I take to be the most interesting.
So let’s just look at alleged reasons to believe in objective (mind- and stance-independent) ethical truths. Let’s look at one moral claim and one axiological claim, something like:

(M) It is wrong to torture innocent people for fun.

(A) Intense suffering is bad.

(We set aside extreme cases; read these as ‘pro tanto’ or ‘prima facie’ if you want.)
Now, both (A) and (M) seem extremely obvious to me. So consider your arguments against (A) and (M)–that there is no clear explanation for why they would be true, that intuition is unreliable, or whatever–and look at the premises of those arguments. Two questions:
(Q1) Are those premises more plausible than (A) and (M)? If not, why not accept (A) and (M) and reject the premises of your anti-ethical arguments?
(Q2) Could someone make a parallel argument against the existence of objective epistemological truths? That is, suppose you think your position is justified–that people ought to accept it. Can you pose your arguments against the existence of objective truths about justification? (That we can’t imagine how a brute fact about justification could come into existence, that intuition is unreliable, etc.) The worry here, of course, is that any anti-normative or anti-oughtness position is self-defeating.
On the ethics part of your comment:
I accept that I haven’t read as much of this as I might. Whenever I do read moral philosophers I quickly get bemused and sit there thinking: “you’re entirely missing the point; isn’t it obvious that the human ethical system was cobbled together by evolution, just like the human respiratory system, the human aesthetic system, or the human immune system? You wouldn’t write all this stuff about those …”.
I’m really unimpressed by any of those as establishing morals as absolute, as opposed to humans acquiring moral opinions.
But we know that intuition is unreliable! Human intuition is another system cobbled together to do a job, and it works well enough at anything related to survival and procreation over our evolutionary past (because that is the only thing that can have programmed intuition), but it works badly at anything unrelated to survival. For example, our intuition works very badly in the realms of special relativity, general relativity, and quantum mechanics, for the simple reason that our evolutionary past has only ever involved (respectively) speeds slow compared to c, gravitational fields that are weak, and scales of action much larger than Planck’s constant.
We also know that intuition can be systematically biased and inaccurate in ways that help survival and procreation. For example, we are over-prone to seeing patterns where there are none, because the penalty for discerning a pattern where there is none is usually far less than the penalty for missing a pattern that is real (e.g. a lion camouflaged in grass).
I have suggested that our intuition is warped towards regarding morals as absolute because that makes our moral system more effective at its job. The fact that that is a possibility means that we can’t rely on intuition here.
This is back to my previous reply: I don’t accept that the claim that we know about empirical reality is circular. And I’d regard intuition as highly suspect unless supported by empirical evidence. And it is that empirical evidence, supporting some types of belief, that sets such beliefs apart from claims founded only on intuition.
After all, intuition would point towards classical physics, and in many ways our intuition has been found to be wrong and superseded by modern physics, which is often highly counter-intuitive. But modern physics matches the data better, demonstrating that empirical evidence trumps intuition.
Wow! Only a philosopher would postulate that morality is a brute fact that needs no explanation and has none. Isn’t that the ultimate nihilism? From my scientific stance, yes, I think facts and “truths” need explanations. At the least, a good explanation for something is vastly better than a non-explanation. Good explanations make predictions that can be verified; good explanations mesh with other explanations for other things, and those other explanations can also be tested and verified. If this is successful then it provides very good evidence for the explanations (that’s the scientific method, which, empirically, has been found to work well for the empirical universe).
In the case of morals we can come up with a very good explanation: our moral system has been programmed into us by evolution to enable and facilitate the highly social and cooperative way of life and ecological niche that humans exploit. This makes predictions. It predicts that other highly social mammals will also have moral systems policing how they interact, and we find that to be true. It predicts that, at least to some degree, our moral attitudes will be a product of our local society, worked out together within that society (rather than being absolutes), and that also is true. It predicts that our moral opinions will be under the control of genes (since that’s the only way that evolution could have programmed them), and that also is true (e.g. the existence of psychopaths, the roles of monoamine oxidase A and of oxytocin). It predicts a lot about what our moral sentiments will be, and to a very large extent we find that our morals are indeed what we would expect if they are there to produce social cooperation: the roles of loyalty, guilt, shame, cheat-detection, shunning, contrition, etc, are exactly what we’d expect of a moral system designed for social interaction. It also predicts that our evolutionary heritage will be present in the moral system, since evolution usually co-opts what it already has. And it does seem to be the case that our moral system is a variation on our aesthetic system, in the sense that disgust at a morally bad action is similar to disgust at rotten food; the latter will be far older in evolutionary terms, and so will have been available for co-option. And, as above, it predicts that we might be programmed to think of our moral sentiments as absolutes, to make them more powerful.
And there’s evidence for this, in the finding that, in religious believers, brain processes relating to moral “I think …” ideas are very similar to those for moral “God wants …” ideas, as if humans were taking their own opinion and turning it into an absolute. And that’s just a few examples. There’s just so much about our moral system that makes sense and is well explained by the product-of-evolution hypothesis.
The “brute facts” hypothesis predicts absolutely none of this; it doesn’t even predict that moral “truths” would be about humans, or even about sentient animals (as opposed to being about sand grains or air molecules). To project our parochial human concerns onto the universe as a whole is anthropocentric hubris.
This hypothesis could only be entertained by someone who regards explaining morals as a job for philosophers rather than scientists! Yet, moral systems are real properties of real animals in the real world, and thus fair game for scientists. As Dobzhansky said, nothing in biology makes sense except in the light of evolution; moral systems are blatantly biological and a product of evolution, and moral philosophers who don’t approach the subject from that stance are entirely missing the point and fundamentally misunderstanding the whole topic.
Does the request for explanations lead to infinite regress? OK, yes it might. But understanding a long chain of explanations that then leads to distant unknowns or infinite regress is still vastly better than invoking an unexplained brute fact at the first link in the chain. And by “vastly better” I mean the above point about predictive capacity and meshing with a wide and proven understanding of empirical reality.
Yes; evolution has programmed us to think like that. We’re also programmed with a language instinct, and with curiosity, and notions of what tastes good and with disgust and lots of other things.
Sure, that’s the intention of such claims.
Well, no, they don’t have a conception of such absolute morals; they just have to declare them unexplained “brute facts” (which is hardly sufficient to be called a “conception” of them). No philosopher has put forward a conception of absolute morals that holds water, so they don’t really have a conception or any understanding of this; they merely have a gut feeling (aka intuition) about it, and they have that because evolution has programmed them to have it.
Let’s ask about this reality*. Is it empirically observable? No, by definition. Does it have any effect on the empirically observable? No, it can’t, because if it did then its effects would be empirically observable, and thus it itself would be empirically observable (we observe very little directly, most of the observable universe we observe indirectly from its effect on other observable things). So, this reality* can never have any effect on us (at least, not an effect that we could ever notice, which really means no effect). So in what way could it be said to “exist”? It would be some sort of parallel existence which is causally disconnected from our empirical universe, but which is entirely irrelevant to our lives and what we observe.
You’re right that science could not tell us about this reality*, but it could tell us about reality, and I’m satisfied with that. To claim that reality* is in some way “real” seems to me a misuse of the word “real”. Indeed, I could define “real” and “exists” to mean “empirically observable (even if indirectly)” and leave meta-realities to the meta-physicians and the philosophers.
What do we mean by “trust our senses” here? Suppose that what our senses told us about was some Matrix-style simulation S that was so complete that empiricism and our senses always told us about S and never told us about anything except S. In that case, by my above definition, S is “real”. Could we trust our senses to tell us about S and hence about reality? Yes.
Also, again by definition, the reality* beyond S, call it S* could never have any effect on us that we could ever discern. In what way would S* then be “real” (other than the misleading label!)? I’m happy to accept that science is limited to “reality” (as I’ve defined it), and regard talk of parallel realities with a shrug.
No we don’t, to the first question, and Yes I am to the second. I do accept that we can’t rule out the simulation idea, or similar “realities” beyond the empirical. This isn’t a problem for my scientism stance unless other types of enquiry could tell us that we were in a simulation.
Your intuition here is the product of eons of brute empiricism about our universe. What your intuition is really telling you is thus, “that’s not what our universe is like”. So I don’t regard intuition as reliable about “all possible universes”, and thus I suggest that you can’t “know” that your claim about all possible universes is correct.
My argument is the fact that your obviousness-declaring device is very much a product of our universe. Pebbles don’t go around declaring things to be ‘obvious’ but human brains do. Those human brains have evolved to be useful in our universe, to make judgements in our universe. The only things that could have created and programmed our brains are empirical facts about our universe (aka natural selection in our evolutionary history). Thus, we can suppose that they will be fairly reliable about our empirical universe (at least, about anything in it that has had consequences for our survival and procreation in our evolutionary history), but we have no basis whatsoever for asserting that they have any reliability at all in any other possible universe.
(1) On reality*: Sorry if I wasn’t clear. Reality* is that which exists, so it includes (a) everything that exists and is empirically observable and (b) everything that exists and is empirically unobservable (if anything). The empirically unobservable parts would affect us if we have ways of learning about things a priori, which is the subject of the rest of our discussion. They wouldn’t affect the five senses, but we’re still trying to figure out whether we have non-empirical ways of learning.
(2) On skepticism: Suppose it turned out that there was no reason at all to trust our senses. (In the same way that, for example, there is no reason at all to trust astrology or Ouija boards.) Wouldn’t that mean science could never give us knowledge? (Why should we trust the five senses and not Ouija boards? Do you have a non-circular argument? Proponents of Ouija boards will certainly be able to give circular arguments for trusting Ouija boards.)
(3) On modality: It seems to me (correct me if I’m wrong) that you’re conceding that if modal facts exist (not just about what the laws of physics permit or forbid, but instead what’s “possible” in the most basic sense of possible), science cannot know about them. So for all science can tell us, it’s possible that tomorrow, two will start equaling three. (If science can tell us that that’s impossible, how? When did scientists observe an impossible two that was equal to three?)
(4) On extreme obviousness: To say that we should trust extreme obviousness prima facie means it provides some defeasible evidence. Now, you seem to think (right?) that we shouldn’t trust extreme obviousness, even prima facie. So which scientific observation was an observation of the wrongness of trusting obviousness? (More on this in point #7 below.)
(5) On the metaphysical and conceptual status of goodness and wrongness: I’m certain that most people in the world have some conception of goodness and wrongness. And for most people in the world, if you asked them whether torturing and murdering a toddler was wrong, they would say yes. (The same with asking them whether happiness is good.) Now, the claim that torturing and murdering a toddler is wrong seems far more obvious to me than the claim that the Theory of Evolution is true, even though I accept evolution. And the claim that happiness is good (not just that people believe it’s good) seems far more obvious to me than the premises of any anti-morality arguments. So what are you recommending–that I accept the implausible hypotheses instead of the plausible hypotheses? Is that something rational people do? (In the end, won’t you run up against a belief that’s not justified by any further belief? What do you do then?)
(6) On Occam’s Razor: What is your scientific, empirical argument that Occam’s Razor is true? In any case, here’s the datum: Millions (perhaps hundreds of millions) of people intuit that torturing and murdering a toddler is just wrong–not just that they disapprove of it, but that people shouldn’t do that, even if they like to. Now, you have one explanation: These millions of people are massively deceived. I have a different explanation: These people are correctly perceiving an ethical truth. My position might be less ontologically parsimonious, but there are no good arguments for ontological parsimony. (There are especially no arguments for ontological parsimony that have more plausible premises than ‘torturing and murdering a toddler is just wrong, no matter whether someone wants to’.) And it’s not at all clear how my position is less propositionally parsimonious than yours. It might even be more, since you need to posit a deceptive mechanism that I don’t.
(7) On justification: Do you believe that some beliefs are right to have and some are wrong to have, given the evidence? Do you believe that some beliefs are more reasonable than others? Do you believe that it’s irrational not to accept the Theory of Evolution? Do you believe that we should accept empiricism?
Sorry for the delay in answering, owing to a hectic couple of days:
Doesn’t that possibility lead to a contradiction? Suppose we have ways of learning about some meta-reality that is “empirically unobservable”. That knowledge must be manifest in our brains somehow, in the (physical and material) pattern of our neural network. Since the physical patterns in our brains are, in principle, empirically observable, that must mean that this meta-reality has led to an empirically discernible change in our universe, and therefore this meta-reality is empirically observable (indirectly so, but then almost everything we observe is only observed indirectly). Therefore we have a contradiction with the premise.
So, I can accept the possibility of causally-disconnected realities that have no effect on our observable reality, but then, ex hypothesi, there would not be any causal-links that would enable us (physical beings) to be affected by them and thus learn about them. Even if you postulate a chain of supernatural links that end up affecting natural/physical entities you would still be making the meta-reality empirically observable by its effects on the empirical end of the chain.
Trust our senses to do what? Trust them to accurately reflect the empirical-world-as-experienced-by-our-senses? Yes, more or less tautologically, we can. Can we trust them to tell us about some meta-reality that has no possible effect on our senses? No, but is that a problem?
Again the question is trust them to do what? If the answer is trust them to tell us about empirical reality (as revealed by our senses) then the former is tautologically true and the latter can be tested empirically. If the question is whether we can trust either to tell us about a hypothetical spirit world then (as far as I’m aware) the answer is no for both.
Yes, I agree, in the sense that science is always provisional and open to correction, and so has no basis for asserting absolute impossibilities in “other conceivable worlds” or even in our future world. As before, my assertion isn’t that science can tell us everything, it is that no “other way of knowing” can do better.
Well, obviousness is one type of evidence, but I don’t see “obviousness” as independent of empiricism; our intuition about what is “obvious” is just a distillation of our experience of the universe (and that of our forebears in our evolutionary past).
If empirical data shows that our intuition about something “obvious” is wrong then we have shown the wrongness of completely trusting obviousness. There are many examples of this in physics, for example in relativity and quantum mechanics.
For example, when you teach relativity to undergraduates you usually start by using “obviousness” to produce the Galilean Transforms. Then you show that these are incompatible with empirical data, and that the data instead favour the Lorentz Transforms. Of course the Lorentz Transforms reduce to the Galilean Transforms for speeds much less than the speed of light. So you then explain to the undergraduates that the reason their intuition about “obviousness” breaks down is because throughout our evolutionary past the only speeds that have been relevant (to survival/procreation) have been very small compared to the speed of light.
Yes, agreed (they’ve been programmed to have concepts of good and bad).
I agree, your intuition about that wrongness is indeed obvious. What is equally “obvious” to me is that chocolate tastes good whereas faeces don’t. However, if I were to ask myself whether a dung beetle would have the same aesthetic judgement then I’d admit that there is no reason why it should, and a very good reason why it should have a very different opinion.
So the obviousness of MY moral judgement is indeed obvious. However, what is not in any way obvious is that my personal moral judgement (or even that of all humans) in any way reflects an absolute reality, rather than being a particular outcome of our evolutionary past and ecological niche.
I’m pointing out that intuition and obviousness are unreliable, and that you should doubt them where there is good reason to doubt them (I have given one such reason: that evolution would have programmed you to think of your morals as absolute because that strengthens them), and that you should look for corroboration, particularly empirical corroboration.
It’s a probabilistic thing. The number of things that are true in our universe (e.g. 1+1=2) is vastly exceeded by the number of possible things that aren’t true (1+1=3, 1+1=4, 1+1=5 … to infinity). Therefore the chances of something uncorroborated by evidence being true are vanishingly small, and therefore we should only accept things for which there is evidence.
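The counting argument above can be put in toy numerical form (my own illustrative sketch, not from the discussion: it assumes the rival claims are mutually exclusive and, absent evidence, equally credible):

```python
# Toy sketch of the counting argument: treat an uncorroborated claim as
# one of N mutually exclusive candidates (e.g. "1+1=k" for k = 2, 3, 4, ...).
# With no evidence to favour any candidate, a uniform prior gives each 1/N.
def uniform_prior(num_candidates: int) -> float:
    """Prior probability of any single candidate under a uniform prior."""
    return 1.0 / num_candidates

# As the space of rival (false) candidates grows, the prior on any one
# uncorroborated claim shrinks toward zero.
priors = [uniform_prior(n) for n in (10, 1_000, 1_000_000)]
print(priors)  # [0.1, 0.001, 1e-06]
```

This is only a finite stand-in for the “to infinity” point in the text: in the limit of unboundedly many rivals, the uniform prior on any one of them goes to zero.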
My biggest stumbling block with that claim is still that I can’t even conceive of what it means in the absolute sense (no, really, I can’t!). What do we mean by the “wrong” in that sentence? Well we mean it is … err … “wrong”. And by that we mean it is “immoral”, but that word is just a synonym for “wrong” so doesn’t help. We could say one “shouldn’t do it”, but what does “shouldn’t” then mean other than to do it would be … err … “wrong”?
The only sense I can make of “wrong” is that humans have an aesthetic revulsion to and abhorrence of it; it’s a human opinion, something humans would not like to happen. That is a proper grounding of moral judgements. But it roots morals in human opinion and human evolution, whereas to be an “absolute” wrong it needs to be “wrong” regardless of what humans think of it. And, try as I might, I have no conception of what that even means.
(Clarification, I can conceive of independent moral systems having evolved in other social mammals, for the same reason one has evolved in our species, but not of any “absolute” moral system. Indeed this produces even more evidence against any “absolute” morals. For example it is clear that the Bonobo Chimp moral system includes behaviour that would be regarded as highly immoral in our system, for example routine copulation between adults and juveniles as a form of social bonding.)
If by “right” we mean “in keeping with the evidence” then yes.
Yes, or rather, it could be either irrational or ignorant.
What do you mean by “should” here? That’s a rather normative word! I don’t argue that there is anything intrinsically virtuous in so doing. I do argue that empiricism is the best (only?) method of answering questions about our universe (and by “best” I mean most likely to give answers that match empirical reality).
Now the ethics bit:
“Obvious” to your evolutionarily programmed intuition.
First, here’s how I would respond to those two: (A) Yes, humans do regard happiness as “good”, something they want, seek and enjoy. I’d suggest that the fact that humans want, seek and enjoy “happiness” is entailed in the definition of happiness. So far we have a tautology, so long as “good” is interpreted as “something humans want, seek and enjoy”.
If you’re then suggesting that “good” has some metaphysical status beyond human opinion and desires, then, first, that’s not at all obvious to me, so simply declaring it obvious is insufficient, and, second, I’ve never encountered a decent argument for this claim.
(M) Again, if we interpret “wrong” in terms of human opinion, then yes, that is a true statement about general human opinion (a truth that can be established empirically by asking humans). It’s also obvious why evolution would have programmed us with a very protective and nurturing attitude towards children.
If you’re interpreting “wrong” in some metaphysical sense (totally separate from human feelings) then, again, that’s not obvious to me, and actually I don’t even have a conception of what the claim would even mean.
Yes, they are, a lot more plausible! In their absolute, metaphysical interpretation I don’t even have a conception of what A and M mean. You can suggest that “It is a brute fact that it is immoral (in an absolute sense) for X to do Y”, but I don’t have a conception of what “immoral” means in this claim. At least, the only explanation of “immoral” would then be referring to that very same sentence, which gives you a loop that explains nothing.
The premises of what I’m arguing (that human morals are bound up with humanity and our attitudes and feelings, and that human intuition is intimately related to our evolutionary past) are things we know to be true. It is vastly more parsimonious to base an account of morals on those things than to invoke a whole new conception which requires wholly new properties of the universe (“brute fact morals”) and which seems entirely conceptually empty (since you have only that self-referential loop). You’ve thus massively violated Occam’s razor for no explanatory gain at all.
My short answer is that, so long as I’m sticking to reality and not reality*, then ideas of “justification” and valid arguments and so on are also empirically verified. If we find that our claims and predictions are empirically verified, then that validates all the ingredients that went into them, including notions of logic and of what is and isn’t justified. In other words, the whole shebang is empirically validated, which is fine so long as I’m only claiming about “empirical reality”.
It looks as if we’ve reached some kind of thread depth limit above, so I’ll try to pick up some of the discussion here.
(1) Our real question is whether we can learn anything, such that that way of learning, itself, is non-empirical. This unobservable portion of reality might be something we learn about a priori, even though the effects of that learning are empirically detectable.
(2) On trusting the senses: I’m suggesting that unless we have good reason to believe that the five senses are generally not deceiving us, we have no good reason to trust them. Take the simple Cartesian case: What is your argument that we’re not the victims of a powerful deceiver who makes us think that tables and chairs exist, when they really don’t? If you don’t have such an argument, then should you rate the possibility of deception as 50-50?
(3) On modality: What about a simpler case? I think that necessarily, 1=1, that no matter which world turns out to be actual, one will equal one. I guess anyone who thinks we do know that should be a rationalist, and anyone who thinks we don’t know that should consider empiricism.
(4) On epistemic wrongness: Suppose you observe that Ouija boards are extremely unreliable. Does it follow that it’s wrong to trust Ouija boards? I’m asking what the empirical observation of the wrongness itself was. What color is it? (Is wrongness-information transmitted by photons? Electrons? Air molecules? If not those, then what?)
In your remarks below (re #7), you suggest that there’s no normativity in the world, even about epistemology. So there’s nothing overall wrong with rejecting the Theory of Evolution in favor of Creationism. It’s only imprudent to do so if you want to have true beliefs. Similarly (right?), there’s nothing wrong with legally forcing the teaching of Creationism instead of the Theory of Evolution in schools, since there’s nothing wrong with teaching people to hold beliefs that don’t have good evidence. (Maybe people want to have true beliefs. But there’s nothing wrong with forcing them to have false beliefs, even if they want true ones.) Right?
(5) On morality and evolution: I find myself with the intuition that torturing toddlers is objectively wrong. You argue that evolution predicts that I would have this intuition, even if the intuition is inaccurate. But I’m suggesting that ‘torturing toddlers is objectively wrong’ seems far more plausible than the Theory of Evolution, even though the Theory of Evolution seems very plausible to me. If you’re presented with two incompatible beliefs, when would you ever choose the overall less plausible one? (I don’t agree, by the way, that the Theory of Evolution and ethical objective realism are incompatible (after all, if morality is about prosociality, then there is a survival advantage to having prosocial moral intuitions), but we can set that aside for now.)
(6) On Occam’s Razor: Certainly there are continuum many false beliefs about some fact. (The belief that the distance from point A to point B is irrational number n1, is irrational number n2, is irrational number n3, etc.) But this proves far too much. It shows that the a priori probability of any proposition being true is infinitesimal, and could never be outweighed by any evidence short of certainty, which you admit science never gives us. Indeed, suppose you think you observe the mass of a particle to be 1 nanogram. What’s the probability that that’s correct? Isn’t it infinitesimally small?
As for whether objective ethical facts exist, that seems only 50-50 a priori, right? Either they do or they don’t. So I don’t yet see how to make the probabilistic argument here. And as I mentioned in my earlier comment, I think we do have an explanatory gain: we can explain why people would intuit objective ethical facts: because they’re accurately perceiving ethical facts. You might think your explanation is better: that they evolved to do so. But Occam’s Razor doesn’t say to prefer the explanation with more evidence on its side; it just says not to posit entities or propositions that don’t give you any explanatory gain, right? (You might say that ‘correctly perceiving ethical facts’ is unnecessary, given the evolutionary explanation. But then I can say right back that the evolutionary explanation is unnecessary, given ‘correctly perceiving ethical facts.’)
Really what we need to do is weigh the evolutionary explanation against the objectivistic, intuitionistic (but ‘brute fact’) explanation, and assess which is more likely overall to be true. And I don’t know of any propositional (non-ontological) version of Occam’s Razor that could settle that question, since both sides claim to make explanatory gains.
This is similar to the above argument about a Matrix-style simulation produced by a possible meta-reality. If the deception is so complete and all-encompassing that the deception essentially is our empirical world, with all our observations being about that deception, and with this deception being so coherent that all our predictions about it come true, then that deception would be our “reality”, as I’ve defined it.
You probably mean, however, a partial deception, in which most of the things in our empirical universe are “real” and the rest “fake”. I’d approach this the same way: if the “fake” items are such good fakes that in every particular they act as if real, including obeying all the laws of physics, meshing seamlessly with the “real” stuff, and complying with predictions we make about them, then these “fakes” are indistinguishable from being real and thus essentially are “real” as I’ve defined reality. To be “fake” there must be some (empirical) way of telling that they are fake, and if that is the case then empirical enquiry is the right tool to investigate whether they are fake.
We can actually cope quite well with our senses being deceived some of the time, and indeed a lot of psychology investigates our foibles and the limitations of our senses. A favourite neat example is the false colour illusion which deceives me every time I look at it, and even continues to fool my senses even though I know I am being fooled. Thus we can detect imperfect fakery (perfect fakery again being equivalent to `real’).
I can conceive (well, sort of conceive) of a chaotic world in which “1” is unstable and continually turns itself into 2 or 6 or 3.47 and then into any other number. In such a world “1=1” would not really mean anything. Again, I’d suggest that the extent to which we are confident that “1=1” is just an empirical confidence about our universe.
Yes, they are wrong if you want a reliable guide to reality (note the “if”).
The “wrongness” is the deduction of the form “if you are trying to obtain X then Y will not give it to you”, shorthanded to “it is wrong to use Y”. It is an interpretation, one amply supported by empirical data. How is that information transmitted? Well it could be by many means, since different material substrates can carry information.
I’m suggesting that there is no absolute normativity, but humans — entities with opinions, feelings and desires — can establish local normativity, by having opinions about what they want to happen.
Not in any absolute sense, no. But humans do have opinions about rightness and wrongness (that’s what those words mean, they’re about human desires), and hence humans could and might declare that “wrong”.
Exactly. You might want false beliefs.
As above, there’s no absolute wrongness or normativity as fundamental properties of the universe, but we humans sure are an opinionated lot, and most of us would opine on that topic. Similarly there’s nothing wrong with murder except human opinion and feeling.
On what basis are you asserting that your intuition has any validity at all? How do you jump from “this is my intuition” to “this is plausible”? I suggest that the only confidence you can have in your intuition comes from empiricism, namely the number of ways in which you have found empirically that your intuition is a decent guide.
Doesn’t Bayes’s theorem have a term on the bottom which essentially normalises over possibilities, cancelling out the infinitesimal, and hence giving a sensible answer?
I’ve never liked the idea that zero information equates to a 50:50 starting point. It’s more 50+/-50 : 50+/-50, or in other words anywhere from 0 to 100.
Yes, agreed, though the first of those has lots of evidence on its side, lots of things that it explains better than the second. In contrast the second has no evidence supporting it, explains no fact better than the first, and also suffers the huge defect (at least to me!) that I don’t even know what it means.
I think we’re making a lot of progress here.
(1) On skepticism: Okay, now what if the deception is very minor–that the planet Neptune exists, but in fact it doesn’t really exist? Is there a fully empirical argument that we can trust our observations of Neptune to be accurate? (You could appeal to observing that other cases of observation are accurate, but that’s circular.)
(2) On numbers: Okay, once again, those who think that we know it’s impossible for 1 not to equal 1 should be rationalists, and those who think we don’t know that should consider being empiricists. (Again, I’m not sure what the argument for empiricism is supposed to be that has premises more plausible than ‘we know that necessarily, 1=1.’)
(3) On wrongness: There’s nothing intrinsically valuable about having true beliefs or knowledge, according to you, then, right? Everyone should admit that their preferences are irrational (or at least nonrational), right, since their preferences don’t track any such thing as value in the world? (It’s just as ultimately rational to prefer the deaths of all of one’s family to missing a football game, since life is no more or less ultimately valuable than football.) (Once again, I wonder what the positive argument for empiricism is that’s more plausible than, e.g., ‘suffering is overall less valuable than pleasure.’)
(4) On prior probability: Here’s Bayes’s theorem for two contradictory hypotheses:
Pr(H|E) = Pr(E|H)Pr(H) / [Pr(E|H)Pr(H) + Pr(E|~H)Pr(~H)]
If P(H) is infinitesimal, then the quotient is infinitesimal, since Pr(~H) will be infinitesimally close to 1. (The general form also comes out infinitesimal because P(H) is in the numerator but not the denominator.)
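The two-hypothesis formula above can be checked numerically (a sketch of my own, with very small floats standing in for “infinitesimal”, since true infinitesimals aren’t representable; the function name and the example values are illustrative assumptions):

```python
def posterior(prior: float, lik_h: float, lik_not_h: float) -> float:
    """Pr(H|E) for two contradictory hypotheses, following the formula:
    Pr(H|E) = Pr(E|H)Pr(H) / [Pr(E|H)Pr(H) + Pr(E|~H)Pr(~H)]."""
    num = lik_h * prior
    return num / (num + lik_not_h * (1.0 - prior))

# With a fixed, modest likelihood ratio, a tiny prior yields a tiny
# posterior, matching the worry in the text:
print(posterior(1e-12, 0.9, 0.1))    # ~9e-12

# But if the likelihood ratio Pr(E|H)/Pr(E|~H) is itself enormous
# (comparable to 1/prior), the normalising denominator can pull the
# posterior back up to a substantial value:
print(posterior(1e-12, 1.0, 1e-12))  # ~0.5
```

So, numerically at least, whether a vanishing prior dooms the posterior depends on how strong the evidence term is, which is the point at issue between the two comments.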
(5) On competing plausibilities: Here’s an example of the problem. Compare three claims:
To me, (A) is much more plausible than (B) and much more plausible than (C). But more importantly, the conjunction of (B) and (C) will have a lower probability than either of them, and you need the conjunction in order for (B) and (C) to militate against (A). In particular, for me, (B) is very plausible but (C) is not very plausible.
Now, certainly, this is a caricature argument. But I’m confident that there will be a point in the evolutionary argument against ethical realism that relies on a premise less plausible than (A). (I’d be interested to see the argument for (C) fully expanded to reveal all of its premises. As long as even one of them is less plausible than (A), then it’s more rational to accept (A).)
(6) One more thing: I take your empiricism to be this thesis:
What exactly is the positive argument for that supposed to be? Is it possible to summarize it in numbered premises and conclusions form? E.g.,
1. Observation seems to report that observation is reliable. [Or something similar?]
n. Therefore, all of our true beliefs about the world ultimately can only come from the senses, including via science.
If the fakery of Neptune were perfect (= no empirical observation could reveal the fakery) then by my definition it would be “real”. If the fakery were imperfect then, ipso facto, there would be empirical observations that could reveal the fakery.
No, there isn’t. It could be a fake that we’ve not yet uncovered. In the same way all science is provisional in that new empirical evidence could overturn existing “knowledge”. In that sense the claim of scientism isn’t that it is infallible, it is that it is the best that humans can do.
Yes, agreed. Indeed there might be “value” in having false beliefs (believing, falsely, that one is very handsome might lead to self-confidence that then leads to desired outcomes). The only “values” here are ones that humans hold, they are not absolute or intrinsic.
Yes, agreed (though “non-rational” rather than “irrational”). There may be no rational reason for preferring strawberry ice cream to chocolate ice cream, yet that might be someone’s preference. There may well be reasons why people have particular preferences/values, but it is still the case that a value-judgement is epistemologically distinct from a rational argument.
Investigating this further I discover that the use of infinitesimals in Bayes’s theorem is problematic. For example this seems to be a neat proof that combining infinitesimals and Bayes’s theorem can lead to absurdity. So, I’m unsure how to respond on this one as yet.
I guess my problem with just assigning plausibilities on intuition like this is two-fold:
First, what is the argument that intuition has any validity? Something might be highly intuitively obvious, but how do we leap from there to it being probably true? The only argument I can see is that our intuition has been molded by empirical reality (evolution), and hence is reliable (at least concerning items of evolutionary relevance). But, for you to use this, you can’t assign a higher probability to P(intuition-is-reliable) than P(evolution), which reverses your argument.
The other point is that these intuitive assessments are only a starting point. We don’t hold to evolution owing to any prior assessment of its intuitive plausibility; we hold to it owing to the copious empirical evidence.
My difficulty here is that I personally don’t find this premise at all plausible, indeed I can’t even conceive of what it means, and thus can’t assign it any significant plausibility. Given what we know about the world I’d evaluate P(evolution) = 1 (near enough). And given how well evolution would explain ethics as an evolutionary product coupled with the illusion that ethics are absolute, I’d evaluate P(evolutionary-ethics|evolution) as near one and hence I’d evaluate P(absolute-ethics|evolution) as near zero.
Hence, feeding in P(evolution) = 1, I get P(evolutionary-ethics) = 1 and P(absolute-ethics) = 0 (in round numbers).
C here is P(evolutionary-ethics|evolution) = 1 (or at least high), and P(absolute-ethics|evolution) = 0. Let’s compare our moral system to, say, our immune system or our kidneys. These could be either cobbled-together products of evolution or they could be fundamental ontological entities.
Given how well evolution explains the existence of an immune system and things like kidneys, indeed predicts that any human-like being would need these things (and given how bonkers the idea that a kidney is a fundamental ontological entity is), I’d assign P(immune-system-explained-by-evolution|evolution) = 1 and P(kidneys-explained-by-evolution|evolution) = 1.
To me the likelihood P(ethics-explained-by-evolution|evolution) is just as obviously near 1, and I don’t see anything implausible about it. Certainly I’d assess that as far more plausible than P(A).
OK, how about:
Premise (1): By “true beliefs about the world” we mean “faithful representations of what our senses detect”.
Premise (2): We can test whether something is a “faithful representation of what our senses detect” by comparison with what our senses detect, and thus we can tell whether claims are “true beliefs about the world”.
Premise (3): It might be that some route other than sense data (perhaps divination) leads to accurate predictions (in the sense of the comparison in 2), and hence to “true beliefs about the world”.
Hypothesis (4): There are no other routes as in (3). (= Scientism).
Premise (5): 4 has not been falsified (though it could be by means of 2 and 3).
Conclusion: (4) holds — provisionally. It could be overturned (if priests using divination start predicting solar eclipses better than scientists using empirical models, then it would be).
Thanks again for taking the time to reply. Again, I think we’re making progress.
(1) On observation: We’re discussing the stuff about trusting observation below. But I also worry that your position entails that perfect “deceptions” are not deceptions. (If someone flawlessly, undetectably deceived you about the existence of fairies, then it would follow (right?) that fairies exist, since they would be “real.”)
(2) On intuition: What we’re really getting at, here, is actually kind of similar to point (1): what we might call the foundational problem in epistemology. The worry is that when you trace back any argument for any position, you end up eventually at making an intuitive judgment about the plausibility of something. So while there is copious empirical evidence for evolution, when we trace back arguments for evolution far enough, don’t we end up with some kind of intuitive judgment that (e.g.) empirical evidence is accurate? I guess we’re addressing that, as I mentioned, in point (1). We’re also addressing it below, since there, we talk about empiricism itself.
(Many philosophers believe that if we don’t accord intuition at least prima facie evidence, we’ll end up in global skepticism: skepticism about everything.)
(3) On evolutionary explanations for ethics: Again, I wonder what the argument would really look like, in fully numbered premises and conclusions. Here’s a stab:
1. The Theory of Evolution is true (> 95% epistemic probability).
2. If the Theory of Evolution is true, then the best explanation (> 75% epistemic probability) for any particular trait is that we evolved to have it.
3. Therefore, probably (> 75% epistemic probability), the best explanation for any particular trait is that we evolved to have it.
4. If we evolved to have moral intuitions, then those moral intuitions are probably (> 75% epistemic probability) generally (> 50% proportion) inaccurate.
5. Therefore, probably (> 50% epistemic probability), our moral intuitions are generally inaccurate.
6. Therefore, any particular moral intuition is probably (> 50% epistemic probability) inaccurate.
Is that a good summary? If so, I would question (2) and (4). Most people would agree with me that, e.g., molesting children is deeply, objectively morally wrong, and they would think this seems obvious. So our question (provisionally) is whether that claim is overall more plausible or obvious than (2) or than (4). (It only has to be more plausible than one of them in order to refute the argument.) But before I critique (2) and (4), I want to know whether the above is an accurate enough summary of the evolutionary argument against ethical objectivism.
(4) On empiricism: What is the fully-empirical support for your premise (2)?
If the deception were perfect, so that no conceivable empirical information could reveal it, then, yes, I’d call it “real”. What is the difference between someone putting fairies that fulfill that criterion into the world and someone putting “real” fairies into the world?
I think our difference is that you want, at root, to appeal to intuition whereas I, at root, want to appeal to empiricism. Isn’t the foundation of everything that we observe patterns in our world, for example day/night patterns, and derive everything from those? Intuition is also derived from those patterns (either our early empirical experiences or instinct deriving from empirical experiences of our ancestors). I don’t see where else intuition can come from, and thus I wouldn’t regard intuition as primary.
But can’t we doubt all intuition, and then build up reliable edifices based on those day/night patterns and other observations of empirical reality?
I’ll accept your 1, 2 and 3. Though the “best explanation” claims would only be a starting point, and they would be updatable by evidence about any particular trait.
I’d prefer to say: If moral intuitions are evolved then they would likely be accurate in so far as accuracy would be beneficial for survival/procreation; they would likely be INaccurate if that INaccuracy aided survival/procreation; and anything with no implications for survival/procreation would be un-tethered and so there would be no reason to regard it as accurate.
I thus wouldn’t agree with your 5 and 6; for example, many of our moral intuitions about people and their interactions are highly accurate (because of course that is the task our moral intuitions have evolved to deal with).
Again, though, we differ on whether the ultimate appeal should be to intuition and “obviousness” or to empirical evidence. To me the latter trumps intuition — for example much of modern physics is highly counter-intuitive. We also know (and indeed predict) that what happens over evolutionary timescales is highly counter-intuitive (since our intuition is tuned to happenings in our life-spans).
Thus if intuition conflicts with evidence then throw out the intuition. Yes, intuition has a role in our thought processes, but as we iterate with empirical evidence it is the intuition that defers to the evidence.
Isn’t that one sufficiently tautological to not need further support?
On reality: Okay, this is probably a fundamental disagreement between us, then. I believe that perfect deceptions are possible. You don’t, since you think if it were perfect, it wouldn’t be a deception after all. So if there were a perfect “deception” of the existence of Neptune, then Neptune would exist, according to you, right? (Your position is closest to George Berkeley’s idealism, the theory that there is no difference between perception and reality; essentially, that perception is reality, or that there are no mind-independent entities. This may not be a coincidence, as Berkeley himself was an empiricist.)
On patterns: This is another point where I think the empiricist is in trouble. Empiricist David Hume proved long ago that there is no fully empirical non-circular defense of induction. (‘Induction worked in the past; therefore, it’s presently working’ is circular.) If you can appeal to intuition, in contrast, that’s an easy non-circular justification of induction. But if you allow circular arguments, then again, why not crystal balls, Ouija boards, the Bible, etc.?
On evolution: Then it looks as if you want an argument such as this:
1. The Theory of Evolution is true (> 95% epistemic probability).
2. If the Theory of Evolution is true, then the best explanation (> 80% epistemic probability) for any particular trait is that we evolved to have it.
3. Therefore, probably (> 75% epistemic probability), the best explanation for any particular trait is that we evolved to have it.
4. Probably (> 95% epistemic probability), if we evolved to have moral intuitions, then if a moral intuition has no survival benefit, then definitely (100% epistemic probability) that moral intuition is inaccurate.
5. Therefore, probably (> 70% epistemic probability), our moral intuitions with no survival benefit are all (100% proportion) inaccurate.
6. Probably (> 80% epistemic probability), moral intuitions that imply the existence of objective moral reasons never (0% proportion) have survival benefit.
7. Therefore, probably (> 50% epistemic probability), moral intuitions that imply the existence of objective moral reasons are all (100% proportion) inaccurate.
The disputable premises, I think, are (4) and (6). Notice that we needed to bump up or down some probabilities and proportions, since if even one intuition that implies objectivity is accurate, then ethical objectivism is true. That’s why you’d have to believe that all intuitions with no survival benefit are inaccurate, and in turn, that no moral intuition has survival benefit. Again, if you let even one in, you end up with ethical objectivity.
And now we can see, I think, that (4) and (6) are extremely unlikely to be true. We know (4) isn’t true generally of traits; there are plenty of spandrels, etc. And (6) is clearly false, given that prosocial intuitions obviously have survival benefits. As for (2), I think that’s also false, since (for example) evolution predicts (doesn’t it?) that we would find it permissible to torture cattle if it benefited us. After all, cattle are only distantly related to us, genetically. So the best explanation for the intuition that torturing nonhuman animals is wrong isn’t that we evolved to have that trait, is it?
Less formally, we have plenty of traits that have no survival benefit, and accurate moral intuitions would have a survival benefit. And certainly, ‘it is objectively wrong to molest children’ seems much more likely to me to be true than (6), (4), and (2) (and even (1)). So once again, we might wonder: is it possible to expand the defenses of (2), (4), or (6) to make any of them more plausible than ‘it is objectively morally wrong to molest children’? Given the argument so far, why would a reasonable person accept (2), (4), and (6), and hold them to be more likely to be true than ‘it is objectively wrong to molest children’?
On empiricism: Here’s the crux:
Can we test whether a Ouija board’s reports are a faithful representation of what the Ouija board detects by comparison with the Ouija board’s reports? How do we non-circularly argue that our senses faithfully represent the world? (Without accepting idealism, according to which the world just is our sensory representations. But maybe you want to accept that?)
The claim doesn’t seem to me to be a tautology, because it seems to be the claim that it’s possible to figure out whether our senses are accurately representing the world. I agree, but how would you do that non-circularly by observation alone?
(Also, what’s the fully-empirical argument for the claim that tautologies are true?)
Yes. So there could “really” be an empty space where we think Neptune is, and we’re shielded from that by a perfect-deception wrapper. In that situation I’d define the perfect-deception wrapper as “reality” (afterall, it’s what affects us), and would regard the empty space beyond as a “meta reality” about which we can’t know.
Of course the two labels there (reality and meta-reality) are indeed just labels, so someone else might prefer different labels (e.g. “deception” and “reality”). But the claim of scientism still holds provided no other method can do better in discerning the meta-reality.
There is no absolute defense of induction, but there are high-probability defenses of it, and that’s sufficient (scientism makes a strong claim, but the concession that scientific knowledge is inevitably provisional makes it defensible).
An outline defense of induction could go like this: Induction (= the laws of physics being constant in time) has worked so far, but there could come a day when it fails. We have no information on when/if that day will be, but we have good evidence that we’ve had 13.7 billion * 365 days on which induction held. Therefore the chances of induction failing tomorrow are less than 1/(13.7 billion * 365). Ditto the chances of it failing the day after tomorrow. Etc.
Thus induction is overwhelmingly probable in the near future. Of course this argument is useless about what happens a trillion years in the future, but then I’m ok with that (again, provided no other method can do better).
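The arithmetic behind that bound can be sketched numerically (a toy illustration only; the 13.7-billion-year figure is the one used in the comment above, and the comparison with Laplace’s rule of succession is my own addition):

```python
# Toy sketch of the probabilistic defence of induction outlined above.
# Assumption: induction has held for ~13.7 billion years of days, and the
# first failure day (if any) is no more likely to be tomorrow than any
# other day so far.
DAYS_SO_FAR = 13.7e9 * 365  # days on which induction has held (per the text)

# Crude bound from the comment: the chance that tomorrow is the first
# failure day is at most 1 / (number of successful days so far).
p_fail_tomorrow = 1 / DAYS_SO_FAR
print(f"P(induction fails tomorrow) <= {p_fail_tomorrow:.2e}")

# Laplace's rule of succession gives a comparable figure:
# P(failure on the next trial) = 1 / (n + 2) after n successes.
p_laplace = 1 / (DAYS_SO_FAR + 2)
print(f"Laplace rule-of-succession estimate: {p_laplace:.2e}")
```

Both estimates come out around 2 × 10⁻¹³, which is the sense in which the failure of induction tomorrow is “overwhelmingly improbable” on this argument.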
I’m sorta puzzled why you place such store in intuition; is this common among philosophers? Among us physicists intuition is something we regard as highly suspect, something that does an “OK” job in everyday life, but which very much needs re-programming in the light of evidence and reason, when we consider things a bit deeper. Much of being a good physicist is re-programming your intuition to think in the counter-intuitive ways that correspond to how the physical world actually works. (And much about teaching physics is recognising where students’ everyday intuition is leading them to think in wrong ways.)
On this one I’d say the opposite: I’d say it was far more likely that these intuitions (though false) do have survival benefit (which is why we have them).
However, a more general response to your argument is this: If your argument amounts to the claim that morals being absolute is far more intuitive than morals being programmed by evolution, then yes, I agree with you, it is (there is much about evolution that is highly counter-intuitive, which is why there is so much resistance to it in the population).
So we can agree: the idea that morals are absolute is highly intuitive, indeed “intuitively obvious”. But where do we go from there? Unless you have an argument for preferring intuition over evidence and reason, that doesn’t get you far (and most scientists would take evidence/reason over intuition).
I’m a bit lost by that argument. I don’t have to believe that all intuitions with no survival benefit are inaccurate, only that the intuitions about objective morals are inaccurate (regardless of any survival benefit). And, as above, I’d opine that many inaccurate intuitions and particularly many inaccurate moral intuitions do have survival benefit.
The distinction between something being objective, as opposed to an evolved opinion, is different from the distinction between a trait being adaptive versus being a spandrel. I’ll readily accept many evolved spandrels, without them being objectively “true”.
It’s not that simple. Cattle are sentient mammals very like humans in many ways. If sympathy/empathy and non-torturing of humans is adaptive (very likely true), then this will likely produce similar attitudes to furry mammals in general, especially baby ones (many of which have the same appealing features as baby humans). Evolutionary products are very much cobbled-together in such ways, and evolution would only “bother” to avoid this wider effect if torturing non-human animals were highly adaptive. If it weren’t (and most likely it isn’t) then applying human morals to animals (at least to some extent) could well be a spandrel.
Why, in general, would accurate moral intuitions have survival benefit? Correctly interpreting and predicting other humans’ morals would clearly have survival benefit, but wider than that it gets less clear. It isn’t clear to me why thinking, incorrectly, that morals were absolute would be maladaptive.
If by ‘plausible’ you’re just appealing to intuition then, again, I don’t place much weight on that.
By defining ‘the world’ as what our senses sense.
It’s not quite saying that the world is our sensory representations, it’s saying that the world is what is accessed by our sensory representations.
If by ‘the world’ you want to refer to some meta-reality on which you slap the arbitrary label ‘real’, then no, I can’t figure out whether our senses accurately represent it. However, I’d instead slap that ‘real’ label on whatever it is that is producing the coherent and enveloping set of experiences that we call our sense data. If that is a simulation produced by a meta-reality then, yes, I’d be unable to tell.
The empirical justification for logic (including that tautologies are true) is that it works, in the sense that using it leads to predictions about the real world (real as defined just above) that are then verified as true (true in the sense of corresponding to the real world).
Sorry it took me a few days to reply; I’ve been at a conference. In any case, here goes.
A. On idealism:
I guess what you define as ‘meta-reality,’ everyone else defines as ‘reality.’ In any case, it looks as if you think we have to be skeptics about ‘meta-reality.’ In contrast, rationalists think that we can have non-circular arguments that we can know about meta-reality after all. Thus at least it seems (right?) that if your empiricism is true, we have to be skeptics about meta-reality, and if rationalism (here, essentially intuitionism) is true, we don’t have to be skeptics about meta-reality. In other words, if empiricism is true, then we can’t really know whether Neptune exists; we can only know whether we perceive something that seems to be Neptune. In contrast, if rationalism is true, we can know whether Neptune exists.
B. On induction:
I worry that your defense is still circular. You’re appealing to past observations of laws of nature being constant in order to justify the claim that they will remain constant. But there’s the underlying question, Why is the past any guide to the future? If all you have is ‘it’s been that way in the past,’ that’s circular. And circular arguments (right?) provide zero justification. They don’t merely fail to prove their conclusions.
C. On intuition:
Philosophers have different defenses of intuition. One is that our intuitions about metaphysically modal truths (necessity, impossibility, possibility) seem to be extremely accurate. Another is that denying them entails global skepticism; global skepticism is self-defeating; so global skepticism is false; so intuitions confer at least prima facie justification. But we can agree that intuitions about contingent physical facts tend to be suspect.
D. On tautologies:
Okay, but this support will still run up against the Problem of Induction, above: the empiricist has zero reason to believe that tautologies will continue to be true tomorrow.
Basically, I’m suggesting that the evolutionary case against ethical realism (= ‘there are objective, irreducibly normative ethical truths’) depends on a few really important lemmas:
Trait-Evolution or ‘TE’: Very probably, if we have some trait, then that trait evolved.
Trait-Usefulness or ‘TU’: Very probably, if a trait evolved, then it’s survivally (to the individual or to the tribe) useful.
Ethics-Uselessness or ‘EU’: The ability to learn objective ethical truths is very probably not survivally useful.
If you accept TE and TU, you get: ‘Probably, if we have some trait, then that trait is survivally useful.’
If you accept that and EU, you get: ‘Probably, we do not have the trait of learning objective ethical truths.’ And that’s the conclusion you want, right?
Most of us, it seems, have many intuitions that imply (if accurate) that there are objective ethical truths. I would say that every day, I hear about some action that strikes me as deeply, unavoidably morally wrong, and think about some state of affairs that strikes me as deeply, unavoidably good or bad.
Suppose I’ve had ten thousand intuitions that seem to suggest objective ethical truths. The prior probability that all of them have been inaccurate is two to the negative ten thousandth power. That’s extremely, extremely low. So whatever case you have against the accuracy of those ethical intuitions, it had better be pretty strong, right? You would have to have very good evidence for TE, TU, and EU. (Hence “very probably” in the lemmas above.)
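The prior in that paragraph can be checked directly (a sketch of the stated calculation only; note it rests on the assumption, made explicit in the comment, that each intuition’s accuracy is an independent coin-flip):

```python
import math
from fractions import Fraction

# The calculation above: if each of 10,000 intuitions independently has
# a 1/2 chance of being inaccurate, the prior probability that ALL of
# them are inaccurate is (1/2)**10000.
n = 10_000
p_all_wrong = Fraction(1, 2) ** n

# Far too small to print as a decimal, so report it in log10 terms.
log10_p = -n * math.log10(2)
print(f"log10 P(all {n} inaccurate) ~ {log10_p:.0f}")  # about -3010

# Caveat built into the model: the product form only holds if the errors
# are independent; a single common cause of error would collapse it.
```

So the number is roughly 10⁻³⁰¹⁰, which is why the independence assumption (challenged later in the thread) carries so much weight.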
Now, deep down, what is the case for TE supposed to be? I guess it’s the tons of empirical evidence for evolution. But trusting that depends on trusting that our observations are accurate guides to reality (or ‘meta-reality’ in your terms). Since there’s no non-circular argument for that (without intuition), that’s at most 50% epistemically likely to be true, right? We have no evidence either way (without intuition), so we should be agnostic, right? There may also be counterexamples, since, as I pointed out, many ethical intuitions (especially altruism toward strangers or animals) seem at first glance to be maladaptive. You might argue that they could still be adaptive in some way, but then the evolutionary explanation starts to look unfalsifiable. (If, for every trait and its reverse, you have an explanation of why it would be adaptive, then the evolutionary explanation is definitely unfalsifiable, and thus only dubiously scientific.)
What is the case for TU supposed to be? We have many counterexamples, as I pointed out: traits that evolved along with the survivally useful traits, but are not themselves survivally useful. Many philosophers have argued that the ability to discover objective ethical truths would be this way; it would derive from a more general ability to discover objective normative truths, or from reason itself, or from the trait of accurate intuition itself, all of which might be more likely to be survivally useful. Still, suppose we were just agnostic about TU; we would again have to assign it 50% probability.
And what is the case for EU supposed to be? We can recognize that in general, commonsense morality is prosocial. Thus the ability to discover objective ethical truths might be very survivally useful. A tribe that discovered that murder was objectively wrong would be more likely to survive and reproduce.
Now, obviously those three lemmas are not rationally undeniable or self-evident. Their support (if empiricism is true) has to be from further empirical observations. What I’m suggesting, overall, is that if we expanded those lemmas to reveal the arguments that are supposed to support them, we would eventually end up at appeals to plausibility, intuition, or self-evidence.
This is part of a general idea called ‘foundationalism’: Most of our beliefs are supported by inferences from other beliefs, which are supported by inferences from other beliefs, and so on. But surely this has to stop somewhere, right? (We don’t have infinitely many beliefs, and circular inferences are unjustified.) If so, then it either stops at justified beliefs or at unjustified beliefs. If it stops at unjustified beliefs, then none of the other beliefs are justified, right? (Here we can read ‘justified’ as you do: objectively likely to be true.) So it has to stop at justified beliefs. In particular, it has to stop at foundational beliefs: beliefs that are justified, but not by any inference from any other belief.
So: How could an empirical belief justify itself? If it can’t, how could it be foundational? If empirical beliefs cannot be foundational, then how can any of our beliefs be justified, if all we have is empiricism?
Been away, so haven’t replied earlier:
My definition of ‘meta-reality’ implies that it could not, even in principle, produce any discernible consequence for any observable. I’d have thought that most scientists would side with me in having a hard time regarding such a thing as ‘real’.
With ‘reality’ defined as I’ve just done, yes.
We can’t know that it is for sure. However, how about this statement of my position: We are in a period of stability that may come to an end at some (unknown) point. If you have a long period of stability followed by an end, then, if you distribute observers at random in that period, most observers will be far from the end and fewer will be near the end. Thus, by the Principle of Mediocrity (or basic frequentist probabilities) we are not likely to be near the end. Therefore it is likely that stability will hold tomorrow.
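The Principle of Mediocrity step can be illustrated with a small simulation (a toy model only; the period length reuses the 13.7-billion-year figure from earlier in the thread, and “near the end” is arbitrarily taken as within one day):

```python
import random

# Toy simulation of the mediocrity argument above: place observers
# uniformly at random within a long stable period, and count how many
# find themselves within one day of that period's end.
random.seed(0)
PERIOD_DAYS = int(13.7e9 * 365)  # length of the stable period
TRIALS = 1_000_000               # number of randomly-placed observers

near_end = sum(
    1 for _ in range(TRIALS)
    if random.randrange(PERIOD_DAYS) >= PERIOD_DAYS - 1
)
# With ~5e12 days and only 1e6 observers, essentially none land near the end.
print(f"Observers within a day of the end: {near_end} / {TRIALS}")
```

A typical observer is therefore billions of years from the end of stability, which is the sense in which “we are not likely to be near the end.”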
How do you know that? Or what do you mean by “seem” here? Are you appealing to intuition to validate intuition, or are you appealing, perhaps, to some empirical evidence to validate intuition?
I accept TE. I don’t see why I need TU; evolved traits could be spandrels; or they could be hold-overs from past conditions when they were survivally useful, but no longer are; or they could be non-adaptive consequences of something else that was selected because it was adaptive.
I don’t see why I’d need that one. If there were no such thing as objective ethical truths then there would be no such thing as the ability to learn them. (I don’t need to take a position on the usefulness of learning them if they did exist.)
That calculation only holds if all ten thousand claims are entirely independent. If we have an evolved tendency to be deluded over objective morals, then all ten thousand (false) intuitions would be accounted for.
Why would I need to make any claim about ‘meta-reality’ here (aka hidden reality with no observable consequences)?
About meta-reality? Yes, about that I make no claim.
We can look at actual data about different people’s morals and their average number of grandchildren, so it’s not unfalsifiable. In that sense, science’s coming up with plausible explanations is only the start; you then test them against data. Thus one can’t make arbitrary claims about adaptation to get out of a hole.
I agree that TU is not supported, and don’t base my argument on it.
I’m not holding to EU — I agree it could easily be false. However, I don’t see how your latter claim follows. At least, whether it holds would depend on what ‘was objectively wrong’ meant, and I still have no idea what that phrase even means. It would be entirely possible for some act to be ‘objectively wrong’ and yet evolutionarily favoured.
For example, the act of having children could be ‘objectively wrong’ (since I don’t know what ‘objectively wrong’ means, this is just as plausible as murder being ‘objectively wrong’), yet not having children would clearly be maladaptive.
So long as we’re restricting ourselves to claims about empirical reality, empirical beliefs can be justified by comparison with empirical data. (Again, I only run into difficulties if I start making claims about meta-reality, which I’m not).