Sam Harris’s Moral Landscape Challenge had a 1000-word limit, so to accompany my entry I’ve written this additional (and rather longer) piece, essentially a response to Harris’s Response to Critics article. It is best read after my first part and is intended to clarify where I agree and disagree with Harris. Indeed I do agree with Harris on much, probably more than many of his critics do. However, I consider that Harris goes wrong in hankering after the label “objective” to stamp on his account of morals, and that this gets him into a mire while gaining little.
I agree with Harris that there are deep similarities between human moral values and our other feelings and emotions. I agree that (all quotes are from Harris’s Response to Critics):
Science could, in principle, account for why some of us prefer chocolate to vanilla, and why no one’s favorite flavor of ice cream is aluminum.
Amusement to the point of laughter is a specific state of the human nervous system that can be scientifically studied. Why do some people laugh more readily than others? What exactly happens when we “get” a joke? These are ultimately questions about the human brain.
However, morality and values appear to reach deeper than mere matters of taste—beyond how people happen to think and behave to questions of how they should think and behave.
I don’t think the distinction between morality and something like taste is as clear or as categorical as we might suppose.
It seems to me that the boundary between mere aesthetics and moral imperative—the difference between not liking Matisse and not liking the Golden Rule—is more a matter of there being higher stakes, and consequences that reach into the lives of others, than of there being distinct classes of facts regarding the nature of human experience.
Here, I would argue, Harris is close to the mark. These different preferences are not “distinct classes of fact”, they are, at root, the same thing. But when Harris notes that moral values appear to reach deeper, to some objective standard of how we should behave, the important word is “appear”.
Humans are notoriously prone to delusions, and this mere “appearance” would need to be backed up robustly, not just accepted at face value.
Harris quotes Thomas Nagel, giving as an example of “evaluative truths so obvious that they need no defense”, the claim that:
A world in which everyone was maximally miserable would be worse than a world in which everyone was happy.
What does “worse” mean here? Does it mean that humans would judge it as worse? Yes, indeed they would. If it means “worse” in some objective sense — meaning, worse regardless of what any human thinks of the matter — then I for one dispute that this is “so obvious that [it] needs no defense”, and indeed would reject the concept as meaningless. The terms “worse” or “better” are value judgments and cannot be divorced from the person doing the judging.
I thus agree entirely with Sam Harris in his view that:
morality must be viewed in the context of our growing scientific understanding of the mind. If there are truths to be known about the mind, there will be truths to be known about how minds flourish; consequently, there will be truths to be known about good and evil.
However, the “truths” to be known about “good and evil” are that these are human value judgments. At root they have no more objective status than, for example, a human opinion about the beauty and sexual attractiveness of another person.
Who’s to say that well-being is important at all or that other things aren’t far more important?
The only answer to that is “us”; we say that because we value it. Indeed “well-being” is merely a shorthand for “states that we value”.
How, for instance, could you convince someone who does not value well-being that he should, in fact, value it?
What, Sam, do you mean by “should” in that sentence? What is the goal of the should? Do you mean: “How would we convince someone that, in order to attain well-being, they need to value well-being”? I suspect that the very concept of someone who “does not value well-being” is incoherent. If “well-being” is defined as a state that we value, then it becomes “someone who doesn’t value what he values”.
Harris compares his critics’ claim (the “Value problem”) to an analogous claim about health. Compare these two:
There is no scientific basis to say that we should value well-being, our own or anyone else’s.
There is no scientific basis to say that we should value health, our own or anyone else’s.
Harris seems to regard the latter of these as self-evidently silly, and thus that it is entirely sensible to have a scientific basis for why we “should” value health, and thus also, by analogy, well-being.
I regard neither claim as coherent. It is true (tautologically true) that people do value well-being (= states that they value), and it is a true fact that one of the things that people value is good health. [To avoid a diversion let’s accept that we can define and measure “good health” in terms of bodily function and dysfunction, independently of human opinion.]
But what do the phrases “we should value good health” or “we should value well-being” actually mean, remembering that a “should” phrase only ever means anything in relation to a goal? The first could mean: “In order to maximise well-being we should value good health”, which is coherent and sensible. The latter could only mean: “In order to maximise well-being we should maximise well-being”. Thus we gain nothing that we don’t obtain from the simple declaration “humans value things” coupled with the labelling of states that humans value and aim for as “well-being”.
Thus it is true that humans have values and desires, and thus true (tautologically) that humans seek well-being. It is not true that they “should” value such things (that statement has no meaning except as a re-statement of the previous sentence), and it is not true that there is a “scientific basis” for saying that we “should” value such things (unless all that means is that we have a scientific basis for asserting the first sentence of this paragraph, which we do).
Thus, it seems to me that Sam is getting into a mire by seeking the “should” word and hoping for “objective” status for morality.
Harris responds that the same criticism “can be said about medicine”. Indeed so, but medicine is grounded on the fact that people do value good health. It is not grounded on the “fact” that people “should” value good health, and there is no reason for it to be.
Harris continues that the same criticism can be said about science:
As I point out in my book, science is based on values that must be presupposed—like the desire to understand the universe […] One who doesn’t share these values cannot do science.
I agree: unless one values science, one will not pursue science. So what? Science results from the fact that humans do want to understand the universe, and it is also a fact that doing so often aids well-being. But there is no deeper reason why we “should” value this understanding, and nor do we need one. We should distinguish between doing science (which needs motivating) and science itself (which doesn’t).
[In order to avoid a diversion I used an ellipsis in the last quote to replace the words “a respect for evidence and logical coherence, etc”, which Harris gives as further supposedly presupposed values. I assert that these are imposed by the universe, given the desire to understand the universe, and thus are not additional values that one must presuppose.]
In my book, I argue that the value of well-being—specifically the value of avoiding the worst possible misery for everyone—is on the same footing. […] To say that the worst possible misery for everyone is “bad” is, on my account, like saying that an argument that contradicts itself is “illogical.”
Again I disagree. A self-contradictory argument is illogical regardless of what any human thinks about that. A “worst possible misery for everyone” is “bad” only because humans would consider it so. The two concepts are thus not comparable.
Harris says, about the worst possible misery for everyone being “bad”, that: “Anyone who says it isn’t simply isn’t making sense”. I disagree: one can reject the claim as ill-stated, because “bad” is a value judgment and the entity doing the valuing hasn’t been specified. Saying something is “bad” in the abstract is nonsensical.
Alternatively, a sadistic God could consider the “worst possible misery for every human” to be “good”. Indeed Christian mythology includes a deity who delights in ruling over a domain of “worst possible misery” for all in his clutches.
Harris suggests that:
the same doubts can be raised about science/rationality itself. A person can always play the trump card, “What is that to me?”—and if we don’t find it compelling elsewhere, I don’t see why it must have special force on questions of good and evil.
But I do find it compelling applied to science! There is no reason why someone should value science for its own sake. It’s possible that someone does value science, or values the knowledge and understanding it leads to, or values the technology and capabilities that it produces (for example they might value the good health that results from medical science), but ultimately all of this derives from their subjective values, not from any objective “good”.
Should we maximize global health? To my ear, this is a strange question. It invites a timorous reply like, “Provided we want everyone to be healthy, yes.” And introducing this note of contingency seems to nudge us from the charmed circle of scientific truth. But why must we frame the matter this way?
We must frame it that way because the word “should” only ever means anything with reference to a goal. Stripped of any goal it quite literally has no meaning. Harris is entirely right that: “If we want everyone to be healthy then we should maximise global health”, but such “shoulds” are not matters of abstract objective fact, existing regardless of human values; they derive from human values. Rather than being a “strange” way of thinking, this is the only way of framing the matter, the only way of clarifying what morals actually are.
These aren’t the kinds of questions that will get us to bedrock.
Yes they are! The “bedrock” here being human values, feelings and opinions.
What if we could change Alice’s preferences themselves? Should we? Obviously we can’t answer this question by relying on the very preferences we would change.
Oh yes we can! Indeed the only way of answering a “should” question is with reference to the goals deriving from our current values and preferences.
I am claiming that people’s actual values and desires are fully determined by an objective reality, and that we can conceptually get behind all of this—indeed, we must—in order to talk about what is actually good. This becomes clear the moment we ask whether it would be good to alter people’s values and desires.
Yes, people’s actual values and desires are determined by objective reality (by our evolutionary programming and by our environment and experiences). But notions of “good” are value judgements deriving from our values; we cannot “get behind” our values in order to discover what is “actually good”. There is no “objective good” behind all of this. And, again, the only way of answering whether it would be good to alter people’s values and desires is by reference to our current values and desires.
[There is nothing inconsistent about that last claim. A majority in society could decide (judged from their values) that the values of a psychopath are bad, and thus want to change the psychopath. Additionally, no human is fully consistent in their attitudes and values, and someone might decide, using some of their values and desires, that others of their values and desires are better changed. Thus an obese person could want to change their craving for food, and a sex offender might want to change their sexual desires.]
In discussing someone being selfish and not caring about the rest of the world, Harris declares:
It is only against an implicit notion of global well-being that we can judge my behavior to be less good than it might otherwise be.
Well, no, the standard by which to judge such behaviour is the appropriate degree of concern for others, and we all have opinions about that. Everyone accepts that being more concerned for one’s own family is appropriate; indeed, if someone cared no more for their own children than for other children across the world, we would judge them immoral and bad parents who neglected their children. We have evolved to have some concern for other humans but more concern for our relatives, and it is deviation from that standard that would be judged “bad”.
Near the end of his article Harris declares:
Are the Taliban wrong about morality? Yes. Really wrong? Yes. Can we say so from the perspective of science? Yes.
Here is how I would answer. Are the Taliban wrong about morality? I think so, yes.

Really wrong? You mean wrong by some absolute standard? Well, there isn’t one. Moral judgments are made by judges, and I’ve just stated how I judge them.

Can we say so from the perspective of science? No, science cannot make a moral judgement; it isn’t a sentient being with feelings and opinions and desires. But can science inform our judgements? Yes, indeed.
If we know anything at all about human well-being—and we do—we know that the Taliban are not leading anyone, including themselves, toward a peak on the moral landscape.
They’re certainly not leading anyone towards well-being. And feel free to express your opinion that, as a result, they are immoral. But why do you need to extrapolate that to “God’s opinion is that they are immoral”, or “it is an objective fact that they are immoral”? Isn’t your opinion sufficient, and if not, why not?
Surely the Taliban go wrong precisely because they are not guided by humanity, but look beyond it for objective morals, morals to be imposed whatever humans think of them?