Sam Harris has issued an essay challenge, calling for 1000-word pieces that try to refute the main thesis of his book The Moral Landscape, essentially the idea that morals are objective facts about human well-being. Here is my entry (with some bits similar to my previous posts on the topic).
“Nothing in biology makes sense except in the light of evolution”, wrote Dobzhansky, and we can’t understand morality except as part of our biology, programmed into us by evolution to do a job. That job is to facilitate cooperation. Morality is a social glue that enables us to collaborate with our fellow humans and so benefit from a highly cooperative way of life.
Evolution had long since programmed feelings and emotions into us (hunger, fear, disgust, love, satisfaction, pain, etc.), so it adapted that mechanism to police our interactions. Thus we have notions of loyalty and comradeship, of treachery and ostracism, of fairness and exploitation, of pride and shame, of punishment and forgiveness.
Morals are opinions about how people should treat each other; morality is our feelings and emotions about inter-human behaviour.
These feelings do not reflect any deeper and more objective reality about how we “should” behave or treat each other. Why would they? Evolution has no such concern; all that matters for evolution is whether someone’s moral feelings assist cooperation and enable them to leave more descendants. Even if there were such a thing as “objective” morals, evolution would not care one hoot about them, and thus they would bear no relation to how we feel, to our evolutionarily programmed sense of morality.
The idea of an objective morality, of what we “should” actually do, is a red herring, an illusion programmed into us to make our moral sentiments seem more powerful and thus more effective. We know that religious believers tend to extrapolate “I want …” into “God wants …”; the illusion that your visceral disgust at a betrayal is not just your own dislike, but reflects a deeper violation of how things should be, is a similar extrapolation that enhances the effect.
What can science tell us about this? It can tell us what moral values we have, and why we have those moral values, and it can (in principle) tell us how to maximise our moral satisfaction and our well-being.
But science cannot tell us what our morals and values “should be”, since that question makes no sense. A “should” phrase only has meaning when referred to a stated goal: “in order to achieve X you should do Y” or “in order to please A you should do B”. The goal can be omitted if it is clearly implicit, but a “should” or “ought” statement without any attached goal is literally meaningless.
Thus science cannot tell you what you “should do” morally, but science can (in principle) tell you what you should do in order to maximise your well-being and happiness, or what you should do in order to achieve any other specified goal.
Sam Harris states his core thesis as:
Morality and values depend on the existence of conscious minds […] Conscious minds and their states are natural phenomena […].
Agreed so far. He continues:
Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science … some people and cultures will be right … and some will be wrong, with respect to what they deem important …
This is a non sequitur. Yes, there are true or false statements of the form “Greg considers X to be moral” or “most people consider Y to be immoral”, and yes, there are true or false statements of the form “Society A’s moral values lead to more human well-being than Society B’s moral values”, but it doesn’t follow that there are true statements of the form “doing Z is morally right” in an abstract way, independent of the person doing the opining.
Of course one can, as Sam Harris does, simply declare by fiat that maximising human well-being is what morals are “about”. Since one can make objectively true statements about what leads to well-being it would then follow that one can make objectively true statements about what is morally right.
Why is this a false move? First, and sufficient in itself, it is simply not justified. Yes, humans desire well-being and thus what leads to well-being will usually, in their opinion, be the moral course. But that attempt at justification would require that human feelings and desires (and not well-being) be the actual ground of morality.
Second, since our moral sentiments are evolutionary products, there is no reason to suppose that they are “about” human well-being since evolution “cares” about leaving descendants, not about human well-being.
Third, if “well-being” were the ground of morals then an action one did for one’s own benefit would be as moral as one done for someone else’s benefit, which conflicts with our intuitions.
Fourth, an evolutionary origin of morals explains why they are about interactions within the gene-pool of our species. Grounding morals in “well-being” gives no rationale for selecting particular sentient beings. Dogs? Dolphins? Chickens? Do they all count equally?
Fifth, an evolutionary origin of morals explains why we care more for our close family than for distant strangers. An “absolute” morality based on human well-being contains no such preference, and is alien to humanity.
Sixth, to base “objective” moral truths on well-being requires an ability to objectively quantify well-being, which is problematic. Subjectively quantifying it is insufficient and again results in a grounding in human feeling and opinion.
Seventh, you also need to aggregate that well-being across people (and other species?). Is it moral to advance someone’s well-being by eight units if it harms five people by one unit each? There is no objective way of doing that aggregation.
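The aggregation problem can be made concrete with a toy calculation (the numbers and the aggregation rules here are my own, purely illustrative): the same pair of outcomes can rank differently depending on which aggregation rule you pick, and nothing objective tells you which rule to use.

```python
# Two hypothetical aggregation rules for well-being changes across people.

def total(units):
    """Sum everyone's well-being changes (a utilitarian-style rule)."""
    return sum(units)

def worst_off(units):
    """Judge an outcome by its worst-affected person (a Rawls-style rule)."""
    return min(units)

# Outcome A: one person gains 8 units, five people lose 1 unit each.
outcome_a = [8, -1, -1, -1, -1, -1]
# Outcome B: nothing happens, nobody gains or loses.
outcome_b = [0, 0, 0, 0, 0, 0]

print(total(outcome_a), total(outcome_b))        # 3 vs 0: A ranks higher
print(worst_off(outcome_a), worst_off(outcome_b))  # -1 vs 0: B ranks higher
```

Neither rule is objectively “the” right one; choosing between them is itself a value judgement, which is the point of the seventh objection above.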
Hankering after objective morality is a red herring that gains you nothing but problems. Accepting that morals are subjective, and about human feelings and desires, works much better. You lose little: you still have science able to tell you what will maximise human well-being, and you’re still able to advocate that we aim for that goal; you lose only the claim to the backing of a god or other “absolute” authority.
For Part 2 of my response to Sam Harris click here
The one place I think science can legitimately talk about ‘should’ in the case of morality is local vs. global optima – one of the problems with evolution is that it can get stuck on a local maximum, unable to climb back out. Science does not, in principle, have this limitation. So once you’ve shown what morals we’ve evolved, and how to optimise your life for maximum personal satisfaction, you can, in principle, run a game-theory analysis, a massive simulation, or similar over every possible set of moral values and their outcomes, and therefore come up with something closer to the global maximum than the one evolution found. This, I think, would give a scientifically valid set of what morality ‘should’ be, maybe?
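The local-vs-global point can be sketched in a few lines (an entirely made-up one-dimensional “landscape”, purely to illustrate the commenter’s argument): a greedy uphill search, like evolution, halts at the first peak it reaches, while an exhaustive search finds the highest one.

```python
# Hypothetical "well-being" scores for eight possible sets of moral values.
landscape = [1, 3, 5, 4, 2, 6, 9, 7]

def hill_climb(start):
    """Greedy local search: step to the best neighbour until none is higher."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbours, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i  # no uphill neighbour: stuck on a local maximum
        i = best

print(hill_climb(0))  # 2: stuck at the local peak (value 5)
print(max(range(len(landscape)), key=landscape.__getitem__))  # 6: global peak (value 9)
```

Exhaustive search only works because this toy landscape is tiny; for anything like the real space of possible value systems the search would be vastly harder, which is one practical caveat to the commenter’s suggestion.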
Certainly not what is generally meant by the term, though…
“Hankering after objective morality is a red herring that gains you nothing but problems. Accepting that morals are subjective, and about human feelings and desires, works much better. You lose little: you still have science able to tell you what will maximise human well-being, and you’re still able to advocate that we aim for that goal; you lose only the claim to the backing of a god or other “absolute” authority.”
Since there are objective facts about our subjective experience, and our feelings and desires are completely geared towards maximizing our wellbeing (by definition), morality can be said to be objective and about maximizing wellbeing.
“Of course one can, as Sam Harris does, simply declare by fiat that maximising human well-being is what morals are “about”. Since one can make objectively true statements about what leads to well-being it would then follow that one can make objectively true statements about what is morally right. Why is this a false move? First, and sufficient in itself, it is simply not justified. Yes, humans desire well-being and thus what leads to well-being will usually, in their opinion, be the moral course. But that attempt at justification would require that human feelings and desires (and not well-being) be the actual ground of morality.”
He is making a definitional move, namely that the word ‘morality’ itself means ‘maximizing the wellbeing of conscious creatures’, as in this is how it is conventionally used. He proposes this to replace the idea that morality is ‘how humans ought to behave’, because this is so loose as to not define the word at all, e.g. is there a moral method that one ought to use to play chess?
Morality is about the experience of conscious creatures, and since suffering is bad and wellbeing is good we can talk about objective facts concerning how to ensure that most people’s wellbeing is maximised and suffering lessened or eliminated. You use the example of one person gaining 8 wellbeing units at the expense of one unit from 5 people, but this example only works if the consequences of such an action are not considered (e.g. how this could prevent future co-operation, or cause conflict, etc.), and alternative actions are not considered (e.g. positive-sum games where each of them gains one unit, thus doubling the moral outcome).
Good effort, I assume you don’t actually disagree with him!
Yes, Sam Harris is making a definitional move in defining morality as “maximizing the wellbeing of conscious creatures”. I don’t go along with him, however, for two reasons.
First, it is confusing. Effectively what he is doing is declaring: “In my opinion we should all try to maximise the wellbeing of conscious creatures”. I think it’s a lot clearer to just say that. *If* he got everyone to agree with that, then people might all agree to use the word “moral” to mean that, but it is confusing to claim that it *does* mean that when many people don’t agree with him.
To get everyone to agree that people should act to “maximise the wellbeing of conscious creatures” then we’d have to get everyone to agree that their own child’s wellbeing is no more important to them than that of any other child. Indeed, one would have to get people to agree that their child’s wellbeing is no more important to them than that of a cow or squirrel.
Second, Harris’s definition gives the impression that it is possible to aggregate across different people and different species and arrive at an objective “well being” function that can be maximised. I don’t think that that is possible.
This response essentially denies that there would be situations where, inevitably, one person’s wellbeing is traded off against that of another. I’d regard it as utopian to suppose that there would never be such situations.
The reason behind this is the fact that there is not another definition that works as well to describe the meaning behind our usage of the word ‘morality’. It is obviously to do with wellbeing/suffering, for it cannot be used to describe situations where wellbeing/suffering is absent, and further, wellbeing and suffering correspond to physical brain states that can in principle be known. I agree that it is confusing and that he did not make all of this definitional stuff explicit; I’d take issue with that aspect of it for sure.
This is again due to the word ‘morality’ being used in different senses, here you use it as ‘how humans ought to behave’, which is not specific enough to moral claims compared to functional claims about things like how one ought to play chess.
In a moral sense, one child’s life is definitely equal to another child’s life; a parent’s valuing of their own children over others is not a result of moral thinking, it’s an instinctive drive unmotivated by reason. To say that they ought not to value their own child over others, however, would simply be wishful thinking: this value is programmed into parents and is not something that we can reason our way out of. Other values can be changed, and so we can talk about the values that produce the most moral consequences (the most wellbeing and least suffering). Morality becomes a scale, a continuum.
It does give that impression alright, but he does correct this later by saying there could be multiple equivalent peaks on the moral landscape that could not be distinguished between, he is just saying that some are not equal so some peaks are higher than others.
Of course, game theory posits zero-sum conflicts (you and I both want the cake, only one gets it), but it also posits positive-sum games (we’ll share, get half and half). Positive-sum games result in net gains, movement to a higher peak on the moral landscape, whereas zero-sum games do not move, or move to equivalent peaks. In a sense, moral claims are relative: your above example (one person +8, five people −1) would be considered more moral than nothing happening (six people +0), but more moral than your example would be mine (six people +1). The differences between peaks, however, are objective in principle, and there is a definite lowest point in the Worst Possible Misery for Everyone.
I think that there is a better definition: morality is our feelings and preferences about people’s behaviour. We don’t need any “better” definition than that, and I consider the search for one, a more “objective” statement about morality, to be misconceived. This is where, fundamentally, I consider that Sam Harris has gone wrong.
Plenty of people would think that it would be “immoral” to violate a religious rule, even if it had no consequences for wellbeing/suffering. You might disagree with that conception of morality, and so would I, but that would still be our opinion!
I would have morality as people’s opinions about how humans ought to behave. The idea that there is something more solid and objective behind the whole thing is, to me, a misconception.
What do you mean by that claim? Are you expressing your opinion that all children’s lives are “equal”? If so, would you really regard your own child’s life as of no more importance to you than that of a child in some far-off country?
Or are you expressing your opinion that laws and governments should treat the lives of all children in the country as of equal worth? In that case, I would agree with you.
Or are you suggesting that in some deeper and more objective “absolute” sense all children’s lives are equal? If you are then I’d suggest that this view is mistaken simply because there is no deeper and more objective or “absolute” morality; all there is is human opinion.
And, as you accept, humans do not value all other humans equally, they value their family and friends more. But we can still be of the opinion that the governments and rules of society that we create should treat people equally.
We want to experience greater wellbeing and avoid suffering. If you combine everyone’s opinions on how everyone else ought to behave, the result is that morality is about achieving the greatest possible wellbeing for everyone and preventing everyone from suffering as much as is possible.
Your redefinition assumes that something is lost when we go from a single person’s opinion to this overarching explanation; further, if morality is subjective then it cannot be said that it is morally wrong for a parent to beat their child for doing badly in school, only that I personally don’t like it, and I’m sure other people agree.
Only from an outsider’s perspective is wellbeing unaffected, the believer insists that punishments or the ill-will of the creator of the universe result from such actions, which obviously would affect wellbeing if the claims were true.
What I meant by the claim is that no-one says that it is more moral for them to care about their own child than about another child, they just do care more about kin. This is not a moral claim. When we talk about morals, people do not claim that some are more important, morally, than others, except out-groups that are supposed to either be damned to eternal suffering anyway until they become in-group, or out-groups who are supposed to be unable to experience wellbeing/suffering the way that in-group people are known to (demonizing the out-group).
However, we could find objective data that would show that societies that value each life equally are more prosperous and safe than societies that try to discriminate. We could then say that it is objectively true that human rights-driven societies result in greater wellbeing for citizens, and so are more moral, since this is how the word is used, as an aggregate rather than a personal opinion.
That’s a rather vague sentence, so can we make things a bit more specific:
Person A values their own child above Person B’s child and places more importance on the well-being of their own child. Person B also values his own child more.
Does A think that B should value A’s child as much as his own (B’s) child? No, I don’t think that Person A does think that.
A and B, however, likely agree that unrelated Person C should value A’s child and B’s child equally (though again less than C’s own). By extrapolation, A, B and C all agree that collective bodies such as governments should treat all equally.
So, governments and similar entities are given by their electors a special directive to value people equally. But there is no obligation on any given person to value people equally (except when acting in a government role).
I’m not saying that anything is lost, I’m simply saying that when combining opinions as above you don’t arrive at any obligation on Person A to value Person B’s child as much as his own.
We may arrive at an obligation on the government to do that, but that is as a result of a social contract with the electorate that validates the government.
Don’t they? I’d have thought most people would think that! If they don’t go around saying that then it’s surely because it is so accepted as obvious that it doesn’t need stating.
Does anyone get criticised for lavishing care and attention on their own child while in some far-off country resides an orphan who is just as in-need of parenting? No they don’t (and just about everyone in the rich West is “guilty” of that), though we may laud people who care for both their own and distant orphans.
How would society regard someone who did put all their care towards unrelated orphans, while leaving their own child neglected, needy and abandoned?
I, for one, would regard that person as weird and immoral, and would expect most people to agree. UK law, for example, places obligations on parents. You are legally obligated to provide financially for your own biological children. Further, “child neglect” is a crime if it’s your own child but not if the child you’re neglecting is an unrelated orphan in some far-off country.
But that’s circular since the conclusion is essentially a re-statement of how you have defined “moral”.
Okay, I’ll grant you pretty much everything you said there, it’s nigh-on water-tight.
It seems to me that you think that ‘morality’ and ‘moral’ claims are just unnecessary words outside of theism, would I be right in saying that? Morality reduces to “I think that things should be this way”, so we should change to reframing these opinions as opinions, rather than as objective claims?
At this point, then, we only disagree on tactics. I think the word ‘morality’ is too important and too often used to just throw it away, but you think that we’ll get by just fine by reframing the discussion to be about laws, etc. I think that telling people who believe in objective morality that morality is really subjective will communicate something different from what is meant; people think that subjective morality means that whatever you want to do is moral for you, and whatever everyone else wants is just moral for them, and there can be no comparison, so murder = altruism. I know you don’t mean that, but then I’m already agreeing with you on most things. The way you and I are using language here is far more reasonable than the way people caught up in religious delusions use it, so I think we need to be cautious about how we use language, and Harris’s definition is close enough to the theist definition to communicate.
No, not at all! I consider morality to be of the highest importance, we should certainly retain those words and concepts.
However, I think that recognising them as human opinions (rather than being a reflection of some objective standard) clears away a lot of confusion and allows us to focus on what is important.
I agree with what Sam Harris is aiming for, but I consider that he gets lost and wanders into a mire by seeking “objective” status for his scheme (e.g. his scheme simply falls down when asking how someone should treat another person’s child versus their own, or when asking about sentient creatures other than humans).
Let’s make a comparison. In medieval times people argued for a “divine right of kings”, a god-ordained, objectively right way for society to be ordered. We’ve long ditched that concept and now have a bottom-up concept of society as a social contract, with everyone entitled to their view and their vote.
We should not seek an “objective” or “top-down” version of morality, but should see it similarly as a bottom-up affair based on us all having our feelings and preferences about behaviour, with society’s morality then being an agreed social contract in the same way that democratic government is.
I don’t think you lose anything at all by accepting this, and yet you gain a clear resolution to lots of otherwise-intractable philosophical puzzles about morality, all of which derive from seeking an “objective” status to it.
Saying that morality is a matter of subjective human feeling and opinion is not in any way reducing its importance. Indeed the only things that are important to us are our subjective feelings about things!
Nobody argues that our subjective aesthetic enjoyment of life, good food, friendship, beauty, etc, can only be meaningful and important if we can make it into an objective scheme, so why this hankering after objective morality?
Yes, exactly. In the same way as we all get our vote about the government, and everyone accepts that this is our *opinion*.
Of course those opinions — both political and moral — will be influenced by information (objective information) about outcomes and wellbeing, which is exactly why society’s politics and morality have evolved over time and continue to do so.
Morality: How I think humans ought to behave. Would that be accurate enough as what you define it as?
So it would be informed by objective evidence about wellbeing. What else would it be informed by? Would you accept Sharia law if enough people voted it as the best morality?
I wouldn’t define it by reference to me personally. I might define it as: “morality: the feelings and preferences that humans have about how humans behave towards each other, programmed into us by evolution as a social glue”. (And then *my* sense of morality would be how *I* feel about such things.)
What do you mean by “accept” there? People voting for it would not change *my* opinion as to whether it was moral. I might “accept” it in the practical sense if the alternative was getting imprisoned or executed.
“Fifth, an evolutionary origin of morals explains why we care more for our close family than for distant strangers.”
Sometimes evolution gets it wrong. The prejudices that create suspicion between the races in places like Ferguson are evolutionary, but morally wrong.