Having abandoned Divine Command Theory around the age of 12, when I realised that I was an atheist, I then read John Stuart Mill at the impressionable age of 14 and instantly became a utilitarian. I remained so well into adulthood; it seemed obvious that morality was a matter of objective wrong and right, and that utilitarianism — the greatest good of the greatest number — was the way to determine such facts.
Of course I also became aware of the unresolved problems with utilitarianism: there is no way to assess what is “good” except by subjective judgement, and there is no way to aggregate over sentient creatures (should a mouse count equally with a human?) except, again, by subjective judgement. Both of those rather clash with the desired objectivity of the scheme.
Periodically I would try to fix these flaws, but never succeeded. Such mulling led me to the realisation that I didn’t know what moral language actually meant. “It is morally right that you do X” can be re-phrased as “you ought to do X”, but what do those mean? I realised that I didn’t know, and had been proceeding all this time on the basis that what they meant was intuitively obvious and so didn’t need analysis.
But that’s not good enough if we’re trying to solve meta-ethics and understand the very foundations of morality. And so, I eventually arrived at the realisation that the only sensible meaning that can be attached to the moral claim “you ought to do X” is that at least one human, likely including the speaker, will dislike it if you do not do X. “It is morally right that you do X” then becomes a declaration that the speaker will approve of you doing X and disapprove of you not doing X.
That makes morality subjective. It means that moral claims are not truth-apt, not statements that one can assess as being true or false, but are simply human value judgements. It then took me several years to accept that this really is true, re-programming my intuition to accept a way of thinking that most people find so counter-intuitive that they outright reject it. That’s why the search for an objective way of assigning truth values to moral claims (which philosophers call moral realism) continues to this day.
That search is doomed to fail, and involves a basic category error of misunderstanding what morality actually is. One can try consequentialist conceptions of ethics, such as utilitarianism, but the assessment of “good” and the aggregation have to be subjective; one can try a rule-based (deontological) conception of morality, but the rules can only derive from the pronouncements of people; or one can try a virtue-ethics approach, but the judgement of what is a virtue can again only come from human preference.
Having thus abandoned utilitarianism, and indeed moral realism entirely, I realised that a minority tradition within philosophy (stretching back to David Hume, and running through A. J. Ayer and, more recently, J. L. Mackie) had long been arguing that morality was instead subjective. But it wasn’t the philosophical arguments that finally convinced me of that, it was thinking about it from an evolutionary perspective. If we are to understand morality, surely we can only do so by understanding what morals actually are and where they came from.
It was Darwin who explained what morals actually are. They are aesthetic judgements, re-purposed by evolution to be applied to how humans treat each other, re-purposed for the task of enabling us to exploit a cooperative ecological niche. In order for that cooperation to work, proto-humans became programmed with feelings about each other’s behaviour as a social glue and as a means of policing their interactions.
If there were indeed “moral facts” then evolution would have no way of knowing about them. After all, evolution is a blind and amoral process with no sentience or awareness. It could not care less — literally — about moral facts, and certainly could not program us with any awareness or appreciation of them. What matters to evolution is simply whether we survive and leave more descendants. What affects our survival and reproduction is not what is objectively moral, it is how we feel and act, and how other humans feel and react towards us. Our moral programming and intuition will thus be entirely about human subjective feelings, because natural selection would simply have no traction on any “objective” morality.
Let’s make this fully explicit. Suppose I point a gun at the head of an innocent child and kill him. Will some objective moral force descend from the heavens and strike me? Will the ground open up and swallow me to my doom? Well no, neither will happen. But people will be outraged, and owing to their feelings they may well imprison or execute me. The only moral force, and the only moral agents, in the world are humans, and they act based on their subjective feelings.
Suppose it were a moral fact that doing X is virtuous and thus that humans “should do X”. But suppose, further, that your fellow humans, instead, had the subjective feeling that X is immoral and would punish you for it. As regards your evolutionary success the fact that humans objectively “should do X” (whatever that’s supposed to mean) would be supremely irrelevant. The only thing that would matter is how those fellow humans feel and act towards you, and that’s what would affect whether you have descendants.
This is important because it means — and this follows straightforwardly from the fact that we are products of Darwinian evolution — that the moral feelings and values that are at the core of human nature cannot be anything to do with any objective moral facts (since our creator, the amoral and blind process of natural selection, would be utterly oblivious to them) but are instead an entirely subjective system cobbled together to do a job.
Yet, if one discounts human intuition and feelings as evidence for an objective morality then one is left with absolutely no evidence at all, since the human feeling that there is an objective morality is really all there is.
As an aside, how, then, to explain that feeling that morality is objective? My pet hypothesis (admittedly advanced with only meagre evidence) is that the widespread feeling that human moral judgements reflect objective truth is a neat little trick that evolution has adopted and programmed us with in order to make our moral systems more effective. It motivates people more if they think that their morals have an objective basis, rather than being “merely” their opinion. One can see this from the huge amount of resistance to accepting the idea that morals really are subjective. People tend to think that that would make morals unimportant, even though the truth is the opposite.
The next argument comes from game theory. The idea of an Evolutionarily Stable Strategy was proposed by John Maynard Smith who was considering whether behaviours and strategies would be favoured and stable, in evolutionary terms, or whether they could be out-competed by alternatives. It turns out that “pure” strategies such as “always be nice to others” or “always be nasty to others” are rarely stable, but what does end up being stable is a population containing a mixture of the two.
A cooperative society in which everyone was always nice to others would be vulnerable to invasion by a mutant strategy of “always be nasty” (since that strategy would gain all of the benefits of other people’s niceness, but bear none of the costs of being nice itself). Equally, a society composed solely of always-nasty people would gain none of the advantages of cooperation, and would be vulnerable to invasion by a mutant strategy of “be nice and cooperate”. But what is stable and evolutionarily favoured is a mixture of the two. That leads to the majority wanting to be nice, and so gain the advantages of social cooperation, but also wanting to punish and ostracise people who unfairly take advantage of that niceness. Which, of course, sounds much more like actual human nature than the pure “be nice” or “be nasty” descriptions.
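Maynard Smith’s point can be sketched numerically. The snippet below is a minimal illustration, not anything from his work directly: it runs simple replicator dynamics for the classic hawk–dove game, with hypothetical payoff values V (the prize from an encounter) and C (the cost of a fight) chosen purely for the example. Whether the population starts out almost entirely “nice” or almost entirely “nasty”, it converges to the same stable mixture of the two.

```python
# Hawk-dove replicator dynamics: a sketch of why pure "all nice" or
# "all nasty" populations are unstable, while a mixture is stable.
# V and C are illustrative assumptions (C > V, so fights don't pay).

def simulate(p, V=2.0, C=4.0, dt=0.1, steps=5000):
    """Evolve the fraction p of 'nasty' (hawk) players over time."""
    for _ in range(steps):
        f_hawk = p * (V - C) / 2 + (1 - p) * V   # average payoff to a hawk
        f_dove = (1 - p) * V / 2                  # average payoff to a dove
        f_avg = p * f_hawk + (1 - p) * f_dove     # population-average payoff
        p += p * (f_hawk - f_avg) * dt            # replicator update
        p = min(max(p, 0.0), 1.0)                 # keep p a valid fraction
    return p

# Starting near all-nice (p = 0.01) or near all-nasty (p = 0.99),
# the population converges to the mixed equilibrium p* = V/C = 0.5:
print(round(simulate(0.01), 3))  # ≈ 0.5
print(round(simulate(0.99), 3))  # ≈ 0.5
```

The equilibrium fraction of hawks, V/C, is exactly the mixture at which hawks and doves earn equal payoffs, so neither strategy can invade the other.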
Thus evolutionary theory tells us that the moral feelings and attitudes that comprise human nature will be a complex and tensioned mixture of competing values. We will value selfless cooperation but also look to self-advantage. We will deplore cheating, but some fraction of us will sometimes cheat. We will want an equal society, but also one where people benefit from their own endeavours and can get ahead. We will want societal rules and sharing, but also individual liberty and self-reliance. Neither full-blown communism nor outright libertarianism would suit us; instead we’d want a tensioned balance of the two.
Thus what best satisfies human nature is not simple, black-or-white rules, but instead tensioned balances between competing goods — balances that will be different for different people and in different situations. But such complex balances are not truth-apt; they are not simple statements about what we “should do” to which one can apply the labels “true” or “false”. There is nothing objective or truth-apt about morality if it is really a balance between competing, subjective values. From that perspective, the whole quest for moral realism, the whole quest for applying truth values to moral claims, is misconceived.
Of course the moral realist could reply by saying that, yes, I am correctly giving a descriptive account of how humans think about moral issues, but that doesn’t stop there being — in addition — an objective rightness and wrongness about human acts. And they’re right in principle. But the above account tells us that our human intuitions and gut feelings will all be about that subjective, evolutionarily programmed, non-truth-apt moral sense, and they will not be about this supposed objective scheme. Yet, other than appealing to such intuition, moral realists have no evidence whatsoever for any such scheme. It thus gets excised by Occam’s razor.
At this point the burden of proof is entirely on the moral realist. If they think that there are true facts of the kind “we ought to do X” then they first need to explain what “we ought to” even means. (And explaining what we “ought to do” in terms of what is “morally right” doesn’t help them, if they then only explain “morally right” in terms of the circular “what we ought to do”.)
Having explained what their moral claims actually mean, they’d then need to explain why humans should care about them and adopt them. After all, humans are likely to be much happier with their own subjective moral sense, based on our evolutionarily programmed human nature.
No philosopher has got near to answering these questions (though for two thousand years many have tried). It’s now time to abandon the false grail of moral realism. Centuries after Hume, and well over a century after Darwin, arrived at the correct way of thinking about human morality, moral realism is being kept alive by nothing more than the persuasiveness of human intuition. Once one re-programs one’s intuition to reject that delusion, meta-ethics is solved and the arena of morality suddenly makes sense.