The evolutionary argument against moral realism

Having abandoned Divine Command Theory around the age of 12, when I realised that I was an atheist, I then read John Stuart Mill at the impressionable age of 14 and instantly became a utilitarian. I remained so well into adulthood; it seemed obvious that morality was a matter of objective wrong and right, and that utilitarianism — the greatest good of the greatest number — was the way to determine such facts.

Of course I also became aware of the unresolved problems with utilitarianism: there is no way to assess what is “good” except by subjective judgement, and there is no way to aggregate over sentient creatures (should a mouse count equally to a human?) except, again, by subjective judgement. Both of those rather clash with the desired objectivity of the scheme.

Periodically I would try to fix these flaws, but never succeeded. Such mulling led me to the realisation that I didn’t actually know what moral language meant. “It is morally right that you do X” can be re-phrased as “you ought to do X”, but what do those phrases mean? I realised that I didn’t know, and had been proceeding all this time on the basis that their meaning was intuitively obvious and so didn’t need analysis.

But that’s not good enough if we’re trying to solve meta-ethics and understand the very foundations of morality. And so, I eventually arrived at the realisation that the only sensible meaning that can be attached to the moral claim “you ought to do X” is that: at least one human, likely including the speaker, will dislike it if you do not do X. Similarly, “It is morally right that you do X” becomes a declaration that the speaker will approve of you doing X and disapprove of you not doing X.

That makes morality subjective. It means that moral claims are not truth apt, not statements that one can assess as being true or false, but are simply human value judgements. It then took me several years to accept that this really is true, re-programming my intuition to accept a way of thinking that most people find so counter-intuitive that they outright reject it. That’s why the search for an objective way of assigning truth values to moral claims (which philosophers call moral realism) continues to this day.

That search is doomed to fail, and involves a basic category error of misunderstanding what morality actually is. One can try consequentialist conceptions of ethics, such as utilitarianism, but the assessment of “good” and the aggregation have to be subjective; one can try a rule-based (deontological) conception of morality, but the rules can only derive from the pronouncements of people; or one can try a virtue-ethics approach, but the judgement of what is a virtue can again only come from human preference.

Having thus abandoned utilitarianism, and indeed moral realism entirely, I realised that a minority tradition within philosophy (stretching back to David Hume, through the likes of A. J. Ayer, and more recently J. L. Mackie) had long been arguing that morality was instead subjective. But it wasn’t the philosophical arguments that finally convinced me of that, it was thinking about it from an evolutionary perspective. If we are to understand morality, surely we can only do so by understanding what morals actually are and where they came from.

[Image: Darwin’s The Descent of Man]

It was Darwin who explained what morals actually are. They are aesthetic judgements, re-purposed by evolution to apply to how humans treat each other, and thus to the task of enabling us to exploit a cooperative ecological niche. In order for that cooperation to work, proto-humans became programmed with feelings about each other’s behaviour as a social glue and as a means of policing their interactions.

If there were indeed “moral facts” then evolution would have no way of knowing about them. After all, evolution is a blind and amoral process with no sentience or awareness. It could not care less — literally — about moral facts, and certainly could not program us with any awareness or appreciation of them. What matters to evolution is simply whether we survive and leave more descendants. What affects our survival and reproduction is not what is objectively moral, it is how we feel and act, and how other humans feel and react towards us. Our moral programming and intuition will thus be entirely about human subjective feelings, because that is what determines how humans act. In contrast, natural selection would simply have no traction on any “objective” morality.

Let’s make this fully explicit. Suppose I point a gun to the head of an innocent child and kill him. Will some objective moral force descend from the heavens and strike me? Will the ground open up and cause me to descend to my doom? Well no, actually, neither will happen. But people will be outraged, and owing to their feelings they may well imprison or execute me. The only moral force, the only moral agents in the world are humans, and they act based on their subjective feelings.

Suppose it were a moral fact that doing X is virtuous and thus that humans “should do X”. But suppose, further, that your fellow humans, instead, had the subjective feeling that X is immoral and would punish you for it. As regards your evolutionary success the fact that humans objectively “should do X” (whatever that’s supposed to mean) would be supremely irrelevant. The only thing that would matter is how those fellow humans feel and act towards you, and that’s what would affect whether you have descendants.

This is important because it means — and this follows straightforwardly from the fact that we are products of Darwinian evolution — that the moral feelings and values that are at the core of human nature cannot be anything to do with any objective moral facts (since our creator, the amoral and blind process of natural selection, would be utterly oblivious to them) but are instead an entirely subjective system cobbled together to do a job.

Yet, if one discounts human intuition and feelings as evidence for an objective morality then one is left with absolutely no evidence at all, since the human feeling that there is an objective morality is really all there is.

As an aside, how, then, to explain that feeling that morality is objective? My pet hypothesis (admittedly advanced with only meagre evidence) is that the widespread feeling that human moral judgements reflect objective truth is a neat little trick that evolution has adopted and programmed us with in order to make our moral systems more effective. It motivates people more if they think that their morals have an objective basis, rather than being “merely” their opinion. One can see this from the huge amount of resistance to accepting the idea that morals really are subjective. People tend to think that that would make morals unimportant, even though the truth is the opposite.

The next argument comes from game theory. The idea of an Evolutionarily Stable Strategy was proposed by John Maynard Smith who was considering whether behaviours and strategies would be favoured and stable, in evolutionary terms, or whether they could be out-competed by alternatives. It turns out that “pure” strategies such as “always be nice to others” or “always be nasty to others” are rarely stable, but what does end up being stable is a population containing a mixture of the two.

A cooperative society in which everyone was always nice to others would be vulnerable to invasion by a mutant strategy of “always be nasty” (since that strategy would gain all of the benefits of other people’s niceness, but have none of the costs of being nice itself). Equally, a society composed solely of always-nasty people would gain none of the advantages of cooperation, and would be vulnerable to invasion from a mutant strategy of “be nice and cooperate”. But what is stable and evolutionarily favoured is a mixture of the two. That leads to the majority wanting to be nice, and so gaining the advantages of social cooperation, but also wanting to punish and ostracise people who are unfairly taking advantage of that niceness. Which, of course, sounds much more like actual human nature than pure “be nice” or “be nasty”.
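Maynard Smith’s classic Hawk–Dove model makes this concrete. The following is a minimal sketch (the payoff values V and C are my own illustrative choices, not from the text): simple replicator dynamics drive a population towards a stable mixture of “nasty” Hawks and “nice” Doves, rather than towards either pure strategy.

```python
# Replicator dynamics for the Hawk-Dove game: a toy illustration of an
# evolutionarily stable *mixture* of strategies. "Hawk" = always nasty,
# "Dove" = always nice. V and C are assumed illustrative values.
V = 2.0  # value of the contested resource
C = 4.0  # cost of losing a fight (C > V, so pure Hawk is not stable)

def hawk_payoff(p):
    """Average payoff to a Hawk when a fraction p of the population are Hawks."""
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):
    """Average payoff to a Dove; Doves share with Doves, yield to Hawks."""
    return (1 - p) * V / 2

p = 0.01  # start with almost everyone "nice"
for _ in range(10_000):
    wh, wd = hawk_payoff(p), dove_payoff(p)
    mean = p * wh + (1 - p) * wd
    p += 0.01 * p * (wh - mean)  # replicator update: fitter strategies grow

print(round(p, 3))  # -> 0.5, the stable mix p* = V/C, neither 0 nor 1
```

Starting from an almost-all-Dove population, the Hawk strategy invades, but its growth halts at the mixed equilibrium p* = V/C; a pure all-Hawk population would likewise be invaded by Doves. Neither pure strategy is stable, just as the paragraph above describes.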

Thus evolutionary theory tells us that the moral feelings and attitudes that comprise human nature will be a complex and tensioned mixture of competing values. We will value selfless cooperation but also look to self-advantage. We will deplore cheating, but some fraction of us will sometimes cheat. We will want an equal society, but also one where people benefit from their own endeavours and can get ahead. We will want societal rules and sharing, but also individual liberty and self-reliance. Neither full-blown communism nor outright libertarianism would suit us; instead we’d want a tensioned balance of the two.

Thus what best satisfies human natures is not simple, black-or-white rules, but instead tensioned balances between competing goods — balances that will be different for different people and in different situations. But such complex balances are not truth-apt; they are not simple statements to which one can apply the labels “true” or “false” about what we “should do”. There is nothing objective or truth-apt about morality if it is really a balance between competing, subjective values. From that perspective, the whole quest for moral realism, the whole quest for applying truth values to moral claims, is misconceived.

Of course the moral realist could reply by saying that, yes, I am correctly giving a descriptive account of how humans think about moral issues, but that doesn’t stop there being — in addition — an objective rightness and wrongness about human acts. And they’re right in principle. But the above account tells us that our human intuitions and gut feelings will all be about that subjective, evolutionarily programmed, non-truth-apt moral sense, and they will not be about this supposed objective scheme. Yet, other than appealing to such intuition, moral realists have no evidence whatsoever for any such scheme. It thus gets excised by Occam’s razor.

At this point the burden of proof is entirely on the moral realist. If they think that there are true facts of the kind “we ought to do X” then they first need to explain what “we ought to” even means. (And explaining what we “ought to do” in terms of what is “morally right” doesn’t help them, if they then only explain “morally right” in terms of the circular “what we ought to do”.)

Having explained what moral realism actually means, they’d then need to explain why humans should care about it and adopt it. After all, humans are likely going to be much happier with their own subjective moral sense, based on our evolutionarily programmed human nature.

No philosopher has got near to answering these questions (though for two thousand years many have tried). It’s now time to abandon the false grail of moral realism. It’s already centuries since Hume and Darwin arrived at the correct way of thinking about human morality, but moral realism is being kept alive by nothing more than the persuasiveness of human intuition. Once one re-programs one’s intuition to reject that delusion, meta-ethics is solved and the arena of morality suddenly makes sense.


32 thoughts on “The evolutionary argument against moral realism”

  1. Paul Braterman

    I accept your conclusion but not your argument. Someone who accepts *both* that behaviour has been moulded by evolutionary advantage *and also* that there are moral facts still has a choice of two strategies. They may say that it is a fact that the behaviour that bestows advantage is moral, and this has some face plausibility. Honesty is the best policy, if you’re a nice guy then people will cooperate with you (and massive effort goes into telling real nice guys from fakes, and in undermining attempts to do so), pull your weight in nasty collective jobs which incidentally include, until yesterday, the job of punishing offenders, wax indignant at those who won’t pull their weight, and so on. Or they may urge us, as Darwin does, to use our intellects in order to build on what he calls our natural sympathies, which are not quite the same thing as aesthetic judgements.

    You would, quite correctly, refute both of these by pointing out that they have no way of demonstrating without circularity that their preferences are *in fact* moral, which takes us straight back to Hume. But that is all you needed in the first place, and the evolutionary argument is irrelevant.

    1. Coel Post author

      Hi Paul,

      They may say that it is a fact that the behaviour that bestows advantage is moral …

      But why would that be? Is it just a coincidence?

      if you’re a nice guy then people will cooperate with you …

      True, but whether people want to cooperate with you will depend on their subjective feelings about you. Now, it might coincidently be that that matches what is “objectively good”, but the moral realist has no reason to suppose that.

      … which takes us straight back to Hume. But that is all you needed in the first place, and the evolutionary argument is irrelevant

      The main point of the evolutionary argument is that moral realists cannot use human intuition as a pointer to moral realism. But, really, that’s all they have.

    2. Paul Braterman

      I think I’ve got it now, and that we actually agree. What you are doing is attacking one specific argument for the existence of moral facts, namely the extent to which moral codes agree. Actually, they don’t always agree all that well, and if they did then perhaps we wouldn’t be bothering to have this conversation. But to return to the main theme:

      What you’re doing, successfully, is showing that to the extent that we do agree about morals, this can be explained in purely evolutionary terms, thus undermining the argument from moral agreement to the existence of moral facts. What appear to be statements about moral facts are merely statements about human psychology and perception, like the statement “Roses smell nice.”

      There is still work to be done by the moral philosopher, in conjunction with the anthropologist and the psychologist. For example, I see compassion, xenophobia, and the urge to punish as arising in much the same way, but being a liberal softy I regard the first of these as a virtue, and the latter two as atavistic defects. If you disagreed, how might I attempt to persuade you? But that would be the start of another conversation.

  2. Mark Sloan

    Hi Coel,

    I hope you can tolerate another argument that morality’s evolutionary origins actually provide us with a set of mind independent objective facts about morality.

    Here are two simple ‘is’ claims, as are normal in science, about which we may largely agree:

    1) The function (the primary reason they exist) of past and present cultural moral codes and the biology underlying our moral sense is to increase the benefits of cooperation in groups. (Meaning their primary selection forces were and are, as a matter of objective science, the benefits of cooperation in groups.)

    Therefore, it is objectively true that the moral claim “you ought to do X” implies doing X is likely to increase the benefits of cooperation in groups. This is not equivalent to “a declaration that the speaker will approve of you doing X and disapprove of you not doing X” – that moral claims are purely subjective.

    2) Large benefits of cooperation are common in our reality and the benefits of cooperation that intelligence enables are perhaps the strongest selection forces for intelligence. Therefore, virtually all biologically evolved intelligent beings from the beginning of time to the end of time can be expected to have a moral sense and cultural moral codes with the same function: to increase the benefits of cooperation in groups.

    If a person or a space alien on the other side of the galaxy with a naturally evolved moral sense believes that moral behavior is something other than elements of cooperation strategies, then that person or space alien is objectively wrong.

    However, what is moral behavior’s ultimate goal for that cooperation? Is that goal your individual well-being, well-being only of your family or tribe, well-being of all conscious creatures, or what? That choice of ultimate goal for enforcing moral codes is subjective as far as science can tell us.

    Also, what specific cooperation strategies should be implemented in a culture’s moral code to increase the benefits of cooperation? Should they use indirect or only direct reciprocity, marker strategies of membership in in-groups such as not eating pigs, demonization of homosexuals as imagined threats to the in-group, individual or group punishment of immoral behavior, or what? The choice of specific cooperation strategies is also subjective as far as science can tell us.

    In summary, the ultimate goal of moral behavior and its specific implementation in moral codes are subjective (as far as science can inform us). But moral behavior’s function is objective and, to that extent, moral realism is true as a matter of science telling us what moral behavior objectively ‘is’.

    But putting aside the science about what ‘is’, is there an objective basis for what morality ‘ought’ to be, including what its ultimate goals and means ‘ought’ to be? I don’t know of any.

    1. Coel Post author

      Hi Mark,

      1) The function (the primary reason they exist) of past and present cultural moral codes and the biology underlying our moral sense is to increase the benefits of cooperation in groups.

      Yes, agreed.

      Therefore, it is objectively true that the moral claim “you ought to do X” implies doing X is likely to increase the benefits of cooperation in groups.

      Yes, I think I agree. I’d phrase it: the things that people do regard as moral, the things that people think we “ought to do”, will tend to increase cooperation and the benefits of cooperation.

      … Therefore, virtually all biologically evolved intelligent beings from the beginning of time to the end of time can be expected to have a moral sense and cultural moral codes with the same function: to increase the benefits of cooperation in groups.

      Yes, agreed.

      If a person … believes that moral behavior is something other than elements of cooperation strategies, then that person or space alien is objectively wrong.

      Hmm, I’m not sure what you mean by “moral behaviour is …” there. Would you accept a re-phrase to: “if a person considers that the behaviours that such beings tend to judge as moral are unrelated to cooperation strategies, then that person is wrong”?

      However, what is moral behavior’s ultimate goal for that cooperation?

      Hmmm, “moral behaviour” does not have goals. Humans have goals (as do other sentient life forms). Evolution has (metaphorical) goals.

      That choice of ultimate goal for enforcing moral codes is subjective as far as science can tell us.

      Yes, the only agents that do any “enforcing” of moral codes are people, and they indeed do so based on their subjective goals and values.

      But moral behavior’s function is objective …

      Yes, there are objective statements about what morality is, where it came from, and how it functions.

      … and, to that extent, moral realism is true as a matter of science telling us what moral behavior objectively ‘is’.

      But that doesn’t get you moral realism! You’ve switched from *describing* morality and where it came from, to moral realism, which is normative.

  3. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    Mostly I’m in agreement with you, but I think one need not be a moral realist to hew to one particular moral framework or another.

    The thing is, whether we like it or not, we do have this intuition. I do want to be “good”, whatever that is, and for people to see me as a “good” person. But I also want to be consistent and rational in how I approach being “good”, so I need a working definition of what being “good” entails. In particular, it seems to me to be a cheat to define as “good” whatever I want to do and “bad” anything I don’t. My drive to goodness demands that my criteria extend beyond my own selfish considerations.

    For me, utilitarianism is the best choice as it is less arbitrary than either deontology or virtue ethics (and each of these can be derived from it). In practice, it basically boils down to “the good choice is the one that will most improve the lot of those around me”. Yes, there are very serious issues with it if we really want to get into quantifying exactly what we mean by utility, or whose utility counts and how much, but in most practical situations utilitarianism is not a bad framework for choosing between alternatives. These theoretical issues just don’t come up that often (though where they do I agree we’re left to little more than our intuitions). For many issues, utilitarianism seems to give clear answers. It would for instance tend to guide us to be pro-choice, pro-LGBT rights, pro-asylum seeker and so on.

    Other questions become empirical. Whether we should have a progressive tax system or a pro-business tax system is not clear. It could be that having a pro-business tax system would produce more wealth and so make us all better off, or it could be that a progressive tax system is required to make sure the most vulnerable in society can cope. It is likely that the best utilitarian solution is a balance somewhere in the middle. The point is that utilitarianism gives you a reference framework to think about these issues, rather than simply demanding equality for equality’s sake (Marxism?) or insisting on the ownership rights of the rich to keep their wealth and pay little tax (libertarianism/objectivism?).

    I don’t think it’s OK to simply conclude that morality is all bollocks and do away with it. To completely divest oneself of moral responsibility is to become a wannabe sociopath or psychopath. If like me you have a drive to be good, you need to attempt to satisfy that drive by deciding what “good” means to you. Once you’ve done this, you’ve effectively defined “good” in your language, and it actually becomes meaningful to talk about whether something is “good” or not. If we agree on what “good” means, then we can even have meaningful disagreements about what is right and wrong, where one of us might indeed be correct and the other incorrect. There can be an objective fact of the matter, in other words, as long as we agree what “good” means beforehand. And I think this drive we (presumably) share demands that we do just that.

    1. Coel Post author

      Hi DM,
      I agree with your analysis. The crucial point there (as it relates to my article) is that you’re opting for utilitarianism based on *your* values. Such frameworks can be helpful to people in thinking about their moral codes, but the standing of any such framework can only come from human advocacy. (Even the desire that the code be consistent is, again, a human preference.)

      This is different from utilitarianism as an objective code, imposed on humans by moral realism, and thus something that humans “ought” to adopt whether they like it or not.

    2. Disagreeable Me (@Disagreeable_I)

      Hi Coel,

      I see what you’re saying and I don’t think you’re wrong.

      But I’m going to push back a little.

      I’m not so sure I’m choosing utilitarianism based on *my values*. I usually see it more as utilitarianism guiding me with regards to which values I ought to hold and which to discard.

      The way I see it, utilitarianism is attractive not only because it is simple and consistent, but also because it seems to be a good approximate description of what it is we’re talking about when we discuss “good” and “evil”.

      Consider an analogy to brightness. Brightness is an intuitive concept we all understand without much difficulty. But to make an objective, consistent study of brightness, you need to relate it to something objective in the world, and we can do that by defining brightness as something like “number of photons per second”. Perceived audio pitch becomes frequency of sound waves. Colour becomes a function of wavelengths emitted/reflected. And so on.

      I don’t see why we can’t treat morality the same way. It’s something we vaguely perceive, and what we perceive seems to relate more or less to how something improves or worsens the lot of those around us. So why not just define it that way?

      > This is different from utilitarianism as an objective code, imposed on humans by moral realism,

      Certainly.

      > and thus something that humans “ought” to adopt whether they like it or not.

      Well, not if we define our terms according to utilitarian morality (which I’m arguing we should). If we do, then we “ought” to behave morally because that’s simply what “ought” means.

    3. Coel Post author

      Hi DM,

      I’m not so sure I’m choosing utilitarianism based on *my values*. I usually see it more as utilitarianism guiding me with regards to which values I ought to hold and which to discard.

      It’s certainly possible for ideas such as this to *influence* your values, but in the end it is still you picking the values. Utilitarianism (and which flavour of utilitarianism) would still be your choice.

      Further, and importantly, the only *bindingness* of the moral scheme comes from your advocacy of it (or that of other humans). That’s the sense in which moral schemes are subjective (they are rooted in the preferences of humans) rather than being objective.

      This is important because, if we’re choosing between variants of utilitarianism then we do so out of preference, not through discovering which one is “right”. And, if we dislike some implication of utilitarianism (e.g. the “repugnant conclusion”), then we are free to simply reject it — after all, the only standing any such scheme has is our assent to it.

      The illusion here is that there must be one such scheme that is self-consistent and coherent, that is the “right one”, and that we “should” adopt. As above, I think the idea of a “right” scheme is a category error, and I suggest that the idea that there must be a self-consistent and coherent scheme that suits us (as opposed to a scheme of competing and tensioned “goods”) is also false.

      But to make an objective, consistent study of brightness, you need to relate it to something objective in the world, and we can do that by defining brightness as something like “number of photons per second”. […] I don’t see why we can’t treat morality the same way. It’s something we vaguely perceive, and what we perceive seems to relate more or less to how something improves or worsens the lot of those around us.

      There is a big difference in that “brightness” could be defined such that an alien scientist could (using that definition) determine brightness without any reference to humans. Well-being is not like that, because the only way of determining whether “something improves or worsens” our lot is to ask humans whether we like it.

      In that sense, morality is always and inevitably subjective (OED: “subjective”: “Based on or influenced by personal feelings, tastes, or opinions”). In that sense, moral realism is inevitably false (“Moral Realism … is the meta-ethical view … that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them”).

      Note that your wording about morality as “something we vaguely perceive” suggests that morality is something external that we have to perceive or learn about, whereas actually it’s something internal, being our values and feelings.

  4. Mark Sloan

    Hi Coel,

    Yes, science, even about “what morality is, where it came from, and how it functions” is descriptive.

    My challenge is figuring out how to make such ‘merely’ descriptive knowledge culturally and philosophically useful in discussions about what is and is not moral.

    I am thinking that usefulness could be made possible by arguments (based only on descriptive science) which show that:

    1) All rational, well-informed persons will agree that moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies. (It is a category error about what morality objectively ‘is’ to argue that moral behaviors are other than elements of cooperation strategies. Defenders of consequentialism, Kantianism, and virtue ethics make this error.)

    2) All rational, well-informed persons will put forward as moral ‘means’ elements of cooperation strategies applicable to other beings held worthy of equal moral regard. (In other words moral ‘means’ are cooperation strategies applicable within an in-group. To claim otherwise is, again, to make a category error about what moral behavior ‘is’. Equal moral regard can be understood to imply Rawlsian fairness. Note Rawlsian fairness does not imply equal help.)

    3) Moral intuitions and cultural moral norms such as “Homosexuality is evil”, “Women must be submissive to men”, and “Don’t eat pigs!” are elements of cooperation strategies that benefit in-groups by exploiting or excluding out-groups.

    Note that 1) to 3) are true (I argue) regardless of “rational, well-informed” people disagreeing about the ultimate goal of moral behavior and who is “worthy of equal moral regard”.

    But in 2), there appears to be a normative claim: “All rational, well-informed persons will put forward as moral ‘means’…” What is the source of that normativity? This normativity’s source is simply the agreement by all rational, well-informed people. Is this an objective normativity? Yes, its normativity is independent of the perceptions, beliefs, feelings or other attitudes of people who are irrational or not well-informed about what the function of morality ‘is’.

    Also, as a practical matter, if there is general agreement that 1) the ultimate goal of moral behavior is something like human flourishing and 2) all people are worthy of equal moral regard in the Rawlsian sense, one has pretty much defined a culturally useful moral code. In such a case, subjective judgments beyond these two may be essentially irrelevant to the science of morality’s cultural utility.

    However, even if you roughly agreed with all of the above, you might still say, “Perhaps, but you still have not shown a source of bindingness for a moral fact that we are mysteriously bound to comply with regardless of our needs and preferences. Therefore, moral realism remains false.”

    Fine, but if we can understand moral behavior and define a highly effective cultural moral code based only on common group goals and science that defines universally moral means, perhaps it is time to update our definitions of ”moral facts” and “moral realism”.

    1. Coel Post author

      Hi Mark,

      My challenge is figuring out how to make such ‘merely’ descriptive knowledge culturally and philosophically useful in discussions about what is and is not moral.

      Note that your wording: “what is and is not moral” implies that there is a fact about such matters, rather than them being values which may differ between humans.

      1) All rational, well-informed persons will agree that moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies.

      Yes, agreed, and note that this seems to be a purely descriptive statement.

      2) All rational, well-informed persons will put forward as moral ‘means’ elements of cooperation strategies applicable to other beings held worthy of equal moral regard.

      Here you seem to be saying that all “rational, well-informed” persons will have similar values, in that they will advance particular behaviours as “moral”. This may not be so. It is possible for someone to be rational and well-informed and yet have radically different values. Psychopaths are examples.

      3) Moral intuitions and cultural moral norms such as “Homosexuality is evil”, “Women must be submissive to men”, and “Don’t eat pigs!” are elements of cooperation strategies that benefit in-groups by exploiting or excluding out-groups.

      Hmm, possibly, although I suspect that how such ideas come to arise can be quite complicated. The “don’t eat pigs” one, for example, may have resulted simply from trichinosis.

      This normativity’s source is simply the agreement by all rational, well-informed people. Is this an objective normativity? Yes, its normativity is independent of the perceptions, beliefs, feelings or other attitudes of people who are irrational or not well-informed about what the function of morality ‘is’.

      Two replies. First, you seem to be getting to a universal agreement simply by labeling anyone who disagrees as irrational and not well-informed. Even if you do get agreement among a group of people, you do not get *objective* normativity that way. If 100 out of 100 people think that “X should do Y” then that is still *subjective* normativity: it derives from the opinions of those 100 people.

      It is not the case that if a value is held by 99 out of 100 people, then the value is subjective, but if it is held by 100 out of 100 people then it is “objective”. If you want *objective* normativity you need normativity that is entirely independent of all 100 of those people; it would have to remain objectively normative even if 5 or 50 or all 100 of those people disagreed.

      Second response, on what the “function” of morality is. The “function” of something is evaluated against a goal, and the goals of *humans* are not the same as the goals of evolution. Thus the *evolutionary* “function” of morality does not imply any normativity for *us*.

      For example, the evolutionary function of sex organs is procreation. Contraception runs directly counter to that function. But those two facts do not imply anything normative about contraception. That would be the naturalistic fallacy.

      In the same way, the fact that moral feelings evolved to facilitate cooperation does not carry any normative implications for us. We can decide what to do based on what *we* want.

      Also, as a practical matter, if there is general agreement that [… (1) and (2) …] then one has pretty much defined a culturally useful moral code.

      Yes, agreed. This is the very essence of a *subjective* moral scheme and how subjective morality does actually work — it derives from agreement among people based on their feelings and values.

      Fine, but if we can understand moral behavior and define a highly effective cultural moral code based only on common group goals and science that defines universally moral means, perhaps it is time to update our definitions of “moral facts” and “moral realism”.

      Language can be adapted to be useful, yes. But the whole discussion of morality is still bedeviled by the illusion that morality is some external system that we “ought” to adopt, and that our task is to work out what the “correct” moral scheme is and then adopt it.

      I suggest that explicitly rejecting “objective morality” and “moral realism” is a good way of conveying a rejection of such “top down” morality. Instead, morality is a “subjective” system that works in a “bottom up” way, deriving ultimately from people’s values and feelings, and the working out of collective agreements based on those (in the same way that we work out political agreements).

  5. Mark Sloan

    Coel,

    Good question about whether people, including psychopaths, with different values could all agree with the three claims from the science of morality!

    Actually, the three claims are value independent.

    As value independent claims, they would be supported by all well-informed, rational people, including psychopaths (stipulated here as rational people who value only their own well-being). However, the “will put forward” language I used above (from Gert’s SEP definition of normative) could be interpreted to imply a motivation to “put forward”, and therefore a value. I’ve corrected that below in 2a) to “will agree”, which does not imply any values.

    Also, note that “well-informed” here refers to understanding the evolutionary origins of morality’s function, as summarized by the three claims below, in addition to the necessary moral philosophy. I should come up with a less offensive choice of phrase, as you pointed out later in your comment, but bear with me on that wording for a bit.

    The three value independent claims are:

    1) All rational, well-informed persons will agree that moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies. – Just science, no values implied and no claims of any source of motivating force to act ‘morally’.

    2a) All rational, well-informed persons will agree that elements of in-group cooperation strategies are universally moral ‘means’. – Again just science that includes the factual observation that in-group cooperation strategies are, uniquely, not classified as immoral by any other cooperation strategy. So regardless of what cooperation strategies a group advocates and enforces (even strategies that exploit out-groups) or what values groups have, all will necessarily agree that in-group strategies are universally moral as a valueless observation.

    3) Moral intuitions and cultural moral norms such as “Homosexuality is evil”, “Women must be submissive to men”, and “Don’t eat pigs!” are elements of cooperation strategies that benefit in-groups by exploiting or excluding out-groups. – Again just science.

    So I am claiming all well-informed (about the evolution of morality) rational psychopaths will agree that in-group cooperation strategies, such as the well-known “Do unto others as you would have them do unto you” (which initiates indirect reciprocity), are universally moral. There is no implication that the rational psychopath’s motivation will be anything except to follow or violate moral norms however he expects will be in his best interest.

    You said “If you want *objective* normativity you need normativity that is entirely independent of all 100 of those people; it would have to remain objectively normative even if 5 or 50 or all 100 of those people disagreed.”

    Right, but the “universal moral means” evolutionary science identifies is a valueless fallout of known cooperation strategies, independent of whose values support some other moral ‘means’. It is the only subset of cooperation strategies that is not contradicted by being judged immoral by any other strategy. As such, it is as objective, cross-species universal, and value free as the mathematics of game theory.
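    The appeal to the mathematics of game theory here can be made concrete. As a rough illustrative sketch of my own (using the standard iterated prisoner’s dilemma payoffs, not anything specific to this discussion), a direct-reciprocity strategy such as tit-for-tat sustains mutual cooperation over repeated play, while mutual defection forfeits those benefits:

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff.
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Direct reciprocity: cooperate first, then copy the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    """A purely self-interested baseline strategy."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return the two strategies' total payoffs over repeated rounds."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Mutual reciprocity outperforms mutual defection over repeated play:
print(play(tit_for_tat, tit_for_tat))      # -> (30, 30)
print(play(always_defect, always_defect))  # -> (10, 10)
```

    Indirect reciprocity works analogously, with reputation standing in for direct experience of the other player’s past moves.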

    Motivation to conform to this “universal moral means” or any moral norm, distaste for cooperation strategies that exploit out-groups, and choice of ultimate goals for ‘moral’ behavior are all value dependent. All are therefore subjective as I expect you agree.

    Where we may still disagree is the existence of a value free, cross-species objective, “universal moral means” and the cultural utility of understanding the dichotomy of morality having subjective ultimate goals but objectively moral ‘means’.

    Reply
    1. Paul Braterman

      “All rational, well-informed persons will agree that moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies”; how question-begging can you get? There are millions of people who believe that moral ‘means’ are following the will of God, or glorifying him, without regard to social consequences. If you reject these as not being “rational, well-informed persons”, then the argument reduces to “People who agree with me about what is moral would agree with me about what is moral.” True but utterly trivial.

    2. Coel Post author

      Hi Mark,

      1) All rational, well-informed persons will agree that moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies.

      Agreed, if this is to be interpreted as “all well-informed persons will agree that feelings and behaviours that we call moral have evolved to produce cooperation”.

      2a) All rational, well-informed persons will agree that elements of in-group cooperation strategies are universally moral ‘means’.

      You are asserting that well-informed persons will apply the label “moral” to in-group cooperation strategies? How are we to interpret that claim? Is it a repeat of the previous claim? Is the applying of the label “moral” a value judgement here? Are you thus asserting that such people will make this value judgement? If not, what are you asserting?

      Again just science that includes the factual observation that in-group cooperation strategies are, uniquely, not classified as immoral by any other cooperation strategy.

      Hold on, applying the labels “moral” or “immoral”, or classifying acts as such, is essentially making value judgements. If that’s not your intention, what actually are you asserting by applying such labels?

      So regardless of what cooperation strategies a group advocates and enforces … or what values groups have, all will necessarily agree that in-group strategies are universally moral as a valueless observation.

      This seems to me to be self-contradictory. What does agreeing that a strategy is “universally moral” mean if it is not a value judgement?

      Are you using the phrase “is universally moral” to mean “has evolved to promote cooperation”? If so, I agree with your claim, but your claim then becomes “well-informed people will agree that cooperative strategies have evolved to promote cooperation”, which is trite.

      To me, what you’re doing is continually sliding between *descriptive* language and moral-realist language, as though it were obvious what phrases such as “… is moral” actually mean. Yet, the whole point of my anti-realist critique is that I don’t know what a moral-realist “… is moral” is supposed to mean.

      So I am claiming all well-informed (about the evolution of morality) rational psychopaths will agree that in-group cooperation strategies, such as the well-known “Do unto others as you would have them do unto you” (which initiates indirect reciprocity), are universally moral.

      Again, I don’t know what that claim is supposed to mean. The basic claim is:

      “All well-informed people will agree that cooperation strategies are universally moral.”

      I can interpret the “… are universally moral” as meaning “evolved to promote cooperation”, but that makes your claim a simple tautology.

      Or I can interpret the “… are universally moral” as a value judgement, as a statement about what such people like, such that the claim becomes:

      “All well-informed people will agree that cooperation strategies are what they like.”

      But that (1) is then a highly contentious claim, and (2) even if it were true, it still gives you a *subjective* moral system, since it is rooted in what people like.

      So, perhaps you mean something else by the phrase “… are universally moral”, but if so I have not gathered what it is.

  6. Mark Sloan

    Hi Paul,

    So you are thinking that starting from descriptively moral behaviors defined as “behaviors motivated by our moral sense or advocated by past and present moral codes” assumes the truth of “moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies”? I don’t follow that.

    First, if you have a better definition (a definition that would not be “question begging”) of the data set science ought to study when it studies descriptively moral behaviors than “behaviors motivated by our moral sense or advocated by past and present moral codes”, please let me know.

    Second, it required a tremendous amount of supporting work (over about a 40 year time period) to show that the hypothesis that “moral ‘means’ (moral behaviors) are subsets of elements of cooperation strategies” appears true in the scientific sense (in the sense of what moral ‘means’ ‘are’, not what moral ‘means’ ‘ought’ to be). In no way is this astonishing (to me at least) result defined to be true by a premise (question begging).

    Reply
  7. Mark Sloan

    Hi Coel,

    Yes, “all well-informed persons will agree that feelings and behaviours that we call moral have evolved to produce cooperation” summarizes the claim!

    However, I would still clarify that the actual data set used to test and confirm this hypothesis is: “All known aspects of behaviors motivated by our moral sense or advocated by past and present moral codes”.

    What does “universally moral” mean as a descriptive, science based, value free claim about what ‘is’?

    “Universally moral”, as a value free claim, means 1) not contradicted by any other cooperation strategy (in the sense of not being judged immoral by any other cooperation strategy such as those that exploit or ignore out-groups) and 2) being the minimum strategy set necessary to maintain the benefits of cooperation in groups. The only such subset of all descriptively moral behaviors (means) I am aware of is in-group cooperation strategies.

    Because of 1) and 2), we can then expect that all cultures and species that have moral codes, regardless of their values, will judge in-group cooperation strategies to be moral, and therefore universally moral. Of course, their thinking on the morality of other than in-group moral means may be radically different.

    Also, these universally moral strategies are only about universal ‘means’. We agree that there are no universal moral ‘ends’ (ultimate goals for acting morally) because they are subjective.

    Reply
    1. Coel Post author

      Hi Mark,

      Because of 1) and 2), we can then expect that all cultures and species that have moral codes, regardless of their values, will judge in-group cooperation strategies to be moral, and therefore universally moral.

      Yes, I think I can agree with that. It is a descriptive statement that says that, because of how people (and similar species) have evolved, certain feelings will be universal.

      But, that still gives a subjective scheme. The scheme is founded in judgements that people make, which is the essence of subjectiveness. The word “moral” is still a label of approval, applied by humans to things they like.

      That is very different from objective morality, which would be some external standard of conduct to which humans “ought to hold to” (whatever that means), regardless of what humans think of that.

  8. Mark Sloan

    Hi Coel,

    “…because of how people (and similar species) have evolved, certain feelings will be universal.”

    Well, “feelings” are only a proximate cause of motivation to initiate in-group cooperation strategies such as indirect reciprocity which risk being exploited. But the ultimate source of these feelings is the benefits of in-group cooperation strategies that selected for them. These in-group cooperation strategies are as objective, feeling independent, species independent, and universally moral as the mathematics that define them.

    C – “The word “moral” is a still a label of approval, applied by humans to things they like.”

    No, in-group cooperation strategies are universally moral independent of what individuals like. They are objectively moral ‘means’.

    But can moral ‘means’ without a defined ultimate goal be usefully called an objective morality? Kantianism and all deontological moralities (which are only about moral ‘means’) are generally accepted as moral systems with no defined ultimate goal. So in-group cooperation strategies define a universal moral system in the same sense that Kantianism claims to – even though neither defines an ultimate goal. The difference is that in-group cooperation strategies are objectively moral; Kantianism is not.

    Indeed, telling the killer the truth about where his victim is (which Kantianism can be interpreted to require) is objectively immoral ‘means’, regardless of what any individual feels about the matter.

    Thus, it seems to me inescapable that in-group cooperation strategies define an objective morality, even when that cooperation has subjective ultimate goals.

    C – “That is very different from objective morality, which would be some external standard of conduct to which humans “ought to hold to” (whatever that means), regardless of what humans think of that.”

    I don’t see that objectivity regarding morality necessarily implies what I call “magic oughts” that people ought to hold to regardless of their needs and preferences. All objectivity regarding morality implies is 1) people who think some behavior is moral that contradicts what is objectively moral are simply mistaken about what is moral and 2) people who do not conform are simply acting immorally. No “magic oughts” are needed to have an objective morality.

    Reply
    1. Coel Post author

      Hi Mark,

      No, in-group cooperation strategies are universally moral independent of what individuals like. They are objectively moral ‘means’.

      I’m still struggling with what you’re actually meaning, specifically what you mean by the word “moral”. You’ve never really explained.

      In that quote you apply the labels “universally moral” and “objectively moral” to in-group cooperation strategies. But what do you mean by “moral” as used there?

      You might be using “moral” to mean “promotes cooperation”. That makes your claim true, but also a tautology.

      Or you might be using “moral” to mean “is approved of by humans” (which is what I mean by “moral”). But you can’t be meaning that, given your inclusion of “… independent of what individuals like”.

      If you mean something else, can you explain? Until I know what your claim actually means, I can’t tell whether I agree with it!

      All objectivity regarding morality implies is 1) people who think some behavior is moral that contradicts what is objectively moral are simply mistaken about what is moral …

      Same questions! I honestly don’t know what you’re claiming! Let’s try some substitutions:

      Take: “moral” = “promotes cooperation”. Gives:

      “… people who think some behavior {promotes cooperation} [when in fact it] contradicts what objectively does {promote cooperation} are simply mistaken about what {promotes cooperation}”. Is that what you mean?

  9. Mark Sloan

    Hi Coel,

    You ask if “moral” = “promotes cooperation”.

    Not exactly. What is “descriptively moral” does = “promotes cooperation”. But note that what is merely descriptively moral may promote the benefits of in-group cooperation at the expense of out-groups. Also, as a data set of behaviors, they can be diverse, contradictory, and bizarre.

    Just the word “moral” in isolation is ambiguous.

    When I use “moral” on its own, I try to make the meaning clear by context (such as above: “people who think some behavior is moral”) or by specifying either “descriptively moral”, “objectively (or universally) moral”, or “normatively moral”.

    Here, “descriptively moral” is as above. “Objectively (or universally) moral” refers to what is universally moral as descriptive scientific truth. For the sake of simplifying our discussion, we can assume your definition of “normatively moral” meaning something like binding regardless of needs and preferences (implying a magic ought).

    Then, as I have argued previously, “objectively (or universally) moral”, as a value free claim, means 1) not contradicted by any other cooperation strategy (in the sense of not being judged immoral by any other cooperation strategy such as those that exploit or ignore out-groups) and 2) being the minimum strategy set necessary to maintain the benefits of cooperation in groups.

    Assume your preferred (subjective) goal for moral systems is consequentialist, perhaps “the well-being of all people” or “human flourishing”. What moral ‘means’ (cultural moral norms meaning heuristics for moral behavior) ought (instrumental) you advocate and enforce that are most likely to achieve these goals?

    Due to the evolutionary origins of our moral psychology, advocating for and enforcing these in-group cooperation strategies can be expected to be more effective than any other moral ‘means’ (regarding interpersonal relationships) in actually achieving well-being or flourishing. So, first, in-group cooperation strategies are the instrumental choice for achieving well-being or flourishing. Second, it can be expected to be easier to get large numbers of people to agree on these moral ‘means’ (large participation is important for effective moral codes) when those moral ‘means’ are universally moral as a matter of science, not opinion. Third, even if some people have a different ultimate goal for morality, if they agree on moral ‘means’ (since there is a fact of the matter) then many moral disagreements simply vanish.

    How about a new rule-utilitarianism using science based rules? In-group cooperation strategies such as “Do unto others as you would have them do unto you” (which advocates initiating indirect reciprocity) sound pretty good to me.

    Reply
    1. Coel Post author

      Hi Mark,

      Then, as I have argued previously, “objectively (or universally) moral”, as a value free claim, means 1) not contradicted by any other cooperation strategy […] and 2) being the minimum strategy set necessary to maintain the benefits of cooperation in groups.

      OK, so you are indeed effectively using “moral” as a synonym for things that promote cooperation in groups.

      Suppose, though, someone were to say: ok, but I don’t accept the value system of promoting cooperation within groups, I want to develop and abide by a different value system. What would you say to such a person?

  10. Mark Sloan

    Hi Coel,

    Ah, I think I get your point now about “using “moral” as a synonym for things that promote cooperation in groups”. Yes, but that synonym necessarily follows from the observation that “all behaviors motivated by our moral sense and advocated by past and present moral codes are elements of cooperation strategies”. Given that elements of cooperation strategies are what moral behavior descriptively ‘is’, neither I nor anyone else has any other rational choice in the matter.

    However, while we may have no choice regarding what ‘is’, we can still rationally 1) decide that moral ‘means’ ‘ought’ to be something else and, of course, 2) what the ultimate goal of moral behavior ‘ought’ to be.

    So, to the person who says: “I want to develop and abide by a different value system”, I would say they have not necessarily made an error. If they are talking about what moral goals ‘ought’ to be, they are logically fine. Even if they are talking about what moral ‘means’ ‘ought’ to be, they can still be reaching a rational conclusion. Only if their different values about moral ‘means’ were claimed to be about what moral ‘means’ ‘are’ would I tell them they have made an error.

    But I also would warn any person who advocates that values regarding moral ‘means’ ‘ought’ to be something else that their chances of actually achieving common goals, such as increasing well-being or flourishing in their cultures, are comparatively poor. Due to our evolutionary history, moral ‘means’ as cooperation strategies fit human beings like a key in a well-oiled lock, because this key, cooperation strategies, is what largely shaped this lock, our moral psychology. That is, cooperation strategies are inherently harmonious with and motivating to our moral psychology. Further, the biology underlying our emotional experience of durable well-being (famously experienced among family and friends) was arguably largely selected for by its power to keep us cooperating in groups.

    In sharp contrast, I don’t remember reading about Utilitarians or Kantians waxing misty-eyed over the positive effects on their durable well-being gained by conforming to their moral ‘means’. Indeed, moral ‘means’ that are distinctly dissonant with our evolved moral psychology are a source of decreased durable well-being.

    Reply
    1. Coel Post author

      Hi Mark,

      I’m still struggling to decipher what you mean by the word “moral”! Every time I ask you what you mean by “moral” you explain it with a sentence involving the word “moral”. This doesn’t help, because I don’t know what you mean by “moral”! For example:

      Yes, but that synonym necessarily follows from the observation that “all behaviors motivated by our moral sense and advocated by past and present moral codes are elements of cooperation strategies”.

      Substituting in for “moral” as meaning “cooperation promoting”, I get: “all behaviors motivated by our {cooperation-promoting senses} and advocated by past and present {cooperation-promoting} codes are elements of cooperation strategies”. Which, I agree is true, but it also seems to be a tautology.

      Given that elements of cooperation strategies are what moral behavior descriptively ‘is’, …

      Which translates to: “Given that elements of cooperation strategies are what {cooperation-promoting} behavior descriptively ‘is’, …”. Again, true but tautologous.

      we can still rationally … decide … what the ultimate goal of moral behavior ‘ought’ to be.

      What do you mean by “ought to be” here? Do you mean “what we want”?

  11. Mark Sloan

    Hi Coel,

    Cooperation morality, as a moral system, defines “moral” as something like “increase the benefits of cooperation consistent with in-group cooperation strategies”. But that definition in no way changes the meaning of “moral sense” and past and present “cultural moral codes”.

    Similarly, what our moral sense and past and present cultural moral codes “are” is the same for Utilitarianism, Kantianism, and virtue ethics, as well as cooperation morality.

    Therefore, it is not a tautology to define what is descriptively moral (or universally moral) in terms of our moral sense and past and present moral codes (or what is universally moral according to our moral sense and past and present moral codes). There is a large, and to me profound, information content in claiming “all behaviors motivated by our moral sense and advocated by past and present moral codes are elements of cooperation strategies”. Tautologies provide no new information.

    C: “we can still rationally … decide … what the ultimate goal of moral behavior ‘ought’ to be.” What do you mean by “ought to be” here?

    What I meant to say was that we can state our subjective preference for what the ultimate goal of moral behavior ‘ought’ to be with no necessary contradiction of a known fact from science about what morality’s function objectively ‘is’. Though we obviously could state some preference for ultimate goals such as “please a god”, “act consistently with Kantianism”, or “exploit out groups to benefit my in-group” that would contradict what science tells us universally moral ‘means’ objectively are.

    Reply
    1. Coel Post author

      Hi Mark,

      But that definition in no way changes the meaning of “moral sense” and past and present “cultural moral codes”.

      OK, but you still haven’t given a straightforward statement of what you mean by the word “moral”.

      Similarly, what our moral sense and past and present cultural moral codes “are” is the same for Utilitarianism, Kantianism, and virtue ethics, as well as cooperation morality.

      So can you straightforwardly state what a moral realist means by the term “moral”?

      As a moral anti-realist, to me the word “moral” is an indication of approval. It functions like the words “delicious” or “beautiful”: it is simply a declaration that the speaker likes or approves of something. And, if one accepts this, then the realm of morals is inevitably subjective.

      You seem to be arguing for moral realism, and some sort of objective morality that isn’t just a declaration of the speaker’s feelings, but I’m still not understanding what you actually mean by the term, and because of that I can’t really evaluate whether I agree with your claims.

      There is a large, and to me profound, information content in claiming “all behaviors motivated by our moral sense and advocated by past and present moral codes are elements of cooperation strategies”.

      But I’d suggest that many strategies advocated by many “moral codes” that have existed actually harm cooperation! One can only make that sentence work by restricting to the subset of “moral codes” that actually do promote cooperation.

      What I meant to say was that we can state our subjective preference for what the ultimate goal of moral behavior ‘ought’ to be …

      The trouble is, I don’t know what you mean by “moral behaviour” and I don’t know what you mean by “ought to be”!

      If I’m interpreting you correctly above, by “moral behaviour” you mean “behaviour that promotes cooperation”. Then that sentence becomes “we can state our subjective preference for what the ultimate goal of {behavior that promotes cooperation} ought to be”. But that’s a weird thing to say. If it’s *defined* *as* “behaviour that promotes cooperation” then the goal is already specified.

      … about what morality’s function objectively ‘is’.

      But the “function” of something is always dependent on the viewpoint: function is defined in relation to some goal or objective, and that is then necessarily subjective. So I don’t see that one can have “objective” function.

  12. Mark Sloan

    Hi Coel,

    C: “So can you straightforwardly state what a moral realist means by the term “moral”?”

    Sure.

    An evolutionist’s moral realism defines “moral” as behaviors consistent with in-group cooperation strategies, such as direct and indirect reciprocity, which do not exploit out-groups. As a moral realism claim, anyone who says these behaviors are immoral is objectively wrong.

    An evolutionist also defines what is merely descriptively moral as cooperation strategies. Of those cooperation strategies, only a subset are universally moral and therefore the basis of a moral realism claim. Due to our evolutionary history, people’s moral sense innately triggers approval or disapproval of behaviors that exploit out-groups as well as behaviors which do not.

    Coel, how do you, or moral relativism as a discipline, address human innate approval of what to modern sensibilities are grossly immoral behaviors such as genocide and slavery?

    As a part of science, an evolutionist’s moral realism is a claim only about what ‘is’ morally universal, and is silent on any obligation or bindingness independent of people’s needs or preferences.

    Reply
    1. Coel Post author

      Hi Mark,

      An evolutionist’s moral realism defines “moral” as behaviors consistent with in-group cooperation strategies, …

      Ok, noted, but this is your definition, not one common to all evolutionists.

      As a moral realism claim, anyone who says these behaviors are immoral is objectively wrong.

      So long as they agree with your definition of “moral”.

      Coel, how do you, or moral relativism as a discipline, address human innate approval of what to modern sensibilities are grossly immoral behaviors such as genocide and slavery?

      I’d say that modern sensibilities are often different from past sensibilities. Things that we consider grossly immoral (e.g. genocide and slavery) have in the past been considered acceptable (and even laudable).

      [By the way, I would not describe my position as “moral relativism”, since under some explanations of what that means I don’t agree with it, instead I’d call it “moral subjectivism”.]

  13. Mark Sloan

    Coel,

    Somehow I missed seeing your reply, or I would have responded earlier.

    I used the word “define” regarding the evolutionary perspective on moral realism because you used “define” in asking your question.

    This was unfortunate because using the word “define” gave the mistaken impression that I had a choice in the matter. I did not have a choice, any more than I would have in any other scientific truth.

    Again, of all descriptively moral behaviors (elements of cooperation strategies) there is only one universal subset – in-group cooperation strategies – which are objectively moral, regardless of our needs and preferences.

    So far as I know, I am alone in framing the evolutionary version of moral realism as a claim about what is universally (and objectively) moral, and, of course, I could be mistaken.

    In any event, I appreciate the opportunity to engage with someone as knowledgeable (and patient!) as yourself on the subject of a science-based moral realism.

    1. Coel Post author

      Hi Mark,

      This was unfortunate because using the word “define” gave the mistaken impression that I had a choice in the matter. I did not have a choice, any more than I would have in any other scientific truth.

      But the meaning of words is something that humans do have a choice over. We can make many different choices of words, as demonstrated by the fact that there are lots of human languages. It’s also now clear that you are using the word “moral” to mean something different from what I would mean by it.

      Again, of all descriptively moral behaviors (elements of cooperation strategies) there is only one universal subset – in-group cooperation strategies – which are objectively moral, regardless of our needs and preferences.

      OK, so in evaluating your sentence I now turn to what you mean by “moral”, where you have told me that you “define “moral” as behaviors consistent with in-group cooperation strategies”.

      So, substituting that into your sentence I get:

      “Again, … there is only one universal subset – in-group cooperation strategies – which are {consistent with in-group cooperation strategies}”.

      Which, again, is a tautology, which is why your approach is baffling to me. You explain that by “moral” you mean in-group cooperation strategies, and then you explain that in-group cooperation strategies give you in-group cooperation strategies. Then you say that of all cooperation strategies only a subset, the in-group ones, gives you in-group cooperation strategies. And then you once again emphasize that in-group cooperation strategies are what objectively are in-group cooperation strategies.
