Maybe I’m having a philosopher-bashing week. After disagreeing with Susan Haack’s account of science I then came across an article in the TLS by David Papineau, philosopher of science at King’s College London. He does a good job of persuading me that many philosophers of science don’t know much about science. After all, their “day job” is not studying science itself, but rather studying and responding to the writings of other philosophers of science.
Papineau writes:
No doubt some of the differences between philosophy and science stem from the different methods of investigation that they employ. Where philosophy hinges on analysis and argument, science is devoted to data. When scientists are invited to give research talks, they aren’t allowed simply to stand up and theorize, however interesting that would be. It is a professional requirement that they must present observational findings. If you don’t have any PowerPoint slides displaying your latest experimental results, then you don’t have a talk.
I wonder, has he ever been to a scientific conference? “When scientists are invited to give research talks, they aren’t allowed simply to stand up and theorize, however interesting that would be.” Err, yes they are! This is entirely normal. Scientists who do that are called “theorists”; and yes, they do indeed stand up at conferences and talk only about theoretical concepts and models. Such people are a major part of science. Universities have whole departments of, for example, “theoretical physics”.
How could Papineau have such a gross misconception? I suspect it comes from trying to see philosophy and science as distinct disciplines. The philosopher knows that philosophy is largely about concepts, and also knows that science is about empirical data. So the philosopher then leaps to the suggestion that science is only about empirical data, and not about theorising and concepts. After all, if science were about both empirical data and theories and concepts, then philosophy would not look so distinct and exalted in comparison.
Yet the “not about concepts” claim makes no sense, since science is just as much about theories and models as about data. Without theories science would have only raw, un-interpreted streams of sensory data. In actuality, science is an iteration between theories and models, on the one hand, and empirical data on the other. Both are equally important, with the real virtue of science being the iterative interaction of the two.
Papineau further displays his lack of understanding of science:
Scientific theories can themselves be infected by paradox. The quantum wave packet must collapse, but this violates physical law. Altruism can’t possibly evolve, but it does. Here again philosophical methods are called for.
Not so. There is no physical law that prohibits wave-function collapse (which is not the same as saying we have a good understanding of it). And the theory of reciprocal altruism gives a satisfactory account of the evolution of altruism, even in unrelated animals (kin selection explains it readily for close relatives). In neither case have the advances in understanding been driven by philosophers.
But Papineau continues:
We need to figure out where our thinking is leading us astray, to winnow through our theoretical presuppositions and locate the flaws. It should be said that scientists aren’t very good at this kind of thing.
Ah yes, the conceit that only philosophers can do thinking, whereas scientists are not so good at it. This, one presumes, follows from the suggestion that science is merely about data, whereas philosophers deal with the concepts? Again, this is about as wrong as it is possible to get.
The theory of evolution; the theories of special relativity and general relativity; the theory of quantum mechanics and quantum field theory; the standard model of particle physics; the Big Bang model of cosmology; the theories of statistical mechanics and of thermodynamics — these are all the products of science, and are demonstrably highly successful in giving understanding of phenomena in the world, in making predictions about those phenomena, and in enabling us to manipulate the world around us and to develop highly sophisticated technology.
What have philosophers got that is even remotely comparable in terms of demonstrated success? But Papineau wants to suggest that it’s the scientists who are “not very good” at theorising and thinking!
When they are faced with a real theoretical puzzle, most scientists close their eyes and hope it will go away.
He really doesn’t know very much about theoretical physicists, or about scientists in general, does he! He then calls it a “great scandal” that:
Led by Niels Bohr and his obscurantist “Copenhagen interpretation” of quantum mechanics, [physicists] told generations of students that the glaring inconsistencies apparent in the theory were none of their business.
This is just wrong. It’s not that there are “glaring inconsistencies” — quantum mechanics is consistent and works well in accounting for the data — it’s that the interpretation of it is unclear.
“Shut up and calculate” was the typical response to any undergraduate who had the temerity to query the cogency of the theory.
No it wasn’t. Generations of undergraduates have been told about the difficulties of interpretation. Papineau doesn’t realise the degree of whimsy in phrases such as “shut up and calculate”; it is not intended literally! In fact there is no subject that physicists have deliberated and discussed more over the last 100 years than the interpretation of quantum mechanics!
So, after touting the alleged superiority of philosophers when it comes to thinking, how does he then explain away the blatant fact that science has been vastly more successful and makes vastly more progress than philosophy?
He concludes that philosophy is simply harder!
The difficulty of philosophy doesn’t stem from its peculiar subject matter or the inadequacy of its methods, but simply from the fact that it takes on the hard questions.
I beg to differ. I don’t see the questions of philosophy as any harder. Instead the lack of progress is fully down to its methods, and the principal culprit is in seeing philosophy as distinct from science, rather than as a part of science. By regarding itself as separate from science, philosophy divorces itself from empirical data, and so from information about the real world. Humans are simply not intelligent enough to get far by thinking alone, without any prompts from nature. Scientists aren’t, and philosophers certainly aren’t.
Papineau finishes by giving one example of what he sees as actual progress in philosophy:
The deficiencies of established views are exposed . . . The “boo-hurrah” account of moral judgements was all the rage in the middle of the last century, but no-one any longer defends this simple-minded emotivism.
But no actual deficiencies of emotivism have been exposed; it may be out of fashion in the philosophical world, but that really is just fashion. Is this really Papineau’s example of progress? It’s as likely that it’s a retrograde step.
Emotivism — the idea that morality is a matter of value judgements, pretty much akin to aesthetic judgements, and amounting to emotional approval or disapproval of certain acts — is a widely held opinion within science. Indeed it is the only account of morals that is consistent with the fact that humans are evolved animals. If philosophers move away from that position, and wander off to explore other conceptual possibilities that don’t relate to how humans actually are, they will be condemning themselves to further irrelevance.
Hi Coel. You’ve done a good job of criticising Papineau’s errors regarding science. I thought I’d address some of his other errors.
“Today’s philosophers still struggle with many of the same issues that exercised the Greeks. What is the basis of morality? How can we define knowledge? Is there a deeper reality behind the world of appearances?”
Yes, many (not all) philosophers still struggle with those questions, more’s the pity.
“What is the basis of morality?”
This question is based on the incorrect assumption that there’s an objective morality that can in some sense have “a basis”.
“How can we define knowledge?”
I think we have a good enough understanding of the word “knowledge”. Problems only arise when philosophers look for an artificially precise definition.
“Is there a deeper reality behind the world of appearances?”
There’s the reality that physicists are in the business of telling us about. That’s all. The idea of a metaphysical “deeper reality” is mumbo jumbo.
“Philosophical issues typically have the form of a paradox. People can be influenced by morality, for example, but moral facts are not part of the causal order. Free will is incompatible with determinism, but incompatible with randomness too. We know that we are sitting at a real table, but our evidence doesn’t exclude us sitting in a Matrix-like computer simulation.”
There are no paradoxes there, just confusions. People can be influenced by their moral values and beliefs, which are part of the “causal order”. The fact that the supposed moral facts are not causal is one very good reason for doubting that they exist. “Free will” is such a confused concept that I say there are no good grounds for saying that it is or isn’t compatible with determinism. If “real table” means (in this context) a table that is not part of a Matrix-like simulation, then it makes no sense to say that we “know” we are sitting at a “real table”.
“In the face of such conundrums, we need philosophical methods to unravel our thinking.”
What kind of “philosophical methods” does Papineau have in mind? We certainly aren’t going to resolve those conundrums by means of the “philosophical methods” of Papineau and traditional philosophy. The primary problem with traditional philosophy is that it asks confused questions. There are some philosophers (such as Wittgenstein) who’ve made this point. In addition, I think that many philosophers and non-philosophers alike can sense that there’s something wrong with these “metaphysical” questions, even if they can’t quite put their finger on where the problem lies.
“The difficulty of philosophy doesn’t stem from its peculiar subject matter or the inadequacy of its methods, but simply from the fact that it takes on the hard questions.”
The difficulty of traditional philosophy stems (in large part) from our ability to ask questions that _sound_ meaningful without actually _being_ meaningful.
P.S.
“Emotivism — the idea that morality is a matter of value judgements…”
That’s not what emotivism is.
Hi Richard, what’s wrong with my paraphrase?
Coel,
I would much prefer philosophers to be coworkers in advancing the usefulness of science rather than adversaries.
However, “If acceptance of an idea threatens one’s job, it is remarkable how difficult it can become to understand that idea.” I expect this is part of the problem this philosopher of science has – that the philosophy of science is just not nearly as central to science as he would like to think it is.
In my experience, moral philosophers typically have an even more dysfunctional relationship with the science of morality because it even more centrally threatens their livelihood. Regarding moral philosophy, it seems to me a useful dividing line to demarcate science’s domain as about what ‘is’, and philosophy’s as about what ‘ought’ to be. I don’t know if there is a similar simple division for science in general and the philosophy of science.
I was aware there were philosophers who don’t understand science, but until now I never imagined there were any philosophers OF SCIENCE who get it so utterly wrong. It’s downright breathtaking. It seriously makes me wonder if he knows what he’s talking about when he’s talking about his own field – of philosophy. Based on his grasp of science and its method, I sincerely doubt it. I once heard a stern and frequent critic of philosophy refer to the field as “The Science of Pontification”, adding that “it’s mostly about making stuff up while delighting in the sound of one’s voice”. I thought, surely, that must be an exaggeration. Yet here is an example of one who constructs statements that satisfy an appearance of meaningful communication but are largely bereft of veracity. He can lay claim to that talent, at least: the skill involved isn’t inconsiderable. Thinking clearly – and perhaps sincerely – however, doesn’t seem to be necessary.
Coel writes, “So, after touting the alleged superiority of philosophers when it comes to thinking, how does he then explain away the blatant fact that science has been vastly more successful and makes vastly more progress than philosophy?”
Progress towards what? What is the knowledge explosion leading to? Utopia? Collapse? It seems less than superior thinking to assume “more successful” and “more progress” without considering that question. How many hair-trigger nuclear weapons have to be aimed down our own throats before the science clergy will stop blindly assuming an out-of-control knowledge explosion to be a huge success?
Again, science is great at developing new knowledge. Developing new knowledge does not automatically equal progress and success. The success of the “more is better” relationship with knowledge has created a radically new environment where that paradigm can no longer be assumed to be true.
Some people get this. Few of them appear to be scientists.
This is understandable. Why would a scientist inspect and challenge the “more is better” relationship with knowledge when their cultural authority and paychecks depend upon it? Why would a scientist stand back and observe the big picture of the knowledge explosion as a whole when the reductionist nature of science rewards those with a talent for burrowing deeply into narrow areas of investigation?
What I see in many posts across the blog, including the last two, is the very human need to belong to something, and for that something to be declared superior, “the answer”.
We used to express this in the context of religion. Religion has been discredited for many, but this human need remains even after religion is dead, so we go looking for new flags to wave.
Some of us have chosen reason and science as our new “one true way”. And just like we used to do with religion, there’s a tendency for modern seculars to need enemies to push back against as a mechanism for reinforcing our allegiance to our chosen flag.
What observing this seemingly universal process as objectively as possible can teach us is that these divisions don’t arise from religion, or science, or any other philosophical perspective. They don’t arise from the content of thought, but from the nature of thought, from the way thought operates. Seeing this clearly tends to unite us, because we are all made of thought and subject to its properties.
This seems a very worthy topic for scientists and philosophers to study together, for it is when these thought-generated divisions become the most acute that the dangers presented by knowledge can become the most pressing.
If we’re going to have a smack down competition between philosophers and scientists, here’s where I’d like to see the contest take place.
Which writers, in any field, are talking intelligently about the assumptions which are the motor driving the knowledge explosion assembly line? As example, who is willing to inspect the “more is better” relationship with knowledge and investigate what the limits to that paradigm might be?
Scientists might be seen as the factory workers who have built the knowledge explosion assembly line and continually tweaked it into ever-accelerating high performance. Thanks to them, we now have the ability to produce new knowledge at amazing rates. That’s great, applause is indeed warranted, but…
Who is asking what the appropriate rate for the assembly line should be? I honestly don’t know and would welcome education on that score.
I’m not really that interested in details about the new products coming off the end of the assembly line, AI, genetic engineering etc. Thus, most futurists tend to lose my attention.
I’m interested in the assembly line which is producing such powers. Which group or individuals can speak the most intelligently to that? Let’s have that competition.
Science is not an assembly line.
“it is the only account of morals that is consistent with the fact that humans are evolved animals.”
I beg to differ on this point. I can think of ways to defend naturalistic versions of consequentialism (especially utilitarianism), contractualism, or virtue ethics.
But every one of those would have to be rooted in human desires and preferences (and thus emotivism). Otherwise, what standing would (e.g.) utilitarianism have?
Coel,
If it were based entirely on a specific human’s desires, it would be Egoism, which it isn’t. But Utilitarianism does indeed base its morality on utility, which is global pain and pleasure. This clashes with some of our moral emotions — loyalty to family, for example — AND its reliance on utility causes major issues for it, leading it to suggest strongly counter-intuitive solutions at times, which weakens the idea that our intuitive — and thus our evolved — idea of morality can boil down to specifically human desires.
But any measure of “utility” can only be based on what humans care about. Further, there is no such thing as “global pain and pleasure”, there is only the pain and pleasure of individual humans. Thus Utilitarianism needs some way of aggregating over lots of humans, and yet there is no way of doing that except for people’s opinions about how to do it. And no human values all other humans equally, nor would they agree on who to value. Thus Utilitarianism’s claim to objectivity is spurious and illusory.
This illustrates that attempts to make morality objective don’t work, but that *supports* (rather than weakens) the idea that all there is is people’s feelings and values.
Done! Everyone pretty much cares about pain and pleasure. Seriously, this reasoning is pretty much the entire starting point for all hedonistic moralities, which includes Utilitarianism.
Yes, which is why they solve that by … aggregating over all of the involved humans, and thus arriving at their notion of “global pain and pleasure”.
So, are you insisting that Utilitarianism has to consider each person’s specific opinion on that before it can put forward its theory — and justifications — for why aggregating the total pain and pleasure of all relevant persons is the way to go? This would be you assuming that they CAN’T have a justification before they even get around to telling you what their justifications are …
Sure, but Utilitarianism does not say or rely on all people agreeing, but instead insists that this IS the right way to go. And the only way to challenge them on this specific point is to build a morality that argues that it is okay for you to cause pain to someone else because you don’t like or don’t care about them. Any morality that would accept such a premise probably ought not be considered a morality at all (unless it’s explicitly Egoistic). From there, you can talk about the slightly tougher question of whether it is okay to withhold pleasure from someone because you don’t care for or about them, but again it seems pretty reasonable to say that someone who held that as a moral principle isn’t being moral at all. At which point, Utilitarianism seems to be off to a good start unless you can provide some justification for holding those sorts of moral positions, or at least that we ought not care about those implications.
How?
1) These moral intuitions are SEEN as desire-independent moral conclusions, which is why these are seen as a problem for Utilitarianism.
2) The point of that is that they EXPLICITLY DENY that calculating human desires and values is the determining factor there, which weakens the idea that morality can just be like that because we react rather violently to the idea that in those cases human desires and values trump morality.
You can argue that they are wrong about that, but they clearly do not support your position and provide something that you need to explain and justify.
But any scheme in which what is moral depends on people’s subjective feelings doesn’t give you an objective or moral-realist scheme (which is what Utilitarianism purports to be, isn’t it?)
[“Moral Realism (or Moral Objectivism) is the meta-ethical view (see the section on Ethics) that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them.”]
But how do you do the aggregation? Does everyone count equally? If so, what’s your justification for that (if your justification is that it feels right to you, then that’s subjective)?
They’re very welcome to justify their method of aggregation, so long as it, at no point, depends on people’s subjective opinions on the matter.
Sure, but establishing that one way is *the* right way to go is a pretty tough hurdle. (No appealing to human judgement in doing that!) Even explaining what the phrase “the right way to go” means would be hard enough (“right” or “wrong” generally being value judgements, which need a person doing the judging).
I only have to do that if I first buy their claim that there is an objective “right way to do it”, and that morality is indeed objective. If I don’t accept those things I can simply tell the Utilitarians that they have not made their case.
Is that your personal feeling on the matter? 🙂 Again, such claims depend on us having agreed what “morality” actually is, which is what we have not yet done!
Again, is that an appeal to how people feel about the matter?
Again, I’ll readily concede that human *intuition* is moral realist (which is why so many people try to construct moral-realist schemes that work). I just don’t accept that as a strong argument itself.
Sure it can, at least, because you’re mistaking the underlying moral principles for how you determine what is moral in the real world using them. Any morality that doesn’t completely ignore the desires of humans is going to have to consider those at some point when determining what is or isn’t moral, but they will do so appealing to moral principles that are deemed correct no matter what any specific person thinks of them. So, for Utilitarianism, you will always calculate utility on the basis of overall pain and pleasure, which is the objective part, while all specific decisions will involve finding out what specific pain and pleasure all the relevant parties will feel. But if Utilitarianism is right then if someone who is, say, a Stoic denies that and insists that pain and pleasure are irrelevant to morality they will be wrong.
The subjectivism that objectivists like me worry about covers the cases where the proposition “Slavery is morally wrong” depends on what a person or other grouping THINKS is morally right or wrong. That it may vary due to circumstances isn’t much of an issue for most of them, especially the typically objectivist Virtue Theories (which build that into their idea of virtues most of the time).
You count everyone equally in Utilitarianism because all people are moral units and there is no way to justify treating them unequally. This is indeed something that Utilitarianism gets challenged on, because there are arguments that at least sometimes you SHOULD do that. But it’s certainly not just based on feelings, and philosophically an objective justification for that is demanded of Utilitarians because they are expecting it to be objectively justifiable.
(Also, as a reminder, I am NOT a Utilitarian [grin]. I just know lots about it from my work on moral philosophy).
As to whether that is the correct aggregation to use, right? Because you often drift into arguing that if the aggregation is over subjective things that counts as well, which is not correct.
The “right way to go” here is meant to reference correct, not morally right, which is the OTHER right you are talking about here. If I can’t ever use simple colloquial phrasings without you jumping all over it, we aren’t going to get anywhere [grin].
But to deny that that is a moral implication — and thus that their moral view is incorrect because it holds that — you have to build a valid morality that doesn’t include or imply it. They have no reason to care about your insistence that they haven’t established it if you can’t show how a morality can work without, again, either explicitly stating or implying that statement. And if you can’t, then they have sufficient reason to think that they are on the right track.
Nope, it’s a consequence of examining it and asking the question “Could we have anything that even remotely resembles ANYTHING like what we think is a morality if it includes the idea that you can hurt someone just because you don’t like them?”. There don’t seem to be any moralities that do that, and it seems for good reason. If you have to accept that idea to attack Utilitarianism, then while it’s certainly not invalid for you to bite the bullet and accept it, you aren’t going to get very far with a morality that does that unless you have a VERY good argument for why it works. Which you haven’t provided, probably because you don’t actually believe it yourself [grin].
But since those intuitions are pretty much the only empirical evidence we have of any kind of morality at all, if you want to dismiss them it seems that the burden of proof is on you, not on those who are at least consistent with them most of the time.
But my argument is that you’ll need a utility function (even if it’s just “utility is maximised if you maximise pleasure and minimise pain”), and that choice of utility function is subjective. You can only get to it by the advocacy of a human, based on their preferences and values.
But that’s not really subjectivism, it’s a bastard mixture of subjectivism and objectivism that makes no sense. It is effectively saying that propositions such as “slavery is morally wrong” do have truth values, but that the truth value is dependent on someone’s opinion. That seems to me to be incoherent.
In my form of subjectivism, “Slavery is morally wrong” amounts to the speaker declaring “I dislike slavery”. Someone else can, of course, declare “I like slavery”, but there is no more to it than those likes and dislikes.
There’s no way to justify treating them *equally*, either, except by human advocacy. There’s no way to determine what is a “moral unit” except by the advocacy of whoever is proposing that version of utilitarianism. There are no “defaults” here.
Yes. The method of aggregation is a subjective choice. Just as is who counts as moral units, whether they all count equally, and what utility function to adopt. All of this cannot be established a priori, it all derives from the advocacy of the human advocating utilitarianism. That’s why it rests on subjective foundations.
The good reason is that we all have a lot of human nature in common, so the moralities that we advocate all have a lot in common.
Except it isn’t. There is a truth value, for example, to the proposition “I like chocolate ice cream”, but it depends entirely on the subjective state of the person being referred to. Or perhaps a statement like “That hurt!” is better. It has a truth value, but its truth value depends on the internal state of that person; whether the thing hurt or not. So such a thing is not incoherent.
But this is progress: since you reject that sort of subjectivism, you are boxed into a non-cognitivist approach, insisting that statements like “Slavery is immoral” CANNOT have a truth value. If you say they do, then you have to reject the subjectivist line, and so would have to take the objectivist line there. Thus, for you to maintain your specific subjectivism, it must be a non-cognitivist approach.
What do you mean by “human advocacy” here? Utilitarians are defining it that way, and arguing for the definition. They are not merely declaring it because they like it better, and if they felt that their reasons were insufficient and so it was only based on personal preference they’d reject Utilitarianism. Thus, you need to deal with the arguments and not merely talk about a vague “human advocacy”, particularly in light of my other comments that someone may accept that X is the moral thing to do while still refusing to do it because they don’t see it as being in their own self-interest. So wanting something to be moral and believing it to be moral are two different things.
Who says? You need the second part to be true before we have any reason to accept that the first part is true, making this a circular argument.
No, it seems like no morality that did that could achieve any of the things that we use morality to do. It’s not a good reason to think that, just because we have some common ideas, the things those moralities advocate in common therefore have good reasons to be so. See “sweet tooth”, for example.
Yes, I think that I am indeed taking a non-cognitivist approach. (Though I’m sure that I’m usually using such philosophical terms wrong. 🙂 ). It depends, though, on what one means by “slavery is immoral”. If by “is immoral” one meant “contrary to an agreed code as to how we treat each other” then “slavery is immoral” *would* have a truth value.
One of the problems with the whole field of meta-ethics is that the moral realists have still not told the rest of us what “is immoral” is supposed to mean.
I submit that that is exactly what they are doing! They arrive at a utilitarian framework based on their own subjective values, then they have a gut feeling that there must be some objective justification for that framework, and so look for post-hoc justifications for it.
Agreed, and I’m happy to examine the arguments, though I’m fairly sure that one cannot arrive at a utilitarian framework from a priori reasoning; at some point you need to add in “moral axioms” and those come from people’s subjective value system.
I’d be interested to ask such a person what they think they mean by “the moral thing to do” when they say “X is the moral thing to do”.
You always interpret me as claiming some sort of objective justification (“… have good reasons to be so”). I’m not. The whole point is that there is no such thing. All I was doing was pointing to de facto widespread agreement based on widespread commonality in human nature. I was not saying “therefore it is justified that …”.
Well, if you think that the proposition “Slavery is immoral” has a truth value, regardless of what criteria we use to determine what the truth value is, then you are a cognitivist. Which would mean that, as we agreed, you’d be boxed into an objectivist position, because you consider the subjectivist cognitivist position incoherent. And note that Hume’s emotivist position — that you seem to favour — is indeed non-cognitivist.
So let’s stop just tossing labels around and get into positions. The subjectivist cognitivist position says that moral statements have truth values, but that you can only determine that truth value by referring to a specific group or individual. These, then, generally encompass relativistic theories: the truth of a moral proposition can only be determined relative to the appropriate moral grouping. Individualistic relativistic theories say that that’s relative to the individual, meaning that you have to refer to a specific individual. Thus, the proposition “Slavery is immoral” has a truth value, but since the truth of that is determined by the individual itself, you need to refer to the individual. Thus, “Slavery is immoral” is true if and only if an individual accepts that slavery is immoral. Cultural relativism does the same thing, but the reference is to a culture, not to an individual. Thus, “Slavery is immoral” is true if and only if we are referring to a culture where slavery is considered immoral. And so on.
Non-cognitivist theories say that asking if those propositions have a truth value is absurd, as they simply don’t have them. Hume’s non-cognitivism is simply saying that we are expressing an emotional reaction or view of them, which is the “Boo”/“Hooray” theory. We would never say that it is true when we mean it that way, because we only EVER mean it, really, as that sort of expression. It’s the same thing as applauding or booing a performer; we aren’t really expressing a true statement like “This performer is good”/“This performer is bad”, but instead are merely expressing our personal reaction to them. And just as arguing from those reactions to a true statement about whether the performer is good or bad is an invalid argument, it is invalid to argue from that to the claim that a moral proposition is true or false. Yes, often people DO that, but it is invalid nonetheless.
Here’s where I think the confusion is coming in, or at least how I think your position shakes out. I think that you are really a cultural relativist: you think that what is moral is defined by the culture/society someone lives in. This is consistent with what you say above and pretty much how you argue for this. But I also think that you agree with a different argument of Hume’s, which is that values are required for any kind of action, and that values or any kind of motivation require an emotional connection, and thus emotions are important for motivation. How that all shakes out can get complicated, but it’s at least a coherent position. However, you seem to miss that that applies to ALL actions, not merely moral or even normative ones, and so you make that an important part of the definition of, well, any normative claim and especially any moral claim. This is what draws you to emotivism, and therefore to consider yourself an expressivist, because emotivism is an expressivist claim. This also explains why you try to make all expressivist claims emotivist ones, because as you yourself say all values reduce to emotions. But values aren’t just moral values, and so we want to know what distinguishes moral values from other values. Hume’s move here is about motivations, and so applies to ALL values, including pragmatic ones. If I want to act pragmatically, I’ll need an emotional motivation underneath it, just as I’d need an emotional motivation underneath acting morally. Thus, strongly aligning morality with emotion is not a move you need to make; we can emotionally value something and use that as a motivation to act morally without morals just being defined by our moral reactions to them. Again, we can reason ourselves into a moral proposition and that assignment can ITSELF generate the motivating emotion without any contradiction.
So I think you conflate emotion and morality far too strongly, and in general fit the cultural relativist position better. But if you disagree, we can still use this framework to tease out what your position actually is without having to rely overmuch on labels.
The examinations and the labels that we are talking about are meta-ethics’ attempt to tease out what, in detail, that’s supposed to mean. One of the reasons you keep asking questions like this is, in my opinion, that you don’t have enough knowledge of meta-ethics to see what the various positions imply, including what their problems are. Thus, you ask that like it’s a simple question when, if we’ve learned anything from meta-ethics, it’s that that’s not a simple question [grin].
Except it is clear that that ISN’T what they are doing, given the empirical results:
1) Many of them come into, say, introductory ethics courses with no idea or with other ideas of what morality is, and are convinced that Utilitarianism is right by the arguments.
2) Utilitarianism is actually strongly counter-intuitive in a number of scenarios and yet these results rarely get them to change their position.
3) If you managed to convince them that their acceptance of Utilitarianism was nothing more than that, they’d abandon Utilitarianism.
So you really need to stop treating them like this is what they are really doing. It is possible that their arguments cannot be justified any other way, but they are clearly not justifying it consciously that way, and think the arguments work. Thus, going beyond the arguments in any other way to insist that they are really basing it merely on intuitions is not going to make any progress in the debate.
And this would be the first problem, as you see any “moral axiom” as being subjective, and they don’t. So if they ever introduce one — no matter how they support it — you will then claim that it has to be subjective. As we saw above, it’s easy to argue that your conflation of value and moral axiom is the real cause of the issue, but at a minimum that would have to be settled first before you could insist that they can’t do it a priori. In short, you often jump to dismissing their specific positions on the grounds that the axioms must be subjective when the real debate is over whether they could even possibly be objective.
I have no clue why you think that there’s some kind of interesting vagueness to explore here, but maybe I can clear this up with specific examples of how someone can choose to act counter to their own moral values, using Utilitarianism as the moral code:
We have two people, Person A and Person B. Both of them are convinced Utilitarianism accurately describes morality. Both of them are put in a position where they can choose to save the life of either a renowned scientist who is close to a breakthrough that will save thousands of lives, or their spouse. Both accept that Utilitarianism says that the action with the most utility is to save the scientist and not their spouse. A decides to save their spouse, knowing that they are committing an immoral act, but not being able to bear letting their spouse die when they could save them. B decides to save their spouse, because they have decided that they not only do not care to act morally, but wish to deliberately flout morality and act immorally as a way of thumbing their nose at moral expectations. Again, both consider the moral thing to do to be what Utilitarianism says, and yet both deliberately go against that.
In what sense is this puzzling?
Well, since you were talking to me, you should have known or at least assumed that when I said “For good reason” I meant an objective one, in this case meaning that it is a conceptual impossibility for something to count as a morality and yet not accept that. You then offered the “commonality” argument as that good reason, which then implies that it would fit into that objective good reason I was looking for. If you didn’t want that implication, you should have said that there IS no such good reason, not offered one [grin].
Note that since some things that have evolved are now counter-productive — see the sweet tooth — it is even possible to demand a good reason like I did in the quote for an evolved sensibility, so if you accept evolution you have to accept that my demand is still reasonable: how do we know that we still have a good reason to act on that sensibility? It might now be maladaptive.
Here’s my best attempt to explain my position using your explanation of the terms.
1) The claim “slavery is wrong” does indeed have a truth value by reference to a particular moral framework or code. Thus, “according to Western moral codes, slavery is immoral” has a truth value. The sentence “according to Western moral codes, slavery is immoral” is a *descriptive* statement about Western moral codes.
2) Most people are intuitive moral realists. Thus, when a Westerner says: “Slavery is immoral” they are intending to say that it is not just immoral “according to Western moral codes”, but that it is immoral in an objective sense.
3) But, in thinking that, people are making an error. There is no such thing as “immoral in an objective sense”. “Immoral” is a value judgement and one cannot have a value without a valuer (that is a simple category error). Thus, to my mind, “slavery is immoral” does not have a truth value — in the shortened form its meaning is too unclear to have a truth value — though the longer form “according to Western moral codes, slavery is immoral” does have a truth value.
Thus I would say that the superficial purport of the language is cognitivist (it *attempts* to make objective moral claims with truth values), but that this is an error [I thus hold to “error theory” about morals].
4) So where do the moral codes come from? They are reports of people’s value systems. Thus what people are *actually* doing when they say “slavery is immoral” is expressing their value judgement, and effectively saying: “I abhor slavery”. This latter is pretty much emotivism.
So, attempting to put on labels:
People saying “slavery is wrong” are purporting to make an objective moral claim with a truth value [the phrase *purports* to be cognitivist]. And if interpreted as meaning: “slavery is against my value system” then it *is* indeed cognitivist. But, the claim of objectivity is an error [error theory]. Thus “slavery is wrong” does not have a truth value [non-cognitivism]. The *underlying* meaning is a report of one’s emotional dislike of slavery [emotivism].
So this is both cognitivist and non-cognitivist, depending on exactly what phrase one is talking about, where the disjoint between the two is the error that error-theory points to, and where the underlying meaning is emotivist.
How does that sound to you? Am I misusing terms in the above?
Well, yes, so much so that you seem to be holding utterly incompatible positions and I can’t make heads or tails out of what you really think.
So let me break it down for you with two related questions:
1) When you use the phrase “X is immoral” what do you mean by it? What do you want me to take away from that and how do you want me to interpret it?
2) What is the right way to view or use the phrase “X is immoral”? What OUGHT we mean when we use the phrase?
In meta-ethics, that’s what we’re after: the right way to conceptualize morality. By mixing in so many diverse concepts, you end up with something that is conceptually contradictory, making it incredibly confusing. I’m not interested in what people other than you DO mean when they say that, but what you think they OUGHT to mean, if anything, and that should be consistent with what you mean when you say it if you are not inconsistent. So without using the terms, tell me what you mean by it. That should help clear everything up.
I mean by it: “I dislike X”. (In some contexts I might mean: “I dislike X AND I consider that X violates accepted societal norms”.)
As above, “X is immoral” indicates that the speaker dislikes X (and, again, in some contexts it could also indicate that the speaker also considers that X violates accepted societal norms).
Edit: “I dislike X AND I consider that X violates accepted societal norms” could also be phrased: “I dislike X and consider that most people also dislike X”.
I intend to get back to the other posts — including the ones on rights that I skipped the last time — but I want to get something out on this first so that it might help clear things up or move things along better.
What you say is fine, but it is a bit like saying “Humans evolved from apes”. It’s a not-unreasonable summary, but a lot more needs to be fleshed out before we can understand what it all means.
So, on this, I would say that I dislike raspberries, rap music, and walking in the rain so that I get wet. However, I wouldn’t consider any of these MORAL dislikes, and neither would pretty much anyone else. And the same thing applies to accepted societal norms. If you’ve read my essay “Fearlessly Amoral” on my blog, you’ll know that we generally have a moral/conventional distinction, where we distinguish between moral maxims and conventional ones despite the fact that both of them might be accepted norms. In fact, it seems the reason that psychopaths do not act in a way we consider moral while autistics do is related to how psychopaths cannot make that distinction while autistics can. So, again, there seems to be a difference between something like a rule of etiquette — which fork should I use to eat my salad, for example — and a moral rule.
For you, do these distinctions exist? If they do, how do you determine the difference? Note that “moral rules are more important” isn’t going to work as an answer here, because while we might align on which are more or less important, for most people the reason we consider the moral maxims more important is BECAUSE they are considered moral maxims. Since you’d need to describe the moral in terms of those being independently important, you’d need to show why we should consider those maxims more important without being able to say that it is because they are moral, or else your argument would be circular.
Yes, those distinctions do exist for me. But, given my anti-realist stance, there is no “fact of the matter” about what is a “moral” rule versus what is merely an agreed convention. Whether someone (or people in general) puts something in the “moral” category is itself largely a societal convention.
The underlying concept of morality is about ways in which humans treat other humans, that advantage or disadvantage them. So acts of that sort tend to be regarded as being in the “moral” category.
But that’s not all there is to it. For example, some would put teenage masturbation in the “immoral” category, though mostly these days we think that’s silly. That illustrates that people will disagree on which acts are “morally” salient.
Coming back to my point that most people are intuitive moral realists (though erroneously so in my eyes), I suggest that people put an act in the “moral” category if they think that it is *objectively* right or wrong (as opposed to mere conventions, such as driving on the left or on the right, where either would work as well; and as opposed to things like whether marmite tastes nice, over which people are happy to differ).
Since their moral realism is (as I see it) an error, their judgements of what goes in the “moral” category would then be their own construct (and again there would be no fact of the matter).
The important question here, though, is how do YOU determine what goes into what category? Same as before, the point is to suss out what your view is, not what you think the view of others is, especially if you think them wrong.
Oh, ok. In that case, I’d put something in the “moral/immoral” category when people are being treated well in ways that I approve of, or are being harmed in ways that I disapprove of.
So, after all of this, let me try to outline what I think is your view in a way that aligns with the philosophical terms and is consistent with both what you say and what the terms mean.
I think that you’re a cognitivist. The reason, ironically enough, is that when you try to decide whether something is moral or, for lack of a better term, “conventional”, you a) have a distinction that b) is based on reasons that are c) propositional. In short, you think that being moral relates to how people are treated, whether in good ways or whether it causes them harm. This, then, means that you can give reasons for every determination you make — including whether it is moral or immoral — and those reasons mean that to you the statements always have a truth value. For non-cognitivist views, reasons don’t really apply, and it makes no sense to appeal to them. “Just ’cause” is not only a valid move, but the only one ALLOWED for non-cognitivist views. You at least allow for moral pronouncements to be based on reasons, and in fact generally insist on it.
So, now we can turn to objectivism vs subjectivism. In general, despite the issues raised earlier, you do seem to be subjectivist. You allow for reasons, but those reasons are always constrained with “For you”. To turn it back to my comments long ago about the two types of objective vs subjective, you don’t really claim that someone CAN’T justify their morality to anyone else, but instead say that they don’t HAVE to justify their morality to anyone else. Yes, you use the PHRASING a lot, but in general you don’t use that to indicate that doing so is nonsensical, but instead use it to indicate that it is pointless: unless you can give reasons that apply to that specific person — eg they happen to have the same values as you — then it’s not going to matter to them. At all.
I don’t think you’re an emotivist because your reasons never actually apply to or utilize emotions in any significant way at all. I think you got confused with Hume’s “Boo/Hooray!” theory and thought that your subjective view — that it’s based on what you like or dislike — implied that as well. You also — and I think this is the result of that as well — are conflating values with emotions and insisting that therefore it’s all emotional, but that’s not the level we’re talking about here. Additionally, we tend to feel emotional about our values but as I’ve said a number of times we can have and act on values without any strong emotional reaction, and weak passions (as per Hume) aren’t sufficient to make a position emotivist. In this discussion, I see no place where emotivism would make a difference in my understanding of your position, or in how you generally use it.
So, that’s my take. Feel free to disagree or point out things you think I’ve missed (also feel free to agree, of course [grin]). But I think this is consistent and pretty much captures everything that seems important to you about morality.
Well this is an interesting take! You seem to be saying that I’m not a non-cognitivist because, while I think the superficial moral-realist purport of moral language is an error, I do ascribe cognitivist status to the underlying meaning of moral language.
Thus, “I dislike murder” is clearly cognitivist. But does that make me a cognitivist? When you say:
… you seem to me to be making non-cognitivism and emotivism into something that can have no sensible content at all. You’re allowing those views only the rawest of emoting — a toddler throwing a tantrum and saying “because!”.
If one gives any account beyond that of why humans use moral language or what they might mean by that, then you’re suggesting it no longer counts as emotivism. But has any non-cognitivist or emotivist really proposed anything so crude?
The wiki page gives what to me seems a clear intro to non-cognitivism:
“Non-cognitivism is the meta-ethical view that ethical sentences do not express propositions (i.e., statements) and thus cannot be true or false (they are not truth-apt). A noncognitivist denies the cognitivist claim that “moral judgments are capable of being objectively true, because they describe some feature of the world”.[1] If moral statements cannot be true, and if one cannot know something that is not true, noncognitivism implies that moral knowledge is impossible.[1]
Non-cognitivism entails that non-cognitive attitudes underlie moral discourse and this discourse therefore consists of non-declarative speech acts, although accepting that its surface features may consistently and efficiently work as if moral discourse were cognitive. The point of interpreting moral claims as non-declarative speech acts is to explain what moral claims mean if they are neither true nor false (as philosophies such as logical positivism entail). Utterances like “Boo to killing!” and “Don’t kill” are not candidates for truth or falsity, but have non-cognitive meaning.”
That seems spot on to me, and pretty much summarises my position. So am I really actually a cognitivist?
Yes, because you violate the main criterion for a non-cognitivist stance: the moral statements you make are propositions and have a truth value, as I demonstrated.
I’m not talking about statements like “I dislike murder”. That’s a statement that would be true for most objectivists as well. In order for that statement to make you an emotivist, it has to be that what it means for a statement to be moral is JUST that. Yes, there’s more to it than that — that’s the equivalent of “Humans evolved from apes” — as I pointed out when I talked about linking these things to specific moral emotions — but at the end of the day that’s what the meaning of any moral statement boils down to.
This is not true for you. For you, what it means for a position to be moral relates directly to how people are treated. For something to be moral, it means that people are being treated well, and for something to be immoral, it means that people are being treated poorly. This clearly has cognitive content, clearly makes them propositions, and clearly gives them truth values. By this, your view seems cognitivist, and there’s nothing in what you’ve told me to suggest otherwise.
So now it’s on you: what do you think is missing in a subjectivist cognitivist account, that it can’t reflect what you really believe? What do you need or think fundamentally true that a subjectivist cognitivist approach can’t accommodate?
This reflects a very impoverished idea of emotions, as you refuse to distinguish between anger as tantrum and anger as, say, righteous indignation. Thus, the views are indeed far less crude than you understand, which only indicates that you don’t really understand the view you purport to hold.
Note that even in non-cognitivist or emotivist views we can find regularities and the like. But those regularities can’t be used as arguments to justify the moral position taken. As an example, imagine someone doesn’t like a particular food because they find it too spicy. If you measured the spice in a food that they do like and noted that it had more of that spice, that would not justify them changing their position to liking the first food, or even to no longer thinking it’s too spicy. The same thing would apply to righteous anger, for example. Someone can find traits common to the things that make them feel righteous anger, but arguments that another case — one that bothers them less — is worse with respect to that trait are meaningless to non-cognitivists.
If you find this position untenable, then that’s only further evidence that you aren’t an emotivist.
But if I’m expressing *my* ideas about what is or is not moral, then other people will disagree, and that’s why the raw statements (“sex before marriage is immoral”) are non-cognitivist — there is no truth value to that statement.
But surely an emotivist would deny that there is any fact of that matter as to what “is moral”? Therefore they would not discuss what it means for something “to be moral”.
To me there is no such thing as “what it means for a position to be moral”, unless we’re asking about what I like and dislike (and to which I might apply moral labeling). But if we’re doing the latter then other people will differ, which is why such statements (“X is immoral”) are non-cognitivist.
I find the arguments in that paragraph entirely tenable. In essence they are an appeal to some sort of objective ranking (“if you find A bad then you should find B worse”), and to me there is no such ranking.
I’m going to shuffle things around a bit to make the arguments flow better.
Except this is flat-out false, because I specifically asked you what it meant for a position to be moral — ie for it to be a position that relates to morality — and you specifically replied that it was about how people are treated, with people being treated well meaning that it was moral and with people being treated poorly meaning that it was immoral. That position has a truth value, and certainly isn’t “I like it/I don’t like it”. So that doesn’t seem like that’s the case for you, by your own words. At the very least, at least ONE of the things you’ve said that characterizes your position can’t do so.
This is why the question I asked and you mostly ignored is so important: what is it that you think a subjective cognitivist position won’t let you say that a non-cognitivist position will? I’m not asking you to try to align your thinking to definitions — since I think you’re applying the definitions incorrectly — but to instead talk about what functionality or true statements can’t be expressed by cognitivism that makes you think that we need non-cognitivism. And again I’m not asking for what other people do or for you to attempt to characterize how they approach morality, but instead to simply talk about how YOU do it, and what you think is the right way to do so.
Like your focus on like/dislike, this simplification ends up missing the point. As I said in the last comment — and you ignored — even objectivists can truthfully, in general, say that they like moral things and dislike immoral things. Here, again, all of the categories can have disagreement, but what matters is what it means when people disagree.
For objective cognitivists, it means that at least one of them is wrong, and they need to figure out what the right answer is. This is because they think there’s a global right answer to those questions.
For subjective cognitivists, there is no global right answer. But there IS a right answer for each of them. So if they want to convince the other person to change their mind, they have to look at that person’s beliefs and desires and use them to convince them. Taking your example, if I wanted to convince you that something you thought immoral is actually moral, what I’d have to do is convince you that, despite your determination that it treats people poorly by your own standards, it actually treats people well. What I COULDN’T do is appeal to a global standard of morality, or use MY definition to convince you. Those are both irrelevant in a subjective cognitivist model.
In a non-cognitivist position, there isn’t really disagreement, because non-cognitive positions aren’t justified by reasons (that’s why they have no truth value). All a person is doing is expressing their position, but it’s not a reasoned position in any way. So if someone says “I like chocolate” and someone else says “I don’t like chocolate”, there’s really no disagreement there. The first person likes it and the second person doesn’t, and that’s all there is. Obviously, all the reasons the first person gives for liking chocolate are irrelevant to the second person’s position, and there’s no consistent set of reasons held by the second person that can be appealed to in order to get them to change their mind.
To be complete — and since you seem to have mischaracterized this view as well — Error Theory would say that there’s no such thing as disagreement because the statements don’t actually have any meaning, even though we think they do. That’s the hallmark of that position: when we examine moral statements, they are either meaningless, logically incoherent, or both. The error is not a mistake about what is moral; it is that we erroneously think moral statements are meaningful when they aren’t.
So saying that there is disagreement doesn’t mean the statements don’t have a truth value. In general, since disagreement relies on saying “X is true” or “X is false”, it tends to imply that there IS a truth value. And so talking about disagreement does not in any way support the idea that any moral position — even yours — is non-cognitivist.
Since the statement “X is immoral” still has to have meaning to a non-cognitivist, they still have to distinguish between statements that are about morals and statements that aren’t. For emotivists, that means attaching the statements to moral emotions, meaning that when they say “X is immoral” what they mean is that “X causes a negative moral emotion in me”. There is no objective fact that justifies this, or any chain of reasoning — even one that relies entirely on their subjective assessment — that they use to derive that emotion. That’s why they aren’t cognitivists but aren’t Error Theorists: moral statements have a meaning, but aren’t propositions and so aren’t the result of nor are amenable to reasoning (because logic manipulates and produces truth values, which they deny moral statements have).
Except the statements in that paragraph merely provide more examples of the very position that you pretty much implied was a strawman of the non-cognitivist view. That you reacted that strongly immediately suggests that you don’t really hold that position.
Also, you should note that there not being an objective ranking of those things would ALSO apply to all subjective cognitivist positions, by definition. So if that’s the big thing driving your non-cognitivism, it turns out that a subjective cognitivism would work at least as well on that point.
Hi verbose,
Reading this reply, I have a better understanding of what you mean by “subjective cognitivism”. So let me respond to that as the main issue:
So, for an objective cognitivist the “right answer” is the one that is right independent of any human opinion. For a subjective cognitivist the “right answer” is one that is in line with their own values and desires?
While I may be wrong, I would not call the latter “cognitivist”. Let’s presume that Peter’s values are incompatible with slavery. Then the statements: “Slavery is morally wrong within Peter’s value system”, and “Peter holds slavery to be morally wrong” and “Peter would prefer that we didn’t allow slavery” would all be cognitivist.
But that still does not make the bald statements: “slavery is morally wrong” or “you ought not hold slaves” cognitivist. *Those* statements don’t have truth values. They do have truth values given Peter’s value system, but that’s not the same as them having truth values, since “Given Peter’s value system slavery is morally wrong” is a very different statement from “slavery is morally wrong”.
Wiki says: “A noncognitivist denies the cognitivist claim that ‘moral judgments are capable of being objectively true, because they describe some feature of the world’.”
The claim “slavery is morally wrong” is a moral judgement, and is normative, and cannot be *objectively* true or false, and so is non-cognitivist.
The statement: “Given Peter’s value system slavery is morally wrong” is indeed objectively true and cognitivist, but it is a descriptive statement and not a “moral judgement”. (E.g. “Within Hitler’s value system killing Jews was morally virtuous” is neither approval nor a judgement; it’s mere description.)
The Internet Encyclopedia of Philosophy says: “In other words, non-cognitivism claims that the principal feature of normative sentences (their lacking of truth values) is a consequence of the illocutionary role of such sentences. In fact, these sentences are not bearing any cognitive meaning (such as assertions or descriptions), but they are just used to utter prescriptions”.
That seems in line with how I’m using the terms. The *descriptive* statements have truth values, but the normative ones do not (they all depend on someone’s advocacy), and truth values for *normative* statements seem to be required for cognitivism.
It seems to me that you’re constructing a straw-man version of non-cognitivism and of emotivism. I’d say that you can seek to persuade a non-cognitivist in the way you’ve just described: “what I’d have to do is convince you that despite your determination that it treats people poorly by your own standards it actually treats people well”. Nothing about non-cognitivism requires that one’s own value system be irrelevant or arbitrary or inconsistent. And nothing prevents appeals to values to try to persuade one another.
As I see it, the “error” is in thinking that normative statements of the form “slavery is morally wrong” have objective standing and are cognitivist, when actually only descriptive statements of the form “Given Peter’s value system slavery is morally wrong” are so.
I don’t see that that follows. If the meaning of “X is immoral” is “I disapprove of X” coupled with “X is the sort of thing about which I use moral language” then one doesn’t need there to be a fact of the matter as to what things are in the category “moral”. The “sort of thing about which I use moral language” could be arbitrary and inconsistent and different from person to person — and will be if such language is largely rhetorical.
I’d be happier about what you’re calling “subjective cognitivism” if I were convinced it was a proper use of the terms!
Sorry for the late reply, as I’m getting into “hurry up and wait” mode at work, which isn’t conducive to writing thoughtful comments. I’ll try to dribble the comments out over the next few days.
Let me add something that might make it clearer: when you see “subjective cognitivism” you can probably replace it with “relativism” without losing too much. Which then will change my question to you to be “What’s missing from cultural/personal relativism that you feel you need emotivism?”. Relativism explicitly says that moral truths can only be defined by making reference to some kind of sub-division, which can be the individual or the society itself, and is not an objectivist theory. So why doesn’t it work for you?
If the statement “X is morally wrong” always has to be evaluated against what a specific person or group defines morality to be, why can’t that statement have a truth value? We have individually specific truths all the time.
The problem is that both relativists AND objectivist non-realists — hello! — ALSO deny that objective claim. Relativists because they claim it isn’t objective, and non-realists because they don’t think it has to apply to any real feature or object in the world. That wouldn’t mean that they aren’t cognitivists, at least in the sense that they would not deny that the statements have a truth value.
I was going to comment on how bad that article was, but it’s probably not worth taking the time to do so (in general, the author argues for their position instead of describing it, making what he says there somewhat suspicious). At any rate, the issue here is that there is a vanishingly small number of objective cognitivists who would accept his contention that if it is objective then it can only be descriptive. Almost all if not all of them would insist that moral statements are prescriptive, and then of course claim that prescriptive statements can have truth values. His statement about illocutionary role doesn’t help because while some illocutionary statements clearly don’t have truth values — commands and interrogatives, for example — that doesn’t mean that prescriptives don’t have truth values. And, in fact, it’s entirely possible to argue that any “ought” statement HAS to have a truth value, because we can always ask if it is true that that person really OUGHT to do that. So, if we say “X ought to do Y”, then it is always possible to say that that statement is false, and that X at least is not obligated to do Y. Thus, if a statement is going to be properly prescriptive it has to have a truth value, which would demolish his entire argument here.
So that argument doesn’t seem to work, and certainly doesn’t work as a definitional statement as you seem to be using it here.
That they depend on someone accepting them as true makes them relativist, not necessarily non-cognitivist.
This would run into the problem of “moral dumbfounding”, which is one of the biggest sources of empirical evidence for emotivism: even when all of your reasons for considering the statement “X is immoral” are proven false, you maintain that it is nonetheless. This is the stance Prinz took in his book “The Emotional Construction of Morals” (although I think he claimed to be more relativist than non-cognitivist, but I don’t think non-cognitivism was that popular then). If I can easily reason someone out of their belief that X is moral or immoral, what role does emotion have at all? Essentially, I’d be arguing that “X is immoral”, say, is false and they’d be accepting that, no? Or how else can it work?
That’s not error theory, though. Again, claiming that people are making an error is not sufficient to claim that your view is an error theory [grin].
First, they’d need to be able to define what counts as moral language, and second what they did here would indeed be the means they use to distinguish the moral from the immoral or amoral, which would then result in them answering that challenge. It would be a bit odd to insist that you don’t need to answer the challenge by demonstrating that you’ve answered it [grin].
Or also if it is largely relativist, especially personal-relativist. And the more arbitrary and inconsistent it is, the less likely it is that you will be able to convince someone otherwise using arguments; so the more of these features it has, the less sense your persuasion example makes, which then shows the distinction between cognitivist and non-cognitivist approaches (relativistic cognitivist approaches CAN be inconsistent, but inconsistency is pretty much the hallmark of non-cognitivist ones).
Hi verbose, I’m finally getting round to replying:
That, to me, is a very confusing way to use terms. As I see it, “X is morally wrong” has a truth value if and only if that truth value is fixed and independent of everything else, including human opinion.
Thus, the sentence: “within Fred’s moral system, X is morally wrong” has a truth value. The sentence “X is morally wrong” does not.
Because that is not what I understand “having a truth value” to mean.
Yes, but they then only have truth values in the specific case, not the general case. Thus “it snowed in London on Jan 3rd 1985” has a truth value. But “it snowed in London” could be either true or false depending on which date one is talking about.
Facts and reason can affect how people feel about things, so just because moral judgements are effectively emotional ones doesn’t mean they can’t be altered by facts and reason.
Why can’t it just be a widespread agreement? Thus, what “counts as moral language” has no fixed and precise definition, but humans do generally (but not always) agree on what comes into that category?
In the same way, we don’t need a clear-cut definition of the concept “art”, we can just all agree, to a large extent (but not completely) about what things we regard as “art”.
Whether or not we can use moral talk to persuade people is a very different question from what morality actually is. Most people (I assert) are under grave misapprehensions about what morality is. Thus, what they find persuasive will not be a good guide to the underlying reality about what morality actually is.
This starts to get into semantical nitpicking, though, because to the relativist what is meant by “X is immoral” just is “Within Fred’s (or a culture’s) moral system, X is prohibited (avoiding the circularity)”. This is the same move you make when you say that what we mean by moral really is … whatever you think it means, because to be honest I STILL don’t know what you mean by that [grin]. So if you admit that that statement has a truth value, and that’s what they say the statement “X is immoral” really is, then their statement has a truth value. But it’s not an objective one because it only applies relative to a specific grouping, and isn’t a universal moral statement like moral objectivists demand.
It also sounds an awful lot like what you argue when you try to say what morality really is. So, again, why doesn’t it work for you?
Sure, but it definitely HAS a truth value, and is absolutely a proposition. It’s just that we have to appeal to something specific to interpret it (we have to disambiguate it). Thus, any position that can be explained that way can’t be non-cognitivist. So, again, at what point do you feel that non-cognitivism is required to explain something?
While that might be true, we don’t TYPICALLY reason people out of emotional reactions, and for your statement to hold here it would have to be the typical case. The typical case is that we use emotional appeals to change emotions, and rational appeals to change reasoned beliefs. Even by your reasoning here, you advocate using reason instead of emotion most of the time. That seems to imply that our moral beliefs are reasoned instead of emotional.
So, to summarize, if in order to work with moral beliefs we have to or are supposed to treat them like reasoned beliefs (and, in general, even objective ones) how does that not give the advantage to the side that says that the reason we have to do that is because that’s what they really are?
Two problems here:
1) This is incompatible with both philosophy and science, which would both insist that if we have that sort of widespread agreement there must be properties that we can discover that determine what that agreement is based on, and that using those properties we can filter out even agreed-upon examples of moral language and say that people are in error when they say that this is an example of moral language. Think “gold vs fool’s gold”, for example. And since you think that most moral language is in error, you can’t even argue against this without causing massive problems for your own position.
2) You ignored the more important second part of that comment, which said that whatever line they used to distinguish moral language from non-moral language would count as that sort of distinction. Even if all you have to appeal to is a widespread agreement, we can still tell the difference, and that would be a fact of the matter about what makes something moral vs not making it moral. In short, positing a distinction between moral and non-moral language immediately means that there is a difference between the two and a way, no matter how loose, to determine that difference. Again, you’d be trying to refute the idea that there is such a distinction by providing the details of that distinction, which will never work [grin].
Bad example, as not only is determining this definition the basis of an entire field of philosophy — see my essay on “Is Art Necessarily Aesthetic?” — but it does seem like there is an objective, if blurry, distinguishing line between the two.
You ignored the main point: that relativistic views can be inconsistent as well, but the more inconsistent and arbitrary a view is, the less likely it is that reasoning, your preferred method, will work. The more non-cognitivist a view is, the less likely rational persuasion is to work. Thus cycling back to the same question: why do you insist on non-cognitivism, when relativism seems to give you everything you want and avoids the problems you keep running into from going strongly against intuitions and even the empirical experience of how we actually use and deal with morality?
Also, since you dropped this line, do you accept the idea that any prescriptive statement MUST have a truth value? Because if you do, then if you’re a non-cognitivist you’d have to deny that your moral statements are actually prescriptive. I’m not sure what impact that would have on your view, but it’s an important thing to consider when talking about morality.
To me the label “moral” denotes approval or favourable feelings, similar to the words “delicious” or “beautiful”. The label “immoral” is like “ugly” or “disgusting”.
As you’ve stated it I would agree, yes, that’s right. It’s not how I would phrase it though. E.g.:
To me the statements “X is immoral” and “Given Fred’s moral code, X is immoral” are very different. You say that the latter is what the relativist means by the former. But to me the former seems to imply that X is immoral regardless of what Fred thinks. As you say, this might be merely a semantical difference, but I’d prefer to state that the latter has a truth value but the former does not.
To justify that, take the additional fact that “Given Sue’s moral code, X is not immoral”. Then “X is immoral” has two truth values, and the truth value changes from Sue to Fred. That’s incompatible with what I take “truth value” to mean, which requires that a given sentence (“X is immoral”) has one and only one truth value and that the truth value is independent of everything else, everything other than the sentence itself.
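To put that in slightly more formal terms (a minimal sketch; the predicate letter W is purely my own illustrative notation, not anything standard), one could write:

$$W(X, S) := \text{the claim that } X \text{ is immoral under value system } S$$
$$W(X, S_{\mathrm{Fred}}) = \mathrm{true}, \qquad W(X, S_{\mathrm{Sue}}) = \mathrm{false}$$

The bald sentence “X is immoral” then corresponds to W(X, S) with the value-system slot S left unfilled: an open formula, which acquires a truth value only once a particular value system is supplied, just as “it snowed in London” acquires one only once a date is supplied.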
I also dislike the term “moral relativism” for all sorts of connotations that it sometimes takes. These include the notions that such moral claims are “valid” (what’s that supposed to mean?), that one cannot or should not criticise one moral system using another one (why not?), and that such “relative” moral codes carry normative force (that begs the whole question).
I think it much clearer to say that moral claims (e.g. “X is immoral”) do not have truth values, since they all derive from people’s values, and that people’s values are subjective.
So the disambiguated claim has a truth value, but the vaguer claim does not, since it needs an “interpretation”, and only the disambiguated claim is specific enough to have a truth value. If one can disambiguate it in different ways such that the truth of it would be different, then it itself does not have a truth value.
I’d suggest that the typical case is that we use a mixture of both in both cases. Especially given how prone people are to cognitive biases.
First response. Hundreds of millions of English-language speakers can agree on what certain phonemes mean even though those phonemes are arbitrary and have status only by collective agreement. (Agreement which changes over time.)
Second response, people have a lot in common and much of human nature is shared in all of us. Thus the fact that we can reach widespread agreement doesn’t necessarily mean there is an objective property that we are all referring to. All it means is that in many ways we think alike.
But while we have widespread agreement it is not full agreement. There are plenty of things where we cannot agree on whether something is or is not in the “moral” category. Examples are gay sex, premarital sex, teenage masturbation, blasphemy, heresy, and lots of other things. The proper manners of children, and how they should act towards adults, are, for example, things that have changed radically since Victorian England. Treatment of slaves, blacks, women, etc., has also changed radically.
All of this suggests that there is no fact of the matter as to whether something is or is not a “moral” issue.
An entire field of philosophy that attempts to discern the fact of the matter when there is no fact of the matter to discern! Yes, it was a provocative example, but I’m not convinced that that quest is well conceived; indeed, as a rule of thumb, if an area of philosophy tries for ages to do something and doesn’t succeed then maybe they are conceptualising the issue wrongly.
If that’s an appeal to intuition then, as usual, I’ll place less weight on it than you do. Humans do seem to have a cognitive bias towards making “realist” interpretations when they are not appropriate.
As above, your version of relativism seems pretty in line with my version of non-cognitivism. But see above for why I adopt my labeling.
Prescriptive statements are instrumental, deriving from a human want or desire (e.g., “I want to be a Catholic priest; that requires celibacy; I should be celibate”). Or they could be commands: “you should tidy your room” (= “I want you to tidy your room”).
In the descriptive sense they do indeed have truth values (it is indeed true that to be a good Catholic priest one needs to be celibate; and it is indeed true that the mum wants the child to tidy their room). The normative force, however, comes from human wants and desires. I would only apply truth values to the descriptive form of the statement.
One of the main issues here, it seems to me, is that you seem to have a cobbled-together view of morality (and of a number of philosophical concepts). This means that often your discussions and arguments are inconsistent, and so it’s hard to figure out what’s going on. If I haven’t already recommended it, Prinz’s work might be something for you to look into, as he’s far more systematic and yet still pretty much aligns with the view you at least claim to have.
Anyway, moving on:
All this does, though, is cycle right back to the idea of specific moral emotions. I mean, if you came across a weeks-dead animal, you’d probably consider it disgusting, but presumably wouldn’t find it immoral. So it seems to me that either you have a more detailed idea of what makes something moral or immoral, or else I find it implausible that this is really your view.
This is why I commented above on the cobbled together theory. I suspect that you have taken on ideas that intuitively make sense to you or sound good, but don’t have a fully developed theory behind that to tease out what that really means. And there’s nothing wrong with that, as long as you don’t try to tell other people that THEIR view is wrong based on that incomplete view [grin].
But the question is, in fact, whether subjective statements have truth values or not, and we again have entire fields dedicated to teasing stuff like that out. And for the most part, it seems to be the case that subjective statements, properly dereferenced, DO have truth values by the standard and accepted definition of truth value. After all, the statement “It snowed in London last night” is clearly a proposition, even if we have to figure out what London we’re talking about first, as is “I have a headache” even if we have to figure out who the “I” is first. And settling this is important because it seems to me that your view of what it means for something to have a truth value is what drives your notion that you have to be a non-cognitivist, and thus an emotivist, even if a number of your other positions are, at least, not easy to reconcile with the position.
So let me look at what seems to be the biggest issue here:
The problem here is that you conflate the logical proposition with the English statement. This is, of course, perfectly reasonable to do for simple declarative sentences, but it isn’t always the case. There’s an entire field of philosophy — philosophy of language — that has as a major component figuring that out (as it’s an important part of determining meaning). So let me ask you this question:
Does the statement “Turquoise bicycle shoe fins actualize radishes greenly” have a truth value?
Well, in order to figure that out, we’d need to determine if it expresses a proposition. In order to do that, we’d need to know what it really means or expresses first. So the first step in determining if a sentence has a truth value is figuring out what it is expressing.
When it comes to the statement “X is immoral”, for objectivists the meaning is, essentially, just that. The proposition and the sentence, therefore, are identical. However, for subjectivists — or at least some subjectivists — the sentence “X is immoral” means “X is considered immoral by a specific person/group”, which is then the proposition that we are examining. Of course, this proposition is true for non-cognitivists as well, so we need to go one step further to find out what is really meant by moral. For cognitivist subjectivists, they would argue that the person or group stores moral claims AS propositions, while non-cognitivists would say that it is stored as something that isn’t an actual proposition (like an emotional response or state). Thus, to sum it up:
Objectivists say that moral claims are universal and independent propositions.
Cognitivist subjectivists say that moral claims are propositions that are held by a specific group.
Non-cognitivists would say that moral claims are not propositions at all.
Thus, the first two positions say that they are propositions and have truth values, while the last one says that they aren’t. For the second, when we build the proposition properly, we can see that it is a proposition in some group’s structure and thus, when we refer to the right proposition, we can see that it, in fact, can only have one truth value, even if different groups have different truth values for the “same” proposition, because when properly understood they actually aren’t referring to the same proposition.
And, again, the propositions don’t have to be universal to have a truth value, once we understand what the proposition really says. For example, the statement “I really like chocolate” definitely has a truth value and is true about me, even if for some other people it would be false.
While I have no idea what you mean by saying that there is an implication that relative moral claims have normative force — the normal OBJECTION to relativist morality is that the claims can’t have normative force — for the others that’s not a connotation, but a consequence of the theories, and particularly of the fact that they are subjective. The big problem that relativist theories have is with moral disagreement, and what that can mean if they are true. Imagine that you and another person are arguing over whether something is moral or not. If morality is subjective, then from the perspective of a third person, well, both claims are indeed equally valid, because it’s just them expressing their own personal view on the matter. I have no additional principle to appeal to to adjudicate the matter. Even if I, say, agree more with you than with them, being honest I’d have to concede that I don’t really have a REASON for that. It’d be like you and they were arguing over country vs rock music. Even if I like rock music more than country, that wouldn’t do anything to say that either they don’t really like country music or, more importantly, that they SHOULDN’T like country music more than rock music. They like what they like, and that’s all there is to it. And if you insisted that they shouldn’t like country music more than rock music, we’d see that that was an invalid argument. What you like doesn’t in any way mean anything with regards to what they like, and they are not making any kind of mistake or are in any way inferior if they like something better than you do.
For these sorts of claims — music, food, etc. — saying “Country music is good” usually does just mean “I like it”, and so, while we often make mistakes in arguing that there is some kind of objective truth here, for the most part we can drop into the subjective meaning and still have the statements do the work we want them to do. It isn’t clear that we can do that for moral statements, though. If you say to me that something I think is moral is immoral, you generally want me to take that seriously and adjust my beliefs and actions as a response. If all that statement means is that you dislike it, I have no reason to listen to you, and certainly no more reason than if you said that it would upset you if I listened to country music. At which point, you probably should be appealing to personal, non-moral emotions instead of anything having to do with morality … but eliminating all cases where we appeal to morality does not seem to preserve morality at all, and so seems to be more eliminatory than explanatory. You might be right, but it’s a pretty dramatic claim that needs lots of proof before anyone will feel obligated to accept it.
Except that they can only disambiguate it to a specific person/group. If two people disambiguate it to two different people or groups, then they aren’t talking about the same thing anymore and need to clear that up. In short, they’d disambiguate it to different propositions, come to different truth values, and then realize that and correct the misunderstanding.
Yes, we have an arbitrary assignment of sounds that map to concepts. There are, in general, still properties in each language that drive that general agreement, and what links them in the minds of people are the concepts and things they reference. So, for example, if people agree that “tree” points to a specific type of thing in the world, then there are properties that link all trees together and determine what it is to be a tree. So there is a set of properties to appeal to at all the levels where it really matters.
But even by listing those things, you implicitly agreed that those are, in fact, moral questions, which is what I’m referring to there. That we don’t agree on what the right moral answers are doesn’t mean that there isn’t a way to determine which questions are or aren’t moral ones, as again you assume that we can tell the difference by listing those questions and not others.
There is often a lot of disagreement and change in scientific theories. Does that imply that there are no scientific facts of the matter? You can’t simply point to things changing or there being disagreement and use that as your only evidence that there is no fact of the matter, because that has been true for pretty much ALL facts throughout history.
The claim that there is no fact of the matter to discern would be a claim in that field of philosophy, so you’d be dismissing the field while making claims that can only or can be best evaluated by it. And since it is a claim, you’d need to demonstrate that your claim is reasonable when measured against the reasons the field thought there was a fact of the matter to be discerned in the first place, which to be honest is the part of this discussion where I feel you are most lacking: you seem to, in general, be confused about why they think there is a fact instead of demonstrating that their presumptions are incorrect.
And, in general, they’re way ahead of you in considering that, and have even tried the concepts that you suggest, only to find that they seem to cause serious if not more serious problems. You often do come across as someone wandering into a physics discussion with set ideas of how to solve tricky physics problems and then getting upset when they say that they’ve already considered it and it won’t work.
It’s generally the case that we can, indeed, distinguish art from other things. For example, painting your walls isn’t art, but painting a mural on a wall is. You can say that this is just “human intuition”, but then again I’ll say that without that sort of human intuition we have NO way to determine any kind of meaning for the word or category referred to as “art”. For details on how to work out a meaning for that, you’ll have to read the essay.
I followed on from your own source and showed that prescriptive statements must have truth values, and so must be propositions. Commands aren’t propositions, and your first derivation is an argument, not a proposition itself. Since you gave that quote and source to clarify your view, I have to say that all it’s done is made it more confusing since you don’t even seem to agree with it [grin].
Also, I disagree that normative force comes from wants and desires, because, as I have said repeatedly, there can be a normative requirement for you to do something that you don’t want to do. So saying that isn’t in any way going to settle anything, and we’ll really have to dive into it in detail to work things out.
Hi verbose,
Agreed. So “moral” and “immoral” are words of approval or disapproval, functioning similarly to “beautiful” and “ugly”. But, we use moral language for a subset of the situations where we use approval/disapproval language.
But that’s again similar. If we are tasting a dish we would usually use the words “delicious” or “nasty”. But we could use the word “beautiful”; on the other hand you wouldn’t use the term “ugly” if you disliked it. All of this is just convention, and one can imagine the language evolving such that “ugly” came to be a normal usage for “tastes bad”.
The point is that “moral” language is a subset of aesthetic language, but there is no fact of the matter delineating that subset. There is no “extra meaning” applying to that subset, that the label “moral” connotes!
[Clarification, some people think that there is an extra meaning, in that they use moral language when they *think* there is objective status to the good/bad, not just their opinion; but I regard this as an error.]
Agreed.
OK, let’s accept that. But then does it follow that “X is immoral” has a truth value? “X is immoral in Group A’s value system” has a truth value. As does “X is immoral in Group B’s value system”. But the truth value can be different for those two. It thus seems to me wrong to say that the un-disambiguated “X is immoral” has a truth value.
That’s where I’m disagreeing with you. I find it clearer to assert that “X is immoral” does not have a truth value (though “X is immoral in Group A’s value system” does).
As spoken or stated by someone, “I really like chocolate” would indeed have a truth value, since the identity of the “I” is entailed by the context.
But in “X is immoral” the value system being referred to is not specified. So it’s not a concrete enough statement to have a truth value.
There can be a connotation, under moral relativism, that someone’s value system is “right for them” and that one shouldn’t judge it using some other value system. Thus for example, the connotation can be that if one culture has adopted FGM then it is wrong for another culture to say they shouldn’t.
OK, fine, so the dis-ambiguated version has a truth value, the un-disambiguated version does not (it is too vague to have one).
No, by listing those I’m just giving examples of how people use the language. There is no fact of the matter as to what is in the moral subset of aesthetic responses and what is not in that subset.
That’s what my list and examples were trying to say. There is no fact of the matter as to whether teenage masturbation is a moral issue or not. Some people regard it as such; others don’t. It is not the case that one group is right and the other wrong about that. There is nothing to this beyond how people use the terms.
No! I’m trying to say that we cannot tell the difference because there is no difference to tell!
But it’s not my only piece of evidence. Indeed, the fact that people can’t agree on what is in the “moral” category is itself only a weak piece of evidence. A stronger piece is that people cannot even give an account of what it would mean to place something in the “moral” category as opposed to outside it.
The only reason for supposing that there is a fact of the matter as to what is in the “moral” category is intuition, which I regard as weak evidence.
Feel free to expound on these!
Is art by 5-yr-olds or by chimpanzees “art”? Some people say yes, some people say no. Why would one think there is a fact of the matter about that?
Then it must either derive from another of my wants, or from someone else’s wants.
Getting back to this now because I have a little time, although this comment is not something that addresses little issues [grin].
I’m going to go a bit out of order, too.
The thing is that here you are making the same mistake that I called out in my LONG discussion of the meaning of moral sentences. To reiterate:
English sentences do not have truth values.
Propositions have truth values.
So when you say that it’s clearer to say that “X is immoral” doesn’t have a truth value, I can’t see how that is possible because when you do that I HAVE NO IDEA WHAT YOU’RE SAYING! Are you treating “X is immoral” as a proposition, the way objectivists do? Then it clearly has a truth value (if it has meaning at all). Are you treating it as an English sentence? Then, yes, it doesn’t have a truth value, but then neither does your alternative and that’s not meaningful. And you can’t treat the two statements differently if you want to compare them. So all this does is make things really, really confused.
On top of that, doing this seems to confuse you as well, because you seem to think that the big disagreement is over whether or not these statements can have a truth value or if the truth value is vague or can even be determined, when the big clash between moral positions here is really over what those terms MEAN:
Objectivists think that moral terms refer to a proposition in a universal and objective moral system.
Relativists think that moral terms refer to a proposition in the moral system of an identifiable group or individual.
Non-cognitivists think that moral terms refer to something that is non-propositional.
Error Theorists think that moral terms have no possible consistent meaning.
It seems to me that you miss this, and this causes you to build out the confusing system that you try to argue for. I think you take the shallow interpretation of “Non-cognitivism means that moral terms don’t have truth values” too strongly, and since you don’t think that the objectivist terms, taken as propositions, have a truth value that can be determined — because there is no such universal moral system in your mind — then you must be a non-cognitivist. But then you also assert that the proposition “X is immoral in Group A’s value system” has a truth value, and often treat that as if that is what moral terms mean, which would make you a relativist. And then you slip Error Theory into there as well, making this entire thing really, really confusing [grin].
I think you need to stop thinking so hard about whether or not the statements have truth values and far more about what moral terms refer to. Especially since when I — roundaboutly, I admit — asked about what they refer to you ended up giving multiple, incompatible answers [grin].
As I pointed out at length — and you didn’t respond to — that’s not a connotation, but instead is a consequence of ANY subjectivist view. If moral terms only refer to something possessed by a particular group or individual, there are no MORAL grounds you can use to condemn them that apply in their moral system except those that already exist inside of it. Thus, if their system allows for FGM then no moral argument from outside that system has meaning, and so they are being “proper” in maintaining that, and someone outside of both systems cannot choose between the two on the basis of anything except the system that they themselves possess. You’d have to appeal to something outside of the moral systems, like pragmatism, but then you wouldn’t be talking about morality anymore, making any kind of moral claim meaningless. So either you accept that their moral claims are as valid as yours or you simply stop talking about morality. The choice is yours.
Again, you seem to be getting hung up on analyzing specific English words instead of analyzing concepts, and it means that you end up equivocating. When we refer to aesthetic concepts, we are referring precisely to specific aesthetic experiences. When we use the word “beautiful”, for example, we are referring to different experiences when we talk about “a beautiful painting” or “a beautiful sunset” or “a beautiful woman” (although the first two are actually fairly close). And since we can get to different contexts and experiences in those cases, we can discover that they have different properties, and those differences define them. But through it all, there is a specific subjective experience that we appeal to and, in fact, refer to when we use those terms.
Now, we CAN use the term “beautiful” to refer to generic approval, and ugly to refer to generic disapproval. But when we do that, we aren’t referring to specific aesthetic experiences anymore, but are instead making a comment where we determine the meaning by analogy: we tend to find beautiful things pleasant and ugly things unpleasant, so calling the situation beautiful means that I liked it and ugly means that I didn’t like it, and strongly so. But at this point we aren’t using the terms AS AESTHETIC TERMS anymore. “Beautiful” in that context is NOT an aesthetic term anymore, but is far closer to an idiom.
So the same thing applies to moral terms. If you want to claim that they are aesthetic terms, then you have to be able to point out the specific aesthetic experience that they refer to. If you can’t and merely insist that they refer to generic approval or disapproval, then even in your view moral terms don’t really have any kind of meaning, and so using them in any way as expressing any important fact or providing any kind of argument is just plain wrong.
And since factually we CAN distinguish what we would think of as moral emotions vs non-moral emotions, you’re actually far worse off if your position forces you to deny that such a distinction is possible.
You really need to stop using moral disagreement as evidence that there is no possible right answer to these questions. In the disagreements you cite, most people think that there IS a right answer, and we can find rational links and differences there that suggest some kind of underlying concept that they are all trying to refer to. Disagreement doesn’t get you as far as you think because the sorts of disagreements you cite are precisely the sorts of disagreements that we had when discussing objective FACTS. You need to find a disagreement that we couldn’t have if we were appealing to a fact, and so far you haven’t given anything even remotely like that.
As usual, I have no idea what you mean by this, and we’ve cycled back and forth on this before. Putting aside whether such characterizations would be obviously correct and so no one could disagree with their assessment, what are you looking for that the myriad objective moral systems have not provided? For the most part, you seem to be, again, ultimately relying on moral motivationism, insisting that if the moral system doesn’t automatically make you want to pragmatically do it then it can’t be providing that sort of account. I counter that that insistence reduces morality to pragmatics, and so insisting on that simply means that you ELIMINATE any kind of morality whatsoever, and so that CAN’T be a proper demand.
The problem is that you include all of our experiences with morality in the “intuition” bucket, which, as I said, leaves NO possible evidence for ANY of the properties of morality, and that includes your own position. Since you even exclude how morality actually evolved in us, what’s left? By your stance, there is no evidence that can convince you, because any evidence cited would be dismissed by you, but your evidence isn’t any different.
I did already. The big one is moral disagreement, which is about how we can have any kind of meaningful moral disagreement when the truth of a moral statement depends on some kind of subjective marker (or when such statements don’t have truth values at all). The outcome is that under pretty much all subjectivist views — which, you’ll recall, to me includes emotivism — there is no reason to ever make a moral claim or debate a moral disagreement, because it always either reduces to appeals to things that are meaningless to the person you are debating with, or to nothing moral at all (pragmatics, for example). This is a HUGE problem that many have tried to solve (Prinz, again, tried to do that in his work, but I don’t think he managed to succeed). And you can summarize the problems with the various views around that distinction:
Objectivists clearly have meaningful moral disagreement, but have difficulty demonstrating the universal principles that allow for that.
Subjectivists don’t have to worry about demonstrating universal moral principles, but have issues explaining why someone should care about someone else’s moral view of their actions, which is required for any kind of meaningful moral disagreement.
So far, from what I’ve seen, you tend to insist that moral disagreement is meaningful — mostly by demanding proof that it isn’t — while always reducing it either to appeals to someone’s specific system or to pragmatics, which, as I commented, reduces it to a question like “Do you like rock music or country music?”, which we can see isn’t the sort of meaningful question that would support the way you use moral claims.
Why did you ignore my example of how painting your walls clearly isn’t art, but painting a mural on your wall is? The examples you give here might be corner cases, or it might be us conflating the colloquial with the conceptual, but that there ARE clear examples pretty much suggests that there IS a fact of the matter about it, even if we can’t always determine what it is. That most people consider your examples to be corner cases AND can explain what considerations are driving their positions is only further evidence that we’re dealing with a fact of the matter here.
And since you are using that some people classify them differently as evidence, how is that NOT appealing to intuition just as much as you insist everyone else does?
Why? What’s the argument that someone cannot have a normative requirement to do something that they personally do not want to do in any way, and that isn’t itself derived from a specific want? And why would anyone else’s wants be at all relevant here?
Hi verbose,
OK.
OK, let’s go with those.
I’m treating it as an *attempt* *at* a proposition. I’m saying it doesn’t have a truth value in any objectivist sense, since there is no “universal and objective moral system” against which to evaluate it. It is neither demanded by nor contrary to a non-existent moral system, so one cannot assign either “yes” or “no”.
Nor does it have a truth value in the relativist account, since it does not identify the moral system it is referring to. Therefore in neither case is it a well-formed proposition about how things are. It is an English sentence but not a properly formed proposition. Therefore it does not have a truth value.
Quoting Me: “There can be a connotation, under moral relativism, that someone’s value system is “right for them” and that one shouldn’t judge it using some other value system.”
You: “that’s not a connotation, but instead is a consequence of ANY subjectivist view. If moral terms only refer to something possessed by a particular group or individual, there are no MORAL grounds you can use to condemn them that apply in their moral system …”
Agreed, I can’t condemn them using THEIR moral system, but I can condemn them. Sure, nothing gives me moral *licence* to condemn them, but nor is there any moral *prohibition* on me condemning them. The problem with the term “moral relativism” is that it seems to imply that there is a moral prohibition on using one moral framework to judge another, but there isn’t (you’d need an over-arching framework to do that); instead the issue is undefined (in “moral” terms).
No, they’re not being “proper” maintaining that, they’re being “proper given their framework”.
Agreed. And I can indeed judge their system using *my* system. There is no prohibition on that! Though nor is there licence to do it. But then, de facto, I don’t need licence.
I don’t have to appeal to anything! Really, I do not need “moral licence” in order to try to influence the world to my liking. And yes, moral claims are indeed meaningless, if they’re taken to refer to anything objective. To me they are just words of approval or disapproval, aesthetic language akin to “nice” or “not nice”, “beautiful” or “ugly”.
The “aesthetic experience” is the dislike you experience when you learn that a trusted member of the group has just cheated and betrayed the group, or the like that you experience when you learn that a member of your group in need was helped by a stranger.
I bet you can’t produce an account that would tell us which would be placed in which category, that would apply to all humans.
OK, but equally, the fact that most people think there is a right answer is not a strong argument that there is a right answer.
I’m not aware of even one “objective moral system”. All moral systems are reflections of the subjective values of those who promote them.
We can’t. Why is that a problem for my account?
Agreed, it is pure rhetoric, pure emotional appeals. (Though rhetoric and emotional appeals can be persuasive, which is why we use them!) Why is this a problem for my account?
It doesn’t need “solving”, it needs accepting! That’s how things are! The problem with philosophers is that they have an *intuitive* sense that there must be more to it, and so go off on a wild goose chase (and along the way they invent whole edifices of conceptual constructs and associated terminology that don’t necessarily illuminate the matter).
Well, de facto, how other humans act towards you depends on exactly this sort of thing, on how they view you.
There is no meaningful moral disagreement! We could replace all moral discourse with the terms “I find that nice” and “I find that not nice”. And, really, things would be far clearer if we did that!
Only in the sense that the phrases: “I find that nice” and “I find that not nice” are meaningful.
There is a fact of the matter about how humans use language. And because humans have a lot in common there are often examples where we all use language the same way. That does not always mean that there is an external, objective fact of the matter that humans are attempting to describe.
Because I have no conception (literally, no conception at all) of what a normative requirement that does not derive from a want even is.
“Mother wants me to tidy my room; I ought to tidy my room, even though I don’t want to.”
If no human cares whether the room is tidy, then there is no sense in which he “ought to tidy” his room.
But that’s the problem. The relativist would say that OF COURSE the proposition, if formally written out, has to explicitly reference the system. That’s what the proposition really IS to them, and what the English sentence really means. To ascribe the proposition “X is immoral” to them and then declare that their position is a failed attempt at a proposition is a huge oversimplification of their position that almost means that you strawman it.
Thus, instead of saying “It doesn’t have a truth value”, it’s clearer to say, in opposition to objectivists, that there is no such universal or objectively justifiable moral system for them to appeal to; and for relativists … well, I’m not sure how you object to their position because you seem to agree with it most of the time [grin]. The claims about truth values seem problematic to me because you seem to use them either to derive or to defend your non-cognitivism, but they do so in a way that logically doesn’t quite fit; you can’t get to any reasonable position of “Moral statements don’t have truth values” that way.
No, actually, you COULD condemn them using their moral system, because it’s the moral system that they have to respect and so have to respect arguments from that moral system. You can’t meaningfully condemn them using yours, because they have no moral reason to care about yours, meaning any moral claim you make about them or their actions that differs from theirs is either something that they can dismiss without thought or is actually another claim, like pragmatics (“You have to care about my morality because I can use force to get you to follow it”).
It’s more that if you accept the relativistic position, you have no grounds for meaningfully doing that. Unless you are denying the truth of that or slipping in moral objectivity, you are making an error in attempting to apply your morality to them in a way that matters. For cultural relativism, for example, if a society thinks slavery is moral and institutes it, a society that doesn’t is making an invalid argument if it says that people in the slave-holding society ought not own slaves because it would be immoral, because by that society’s moral system it is moral to own slaves. And this is true of ANY subjectivist philosophy: you can only reasonably say that someone is acting immorally according to their own moral system, whatever it is, and not by yours.
Attempting to impose your moral system on them, then, is like trying to impose your food preferences on them: unjustified, and something that we generally think is an unacceptable imposition of force.
That’s the only meaning that “proper” can have given that system, and so the meaning that they’d use. You certainly would agree that you couldn’t use “proper” in any objective way, right? Thus, what they are doing is proper by what the word means if we accept that proposition, as is what you are doing, even if they are incompatible actions or judgements.
But it would also have no meaning to them, or at least it wouldn’t if they agreed with you about morality. So why even bother? Saying to someone “I think X is immoral” is a meaningless statement, like saying “My favourite colour is green”. Why say it in a context of judging or condemning them?
But if they actually accepted your position, moral claims based on your moral system and not theirs WON’T influence them. So you’ll have to appeal to pragmatics, like saying that it will work out better for them to follow your moral system. But that’s an appeal to pragmatics, not morality. So, at that point, don’t all moral claims become meaningless statements, things that we should stop making and that thus will eventually disappear from our vocabulary? Without their link to judging people and influencing their behaviour, we don’t even seem to have the reason of reporting our preferences for others to consider — like considering what our favourite food is — in order to drive actions. And since we can derive our actions from pragmatics, there doesn’t even really seem to be a reason for US to maintain our moral systems, or act on them in any way.
So, then, the question is: if your view is right, what use are moral statements? It’s certainly not what we use them for now.
Great. These experiences have commonalities, and so if we break them down we can find those commonalities and thus determine when someone is having a moral experience vs a non-moral experience. But isn’t this what you were denying we could do?
Of course not. That’s certainly not possible for that kind of subjectivist account. But I COULD derive the qualities of moral experiences, and so know what someone has to be feeling if they’re having one. Which, BTW, is what aesthetics in philosophy is pretty much doing …
But this is just you assuming your conclusion. As I’ve said before, I disagree, and specifically disagree for myself, since I think my subjective preferences are in some ways derived from that moral system, and a disagreement between my subjective preferences and my moral system means, to me, that I need to adjust my moral system. You can’t declare that they all just ARE that because your view says that that’s what they are. You need a convincing argument for that.
Because you and others still seem to be trying to have meaningful moral disagreements nevertheless.
Because if people actually accepted your position as true, the rhetoric wouldn’t work anymore. Your use of moral claims only works because it is parasitic on the belief that morals are more objective than your position says they are. You can’t reconcile how you use moral claims with what you think they really are, because your use depends on others not accepting your position. In short, you’re a cheater [grin]. You want to think that your moral view is right while everyone else thinks it wrong so your moral rhetoric will have an impact. That, right there, is a sign of something wrong with your position that needs to be resolved one way or another.
But if they accepted your position, then those considerations would be irrelevant, and they’d view you poorly for trying to make them relevant. Thus, these claims, if everyone accepted them, would become irrelevant and meaningless.
And everyone would reply “That’s nice” and not at all care about those statements. Are you willing to accept that?
I think you’re still trying to be too realist about these sorts of things. Here, there are some criteria that we can use to derive a concept of art such that certain statements about art are just plain wrong, like that someone merely painting their steps is doing art. Do you deny that? If so, why? Surely it’s just wrong to say that painting your steps is doing art.
And I say that normative requirements have to drive wants, not be driven by them.
At some point, we still really have to hammer out a definition of “normative” [grin].
But this relies on the child WANTING, in some way, to do what their mother wants. So the desire categorization always has to end in a want inside the agent. Those wants can link to the wants of other people, but at the end of the day if the agent doesn’t want to do it, then someone else’s wants can’t trigger it.
Which is what I disagree with. Take pragmatic normative statements. It can be said that someone ought to do something that is in their self interest even if they are mistaken about their self interest and want to do something else. They would just be wrong and doing the wrong thing. The same thing applies to moral normativity, and normativity in general: someone can indeed be wrong about what they ought to do, and not want to do what they ought to do.
OK, so the proposition when formally written out and explicitly referencing a value system has a truth value. A bald “X is morally wrong” does not do that and does not have a truth value.
Cutting to the crux of the next bit:
To me the “moral” labelling is largely rhetorical. It signifies approval or disapproval. Thus “X is immoral” amounts to “I dislike X”. In the sentence “X is morally wrong”, the “morally” acts as an intensifier, amounting to “very”.
I *can* meaningfully condemn them! As just stated, me condemning them morally amounts to me stating that I disapprove. That is meaningful. Now, yes, whether they care about my disapproval or not is another matter.
Well, actually, human beings *are* influenced by the approval or disapproval of others!
Yes, we could dispense with “moral” vocabulary and just talk about approval and disapproval.
De facto they are rhetorically persuasive, because most people misunderstand them and treat moral statements as moral realist. If that were understood then, yes, they would become much less powerful.
Yes.
There are indeed correct descriptions about how humans use the language and the term “art”, such that no-one would use the word about that.
Yes and yes. All “oughts” are instrumental, deriving from what some human wants.
The ought here refers, presumably, to what they would want if they had full knowledge of the situation? That still sounds pretty instrumental.
It’s fine — if a bit confusing — for you to say that, as long as you understand that this has no relevance to the cognitivist/non-cognitivist debate. It seemed like either you used that argument to justify your non-cognitivism, or else presented it as a conclusion of your non-cognitivism. Neither is true.
Yes, but that generally only happens in two cases:
1) Their disapproval is based on something, some kind of logic or reasoning that the other person cares or should care about.
2) The disapprover is in a position of power and so their disapproval has pragmatic risks for the other person.
If neither of these is true, we tend to find the other person’s disapproval irrelevant at best and invalid and them overstepping reasonable bounds at worst. Since you deny that the first is true, you’d have to be relying on the second. But, in general, you saying “I disapprove of you!” is not going to fit that case very well. _I_, for example, certainly have no reason at all to care about your disapproval due to a power imbalance, and neither do most people that you DO criticize. So, again, it seems like how you use morality doesn’t survive contact with your own moral theory.
This doesn’t work if you treat it like an aesthetic property, though, because aesthetic properties have degrees, and if we are appealing to moral emotions — which is what you at least try to do here — then we can see that, yes, they have degrees as well. Thus, you can have a mild moral dislike of something, which might be overruled by a stronger emotion from another source. To use the example of beauty, someone can find a painting kinda pretty, but approve of it strongly because of nationalistic pride, as it’s a popular painting by a countryman. It’s also possible for someone to mildly disapprove of an action because of its morality, but approve of it because of its practicality. So even you can’t just say that disliking it morally means that you, overall, dislike it, let alone that you dislike it strongly, as long as you want to maintain the link to aesthetic properties and moral emotions.
But what if someone decided to? Would they be wrong, or at least referring to a different concept that they’d need to explain?
First, under your theory it has to derive from what THAT PERSON wants, not what “some” person wants. That was the whole point of my saying that, so claiming to agree and yet ignoring that entire point isn’t good. Second, why do you think that normative claims are instrumental? In general, in philosophy most normative claims are seen as being intrinsic, things that have value just because they do. Otherwise, you run into issues with them because they would always have to be justified by appealing to something else — in your case, desires — but then we have to ask why those are intrinsic and don’t need to be justified. And since we can evaluate and change our desires, you’d leave no rational way to do that or, at least, have to have base desires that we can’t ignore and don’t need to be justified. But you can’t get that from evolved desires because even evolved desires can be maladaptive, and even that would require that you use evolutionary benefit as the justification anyway, which can be challenged.
So it doesn’t look like “human desires” can themselves be intrinsic; you’d need a set of intrinsic desires anyway, and I don’t see any way for you to get there without running into the same problems as objectivists, or else becoming entirely Egoist, which has its own problems.
Nope, that’s Carrier’s schtick. As I’ve said before, it is possible for someone to have full knowledge of the situation and decide not to be moral. All that means is that they are, at best, acting amorally, but does not mean that there is no normative statement about what the moral action would be in that case.
I’m not so sure. People do care what others think of them. If someone is disliked and disapproved of by others then they care about that, even if they think that the reasons are unfair or wrong. Agreed, they then rationalise to themselves about others being wrong to have such attitudes, but it’s still true that they care.
I’m not sure what your argument is here. Yes, our values and emotions have degrees and some of them conflict with others and can override others. Why is that in conflict with my position? Human psychology is complex and we’ll have a whole slew of partially coherent and partially conflicting values.
They would be out of line with how the language is generally used, and if they were attempting to communicate then yes, they’d do well to explain, but they would not be wrong about some external fact of the matter.
I don’t see that. That Johnny “ought” to tidy his room can derive from his mum wanting him to, not from himself wanting to.
Because I don’t see any other form of oughtness that makes any sense to me.
Philosophers claiming that have hit a dead end and don’t want to admit it!
Well it is a fact that humans do have desires! They are not justified rationally from anything else, but we do have such desires and we have them because it’s in our nature to have them, because that’s how we have evolved to be.
Again, you are trying to be profoundly literal. There are two cases where we care if someone disagrees with us on these matters, to the point where they disapprove of our determination:
1) We think that they might have access to some kind of fact that we don’t, and so by our own criteria we might have to reassess our assessment.
2) There are negative consequences to us if we don’t gain their approval.
Do you have any examples of any other case? After all, 2) is CLEARLY a case of “caring about their approval” but at the end of the day it’s all about the consequences of them not approving. If someone whose approval I care not one whit about disapproves, then that has no meaning to me.
It means, as I said, that if you say that you find something immoral then I don’t know if you dislike it strongly or mildly, and you claimed it meant that you disliked it strongly. Any aesthetic judgement can relate to strong or mild “feelings”, and again so much so that your pragmatic judgements — for example — might trump that: you think me immoral for doing so but respect my intelligence in using that to achieve my goals. From that, I can’t even really tell if, overall, you actually disapprove of the action. So it becomes a meaningless statement.
So they’d be referring to a different concept, then. So it would only be the case that they’d be wrong if they were referring to it as a concept as if that’s how everyone refers to it, but if they were just referring to theirs they’d be completely right and just have to clarify their use. This, then, implies that most people would always be referring to their own concept, and not to anyone else’s. Given that, you calling something immoral and them calling something immoral would be the two of you talking about different concepts, especially if you disagree, and thus always result in you talking past each other. At which point, again, talking about things as “moral” seems meaningless, because in general no two people will ever really mean the same thing by it.
But as I said, if he doesn’t desire to do what his mother wants him to do, then by your own standards there is no reason he ought to do that, so it wouldn’t be normative for him by your own definition of normative.
Not good enough. There are huge issues with trying to make normative claims instrumental, and you have not provided any reason to think that intrinsic values don’t make sense, and in fact you yourself NEED intrinsic values to make your case. That you don’t understand the work that has gone on in the field doesn’t make it wrong.
Um, no. It is, in fact, instead a basic logical issue: not every desire/claim can be instrumental, because an instrumental one exists only in order to satisfy some OTHER desire/claim, so you need to have something that exists just because it does. Even you need that or else you can’t have any desires at all, even if you end up simply reducing it to desires that evolution has programmed us to instinctively hold. And as I said, those don’t work because they can be maladaptive, so you need something to evaluate them against.
But we can ask if we ought to have and/or ought to follow the desires we have, and that’s precisely what we mean by normativity.
True, but that’s hardly a major flaw in my scheme! The same applies to a lot of other things. If someone says that taking toiletries from a hotel for home use is “immoral”, that doesn’t tell you whether they think it milder or worse than, say, murdering a child.
But this is inevitable, and is just how things are. There is no objective standard of morality that we can all compare to.
The reasons why he “ought to do so” include not wanting to incur the disapproval of his mum and not wanting to be punished by his mum.
Of course I need intrinsic values! These are values people hold, part of their nature. They are not objective (independent of humans).
It is the *oughts* that are always instrumental, not the *desires*.
Agreed, and human values and desires exist like that, because we have evolved to be the sort of animals who have values and desires.
But we can ask if we ought to have and/or ought to follow the desires we have, and that’s precisely what we mean by normativity.
And the answer can only be in terms of our other values and desires.
It is indeed a major flaw in your scheme, because when I asked you what meaning it could have that we would care about, you said that that’s what it meant! If, when you utter that statement, I can’t know whether you disapprove strongly or even disapprove at all, then that’s not what is being communicated by the statement — and yet that is what you said was being communicated in order to justify that someone else should pay attention to you when you utter it.
Yes, but this assessment is based on assuming that there is an assessment or judgement of moral or immoral that isn’t just a personal feeling, and that can be expressed without in any way expressing how strongly it was felt. Recall that you DENIED that that was the case for moral judgements, and we only got here because I pointed out that you couldn’t hold that strong a view of moral judgements if you considered them aesthetic preferences, because aesthetic preferences allow for precisely those sorts of degrees. So, again, by retreating to this position you may believe that you’ve addressed the objection, but it comes at the cost of invalidating the defense that you used against another argument. This tendency of yours is precisely why the debates go round and round in circles: you retreat to more “moderate” views when hit with things that seem strange, but miss that in doing so you change the position enough that you can no longer make a number of the points that you made earlier, because the new view doesn’t allow for them. I’ve noted that this is a common mistake that amateur philosophers make, where they seem more concerned with defending against specific criticisms than with defending the position as a whole. At some point, you can defend yourself into an entirely different position, but if you don’t realize that then you still try to use the benefits of the previous position while appealing to the different position to buttress that one.
Again, here you argued that the strength of a moral criticism is what makes it one that others will or should pay attention to, and then when it was pointed out that your underlying system didn’t support that — because that’s not how aesthetic judgements work — you essentially accepted that that wasn’t how they work but ignored that the argument that got us here required that or else it didn’t work. And you’ve done that a number of times over the course of this discussion, which is why I’ve always been so very confused over what your position actually entails. At the end of the day, your position just comes across as incredibly incoherent.
Which then means that we can’t have any kind of serious and meaningful discussion using moral terms and judgements, because we never mean the same thing. Thus, moral disagreements are meaningless, and we should stop having them. Recall, again, that that was my objection in the first place, and you were trying to defend them as still being meaningful. But if it reduces down to the two people not actually even talking about the same thing anymore, how can it be meaningful? It’d always be equivocation, which would be logically invalid. So then the person who wants to discourage someone from doing something that they consider immoral should either appeal to something other than morality, or appeal to the OTHER PERSON’S morality to make their case. NEITHER of these is what you tried to defend when you defended judging the morality of other people and having them take that as a meaningful criticism.
Desires and values can be instrumental. You need a way to determine which values or desires are or should be intrinsic and which ones are or should be instrumental.
Except that on your own account here ANY of our values and desires are things that we could ask that normative question about, and so ALL of them are instrumental and subject to normative questioning. We can ask whether we ought to have or ought to follow ANY of our values and desires. So, then, how can you answer the normative questions based on our other values and desires when they themselves can and possibly must be evaluated the same way? You need some values and desires that we simply cannot logically or reasonably ask if we should follow them or not. But what could those be? Even our evolved values and desires can be questioned because they might be maladaptive. And you can’t just say that we should try to maximize our current desires because we don’t know before looking if the majority of our desires are ones that we ought to have or follow. So what basis can you have for evaluating which desires are normatively valid and which ones aren’t until you have some that are just intrinsic? How do you determine whether a desire or value is intrinsically good and so cannot be normatively questioned, so that you can have a basis for determining whether the OTHER ones are normatively valid or not?
You can take Carrier’s tack and argue that there is something that someone just desires more than everything else, and so everything follows from that. You can even escape some of his issues because you can insist that there’s no right answer to that question, and so whatever they happen to value most is that basis, no matter what reason they have for doing that. Putting aside that this can lead to some VERY bad behaviours, you are still vulnerable to this reply from me:
What I value most is being moral, whatever that means.
This, then, requires some concept of morality that is at least external to me — even if it isn’t “objective” — so I can’t just appeal to my own desires. For your view to work, this would have to be logically or conceptually invalid, but it doesn’t seem to be. It really does seem like someone could want to be moral more than anything else, and in fact such a person, conceptually, would really seem to be a far more moral person than someone whose morality was instrumental, where they act morally only because doing so will satisfy another desire that they want more (it gets them more money, for example). So it doesn’t seem just plain wrong for me to use that as my base desire, but if I did so then your entire model would collapse into incoherence and impossibility.
Moreover, wanting to be moral seems to work as well as a base, intrinsic desire as pragmatism does. I can clearly set as my base desire my own self-interest and have that work. And yet, again, in general doing so and working from that basis is seen as resulting in, at best, AMORAL behaviour, not moral behaviour, so the two aren’t conceptually the same thing, at least not obviously. And the only moralities that DO equate the two — Egoisms, generally — actually argue FOR equating them because they know good and well that you can’t just equate the two, because the concepts don’t align that way naturally.
All of this cycles back, again, to a question that you’ve never really addressed: I can have an internal idea of what I, at least, think is moral and yet not desire to act morally. Under the stance you take here, I’d have to still have a normative commitment to that, but earlier your entire point was that if I didn’t want to do it then I couldn’t have a normative commitment to it against my own specific desires. This is because here the normative ought is what I should be using to assess my wants, but then I can’t rely on my wants to determine what I normatively ought to do. This is a vicious cycle that you cannot resolve without introducing something outside of the instrumental wants and values of the agent, and you haven’t given any way to do that so far.
Just getting back to this:
I don’t see it as a major flaw. Moral language expresses the speaker’s approval or disapproval. People do indeed care about what others think. Maybe they don’t care that much, but, regardless, that’s all there is to it.
You can know that I disapprove. That’s what the term “immoral” conveys. You’re right that it alone doesn’t convey how strong the sentiment is.
While I confess that I can’t remember every detail of the discussion, I don’t think that’s fair. As I see it I’ve presented a consistent position. I think that at times you misinterpret my position, perhaps because you can’t actually believe that that’s what I actually mean. (People defending moral realism often feel that way!)
No, I’ve not said that. I’ve not argued that people “should” pay attention to moral criticism (that’s a realist’s position), and I’ve not made claims about what gives “the strength of a moral criticism”.
All I’ve said is that (1) moral language expresses approval or disapproval. (2) As a general rule, people do, at least to some degree, care about what others think. And if they don’t? Well, then they don’t. Then they don’t care about someone else’s moral criticism. And that’s all there is to it. The fact that “moral criticism” amounts to vastly less in my anti-realist scheme than under moral realism is not a flaw in my scheme, it’s a feature of it. Moral language really is to a very large extent empty rhetoric.
Sorry about that! I’m trying my best to explain it. To me, you come across as continually trying to read into my anti-realist scheme a moral-realist version. You’re continuing to interpret me as saying things that I’m not.
Yes! Exactly!! Moral language functions as rhetoric, trying to big-up one’s own subjective feelings by implying that they are more than that. But they’re not, all there is is approval or disapproval akin to aesthetics.
No, they’re not meaningless. Expressing approval or disapproval has meaning. If I say “I dislike X” or “I disapprove of Y” then you know what I mean. But there is indeed no meaning *beyond* that pronouncement of aesthetic opinion. There is no moral-realist meaning. Again, this is not a flaw of my stance, it’s a feature of it.
Moral language certainly causes far more confusion than clarity, given that most people interpret it in a moral-realist way.
The moral language *does* have meaning, it expresses a speaker’s emotions and values.
But, when I say it does have meaning you seem to be interpreting that as saying that it has more meaning than that, that in a “moral disagreement” there would be some objective standard or measure which the disputants could refer to. There is no such thing.
By expressing someone’s feelings on the matter!
What do you mean by “should” there? There is no objective shouldness. As a matter of fact, someone expressing disapproval based on their *own* value system *can* in some instances influence someone else.
Your argument assumes the position that moral argument “should” proceed with reference to some moral framework that someone abides by. Well, that might indeed work, but there is no basis to *require* that approach. Some other approach might be rhetorically successful.
Sure, there’s no *requirement* to adopt one of those. As a matter of fact, someone arguing and trying to influence others by promoting their *own* value system can indeed work.
This is an example of you reading into me what I’ve not said. I didn’t argue that they should “… take that as a meaningful criticism”. I merely pointed out that people often do care what others think (not always), and that de facto such a tactic can indeed (sometimes) influence others.
No, I don’t need a way for us to *determine* which values are intrinsic. All of our values can be influenced by all sorts of things. That doesn’t alter the fact that humans have values as part of our nature.
Agreed! And there is no bedrock, there is no primary starting point. Why would there be? The presumption that there need be comes from taking a moral-realist interpretation of what I’m saying.
Humans have values. Those values can be influenced by all sorts of things including other people. That causes humans to have different values. Moral language is a report of those values. By using moral language we can influence each other. That’s all there is to it.
But any attempt to look for bedrock or objective reference points is misguided — there aren’t any.
And there is no objective oughtness, so there is no answer to that question. The question is misguided. The only form of oughtness that exists is instrumental, deriving from our values. To ask whether we “ought” to follow our values is a misunderstanding of what oughtness is.
You can’t. There is no objective normativity, no objective oughtness, no moral bedrock. That’s the whole point of my stance.
No I don’t. Why would I need them? I would need them if I were trying to construct a moral-realist scheme but I’m not.
That’s a misguided and ill-posed question. There is no being “normatively valid” in the way that you’re asking for. Moral realism is false!
There is no such thing as “intrinsically good”! That’s another misguided and ill-posed question. Notions of “good” are value judgements we make. There is no objective or intrinsic goodness, nor any such oughtness nor shouldness.
All along you’re making moral-realist presumptions, and trying to work out how my scheme fits in with a moral-realist presumption.
Which I don’t. Carrier is yet another who has fallen for the delusion that there must be some objective standing to morality at the heart of all this. He’s wrong.
That comes when we have competing desires. (Which we do a lot, such as the competing desire to eat cake and the desire not to get fat.)
You’d have both a desire to do it and a desire not to do it.
There is no “normative ought”, there is no external and objective measure that tells you which of your desires “should” win out.
Again, your presumption that I need anything such is a presumption that morality is objective; it isn’t.
Sorry it took so long to get back to you; I’ve been distracted and figured I’d have to quote past comments to show you your inconsistencies, which takes some time. I’ll also warn you that in places I’m going to be pretty harsh to get across precisely why the inconsistencies are critical to your position.
So, let’s start with this:
I disagree strongly, so let’s start with the discussion that started off the comment and led to this statement. You said this earlier:
This to me implies strongly if not flat-out states that moral emotions are always strong and always express strong disapproval. And then later I said this:
You then claimed this, after completely ignoring my explicit statement that you had earlier claimed that saying it was moral meant that someone disapproved strongly:
Which led to my response that since you claimed that moral claims just meant strong disapproval it definitely WAS a problem for you, which you then again completely ignored to restate a discussion about approval or disapproval and completely ignore the discussion about strength. So not only did you contradict yourself, at least seemingly, you completely ignored my attempts to point out that contradiction AND didn’t even address the strength point in your reply. Is it any wonder, then, given that this is just ONE relatively minor path through this discussion that I’m so frustrated [grin]? Especially since the most you can say about me is that you think I’m reading realism in (which I think incorrect, but more on that later, since in other points I’ll try to show that it’s YOU who is trying to maintain the implications of realism while holding a view that doesn’t allow for them)?
As I have pointed out repeatedly, aesthetic properties — and you think that moral experiences are similar enough to them (or possibly even just ARE those sorts of things) to make this comparison valid — don’t work that way. Presuming that when you talk about “disapproval” you mean that the person doesn’t want them to take an action, we can see lots of cases where aesthetic judgements don’t mean that. For example, a parent is likely not having a pleasant aesthetic experience when they look at their young child’s artwork, but they certainly want them to continue to do it. It is also possible for someone to not care for experimental music, but want them to continue doing it because it produces something new or for an audience that is not them. And even with morality, we can use the example of the first Survivor season on TV, where despite the game being set up so that if a player shafted others they’d face the judgement of those they shafted, at least one person voted for the manipulative Richard Hatch because even though they hated what he did and considered it immoral, they ultimately decided that he played the game better than everyone else, thus providing an example where they considered it immoral but still “approved” in the most relevant sense. So, no, I can’t know that you disapprove if I know that you consider it immoral. The most I can know is that if you bother to tell me about it you probably disapprove.
And no, you CAN’T use conflicting emotions here, because at that point saying that it is immoral would STILL not tell me what the most rational response to that statement would be.
Despite my telling you on multiple occasions to stop conflating “ought” and “should”, you continue to do so. Why do you keep doing that? What other word can I use to reflect the idea that a person who is rationally assessing the situation will come to a conclusion about what is the best and most rational/reasonable response and so that that is the preferred response to make, if “should” is so confusing for you?
And if it is indeed that, as you assert, and I come to accept your view of that, then clearly the most rational response would be for me to ignore it. This means that I would, in general, rightly ignore all moral propositions from you as being empty rhetoric. And yet you still use them and insist that they have use. But we can clearly see that they only have use or an impact against people who think that they express an objective truth. Thus, you insist on their utility by appealing to people holding the WRONG idea of morality, according to you. Thus, you rely on them understanding morality incorrectly in order to get the effect you want. This is, of course, intellectually dishonest … and it does nothing to show what would happen in a world where everyone accepted what you think is the RIGHT view.
See, that’s the big problem here: you always talk from your side, the side of the person trying to convince someone not to do something, but constantly ignore what the person who accepts your view and yet has you say that to them would do. So do that here. Imagine that I’m convinced of your view and you say “X is immoral”. What response should I give, and what reasoning should I use to determine what the most reasonable response is?
(And if you get caught up on arguing about “should” this conversation is over [grin]).
So, then, clearly in a world where everyone accepts your view we would stop using it for that purpose, correct? Thus, almost all of the cases where we currently use moral language would be cases where we don’t do that anymore, and appeal to something else. THIS IS MY OBJECTION TO YOUR VIEW! Moral language would be used completely differently than it is now, so much so that it would be hard to imagine that they were EVER THE SAME THING! And yet you keep at least dodging that and continue to talk as if for the most part we could use them in those cases. Can we? In what real, practical sense would moral language be used? Please give an example of a conversation using moral language to show what the meaning would be and how it would be used in a world where everyone accepted your view of morality.
No, it’s more that I want to know why anyone should care at all about the moral character of the statement, in short that it is, itself, an expression of moral emotions or a moral code. As I said before — and you ignored — someone saying that they like the taste of something has a primary meaning of expressing that emotion, and we care mostly because it gives us an idea of their tastes so that we can make better decisions about them in the future, or because if we have similar enough tastes it gives me an idea of something that _I_ might enjoy. But there doesn’t seem to be any similar benefits to talking about morality unless we rely on the view of morality that you insist is incorrect. And from that we can even wonder if it would be better for us to extirpate moral emotions entirely, which doesn’t make sense for the other aesthetic emotions.
So, again, what reason would we have for keeping moral emotions at all, let alone telling others about them? What purpose could that serve?
Yes, because if you’re going to make a moral argument presumably it’s going to have to relate to morality specifically — and not something else — and it had better relate to some kind of relevant morality to the people considering the argument. Otherwise, it becomes an irrelevant statement, like saying that “Parallel lines never cross in Euclidean geometry” when we’re discussing how long to cook the turkey. It is of course a true statement, but it’s hard to see how it relates to the discussion at hand [grin]. By the same token, if someone makes an explicit moral argument about a moral system that I clearly don’t hold, then it is also hard to see how it can relate to a discussion where moral character — and thus the moral argument — matters. If morality is objective, then it clearly does because there is only one right moral system to use in such discussions. But if it doesn’t, then surely if moral argument is going to matter at all it’s MY moral system that is relevant here, not anyone else’s. And, again, you can’t argue that sometimes people are influenced anyway because we have to be considering the case where both people accept the “right” way to view morality, and you have given no reason to think that in that case a reasonable person would indeed be influenced by the moral arguments of someone who holds a different moral system than they do.
Again, assuming that all parties accept your view of morality and are acting rationally, HOW can that work? Because the only way I can see that working is if the person being influenced holds the wrong view of morality, which then would be intellectual dishonesty on the part of the person who is making the moral argument and hoping that it will work, as they would be hoping that the person thinks morality is objective while knowing that it isn’t.
Despite my describing this to you at least once YOU STILL HAVE NO IDEA WHAT IT MEANS FOR A DESIRE TO BE INSTRUMENTAL. An instrumental desire is a desire that is only held for the sake of achieving other desires. For example, if you go to the fridge and look for a can of soft drink and notice that there aren’t any in there, you will form an instrumental desire to go to the store to get some. That desire only exists because you want a can of soft drink. If before you went to the store you found that there was some left in the fridge, you would no longer desire to go to the store because it no longer served any purpose. So if all desires are instrumental, then when we try to do this (which is your claim and not one I pushed you into):
You end up with infinite regress, since every desire we have can be assessed in the same way, even the ones that we are appealing to in your own statement above. Thus, we need something that we cannot reasonably assess in that way. This is PRECISELY what you denied we needed, despite clearly needing it. So you can see why I’m confused and frustrated here [grin].
But YOU claimed that we could assess them to determine what values/desires we ought to follow, which is what normativity is. That means you can’t rely on “We just have them” to get you out of this fix, and remember YOU SAID THIS YOURSELF EXPLICITLY. So which is it? How do you square what seems to be a circle?
No, it isn’t. Everyone has a large number of values and desires that are in no way related to morality. We can have a subset of values or desires that FOLLOW FROM our views of morality, but that’s not what moral language just necessarily is. You conflate normativity with morality, as usual, which just gets things more and more confused.
And yet, it is the question that you explicitly said WE COULD ASK. If I can’t even ask the questions you say we can ask without being totally misguided, what does that say about your view?
At which point, that thing you said we could do by appealing to our other values can’t be done, and so that entire point was eliminated … BY YOU. Hope you didn’t need that for anything [grin].
(Like, say, having rational desires …).
You’d be right if I was even TALKING about morality here, but I’m not. I’m simply talking about desires, so this entirely misses the mark.
It is most frustrating that you took the time to respond pointing out the obvious that Carrier is an objectivist about morality AND IGNORED WHAT I SAID ABOUT THAT TACK … which included that, as a subjectivist, you had an out THAT HE DIDN’T and my point on why that still wouldn’t save you. Let me restore it so that you can give an actual response this time:
Maybe you don’t take that tack, but it’s not its objectivity that will get that for you, and it’s better than the absolute non-response to the issue that you’ve been giving. And that’s even if it isn’t true that this is the move you at least subconsciously make.
And you say this despite my making it clear that the desire WAS COMPLETELY ABSENT? This is indeed one way to get amorality: someone has an internal moral idea of what is and isn’t moral, but has no desire to actually act on that in any way, either morally or immorally. It seems like this is a contradiction by your view, and yet it’s perfectly sensible and likely even happens in the world.
Since normativity essentially applies to “oughts”, what you’ve just said is that there is no ought ought, or no oughtness at all. Either you need to be more careful in your terminology or you’ve just contradicted yourself. And this is even more egregious since as I’ve already quoted you TALKED about what normativity is, which then could be applied as oughts as per normal, so taking that into account your position becomes nonsensical. This, then, is why I conclude that your position is at least inconsistent.
And since, again, I’m not even REFERENCING morality here this is an utterly irrelevant answer, which is as good an indication I can give that you really aren’t getting what I’m talking about.
Hi verbose,
No problem! 🙂
First, let me go meta a bit. If moral realism is correct, then one could expect that there is a rational and coherent account of morality, akin perhaps to mathematics, in which there would be clear-cut answers as to what moral language means, and about what is moral, and these answers would be attained by rational analysis.
However, my whole stance is to reject that whole conception. My anti-realist conception is that moral language is effectively aesthetic language and is part of human psychology. That means that moral language will not be coherent or consistent, and that different people will use the language in different ways, and may not even be consistent themselves at different times.
Thus, there is no fact of the matter as to what moral language means, and so there is no fact of the matter that can be arrived at by rational analysis. What moral language then means will be highly dependent on the speaker and on context, in the same way that the meanings of value-judgement terms such as “naff” or “cool” or “scrumptious” are. The term “wicked” means a very different thing if spoken by a 14-yr-old about a computer game, as opposed to a 60-yr-old evangelical about abortion.
So, saying that something is “immoral” indicates disapproval, but asking how strong a disapproval it indicates is like asking how strong a dislike “naff” entails.
I stand by the suggestion that — as a rule of thumb — “moral” in “morally wrong” is often used as an intensifier, so that “morally wrong” means “very wrong” or “I dislike it a lot”. People tend not to use moral language about minor stuff. But these are just rules of thumb about how humans use language, there is no fact of the matter underlying any of this.
As I see it, a lot of your questioning supposes that there is an underlying, coherent, moral-realist fact-of-the-matter about morality, and that you’re faulting my scheme for not mapping to it clearly and coherently. But, the rejection of this is the whole point of my scheme, not a flaw in it.
Yes, I think that moral feelings simply are aesthetic feelings, or very much akin to aesthetic feelings, being human subjective value judgements, though applied to different subject matter.
Yes there are differences, but these result from the different subject matter that the aesthetic judgements are about.
Tom loves Mozart and classical concerts in general; therefore he wants to attend the Mozart-gala concert next week.
Fred strongly dislikes it if children are mistreated; therefore he wants Dan to stop mistreating a child.
I don’t see that the differences refute the underlying point, that moral value judgements are human subjective judgements akin to aesthetic ones.
I’ll bet that in many cases they are!
Because as well as having aesthetic feelings about the artwork itself, they have aesthetic feelings about the child and about the child’s development and exploration, etc.
So here we have competing value judgements, dislike of aspects of his behaviour outweighed by recognition of and admiration for other aspects. Human psychology is complicated! We all have myriad different competing value judgements. Thus we can have a loveable rogue, we can enjoy a film about and side with Butch Cassidy and the Sundance Kid, even though they were violent criminals and we disapprove of violent bank robbing.
Yes you can! The labelling as “immoral” expresses disapproval. That disapproval might co-exist with other emotional evaluations including approval. Human psychology is like that, it’s hugely complex and not necessarily consistent.
Oh yes I can!
You’re right, it wouldn’t. Why are you supposing that morality and moral talk would be about “rational responses” when actually it’s all about emotion and value judgements? That, again, is a moral-realist presumption.
Because I hadn’t grasped that you were trying to make the distinction that you now clarify:
Morality is not about rational responses, it’s about values! A completely rational robot with no values or desires would not make any response, indeed it would not do anything because it would have no motivation to do anything, because reason alone does not provide motivation. Aims, desires and values provide motivations.
Therefore you cannot get a “should” from reason, you can only get a “should” from an aim or desire. The only form of “shouldness” that exists is instrumental “shoulds” deriving from our desires and values.
Therefore, again, your whole attack on my position presumes a moral-realist conception of morality that is independent of subjective human values and that is arrived at by rational analysis.
Yep! (And the reason that moral language is effective in the world is that humans are not purely rational creatures.) Though it is indeed rational to assess and take into account the emotional state of your fellow humans, since such states will affect how they act.
They do, they have rhetorical use!
I’m not so sure, humans can be influenced simply by the fact that other humans have strong opinions and emotions. Expressing purely subjective opinions can influence other people. Thus, if someone notable expresses an opinion about an artwork or novel or about clothing fashion, it can influence how others feel about those things.
Let’s take a clear example: in many cases, a young boy will support the same sports teams as his father. This choice is purely subjective, not in any way rational, but the fact that his dad supports a particular team can strongly influence how the boy feels about that team.
But, I do grant you, the concept that “moral” claims are backed up by objective truth does assist the rhetoric and make the claims more convincing — which is exactly why we’re programmed to think that way.
I suggest that if everyone accepted the subjectivity of moral claims then de facto moral discourse would continue much the same (though being less fraught). For comparison, aesthetic language is common and useful in society.
Again, your question supposes that morality is about reasoning and rational responses. It isn’t, it’s about values. But to answer the question:
If I said, “X is immoral”, let’s say I suggest “separating immigrant kids from their parents is immoral”, then you would interpret me as saying: “I dislike kids being separated, I consider it harmful, and want the rules changed to prevent it”. Note that all parts of that (“dislike”, “harmful”, “want”) are subjective value-judgement opinions.
Your response would then be to agree with me or disagree with me, doing so based on your own values, and evaluations of harms, and on what sort of society you want to live in.
And that, actually, is pretty much how these things work, with people discussing competing values (desire for an orderly and controlled border versus desire not to harm kids).
We could do, we could drop moral talk entirely, and instead have the above discussion in terms of subjective values. But, de facto, we won’t, at least in the medium term, because that language is too deeply embedded into how we talk.
But likely it would not be that different. Take the above case, the discussion does *not* just go: “it’s immoral”, “no it isn’t”, “yes it is”. De facto, the discussion quickly becomes about values, harm to kids, wanting a secure border, etc. Thus the discussion *does* become about subjective values, with the “moral” language being a rhetorical veneer. Anyone who can’t back up their veneer of “immoral” in terms of values gets ignored.
Indeed, that has to happen, since even if moral-realism were true, we don’t have an agreed method of discerning what the moral-realist truth values actually are, and so de facto the conversation proceeds in terms of our subjective values. That would hardly change.
It would be used as expressing value judgements.
Does the above example suffice?
If you’re asking for a reason based on rational analysis for why you should care, then there isn’t one.
If you’re asking why you should care about other humans’ value judgements, it’s because such judgements affect how they act and that can affect you.
Not at all! Nothing in my scheme implies that. As humans, values are a central part of our nature and are all-important to us.
A baffling question! Just imagine if none of us even cared if people got killed, raped, or if children starved on the streets, or died of treatable diseases because no-one cared. Would you want to live in such a society? If not, then there’s your answer.
Note your phrase “and are acting rationally”. The idea that morality is about rational concepts is a moral-realist one; in my anti-realist position morality is not about rationality, it’s about values.
Humans are not purely rational creatures. For an example of how someone “promoting their *own* value system” can influence others, consider the above example in which a father supporting a football team can influence his son to support the same team.
Agreed.
It’s not the case that all our desires are instrumental. Evolution has programmed us with “human nature” which includes a whole raft of innate desires and values. That doesn’t alter the fact that those desires and values can also be influenced by environment, as well as by our genes.
We have plenty of non-instrumental desires and values, these are desires that we just have, as a result of our genes and our development and upbringing. These are where the regress ends. Obvious ones are desires to breathe and eat, to avoid pain, sexual desires, etc. These are not instrumental, they are part of our nature.
Where did I say anything like that? My whole stance here is that there is no such thing as “what we ought to follow” in the abstract. All “oughtness” is instrumental, deriving from our values.
What I have said is that some desires and values can be over-ridden by other desires and values. But, at root, having desires and values is part of our nature.
Where? My whole stance all along is that we have values as part of our nature, which is the foundation of all of this.
Agreed. But the “moral” values and desires are a subset of our overall set of values and desires. The ones that tend to get labelled “moral” are mostly ones about how people treat each other. Thus a desire that people don’t murder each other is one salient to morality, but a desire for a snazzy car is not (though some would say that spending money on a snazzy car, instead of third-world aid, is a moral issue; again, there is no fact of the matter).
Where did I say that? I strongly suspect that you’re reading into me moral-realist presumptions that I’ve not said.
What I’ve said is that we CAN over-ride one value using another value. That is NOT saying that we can assess the two values and determine what we OUGHT to do!
Those are not the same thing at all! The latter is a moral-realist notion, and my whole stance rejects anything such. All I’ve said is that humans do have a whole host of competing values, and that often some values over-ride others. That is purely descriptive.
That is not at all the same thing as “But YOU claimed that we could assess them to determine what values/desires we ought to follow, …”.
Let me start by doing the dirty work of proving that you really did say what you now claim you didn’t say:
You said this in a previous comment:
Yes, you added that that follows from our values, but that wasn’t what I was challenging there nor was it relevant to my interpretation. You argued that we can assess our desires and whether or not we should follow them, and my objections were based on that. Then, when I followed up by pointing out that you were later contradicting that, you denied saying the thing that you explicitly and directly said. This is not conducive to rational discussion [grin].
Okay, I don’t want to have to do that anymore, so denying that you said things that I explicitly say that you said is probably something you should avoid doing in the future. At least ask first where I got that from before denying it. Let me move on to a bit of a long preamble trying to untangle the mess we’re in, because you keep asserting that I’m trying to make morality rational — by which you seem to mean objective — in places where I’m not talking about morality itself, but instead about our reactions to moral claims.
So let me start here: what it means for an action to be rational is that given a set of beliefs about the world and a set of desires, there are better and worse approaches one can take to satisfying their desires. The rational approach is the one that maximizes the satisfaction of those desires. This does NOT mean that those beliefs and desires are themselves objective or must be the same for all people. By their very nature, they belong to the person who has them and differ from person to person. However, if I know the beliefs and desires that a person has then I can determine rationally what the best and therefore most rational way to satisfy them would be, and I can do that regardless of whether or not they agree. There’s a best way for them to satisfy their desires even if they don’t believe that that is the best way to do so. This is a minimal rationality to me, but again does not make beliefs, desires and values objective.
So when I talk about determining a rational response to something, that’s all I mean: given a set of beliefs and desires there is a best way to respond to any given situation or stimuli. In this case, the question I’ve been constantly asking is what the rational response is to someone saying “That is/would be immoral!”, in terms of what action they should take or what beliefs they should change or whatever.
This, then, gets into what morality has meant and what it means under your view. Typically, when it comes to taking action morality has been assumed to be an overwhelming motivation. If I decide that an action is immoral, then that gives me all the motivation I need to not do it, and pretty much trumps all other possible values or desires I have. “If immoral, don’t do it” is the operative structure here. And if morality is objective, then someone else saying that an action is immoral would be treated like a violation of my own moral code, and so again give me an overwhelming motivation not to do it. Thus, in general, we have the belief that morality trumps all other motivations and that morality being objective means that someone else expressing that an action is immoral also means that their assessment trumps all other motivations as long as they are right about that.
Making morality subjective takes away the second assessment. Even if I hold that morality always trumps all other motivations, that would have to be assessed against the moral code that _I_ hold. I granted your assessment validity when there was only supposed to be one true moral code, but now that we accept that there are multiple moral codes and that the only one that’s really valid for me is my own, your assessment no longer has that force. That’s why your comments of “Well, sometimes people respond to those assessments!” don’t work, because for the most part they respond because they still hold the outdated idea of what morality is — which, as I said, it is intellectually dishonest for you to rely on — and beyond that they may be able to find other motivations to take your assessment to heart but none of them would be based on that assessment being a MORAL one. It would only be if they happen to agree with you that their actions would be based on morality, and in most cases if we accept subjectivism about morality that won’t really be true, or will at least be coincidental (or culturally programmed).
But wait, there’s more! Your next move is to reduce morality to an aesthetic preference, but this ALSO has consequences, because while morality is always seen as a trumping motivation — and, in particular, moral motivations are seen as trumping self-interested ones — aesthetic preferences are always seen as being subordinate to self-interested ones. To use the Buckley’s cough syrup ads as an example: it tastes awful, but it works, and so the rational thing for us to do is sacrifice our aesthetic preference to the self-interested greater good of curing our cough. In general, while we always have some motivation to seek comfort and pleasure and pleasurable experiences and avoid uncomfortable and painful ones, we are expected to set those preferences aside if our overall self-interest would be enhanced by ignoring them. If morality is the same way, then we would be expected to, again, only act morally if it was in our long-term self-interest, and if it wasn’t, then we shouldn’t do that. This is a radical departure from how we use morality, and would change the dialogue entirely if we actually accept the rational consequences of our own beliefs. I should only act according to your assessment of the morality of a situation if a) I agree with it and it is in my self-interest to act on my morality in that case or b) it is in my direct self-interest to act according to your assessment even though my own morality doesn’t agree. This is why I said that the only motivations one can have for caring about your assessment of their morality under your view are that they agree with you or that you have power over them, because those are the only rational motivations that could trump their own moral assessment AND where morality can trump self-interest.
And you can’t really deny this sort of basic or minimal rationality, because to do so would wipe out about 90% of what you do in blog posts and in comments. You can’t criticize people as being irrational for believing God exists because you would have denied even minimal rationality as being a motivating factor in determining what someone ought to do. And you can’t criticize them for claiming that faith is a good way to form beliefs, because you’d have to call them irrational for doing so, and you’d have eliminated rationality as any kind of argument. So in defending the irrationality of morality, you strongly risk making rationality pointless.
So morality really should work differently under your view if we are minimally rational, unless you can distinguish morality from aesthetics in a motivated and properly argued way. And that’s what you haven’t done. Case in point:
The problem here is that you considered them “different” by comparing a positive case in the case of the aesthetic and a negative one in the case of morality, but of COURSE those are going to be different. They’re different in pure aesthetic judgements, too. So you never DID show how context and subject matter, well, matters. You are PRESUMING it does because those things trigger more strongly in your “moral” case than in the aesthetic case, but if you accept your view then you’d have to accept that that isn’t the case for all people, especially considering that we know of people — psychopaths, for example — where that ISN’T true. If you make morality subjective and based on moral emotions, you have to accept that others will not have the same emotions as you, and so you cannot simply use those as demonstrations of how they’re different, because to someone else they might not be different, AND since we know that, there’s no necessary difference there, so it can’t be used as an argument or example to prove your point.
Which then gets into the nature of morality itself. If morality is subjective, then at first blush it seems that what is moral just is what a person THINKS it is. Thus, it is entirely reasonable for me to conclude that that Stoic idea of morality is what morality really is to me. Except that the Stoic idea eliminates moral emotions as being what it means to be moral. As long as I accept that that moral code is, in fact, only mine, I’m not violating moral subjectivism, and so it’s hard to see how you could oppose my holding that view. But if I do so, then it is clear that what defines the moral is NOT moral emotions. That’s what it is to YOU perhaps, but not what it is in general. So, again, by trying to deny a common meaning to moral language you end up leaving yourself unable to say what morality is and so unable to actually argue that morality is in fact an aesthetic property. It is for you, perhaps, but to me it’s like a personal code of honour, and both views seem valid under your own system.
Which leads me to this comment in response to my question about why we need them at all:
Except that we can get to that purported “caring” through other means. I can adopt a Stoic moral code, declare that morality trumps all else, and act accordingly. I could also, however, adopt an entirely self-interested view and come to the same conclusion. Why? Well, it’s as you said: a society where those rules didn’t exist and weren’t followed is a society that is not in my personal self-interest, thus I accept those rules existing and follow them … for as long as it IS in my self-interest. And both of these rational approaches are superior to moral emotions because moral emotions can misfire and cause me to act in ways that are not in my self-interest or are not consistent with the moral code I have chosen to adopt. Moral emotions, then, risk me violating minimal rationality, and since minimal rationality always tells me what’s in my own best interest given the beliefs and desires I have, it’s not a good thing to violate it. Thus, I have good reasons to eliminate the very things you think absolutely critical.
Okay, let me clean up a few dangling things from the rest of the comment.
But in light of the above, what would be important is whether your OVERALL assessment is positive or negative. If positive, then I can expect a positive response from you if I do it, and if negative I can expect a negative one from you. Presuming that I have reason to care about your reaction, if emotions can conflict then what I really need to assess is what your overall state is or will be at the end of all the assessments. Saying “That’s immoral” doesn’t tell me that, unless you insist that someone would only say that if that was their overriding feeling (which would contradict aesthetic language). But in that case you’d still be relying on the old notion of morality, as you’d expect everyone to know or accept that morality was overriding when, in general, it isn’t. If morality is an aesthetic judgement, then expressing it does not, in and of itself, express strength NOR does it imply that it’s overriding, both of which you need in order to claim that moral language would be used pretty much the same way if everyone accepted your view.
The problem is that you keep using children as examples of how it can happen, when children aren’t expected to be rational. The only reason adults can maintain this sort of belief is because it isn’t important; WHICH team I cheer for doesn’t make me happy, but rather it’s cheering FOR a team that makes me happy. So while there might be better or worse teams to cheer for — cheering for a team in another city so you can never see them live seems not ideal, for example — that’s not what it’s about and so we are willing to accept that simple assessment for that belief.
This is not true of all of them, however. For example, I doubt you would be so sanguine about my belief in God and holding Catholicism, despite the fact that it came from my parents. I also doubt that you’d accept someone who was racist because their father was. In general, subjective beliefs are beliefs that we claim it is generally difficult or impossible to reason someone out of, while objective beliefs are those that we think we can reliably do that for, and moreover that people who don’t do that are wrong not to do so. That is clearly the case for you for God beliefs, is generally the case for racist beliefs, and is not at all the case for aesthetic beliefs like favourite team or favourite style of music.
The problem is that our evolutionary desires are ones that can be, and often are, not conducive to our overall happiness. Thus, we quite often need to override them to achieve happiness. If they are intrinsic, then we couldn’t do that without undercutting our entire value structure; we’d have nothing to value because the things that we value just because are, at least in that instance, no longer things we really value. In short, we stop valuing the things we are supposed to just value, or at least value other values THAT ARE NOT THEM more. This, then, breaks minimum rationality. So, in general, we have one overarching value and assess all of our other desires and values wrt how well they help us achieve that one. In general, happiness is assumed to be that one value, but for you — unlike Carrier — we can legitimately decide what is our base value. This still leads into how to handle my saying that being moral is my base value. Carrier’s view becomes logically circular at that point, but yours can survive it by simply claiming that I’m wrong to pick a logically unachievable base value. But then we can see that your view of morality is again quite different from the typical view, as it subordinates morality to happiness. Thus, we wouldn’t use moral language in at all the same way, which gives us reason to doubt that you, yourself, are talking about the same thing we are when you talk about morality.
I think I’ll end it here. But the short summary is that your view reduces morality from an overriding motivation to a subordinate one, while you still use it as an overriding one to claim that we’re even talking about the same thing, which to me seems like an inconsistency. Moreover, your own view doesn’t let you make the bold statements about moral emotions that you’ve been making. So we need to settle all of that before there can be a consistent view here to discuss.
Hi verbose,
There’s been some miscommunication, owing to a screwed up blockquote (my fault, sorry). The words you attribute to me were actually me quoting you, and then replying to it. But I messed up the quoting, thus misleading you. Reviewing the thread:
Me: “Well it is a fact that humans do have desires! They are not justified rationally from anything else, but we do have such desires and we have them because it’s in our nature to have them, because that’s how we have evolved to be.”
You: “But we can ask if we ought to have and/or ought to follow the desires we have, and that’s precisely what we mean by normativity.”
My reply: “And the answer can only be in terms of our other values and desires.”
And in giving that reply I quoted your above sentence, but made it look like it was part of what I was saying. From there, you thought I had said something, whereas I was confident that I hadn’t.
Agreed.
Agreed.
By your own account as just stated (and which I agree with) the rational response that someone should take on hearing such a thing would then depend on their own values and desires. So one can’t give an answer without knowing what those are.
[The previous times you’ve asked the question I’ve interpreted it as asking about a rational response that they “should” make, regardless of their own aims and desires, which is a moral-realist notion.]
Agreed, that is how a lot of people have conceived of “immoral”. I consider this to be a moral-realist conception that is false.
Agreed.
I agree that to a large degree that is why they respond to moral assessments. But that is not the only reason. People are indeed influenced by other people’s approval or disapproval, quite regardless of notions of morality.
Let’s take an entirely subjective topic such as fashion in clothing. It is a fact that what people decide to wear is influenced by other people’s approval or disapproval of those choices. You wouldn’t turn up at a job interview for a law firm while dressed for the beach.
Thus it is not the case that moral language (that is, expressions of approval or disapproval) would have zero effect unless we accept moral-realist interpretations.
As I see it, the desire to be fit and healthy is also effectively an “aesthetic” preference. Or, rather, all our desires and values (whether about being healthy, or liking the taste of food, or enjoying art, or disapproving of how someone is behaving) are all pretty much the same thing.
We can usefully use labels to denote different types of preferences and desires, but they’re all variants of the same thing and they can all trade off against each other. Evolution has programmed us with these preferences, but evolution doesn’t care whether it is a “moral” preference, a “self-interested” one or an “aesthetic” one. They are all just patterns of activity in our neural-net brains, whizzing around, interacting with each other, trading off against each other, and competing to influence the “output commands” to our muscles.
It is indeed a radical departure from our *commentary* *about* morality, but I don’t think it’s that radical a departure from how we de facto think and act.
Not really true; again, humans *do* care about how others think about them. Let’s suppose a prominent Canadian person says something derogatory and insulting about Mexicans. Many Mexican people would care about it, even if (a) they disagreed with it, and (b) the Canadian had no power over them.
Recall from above that we’re agreed that the “rational” response here is in terms of one’s own desires and motivations. Well, one desire people (in general) have is to be well regarded and well thought of. It’s about reputation, social standing and “face”; it matters to people. If someone is being called “immoral” it matters to them! That is still the case even if that “immoral” label is (rightly) interpreted as “mere” aesthetic disapproval.
Just imagine being a 15-yr-old girl who gets criticised by the popular kids in the class for having a lousy dress sense and being utterly uncool. This *will* have an effect on her! She will care! And it will influence her, even if everyone accepts that ideals of dress sense and coolness are entirely subjective.
We are social animals; de facto we are influenced by the evaluations of our fellow humans.
OK, agreed, psychopaths and sociopaths are less influenced by these things, but don’t those exceptions show that most of us are?
Under my subjectivism, there is no such thing as “what *is* moral”. All there is is reports of people’s values.
So, based on your value system, you’re choosing a Stoic moral system as encapsulating your values and being how you want to act. That’s fine, but it is a choice based on your value system. It’s not what morality “really is” whether “really is to me” or not. (And then error theory is a large part of my analysis; most people’s assessment of what morality “is” is false.)
If all you’re doing is reporting your moral codes and your value system then I don’t disagree. If you’re arguing that it’s what morality “really is”, then I’ll suggest you’re in error.
But I don’t see how you can have an emotion-free code of honour. Codes of honour are reports of value and emotion systems.
Yes you can, but you can only do so based on your own values and emotions, how you want to act and what sort of society you want to live in.
True, but evaluations of “self-interest” can only come from what you value, and what sort of society you want to live in.
I think you’re taking way too narrow a conception of what “emotion” means here to an emotivist. It includes all “I care” thoughts, and is contrasted with statements of pure fact or reason. Thus all values, all desires, count as “emotions” to an emotivist.
So, both of your above analyses are emotivist. In one you’re adopting a Stoic moral code owing to your *values*, in the second you’re evaluating “self interest” which can only be done based on what you *want*.
The contrast here is with traditional moral-realist conceptions of there being things we “should” or “should not” do that are entirely independent of our values or of what we want. These are attempts to derive shoulds purely from facts or reason. The emotivist denies that this is possible and says that shoulds can only derive instrumentally from our values and desires (aka emotions).
And nor are adults! Yes, I’m using children as examples for that reason, that it makes things clearer. But adults are just grown-up children, they’re still humans and have all the foibles of human nature.
That’s not true. It would be true if a value system were foundational, deriving from axioms or fixed “intrinsic” values. But human brains don’t work like that. Values are actually patterns of activity in our neural network, influenced by all sorts of things (genes, upbringing, environment, sensory inputs, etc). Thus our values are a web, and one can change parts of it, any part of it, with knock-on consequences for other parts of the web, but without the whole thing collapsing.
I don’t think so. We have a whole web of values, and all of them influence, reinforce, or compete with, other values.
I don’t accept the concept of a base value (though I readily accept that some values are more influential than others).
Not really, morality is not something different from or apart from motivations and values, moral language is simply a *report* of motivations and values. It’s a way of talking, but not the underlying reality.
To be honest, I probably would have caught that myself if I had been able to reply to the comment earlier. Which, of course, I’m still failing to do. I’m less busy but more distracted, which keeps me from really finding the block of time to dedicate to replying to comments.
On top of that, though, you didn’t actually disagree with me about what normativity is. This, then, makes me wonder what reasoning you’re using to say that normativity doesn’t exist, or what definition you’re relying on to say that. Your comment talks about having to reference other values and desires, which might be true — although it’s vulnerable to my conceptual view of normativity — but that is another debate entirely. So, again, at this point I’m totally confused about what you mean by normativity.
To be honest, I think focusing on, at least, refuting objective morality would be better for you. Trying to define and then refute/refine normativity is probably above your philosophical pay grade [grin]. I myself, if I wanted to, say, claim that normativity couldn’t be subjective, would either have to do a lot of research to see if there are views where it can be, or at least have to come up with a very strong argument that it is conceptually impossible for normativity to be subjective. You seem to be going further than that without any such strong arguments. Is the lack of normativity that critical to your views on morality? If it is, how is it that critical? What does it do for you?
Note that in discussions of morality the big reason to make them normative is to avoid making them descriptive, and thus to avoid simply saying that we are merely cataloguing the existing moral beliefs of people and that there is no notion about what morals — or actions — someone should take. We do this to avoid the problems I’ve been raising and you’ve been kinda dodging, which is the idea that if someone considers it moral to rape and murder people, all we can do is report that that is what their morality is. Whether we end up appealing to their own values or to objective values, at the end of the day we still want to be able to say that despite what they believe they are mistaken, which we can’t do if morality is merely subjective-descriptive. And note that aesthetic properties are, in fact, subjective-descriptive; I can’t be wrong about what I find visually pleasing.
How many different sets of values and desires would be in play here? I’m not asking for a complete table, but at a minimum an idea of “This is what it means to them, and thus I must consider that this way in my decisions”. Remember, we’ve chased the “They have to care about their disapproval” chain around quite a bit, and I still maintain that there are a limited number of cases where someone has a set of values and desires where if morality is just an expression of disapproval they should care about it if they accept that. I still also maintain that you keep relying on the fact that people think that morality is objective — and thus that a challenge to the moral claim they are making is an actual claim that their morality is incorrect — to support your contention that we should still consider moral claims important and react to them the same way we do now, both consciously when you talk about it as useful rhetoric and unconsciously when you insist that people still do often care about the disapproval of others.
So, knowing that all it means is that they don’t like it, again why should anyone care about their disapproval? More on that later.
You are expanding moral realism far beyond the realm it actually has. Even asking what one should morally do doesn’t require moral realism, as all moral subjectivisms and even moral emotivisms have an answer to that question. It’s only error theory and other eliminativist theories that consider the question pointless or unanswerable. PLEASE be more careful in what terms you use, because at times you seem to make moves like this and then use the fact that you reject moral realism to refute the other claim, when they don’t rely on each other at all.
Also note that I myself am not a moral realist, but remain a moral objectivist, so in general refuting moral realism is going to do nothing to my position.
But if I reject that, then all moral values will be instrumental, and thus must be justified by some other value or set of values. This would be, as I have said, a radical departure from how we view morality. In general, I think the best option is always going to be “I’ll only act morally if it is in my best interest — given my evaluation of my own values, desires, and beliefs — to do so”. Since most traditional moral claims clash with self-interest and seem to exist — and have evolved — to limit my self-interest, this would again result in radically different moral systems. And since all of those moral claims would ultimately be justified by self-interest — for example that I keep promises because if I don’t no one will accept my promises and won’t keep them with me — then moral values and claims seem redundant. I can simply drop down to rational self-interest and leave morality out of the picture entirely (see Objectivism for a morality that does exactly that).
But I can’t really see any other sane and rational way to go from here. Do you have any other options, or is this where you expect things to end up?
Because they wouldn’t hire me if I did, therefore it is not in my self-interest because they have power over me. I don’t care about their moral assessment and don’t consider myself morally bound to do so. If I had a personal moral code that said that I couldn’t dress that way, doing that would be choosing self-interest over morality. So again, at that point you take morality out of the picture, which makes it meaningless to actually take the claim into the realm of morality.
At which point, your view and terminology becomes so eccentric that discussion seems impossible. You don’t mean by words what everyone else means, but then just toss them around and expect everyone to follow along with them, which they can’t. This really, really comes across as — it seems inadvertent — equivocation. You are making the amateur philosopher mistake of redefining words and positions but treating them as if they have the same implications. They don’t.
If all values are the same, then I say that there is NO SUCH THING as moral values and it is simply equivocation to say there are and to make any appeal to morality, and thus that doing so is intellectually dishonest. Since you still do that, you’d be at a minimum making an incredible mistake, and at worst would be deliberately equivocating (if you’re aware of that and yet don’t care).
The problem is that we label things with different labels because they have different properties, and thus different implications, from each other. So to collapse them all into each other as a response to an argument simply doesn’t work, because the argument relies, in fact, on those different properties. And as it turns out your arguments about neurology and evolution are incorrect, because those properties prompt different behaviours in us, and evolution selected them and molded them based on those differences. Essentially, here your argument is like saying that because both water and strychnine are, at the base of things, made up of atoms you should be able to drink both without any problems. The arrangement of atoms is important, as are the different properties of things like aesthetic preferences, morals and self-preservation instincts and desires. Thus you can’t just declare morals and self-preservation instincts and desires to be the same as aesthetic preferences when it suits your argument, since that has implications for what they are that go beyond the argument you’re making.
Instead of simply saying “These are essentially aesthetic preferences”, you really should take the long road and outline what specific properties you think these things have that deal with the refutations or support your argument, without making any appeals to them being the same as other things except perhaps as analogies. You’ve done the analogical argument before, but you ruin it by insisting on an ontological similarity of them being “the same”.
But couldn’t that be because most people do not actually consistently ACT on their moral beliefs, and instead act on self-interest and rationalize it? Thus, even if you’re right, that’s only because most people already have reduced everything to self-interest and aren’t willing to admit it, and thus if they accept your view they, rationally, would have to accept it and just cut morality out of the picture entirely.
And you want people to act rationally. As I said — and you disturbingly dodged — that’s the basis of your opposition to religion and advocacy of scientism. So for you to abandon it as you so often do in these cases smacks of “convenient argumentation”: it’s a reasonable response to say that we sometimes act irrationally only when it suits your argument, and not when it works against you. That leads to frustrating discussions, so you really need to address this with some sort of consistent position, and one that matters.
They’d be offended, sure, but not so much that it would cause them to change their actions based on it without one of the other factors being in play. And that’s what moral claims are always aimed to achieve: a change in behaviour, either yours or theirs.
Except in general that WOULD be self-interest; she’d be conforming because not doing so will make her life miserable. And, of course, there are those that won’t conform based on other kinds of principles, including moral ones. So at this point, it’s not a moral claim anymore and again reduces to an appeal to self-interest, and so isn’t in any way trumping self-interest.
We want to make moral claims to convince people to not take actions that are in their self-interest (see objections to Objectivism, for example). Your view doesn’t allow for that unless they themselves decide to value morality above self-interest and are acting on their own moral code, at which point you telling them what your moral code says is irrelevant, and thus if it reduces to “disapproval” it reduces to an appeal to self-interest, which, as I said, means it can’t trump it.
No, because you have no cause to say that they are in any way wrong or deficient — and thus are important exceptions — AND can’t really defend your view from the charge that WE are wrong and should be more like them.
Then you’re not a subjectivist, but instead are some kind of eliminativist. I’d say that you were a straight error theorist except you use that term wrong [grin].
On what grounds can you argue that if I say that when I say “X is immoral” I mean “X violates my personal Stoic moral code” that I’m wrong? That requires you to make SOME kind of objective claim about what morality really is, which is what you deny you can do.
Error theory doesn’t mean that people sometimes get moral claims wrong. From wikipedia:
https://en.wikipedia.org/wiki/Moral_skepticism#Moral_error_theory
You in general don’t seem to be a moral skeptic, and emotivist positions aren’t morally skeptical. And yet you seem to be trying to claim that you are both, which seems logically impossible [grin].
It’s the other way around, at least for society. I can’t decide what sort of society I want to live in without knowing what sort of society it is in my best interest to live in. Unless I push self-interest aside and make ANOTHER value paramount, but doing so would pretty much destroy your view, especially if I made being moral paramount.
Self-interest is one motivation or value that pretty much everyone has and is always important to them. Generally, then, we need reasons to put it aside, which is why so many moral systems ultimately reduce to it, and why so many moral arguments end there as it’s the only thing that everyone pretty much has in common.
It’s clear that you have been influenced by Hume — either directly or indirectly — and so you make the same mistakes he does: insisting that “calm passions” are the same thing as emotions because they have an emotional underpinning. It was an argument against Stoicism that since calm passions are emotions and the Stoics want to eliminate emotions, they would have to eliminate those as well and so have no motivation and be unable to reason. The problem is that that is not what the Stoics mean by “emotion”, and so they would allow all of the emotions that are directly required to reason while eliminating or limiting the ones that are in opposition to it. And since we know from things like the treatment of phobias and anger management that that can be done without eliminating reason, their view is plausible. Thus, no emotivist can extend emotions to the calm passions and have any kind of argument, since all rationalist or anti-emotion positions would have access to all of those “emotions” as well. So for an emotivist to have a distinct position from a rationalist, they HAVE to be talking about the more limited view of emotions. And, on top of that, those emotions ARE directly comparable to actual aesthetic properties and experiences, and so fit your own position better, whereas calm passions aren’t.
So I’ll stick with my view of emotivism, which happens to be the more common one (again, see Prinz for that).
Since deriving normative facts from descriptive facts about the world commits the is-ought fallacy, almost all moral philosophers and objectivists won’t do that. Some moral objectivists WILL derive moral values from other values (Carrier, for example, tries to do just that). Not all objectivist positions are rationalist, either. And you have emotions wrong anyway, as you can’t reduce all values to emotions and still have a position that has any meaning. So this isn’t the way to build a consistent argument [grin].
No, adults ARE expected to be rational, except in cases where reason isn’t relevant. They aren’t always rational, but any adult who acts irrationally when they should act rationally is not forgiven for it like children are. Again, this is the thrust of your arguments for scientism and against religion, so it’s far too convenient for you to abandon that now.
But we end up with a disjoint and confused web if we do that, and thus a massively suboptimal one, which is why we generally have at least one overarching value that we evaluate the web against when adjusting our values. And “it evolved” is not usually that criterion, because evolved values can be incredibly harmful to us at times, and we can override pretty much any evolved value if we want to.
Hi verbose,
I’d say that objective normativity doesn’t exist; subjective normativity does.
From wiki: “Normativity is the phenomenon in human societies of designating some actions or outcomes as good or desirable or permissible and others as bad or undesirable or impermissible”.
Any such normativity then derives instrumentally from human desires and values (and is thus subjective).
If we had objective normativity, wouldn’t that effectively be objective morality?
There’s nothing at all to stop us deploring their views and actions based on our own values. We can argue for the sort of society we want to live in, and we can argue that they should not be allowed to rape and murder. If we persuade enough people we can then pass laws prohibiting such things. This, de facto, is how morality works in the world, and it doesn’t need objective foundations, it is based on each of us advocating for the sort of society we want.
Agreed, humans do indeed want to say that others are mistaken in an *objective* sense, that they’re violating *objective* standards. That would add weight to people’s critique of others. But, I still maintain that all such notions are misconceptions, and that there is nothing but subjective notions of right and wrong deriving from subjective values.
Yes, maybe you are right. Maybe it is the case that, because people consider morality to be objective, they care more about moral condemnation by others. I fully agree that the fact that people think that morality is objective makes moral talk more rhetorically effective.
But I don’t accept that that is a good argument that morality is therefore objective. To me, all it indicates is that humans have been fooled into thinking that morality is objective precisely because it makes it more likely to influence behaviour (and influencing behaviour is why evolution has programmed us with moral sentiments).
Why should one care about others and how they feel? Because their attitudes and conduct can affect you. If most people in society deplore rape and murder, and so have instigated laws against it, then self-interest suggests one should avoid such behaviour, regardless of one’s own attitude to rape and murder.
For my education, can you explain the difference? I’ve never really understood the distinction.
Yes, agreed. Except that at some point our values are just the values we have, rather than being “justified”.
Agreed, it is.
Ok, agreed …
Not really. We need to distinguish between narrow-sense “self-interest” and broader-sense self-interest. It might be narrow-sense self-interested to cheat, lie, steal and be selfish, but in the wider sense whether we prosper depends on how others think and feel towards us, and being thought honourable, kind and selfless can actually be more in our long-term interests.
Agreed, one could. One could re-word moral discourse in ways that don’t use moral terminology.
Indeed, that’s a necessary feature of having explained morality and what it is, that we can translate it into other terminology. Moral realists who cannot translate moral language into non-moral language don’t understand what they mean by their moral language!
Yes, that’s where I expect things to end up. That’s meta-ethics stripped down to bed-rock and settled. Moral language is language we use to discuss our values, how we want people to be treated, and what sort of society we want to live in.
Of course applied ethics is then still wide open, we then have to argue for the sorts of society we want and to discuss with and seek to persuade each other.
As I see it, what I’m doing is using a scientific perspective to get at the truth of the matter, and suggesting that the distinctions that philosophers make are not really tenable. All our “values”, desires, and aesthetic preferences are much the same thing in that they’re all attitudes that our neural-net brains have, and they all intertwine and affect each other, so trying to make them distinct and separate doesn’t really work.
OK, so there is no such thing as “moral” values, at least there is no substantive distinction between “moral” values and any other value.
Or, as I would phrase it, moral language refers to my values. By using it I am referring to my subjective values (just as everyone else does, de facto). If other people misunderstand it by thinking that I am trying to refer to objective values then they are misunderstanding owing to misunderstanding morality. The idea that morality is subjective has a long history, so it’s not that outlandish to use the language in this way.
But I really do mean that “moral sentiments” are essentially just aesthetic preferences. This is not just some hare-brained scheme of mine, it was first suggested by Darwin in Descent of Man. It’s a pretty mainstream opinion in scientific circles. Darwin suggested that evolution had simply co-opted existing aesthetic preferences (what sort of food we like etc) and re-purposed them to police social interactions (adding in aesthetic preferences about how we treat each other).
In both a hardware sense and an evolutionary sense, it doesn’t make sense to distinguish between “aesthetic” sentiments and “moral” sentiments. By that insight, Darwin effectively settled meta-ethics.
No, I don’t want people to act rationally, if by that one means solely rationally. Reason alone is never a motivation to act. We act owing to our values. Reason then informs our acts, telling us how to attain what we want, but reason is only ever “a slave to the passions”.
Your comments seem to carry the implication that if one can translate a moral claim into a non-moral claim, then this is a problem for or a refutation of my scheme. I entirely disagree. That is a *feature* of my scheme, and is a necessary feature for any account of morality that actually explains meta-ethics.
Agreed, people *want* moral claims to be rhetorically persuasive, and for most people they *are* more persuasive if they are held to be objective. But that doesn’t mean that that is the case! It just means that humans are being fooled by rhetoric.
Correct!
I wouldn’t say you were wrong! I’d say that you, based on your values, have adopted a Stoic moral code as an encapsulation of your values, and so when you say “X is immoral” you mean “X is out of line with my values”, which is also what I mean by it. 🙂
Agreed. I’m using “error theory” to mean: “people think that there is such a thing as objective morality, but they are wrong to think that”.
I would have thought that I was, but then I’m often told that I’m using philosophical terms wrong. 🙂
Aren’t they?
But you can’t decide what is in your best interests except by reference to your values, to what you want. “Best interests” cannot be some abstract concept unrelated to your prior desires and values.
True!
Yes, as above, I’d regard them as the same thing. Again, in a hardware sense or an evolutionary sense it wouldn’t be possible to make a clear distinction between them.
I’d expect the emotivist would reply “but I am indeed taking the broader view of emotions, and if others wish to join me then great!”.
One thing I’ve learned about philosophers is that they are very keen on micro-splitting of positions into distinct “-isms”. Scientists tend much more to be synthesizers, looking at different positions and saying that the differences are not significant.
So, let me re-state that I regard all emotions, desires, values, passions (calm or ferocious), aesthetic preferences, etc, as much the same thing. They’re all subjective. And the only sort of normativity that exists derives instrumentally from these subjective values. And moral terminology is a report of these preferences. And thinking that moral terminology is more than that, and instead refers to objective values, is an upping of the rhetoric that has no basis in fact.
That’s my scheme in a paragraph. You’re welcome to tell me how to label it! 🙂
Sorry it took me so long to get back to this, but things have been a bit hectic lately. But I definitely wanted to get back to this to talk about realism, so let me start there:
There’s an implication about realism that I think is confusing and causes issues, as it implies that if morals are “real” there’s some kind of object in the world that we can examine in some way and read its properties off to determine what it is, and thus those properties are “objectively true”. If we don’t have a material object for that, then it would have to be some kind of actual immaterial object, which then devolves into a host of philosophical complications. I think we fall into this for two reasons. The first one is the scientistic idea that all knowledge has to be empirical, and so in some way has to be “read” from reality. As I’ve already explained, my notion of conceptual truths clearly shows that this isn’t necessary: there are truths and therefore knowledge that we can come to using logic and definitions, even if we may not be able to do that for all relevant propositions. Second, there’s an overextension of the common line that objective truths are “mind-independent”, which people then take to mean that they can’t be objects in the mind at all, which then forces there to be an object outside of the mind that the truth refers to. However, mind-independent does not refer to location, but in fact to JUSTIFICATION. Does the TRUTH of the proposition depend only on what justifications exist in my own mind, or does it have an independent truth value where I need to have the right justifications to come to the right conclusion?
While I know you slightly disagree with this, consider mathematical truths like 2 + 2 = 4. While you insist that this truth is still “empirical”, what’s relevant here is that we don’t go looking out in the world to find 2, 4, + and = objects whose properties we read to determine the truth of that statement. It’s true because of what the concepts of those things mean, not because of any material or immaterial objects that we’ve observed. I say it’s the same thing for moral claims: we don’t need moral “objects” to have objective moral truths, and realism implies moral objects.
You didn’t link it, and I can’t find the quote again (I found it a month ago, oddly), but you’re talking about NORMS, not NORMATIVITY. To define normativity this way would be to beg the question of whether we can have true normative statements outside of societal approval, and if you go that specific way you end up as a cultural relativist, not an emotivist, which is important because it undercuts your aesthetic argument and your evolutionary one: the statements are true because the society says they are, and the aesthetics and the evolution thereof are irrelevant to the truth of the statements or the nature of normativity/morality itself.
Why would that necessarily be subjective? We can certainly care about what humans desire and value inside an objective moral system while still having there be specific notions about how one determines what is moral or not that are, in fact, objectively true. Again, the big issue with subjectivism has ALWAYS been that a moral statement may be true for one person and not for another, in the sense that for one person it may be true that murder is morally wrong and for another it is false that murder is morally wrong IN THE EXACT SAME CIRCUMSTANCES. So we’re not looking at conditions, but at the moral truths at their base (to forestall a constant confusion that many people make, and that I believe you’ve made in the past: treating the case where the truth of the proposition may change based on the conditions under which it’s evaluated as if it were a case of subjective morality, which it isn’t. It’s only if the proposition’s truth value can reasonably vary by subject when all relevant conditions are accounted for that a morality is subjective).
So the fact that human desires or values are involved doesn’t make it subjective in and of itself, or at least not in a way that matters for this part of the discussion. (Yes, you tend to shift around arguments in ways that make them irrelevant to what we’re actually disagreeing about [grin]).
No, because not all normative claims are moral claims. It’s possible for us to have some objective normative claims while morality still ends up not being objective because it can’t use any of those mechanisms. This is another common confusion you seem to make, where you assume that all normative or value claims are moral claims somehow. But they aren’t (more on that later).
So at a minimum you’re going to have to show that any objectively normative claim would have to be an objectively moral claim, because at a minimum that’s not obvious.
Except that’s NOT how morality actually works in the world. We retreat to laws and the like when we accept that not all people will act morally. We don’t think that someone who doesn’t consider rape immoral but who chooses not to commit it because it is illegal is actually a moral person who is acting morally. So that’s not how it works at all. Now, you will likely maintain that those people who think that way are wrong about that, but this leads to a tension in your position, where you appeal to how people think and act wrt morality when it supports your position, but dismiss all evidence from that when it contradicts it. You can’t have it both ways. Either how we ourselves act on and view morality is relevant, or it isn’t. If it isn’t, then this argument is meaningless and can be dismissed. If it is, then the fact that we do distinguish between morality and these cases undercuts your argument here.
My point there is not to use that as an argument that morality is objective, but is instead to point out that your subjectivist view has consequences that you seem to be dodging. In short, to show that if your opponents accepted your own view of morality your own charges that they are actually acting immorally would not have the weight that you need them to have to work as arguments in the way you’re using them. In short, that if you really bought your own position you’d have to act a lot differently than you do now. And your common response up until now has been to deny that those consequences existed, hence the discussion.
So why call that a moral choice rather than a self-interested one? Why care, then, if rape and murder are considered immoral at all?
In practice, we start from morality and then switch to self-interest if the first is not successful. Under your view, though, there is little reason to not go straight for self-interest and leave morality out of the picture entirely.
Um, you agreed with me and then completely disagreed with me here. “Instrumental” means that the values are justified by appeal to other values. We CAN’T have any instrumental values that we “just have”. Any value that is not justified by an appeal to other values is “intrinsic”. Now, what’s important about intrinsic values is that we don’t usually include values that we just happen to have without any known justification, but instead have them be ones that we can’t reasonably or at least have no reasonable need to justify in any way. Thus, for values, they will either be instrumental and thus be justified by how they help us achieve other values, or will be intrinsic and be not really reasonable to question. Any value that just exists will have to either be justified instrumentally, proven intrinsic, or abandoned in order for the person to have a rational set of values, which you yourself do, at least at times, think is important.
Can you give an example of a value you have that you just have with no justification?
So how come you’re not an Enlightened Egoist or an Objectivist, then? They’re the only moral systems that work that way, and they get there by starting from morality and working to those conclusions, which is not what you do (more on that later).
But your reduction is definitional, not explanatory. Sure, you have to be able to talk about moral terms in ways that aren’t just limited to the technical terms, but that’s done in ways that don’t make the moral terms the non-moral terms by definition, and even includes things like analogies and the like. Even the Objectivists, as I just pointed out, start from a notion of what someone’s concern should be and CONCLUDE that one can calculate the moral thing to do by calculating their own broader self-interest. But you don’t do that. You say that morality JUST IS self-interest. But the realm of self-interest already exists. It’s called pragmatics. So you end up saying that morality just IS pragmatics. Fine. But then it’s reasonable to ask why you still insist on talking about morality and not just switch to talking only about pragmatics. Does morality even exist if it’s “really” pragmatics? And your answer to that has been to essentially argue that calling it morality works out better for you. I’m sure it does, but that’s only because the people you’re arguing with haven’t made the connection you’ve made. If they had, then it obviously wouldn’t do that anymore. So, then, what use morality?
You have never really been able to say what morality is if it isn’t just a misunderstanding of broader or societal self-interest, and that, then, is the issue we’re running into here, because you keep using the term despite having no real definition of it that I can see that would allow you to rationally keep using it as an argument if people accepted your position.
Except, again, it’s not. We have all sorts of distinctions between our various values, including ones about morality and self-interest, and lots of ways to talk about how to organize society and treat people including morals, social norms, laws, rules of etiquette and so on. If you try to collapse them all together you’re still going to have to account for the differences AND explain why they should be considered moral instead of something else.
Except that you’re making the same mistake that determinists like Jerry Coyne make: reducing everything down to its lowest level — it’s all determined! — and ignoring the different consequences that those things have. Sure, all of that stuff may well be “in the brain”. But their activations produce strikingly different behaviour in us. For example, most people have a sharp and, it seems, mostly inherent concept of the distinction between morals and conventions. Psychopaths don’t learn that distinction. If you reduce all of those values to the same thing, do we treat them like morals — where we won’t violate them for any reason — or like conventions, where we violate them when it makes sense to? For example, one convention is not eating in class, which we all accept we can violate during parties and for special events. Do we want to treat it like morals and make it an ironclad rule? Also note that psychopaths fail with them by either treating morals like conventions — that can be violated when convenient — or conventions like morals and claiming, at least, that they can’t be violated. So, in short, psychopaths are, in general, acting according to your explicit argument here if they really are all the same thing. So should we act the same for both sets of values? And if we shouldn’t, then what do you gain by insisting that they are all the same thing, when we have to act differently according to which type of thing they are?
Again, you may be right on one level that they are the same, but they, like our decision-making terms, have different enough consequences that you are almost certainly going to have to re-introduce those distinctions under new terms to have any kind of sensible theory. And, in practice, that’s exactly what you’ve been doing throughout this comment thread.
A few points here:
1) You know that most of the people you’re arguing with don’t accept your position or, at least, don’t see it that way, so this comes across very much like “If they come to the wrong conclusion based on this misunderstanding that I know they’ll make, that’s their problem” which is not intellectually honest.
2) The long history of subjective morality has included the idea that the sorts of moral pronouncements you make aren’t valid and are meaningless, so even someone who knows the history is more likely to conclude that you don’t mean it that way and so that it’s a contradiction in your position at best.
3) You’re using the same terms here, so there is no way for anyone to know that you are referring to the subjectivist position without either diving into your history or else — as has happened, I believe — without turning the position on you and getting met with a “But morality is subjective so I don’t have to care!” response, which again isn’t intellectually honest.
So, no, you are, consciously or no, relying on misunderstandings to make your case and at least by now should be aware that that’s what’s happening. That’s on you, not on them.
But then you have to account for the differences between the two. Most philosophers who argue for that position do so. You don’t really do that, and in fact here, instead of accepting my advice to stop getting involved in that spinning and just be clear, you insist on continuing to uphold the confusion. Again, I have no clue what “aesthetic preferences” actually MEANS to you at this point. At least outlining the properties would avoid the issues over labeling.
1) That it was suggested by a non-philosopher doesn’t mean it’s not hare-brained, no matter how good a scientist he is. That’s like taking a famous philosopher’s idea of quantum mechanics and insisting that it’s reasonable because he said it. He might be right, but we really want the physics community to sign off on it first, since it’s in their domain. The same thing applies here.
2) I don’t believe that this is the mainstream scientific consensus.
3) Even if it was, that doesn’t really matter since, again, this is the domain of philosophy.
4) I haven’t read Darwin’s theory here — so you’d need to fill it out in much more detail for me to accept it — but it seems to me from what you said that moral claims co-opted the same reward/punishment structure that we get from aesthetic preferences — ie feeling good or feeling bad — to enforce moral norms. That’s not in any way controversial and is probably true, but doesn’t make them the same thing.
5) From an evolutionary sense, it clearly DOES make sense to distinguish aesthetics from morals because they cause us to behave differently, and thus have different survival criteria, and thus exist and aim at completely different things.
6) From a mechanical standpoint, again they produce different behaviours and so HAVE to be mechanically distinguished.
7) And we know that aesthetic preferences can be maladaptive, so we still have to ask if our moral senses are appropriately configured to track social consequences, which requires a notion of morality external to our evolved senses, and thus it can’t be reduced to a simple aesthetic sense. And note that it actually seems to be nonsensical to ask that for things like food preferences, which adds another distinction to the mix.
And this is if we accept that morality is determined by our evolved moral senses, which most objectivist moralist philosophers will deny because it makes morality descriptive and not prescriptive, and thus not normative.
So, no, it doesn’t settle meta-ethics at all.
The point here was that you insist that people act “rationally” BY YOUR OWN STANDARDS right up until the point that I insist that doing something would be irrational, at which point you claim that it’s okay to be irrational. NEITHER of us meant “without appealing to values” here, so this is utterly irrelevant to the discussion. You want people to act rationally but are willing to accept utter irrationality by your own standards with “Well, we aren’t always rational!” when I point out that acting rationally and consistently would imply an outcome you don’t like. If your response was to appeal to a specific value that justifies it, that would be one thing, but instead you simply appeal to inconsistencies or to their just happening to have a value and try to have that end the question. But it’s still an inconsistency in your position, because if you can allow others to reject the consequences of your moral position due to irrationality, then they can do it just as reasonably for scientism or atheism, which you are loath to let them get away with.
As I said above, you don’t translate, you eliminatively reduce. That means that if your position is held morality is eliminated, like eliminative materialism does for mind. Is that what you’re going for? Because you keep rejecting the consequences of that if you are, but all of your arguments always end up as “There is no morality, there is only X and we are confused if we think that there is a morality beyond X”, which is eliminative.
That’s not what I said. I said we want to make moral claims to get people to act against their self-interest, broader or not. By your definition, that’s not possible, which then is a radical shift in what morality is and in how we’d act towards it.
Except that’s NOT what I mean by that. I mean exactly what I said: it violates my personal Stoic moral code. So I don’t mean what you say I mean. Am I wrong? What objective evidence can you bring to bear to show that I’m wrong about what I really mean when I say that?
Again, Error Theory does not refer to people making an “error” about what morality is. Even objectivists will insist that others are making errors about what morality is. It instead claims that we are in error to think that moral claims and propositions have any real meaning at all. So please don’t call yourself an error theorist if you think that the terms DO have meaning.
On moral skepticism, moral skeptical claims are skeptical that anything like morality exists. You keep insisting that it does even when I point out that you seem to be eliminating it, and emotivists think that morality does exist but is strongly linked to/determined by emotional mechanisms.
Except that, again, they produce completely different behaviours and so are different both evolutionarily and mechanically.
But doing so doesn’t end the disagreement. The rationalist was never advocating against the calm passions, but only the strong ones. So lumping all of those together in “passions” simply puts the two of them together, but then the rationalist will simply point out that if “passions” are required, passions like anger, joy, hate, love are NOT required, still produce bad behaviours, and can be managed. Thus, the combination does absolutely nothing to settle the argument UNLESS the emotivist plays a dirty trick and insists that all passions just are emotions and so the rationalist HAS to oppose the calm passions as well or be inconsistent, which any smart rationalist will refuse to play along with. So in any actual reasonable intellectual debate, this move does nothing except confuse terms. The old issues still come up again and again.
No, both scientists and philosophers divide up theories for precisely the same reason: the theories have different consequences that, importantly, mean that they have different strengths and weaknesses and thus can be proven or disproven differently. In general, theories get lumped together because they have a similar enough core that they stand and fall together on certain specific key points. Thus, as just shown, trying to lump them back together only reintroduces those distinctions, because attempts to refute those theories will only refute some of them and not the others. No, philosophers don’t just classify theories for the heck of it [grin]. They do it because the differences ARE significant.
I can’t, because it’s a mishmash of a number of different philosophical fields that aren’t all directly related to morality [grin].
The first part is a rough idea of what values themselves are, which is not morality per se, and relates to it only if you include moral values in the same category (as you seem to). It’s also problematic, because as shown above it’s true in some sense but false in others, and importantly false in what we’d need to consider for morality (the consequences of having that value as a moral value). Second, that those are subjective is definitely true in some sense, but that’s again a theory about values. What we’re interested in is not what values someone has, but what values they OUGHT to have, and that question is still reasonable under this view of values. Your comment about normativity is a theory on normativity, but since you don’t define what normativity actually means to you — and this one contradicts your definition at the start of the comment, since that one was about societal norms and this one is clearly individual — you don’t really have that. At best you could be arguing that all oughts only follow from the values someone has, but this is a stronger position than my own personal “I can only act on values I actually have” position, yet it seems to get you no further. That moral terminology is a report of these preferences is at least a direct link to morality, but seems problematic, since all of our evolved and mechanical mechanisms for values in general and morality in particular distinguish between values and specifically between moral values, so moral terminology doesn’t in general refer to ALL of those values, only a subset of them.
Let me outline why it would be hard to determine your actual position here:
You probably aren’t an objectivist, but note that appeals to evolution CAN be objectivist positions, because they refer to the truth of a statement outside of an individual or individual grouping. A person can be wrong about morality under that view if they go against the evolved justifications for morality.
You want to be a subjectivist here, and you probably hold some variant of that, although again your appeals to evolution undercut that a bit.
You aren’t likely a moral skeptic, because you think that moral terminology refers to a specific real thing. You could only get there if you wanted to be eliminativist about moral terminology, which here you don’t claim to be.
You don’t really seem to be an emotivist, because your view applies no matter how values are implemented. An AI with no actual emotions but with hard-coded “values” would work just as well, and so if you just used how morality and moral values work for us that would be descriptive of us, but the term morality should apply to anything that has the capacity to be moral, which in your case means anything that has the capacity to have values. If you define values as having to involve emotions, then you’d lean emotivist, but that’s still not treating emotions as critically as emotivists generally do.
The issue here is that like many amateur philosophers you’ve picked up a number of ideas from a range of positions and put them together into something that makes sense to you. This isn’t a bad thing; sometimes progress can be made by seeing that these elements aren’t as incompatible as they might seem. However, you still try to use the labels and DO ignore that sometimes they have consequences that are incompatible, or at least simply toss those aside, which results in a very confusing position and one that seems to shift in response to arguments. That makes it hard to assess in any interesting way.
Is it possible that you’re right? Maybe, but your view is too piecemeal to be convincing.
Hi verbose,
Thanks for the explanation of the difference between moral “realism” and moral “objectivism”. One thing I’ve learned is that terms and concepts are harder to define in philosophy than physics. Even academic philosophers might interpret commonly-used philosophical terms differently, whereas this is not really the case in science. I’m presuming that this is because scientific terms tend to be defined in terms of observables.
To me, the concept that “truth values” can vary is incoherent. If a “truth value” can vary according to subjective opinion then it isn’t a “truth value”. This is why relativism makes no sense to me (interpreting relativism as: “there are moral truth values; the values are different for different people”).
I’d interpret subjectivism as: “there are no moral truth values; all there are are reports of people’s values”.
In discussing how morality works in the real world we have to distinguish between the moral-objectivist rhetoric (given that most people are moral objectivists and so pursue a discussion in those terms), and the underlying reality, which de facto is subjectivist because that’s all there is.
I fully grant that you can readily point to moral-objectivist rhetoric, and claim that “how morality works” is not subjectivist, but I still claim that underlying the rhetoric it *is* subjectivist.
There you are pointing to moral-objectivist rhetoric. But, as I’ve said, part of my stance is that the moral-objectivist rhetoric is an incorrect veneer that doesn’t tell us about the underlying reality.
You are entirely right. Most people do indeed find moral-objectivist rhetoric more convincing, and yes, you are entirely right, if they adopted my subjectivist view they might indeed find moral advocacy far less convincing. Moral advocacy arguments might indeed not then work, or not nearly as well.
None of that is, however, an argument against subjectivism being the fact of the matter. My argument is that evolution has fooled us into being moral objectivists precisely because we then find moral-objectivist rhetoric more persuasive.
No, I’ve not denied those consequences. I fully accept that accepting subjectivism can make moral rhetoric less convincing. Again, the idea that objectivism prevails not because it is correct, but because it is rhetorically persuasive, is at the heart of my position.
What I have denied is that the persuasiveness would fall to zero given subjectivism. It wouldn’t; it would be lessened, yes, but not to zero. Because, de facto, human beings do care about other people’s subjective opinions!
If a 16-yr-old boy tells a girl in his class that she is ugly, then the girl cares. If someone tells another human that their actions are deplorable then, even if everyone accepts that this is a subjective opinion, they still care. They might care to a lesser extent than if they’re told that their actions are objectively immoral, but it is not true that they don’t care at all.
One could indeed drop “moral” language and just talk about our values and what sort of society we want to live in. Is there a rational justification for adopting moral language? No, there isn’t. But we adopt it because it is rhetorically more persuasive (and it is so because most people are intuitive moral objectivists).
Yes, agreed.
Yes, agreed, that’s what I meant by my comment. Some of our values are justified in terms of other values; some of our values are just innate and intrinsic.
I don’t think there is any such thing as a “rational set of values”, if by that we mean values arrived at purely by reason.
In the end it comes down to our innate, intrinsic values, that we have because of our nature, as programmed into our genes by evolution. Everything about morality then rests on those.
An example is a mother’s love for her child. It is primary, it is not derived from reasoning, nor is it instrumental on anything else. (Of course we can give reasons why evolution has programmed us to be like that.)
Which is why I’m not an Objectivist, since I deny that there is any such thing as “what someone’s concern should be” as a starting point. Any account that attempts a rational analysis from axioms, resulting in what we “should do”, is misguided.
No, I’m not saying that. What we call “morality” is our feelings about how we interact with each other. You can’t say more than that.
No, morality does not exist — as anything other than a word we use to discuss our feelings about how we treat each other.
Morality isn’t anything! The problem here is that you, being a moral objectivist, start from the presumption that morality must be something, there must be some objective reality to it, so you try to interpret my position in those terms.
But my whole position is that there isn’t anything such. We have — programmed into us by evolution — feelings about how we, as social animals, interact with each other. That’s all there is to it. From there, “moral” is just a term of approval. If we say a sunset is “beautiful” then we’re expressing approval and aesthetic appreciation. If we say an act is “moral” then we’re expressing approval and aesthetic appreciation. If we say it’s “immoral” then we are expressing disapproval and a preference that people don’t act that way.
That’s all there is to it.
We use different aesthetic terms regarding a sunset or painting (“beautiful”) than we do about a meal (“delicious”). You won’t usually call a sunset “delicious”. But, still, the terms are just different forms of aesthetic appreciation. But, having different terms for different types of aesthetic appreciation is useful, so that’s what we do.
There are no right or wrong answers to such questions! Feel free to advocate which of those you prefer, and to seek agreement with others on them.
My account of the differences between “moral” terms and other forms of aesthetic expression would be similar to my account of why we use “beautiful” for a sunset but “delicious” for a meal. That distinction would be about one being primarily visual and the other being primarily about the sense of taste. In philosophical terms, the distinction doesn’t amount to much that is important. If we used the terms the other way round, people would understand.
Similarly, we tend to use “moral” language about how people treat each other, whereas we use other aesthetic terms about other things.
Things I like versus dislike.
But my whole point is that there are no substantive differences! You’re asking me to make clear distinctions, when my argument is that there are no clear distinctions!
I didn’t mean that it can’t be hare-brained if it was by a scientist, I just meant that it’s an idea with history and pedigree.
I don’t agree. I think the scientific perspective on this is the better way of thinking about it, and gets to the truth of the matter better.
Visual aesthetics (“beautiful”) and taste aesthetics (“delicious”) are also different in similar ways. But, the most important point is that they’re both different forms of aesthetic judgement.
That’s not a reason for denying it. Maybe the truth of the matter is that morality is indeed not prescriptive, and thus not normative?
Yes. Or, rather, I’m eliminating it as anything other than a variation of our aesthetic systems. To me, explaining moral senses as a subset of our aesthetic senses is an explanation of morality. The only thing it eliminates is realist/objective morality.
Yes it is possible to persuade people to act against their self-interest. And no I have not defined morality as “self interest”.
Well, aesthetic preferences certainly exist, and expressions of those preferences exist. Since that’s what morality “is” it exists, but it doesn’t exist in any form other than that.
OK, so philosophy has no “ism” label for the stance I’m describing. Which, perhaps, is not that surprising since it’s an account of morality coming from science, not from philosophy.
Yes, I am.
Well not really, no. I’ve not picked up these positions from philosophy. I’ve arrived at them from the direction of science. All of these discussions convince me that that’s a better way of doing it.
OK, guilty to some extent. I have indeed tried to use philosophical terms for such things when discussing with those from a philosophical background. (If I was talking purely to scientists I wouldn’t.) As a result I may be mis-using the philosophical language. But then one has to communicate somehow!
It only appears piecemeal to you because you try to map it to philosophical concepts and isms. It may be that those are conceptualised in ways that make it hard to get at the truth. Viewed from a scientific perspective, my stance is straightforward and pretty clear (and has persuaded many within science).
Again, life considerations got in the way, so I’m replying to this now. I’m going to start with the key issue, which is communication or, rather, the lack of it:
Look, my first degree is in science (Computer Science), during which I took physics in first year and astrophysics in second year. I also started a Cognitive Science degree, which is interdisciplinary. I’ve specifically taken psychology courses AND have had lots of exposure to neuroscience both in my Cognitive Science degree and in my philosophy degree. I’m more than capable of understanding the scientific perspective, and so if I can’t understand what you’re saying that should be seen as a HUGE problem for you. The fact that I moved from “I think you’re wrong” to “I have no idea what you’re talking about” should be a MASSIVE concern for you, not something to be dismissed as “You’re just being too philosophical” or with the implied “I use the philosophical terms incorrectly because you’re too ignorant of science to understand the science!”. If anyone can understand your position and even place it properly in the philosophical contexts, I’m one of them. But I need you to help me out here. So if it’s clearer from the scientific side, why not outline it from that perspective? Unless you think I’m too stupid or too ignorant to get it …
Minor quibble:
1) In this latest comment, you ASKED me to try to do that, so I did. Using that against me here is therefore rather dirty pool.
2) YOU are the one who started dropping in philosophical concepts and isms, not me. And in fact, in a point I’ll return to later, I explicitly ASKED you to stop doing that and just outline it as properties and details, and you said you couldn’t. So again, don’t blame me for that when it’s YOU who tries to stick to that much of the time.
3) Philosophy is explicitly about examining how things are conceptualized. If it’s conceptualized incorrectly, philosophers are the ones who are the most experienced in finding the problems with conceptualizations (scientists outside of theoretical physics can usually let empirical evidence determine the right conceptualization, while that’s usually not true for philosophy). So, again, if they are getting in the way of getting at the truth then you should be able to explain that to me and we should be able to work through that together. That I’ve gone from thinking that your view was flawed to thinking that it’s nigh nonsensical reflects my not being able to UNDERSTAND your conceptualization in any way. But I’m trained to understand differing conceptualizations and disagree with them instead of being confused by them. Something is wrong if at this point I can’t even find a coherent conceptualization to assess.
Okay, but let me take a stab at what I think is the basics of your position, a starting point that we can correct and build on:
You think that everything eventually relies on our values. Since the set of values we have is always personal, that makes anything that relies on values subjective. Morality critically depends on values, so morality is critically subjective and can never be objective. Also, no universal overarching value can be defined, and so we can have no “value axiom” that all must accept and then derive what they ought to do from that. What someone ought to do is always derived from their own personal values, including whatever value or values that they most value.
So, some questions that you need to answer. Yes, you need to answer ALL of them (you tend to skip things you don’t want to address at times, which is really annoying. More on that later).
1) If I take someone’s full set of values, can I identify that some values are moral values and some are not, or are all values merely undifferentiated values?
2) Can I judge someone’s actions or values as rational or irrational if I am careful to limit that assessment to only what follows from the values they hold, including those values that they themselves deem most valuable?
3) Do you accept that beliefs matter, so that I can call someone’s actions or values irrational if they take those actions or hold those values based on false factual beliefs about the world?
4) If someone holds a value but doesn’t particularly feel emotional attachment to it, but acts on it, are they still acting in accordance with your theory or is that lack of emotional attachment sufficient to say that they don’t possess that value at all?
I’m leaving out the “aesthetic preferences” part because for the basics we don’t really need it, but I’ll touch on that later and if you can be clear about your view we might be able to put it back in later.
Moving on (in order this time):
Not really. I don’t find philosophy any worse at that than science, and remember I have experience with both. The issue is one of context: both, when talking about things in detail, often assume that readers will understand the context and history and so won’t need the details spelled out. For example, with realism, those who think that you need actual objects for realism and for objectivity often won’t mention that, and will just assume that everyone understands that that’s what they mean. Since that’s a pretty common view right now, it makes more sense for me to deny it when it is relevant, because I have the eccentric position of denying that direct link. That being said, it IS a long-held and acceptable position in philosophy, so I’m not just being a crank either [grin].
You lack the philosophical context for a lot of things, and so don’t pick up those subtle distinctions, which then corrupts your position when you try to place it in the philosophical context.
But “truth values” aren’t independent things. They are always associated with propositions. So what subjectivists are saying is that the truth value of the proposition “X is immoral” depends entirely on what the person THINKS is immoral, as I said. You need to dereference it to the individual’s moral beliefs first. Objectivists say that you don’t need to do that, and that the truth value is entirely independent of what the individual believes it is. That’s the heart of that debate.
To give an example, it’s the difference between “I’m cold” and “It’s cold outside”. Even if a person doesn’t believe it’s cold outside, it’s still cold outside; and if someone says that they’re cold, that can be true even if it’s 30 degrees Celsius where they are. Subjectivists hold that moral claims work like “I’m cold”; Objectivists hold that they work like “It’s cold outside”.
Which isn’t how anyone means it, so interpreting it that way is NOT going to help with communication. In fact, that would be a case that’s ALMOST actually error theory, and so would be a better defense when you’re asked if you’re an error theorist than “I think they make mistakes about morality” [grin].
Yeah, but YOU were the one trying to defend your position by explicitly appealing to how it works in the real world, not me, and so its not being that way in the real world HAS to be a problem for you. My frustration with your attempts to use empirical evidence has been that, in practice, your appeals to our having incorrect views about morality, and thus to which empirical evidence must be accepted or can be ignored, have lined up precisely with your theory: you ignore all empirical evidence that contradicts it and accept any that supports it. But no one — not even scientists — will let you get away with that unless they accept your theory first, which means that you have to demonstrate it to them. If you want to do that using the empirical evidence, you are not going to be able to appeal to your theory to dismiss the evidence that contradicts you. And that’s all I’ve ever seen from you here. So if you have other scientific or philosophical ways to demonstrate your theory or at least make those distinctions, I’m all ears. But I’m not going to let you insist on using how we view morality when it supports your case and yet dismiss it when it doesn’t. At the very least, you’re going to have to consider all the evidence and show that your view is a better explanation than the alternatives. Which has to include, obviously, my “conceptual truth” answer.
But the question is not “Do they care?” but is instead “SHOULD they care?”, because we know that by any criteria there are times when they shouldn’t. For your 16-yr-old girl case, if someone is saying that because they want to hurt her and are lying about finding her ugly, then she DEFINITELY shouldn’t care about specifically being called ugly or consider herself such, which is what would bother her (normally). I submit that under your view of morality, the same thing holds for moral claims: you’re saying something that is at a minimum not true (because you can’t actually say that the other person is immoral under your view in a way that has a relevant truth value). So, if they care, it’s not that they are immoral, just as for the girl it’s not that they are ugly. Instead, it’s the implications of their SAYING it that would make them care. For the girl, it’s about why that person is trying to hurt her feelings. For the person accused of immorality, it’s about that person thinking less of them because of their “immorality”. But at this point, again, the claim itself is irrelevant.
Thus, simply saying “I strongly dislike what you did there” would be sufficient if morality doesn’t have differing properties that we need to consider, and you have not shown that it does. And then I can decide if I care that you dislike what I do or not. Adding in morality is doing the same thing as adding in ugly in the girl example: it’s a lie aimed specifically at bothering the person more than it would if you simply told the truth. And in both cases, then, it would be completely dishonest and akin to bullying.
So explain to me why doing this doesn’t make you a completely dishonest bully, something that you probably shouldn’t want to see happen in any reasonable society that you’d want to live in. I’m not tossing this out as an attempted “Gotcha!” or rhetorical flourish, so please do answer this. Why is this not unacceptably dishonest behaviour?
I actually just mean “logically consistent”, as I think should have been obvious. And I still don’t know if you accept that as a desirable goal either, so please answer that original question.
You realize that we can override those with values that aren’t programmed by evolution, right? Like the sweet tooth, for example? If I override those values, am I still moral?
Fair enough, although again that’s a value that can be overridden … and most moralities will insist needs to be overridden at times.
And this leads to another question: if a woman was born without that value, is it still possible for her to be a moral person? If she deliberately overrides it in order to be in accordance with what she thinks is moral, is she moral or immoral?
So if someone decides that their main concern is their own self-destruction, or the self-interest of others, is that something that still works for you? Are you defining their self-interest entirely in terms of whatever goals they have, even if those goals explicitly reject it? Or is “self-interest” an irrelevant side-track that you got sent down while trying to figure out how to convince others to do the things you wanted them to do while still being able to appeal to moral language, because that actually DOES convince some people in and of itself?
Except I can have utterly amoral feelings that are nevertheless relevant there. Your stance would invalidate the entire moral/conventional distinction in psychology, which has LOTS of empirical support. The group that acts as if there is no such distinction are psychopaths. Psychopaths are not anyone’s best example of proper morality, and I suspect that even you don’t consider them such. So how do you say what you just said while either still preserving the moral/conventional distinction or else accepting that psychopaths get morality right because they collapse it?
Note that here I’m not going to jump on you if your answer is that that short comment was imprecise. What I’d be after here is just to understand if you think there IS any distinction and/or making it clear that there are consequences if you don’t think there is.
Nope, as should have been clear by my REJECTION OF MORAL REALISM above. But the word “morality” has some kind of meaning. There’s something you mean when you talk about that word. And I want to know what that means to you. I can’t accept that you’re right about “morality” if I have no idea what that word means when you say it. Even if what you mean by it IS nothing, I need to know what you mean by the term that leads you to that conclusion. And this is not helped by the fact THAT YOU KEEP USING THE WORD. If you can’t tell me what the word means, how am I ever going to figure out what you mean by it when you use it?
So this rant here is invalid. You mean something by the word, and you ought to be able to tell me what that is. I guessed that you meant something like broader self-interest, and you bit my head off for daring to guess at the meaning of a purportedly meaningless word. Fine. I ain’t guessin’ anymore. YOU tell me what you mean by it.
So, are moral approval and aesthetic approval the same thing? Are moral approval and pragmatic approval the exact same thing? If not, what makes them different? If they are the exact same thing, then how is having different terms for them not the same sort of potential confusion, at least, as having the terms Evening Star and Morning Star for the planet Venus, as it implies that they are different things? And if you accept that that confusion would exist, why do you insist on continuing to use the term “moral”, not just when talking to others but even when explaining your theory?
Which we do because they have differences that are relevant in some circumstances. So if the same thing is true for “moral”, then what are those differences? And if you explain those differences, then like I would for visual aesthetics and taste aesthetics I’ll actually understand what you’re talking about when you talk about moral aesthetic judgements. So why are you so adamant about not explaining or ever talking about them? What problem are you trying to avoid by refusing to mention them?
But we and evolution have already done that. You lost. So why do we have to argue for those things that most of us already accept about morality? You know, those things that you tell us we’re wrong about? Either there are right answers and you can say we’re wrong, or there aren’t any right answers and we can only go by what most people accept, which is NOT what you accept.
The TERMS, yes, but not the properties. Those sorts of properties are what I asked you to provide for moral aesthetics. If you couldn’t provide them for visual or taste aesthetics, I’d rightly say you didn’t know what those things were. I say the same thing is true for moral aesthetics.
And no, I’m NOT accepting “About people” as an answer, for reasons I outlined above [grin].
Let me outline some more empirical evidence here: autistics don’t start with a natural way to distinguish the moral and conventional. However, they learn the difference, usually through reasoning. This implies that these sorts of distinctions really DO exist, at least in how most people approach the world, but this works against you claiming that morals are just feelings about other people.
Which is not how anyone uses the term, and so is horribly confusing. Moreover, saying this ignores all of my previous arguments about how that doesn’t work, such as that I can dislike things that are beautiful, which makes it at best an oversimplification, and you have never addressed the idea that the link between the aesthetic and likes/dislikes is causal, not by definition, where I like beautiful things BECAUSE they’re beautiful, and not because they’re really the same thing. And I think the science is DEFINITELY on my side with that last one.
No, I’m asking you to outline the specific properties that you need to make your case for your theory without branding it as anything first. So, if we were talking about trees, to outline the properties of trees that matter for your argument — tall, plants, etc. — without ever calling them trees. If you can’t do that, then I question whether you know what you’re talking about here.
It’s not about distinctions. I don’t care about distinctions here, or ANY of the terms. What properties are important to your argument, even IF all of them have them?
You realize that this was an entire progressive argument, right, and so you can’t just skip over steps and have things make sense, right?
Here, even if it were the scientific consensus — which I am skeptical of — the fact that it’s the scientific consensus is not reasonably going to get philosophers to drop their consensus and replace it with that one. It needs to be argued for.
To a blind person, the differences between visual and taste aesthetics are more important. It depends on the context. And you haven’t even given me the differences so I can decide IF the differences matter, and you refuse to do so. Since I keep asking for them because I think they exist and I need them to stop being confused, simply saying they aren’t important is not exactly helpful [grin].
Perhaps, but again you’re going to have to argue that. The point was that for your argument to hold that presumption had to be accepted, and your opponents were not going to accept it as a presumption and so weren’t going to go along with you. Thus, the argument would be a non-starter unless you can ESTABLISH your presumption (at which point it wouldn’t be a presumption anymore [grin]).
I’ll leave the rest because it needs clarification and hopefully things will make more sense later. But I DO hope you can understand my frustration here.
Hi verbose,
Sorry! I’m trying my best to communicate! If you’re not understanding me then perhaps I’m bad at explaining. As I see it my stance is very simple. Here it is in a nutshell:
Animals evolved aesthetic preferences. We evolved to like things that are beneficial (e.g. nutritious food) and dislike things that are harmful (e.g. rotten meat). Animals that developed a highly social way of life then needed to evolve aesthetic preferences about how they interact with each other. Thus we like some ways of interacting (e.g. loyalty, generosity) and dislike others (e.g. cheating, stealing). To express such value judgements we use the term “moral” for things we like and “immoral” for things we dislike. That’s all there is to it. Lastly, if anyone thinks that there is more to morality than subjective feeling, and that by moral language they are expressing objective truths, then they are wrong.
That’s my account of meta-ethics, and it seems to me straightforward and simple to communicate.
How does the above do?
But I also think that my position doesn’t map that well onto the philosophical way of conceptualising the issue. That’s because, it seems to me, much philosophical conceptualising about morality presumes moral objectivism (which, given the above, is wrong), and even the “subjectivist” accounts mistakenly presume some elements of objectivism.
So turning to your summary of my position:
Yes. Or, rather, everything about morality *is* our values. There is no distinct thing, “morality”, that then “relies on” our values. There are only our values (and the language we use to express them).
Yes.
Yes, though again morality *is* our values, not something that “depends on” our values. Moral language is an expression of our values.
Correct. To me the idea of a non-subjective value is a category error: the idea of a value without a person doing the valuing doesn’t make sense.
Yes, “oughts” are always instrumental, deriving from our values.
No, one cannot identify any clear demarcation between “moral” values and other values. They are all just values. Of course it is useful for humans to have different words for different types of value, just as we use somewhat different aesthetic language for what food tastes like than for admiring a sunset or enjoying companionship, but these will be fuzzy-edged human-made categories that are not necessarily fully consistent.
Yes, someone’s values can be inconsistent or irrational or contradictory, so you can indeed make judgements that someone’s actions or values are rational or irrational.
Yes on both counts.
If they have *zero* emotional attachment to a “value”, then more or less by definition it is not one of their values. But one can have weak values, weakly held preferences. (For example, I might prefer milk in coffee, but it’s a weak preference, and I’ll readily drink black coffee.)
Except that, to me, moral sentiments *are* aesthetic preferences, they are the same thing — where that statement is totally literal, it’s not an analogy.
I would say that a moral *relativist* would say that, but a subjectivist would not necessarily do so. In saying that I may be using the terms wrong, so let me avoid the -ist language:
What I would say is that “X is immoral” is not a proposition and so does not have a truth value. It is, instead, a declaration of how the speaker feels about something. “X is immoral” amounts to “I dislike X”.
The propositions: “The speaker dislikes X”, “The speaker considers X to be immoral”, and “In the speaker’s value system X is deprecated and so labelled immoral” all do have truth values (and they all mean much the same thing). But those propositions are all descriptive and are all describing the speaker and the speaker’s value system.
But I’m genuinely not aware of any empirical evidence that contradicts me! By which I mean contradicts the claim that a declaration “X is immoral” amounts to “I dislike X”.
Of course there is plenty of evidence that people *think* there is more to it than that (most people are intuitive moral objectivists), but I’m not aware of any evidence that there *is* anything more to it than that.
First, the fact that people *do* care is sufficient for us to have evolved moral sentiments, and for morality to operate in human society. Evolution depends on what does actually happen (not on abstract notions of what “should” happen), so if people *do* care and behave accordingly then that is sufficient.
Second, in my stance, all “shoulds” are instrumental, and thus any “should they care?” question has to be properly phrased in terms of what people’s aims and values are, what they are trying to achieve.
If someone’s aim and desire is to be popular and socially successful, then yes they should care about how others see them, even if those judgements by others are subjective and capricious.
OK, agreed, but she should care that at least one person in her class wants to hurt her, and she should care that he might influence others in the class, et cetera.
But if 90% of society think “X is immoral”, by which they mean “we dislike X”, then yes you should care and may want to avoid doing X, even if you don’t see anything wrong with X. That’s because doing something that others dislike might make you unpopular and could get you punished. If you’re concerned with your social standing and well-being, you do need to care about such things!
All along, you, as a moral objectivist, are trying to analyse my position and find some objective, rational reason why a moral code has any force and why you “should” care about it.
The answer is, because other people’s moral codes — other people’s feelings — will affect how they act towards you, and that can matter to you. That’s why moral sentiments evolved, even though all aspects of it are entirely subjective.
Yes! Yes! Yes!
Except that the “moral” language is people telling you how they feel, and how other people feel does matter to members of a social species. [You’re right that any supposed *objective* content of the message does not matter, and indeed is not actually present.]
Yes! Saying “that is immoral” is ENTIRELY EQUIVALENT to saying “I strongly dislike what you did there”.
Adding in moral language is only adding in something extra if one interprets it as having additional objective content. It’s better to understand it as simply the wording we use to convey certain types of attitudes.
Once one does interpret the phrase: “that is immoral” as being entirely equivalent to saying “I strongly dislike what you did there”, then there is nothing being added.
But you’re right, one could avoid confusion (avoiding any moral-objectivist connotations) by dropping moral vocabulary entirely, and simply using (other) aesthetic vocabulary.
The problem for someone like me is that most people are moral objectivists and thus moral language is likely to be interpreted as moral objectivist. But I have to use the same language as everyone else. It would be a huge distraction if, whenever I was advocating positions on free speech or circumcision or whatever, I first had to get bogged down in a whole side-track about meta-ethics.
The reason I don’t consider this dishonest is because I have been entirely explicit about what I mean by the moral language that I use. Further, this “emotivist” interpretation of moral language has a legitimate standing, being advocated by notable scientists and philosophers. Thus, if people aren’t aware that moral language can be intended in this way then that’s not fully my fault!
Given that my interpretation of moral language is both explicit and defendable, I don’t think it’s dishonest to not raise the issue unless I’m specifically discussing meta-ethics.
Is being logically consistent a desirable goal? Some of the time it is, yes! Such a question can only be answered in terms of prior goals and aims. If we want a functioning iPhone, then logical consistency of the design would indeed be desirable.
But suppose we wanted to be socially successful. Regarding oneself as more knowledgeable, likeable and capable than one really is might lead one to act in ways that lead to greater social success, even if the claim to being knowledgeable, likeable and capable is inconsistent with all the evidence.
The set of attitudes and values in our brains is likely not fully consistent, and it would be wrong to assume that it would be better for us if they were self-consistent. Humans are prone to cognitive biases, and it may be that having some cognitive biases is evolutionarily favoured.
Yes, we can override some of our values — but only in the cause of other values. We can deny ourselves sweet foods that we crave, but only if we desire good health and not being fat.
Your question is a moral objectivist one, that presumes that there is such a thing as “being moral”. All there are are value judgments. So the question can only be along the lines: “If one denied oneself sweet foods, would one regard oneself as more virtuous?”, or would you or others regard you so? Which is the same as asking “would you or others approve of you denying yourself sweet foods in order to stay healthy and slim?”. (To which the answer is “some people would, some would not”.)
Your question is a moral objectivist one, that presumes that there is such a thing as being “a moral person”. All there are are value judgments. So the only question would be: “would people regard someone who lacked Value A as being moral”, which is equivalent to: “would people approve or disapprove of the value system of someone who lacked Value A?”.
To which the answer is that most people would want a mother to love her children, and would disapprove if she didn’t (though they might have compassion and pity for her).
Again, that would only be a sensible question to ask if morality were objective. Up above you suggest that I’m not succeeding in communicating. As I see it, you are repeatedly misunderstanding me because you, as a moral objectivist, are simply not conceiving of a properly emotivist account of morals, under which such questions are not properly posed questions. Thus you repeatedly try to map my scheme onto a moral objectivist framework.
I actually understand that! For the first half of my life I was a moral objectivist, and it took me years to train myself not to think like that. Moral objectivism really is so intuitive that it is hard not to automatically think like that.
I don’t understand the question! Again, are you trying to ask whether they “should” have those concerns?
I’m not defining “their main goal” at all! Are you asking about that because you are looking for some anchoring point of my system? There isn’t anything such!
We all have a whole set of values (not necessarily self-consistent ones). Those values can change, being influenced by our other values, by other people, by updated knowledge of the world. But, at any one time we will have a set of values and that will lead us to making value judgements.
Agreed.
Psychopaths lack some sorts of sympathy for others that are present in most of us. So their value system is different. That means there is a real difference between them and most humans.
I agree that there is a moral/conventional distinction in how people THINK about morals. That’s the same thing as saying that most people are moral objectivists. And yes they are, that is indeed how most people think.
I don’t know whether psychopaths are moral objectivists or not. If they’re not then yes, they have got *that* right. That doesn’t alter the fact that the main difference is that they lack sympathy for others that most people have.
[And to head off a possible reply, I don’t think they lack sympathy for others *because* of reasoning from their meta-ethics, they have a different set of values simply because that’s how they are made; they lack certain normal values in the same way that people can be born lacking normal kidneys or a normal immune system.]
To me “morality” is a subset of “aesthetics” (quite literally). Moral language is expressions of approval or disapproval in the same way that so are the terms “delicious”, “yucky”, “beautiful” and “ugly”.
Yes. (Though, as humans use the words, they’re usually applied to different subject matter.)
“Moral approval” would mean “it accords with my values”. “Pragmatic approval” means “it isn’t *directly* approved of given my values, but it will *indirectly* achieve things I want so I will accept it instrumentally”.
Having different terms for them does indeed lead to confusion! And yes, realising that the Evening Star and the Morning Star were both manifestations of the same object was an intellectual advance. In a similar way, realising that electricity and magnetism were both manifestations of the same underlying electromagnetic force was a great advance.
Realising that moral sentiments and aesthetic sentiments are just different manifestations of the same thing would be a similar advance. Indeed it was proposed by Darwin in Descent of Man, so it’s about time the idea caught on! 🙂
Well if I’m trying to explain what “morality” is then I need to use the terms as part of the explanation. But yes, one could indeed replace moral language with (other) aesthetic language, such as “like” and “dislike”.
I thought I had explained! Moral language is the subset of aesthetic language we use for approval or disapproval of how people treat each other. E.g.:
Approval/enjoyment of the taste of food = “delicious”
Approval/enjoyment of the look of a sunset = “beautiful”
Parental approval/enjoyment of interaction with a child = “love”
Approval of how Fred acts towards others (being generous, loyal) = “moral”
Disapproval of how Sue acts towards others (lying, cheating, stealing) = “immoral”.
All of these are fuzzy-edged categories, since there is no clearly demarcated fact of the matter as to sub-categories of aesthetic sentiments.
There is no need for people to “go by what most people accept”. They can dislike the majority opinion and disagree, or even seek to persuade.
They certainly do exist in how people THINK about morality, because most people are intuitive moral objectivists. Again, I’m not attempting to deny that; I fully accept it.
But most people thinking that God exists, or thinking in ways that only make sense if a god exists, isn’t the same as God actually existing. People who think they have a relationship with God are wrong, even if that’s genuinely how they think. Moral objectivists (I assert) are wrong, even if that’s genuinely how they think.
I know it’s not how people use the term. I’m trying to get at the fact of the matter underlying the (moral objectivist) terms that they do use.
There will indeed be a fact of the matter as to what things you find beautiful. But to me, trying to draw a distinction between you finding it beautiful and you liking it is puzzling. (Of course it would be easy to like *some* aspects of something and dislike other aspects of it.)
Do philosophers have a consensus on meta-ethics? I thought (Bourget/Chalmers survey) that they were split down the middle?
On consideration, I’m not sure that you’re bad at explaining. I think that you have a Frankenstein’s monster of a moral theory, cobbled together from bits and pieces of other models, and so they often end up working against each other and frustrating anyone who wants to try to understand and/or criticize it because your defenses against some charges rely on aspects that seem to contradict other aspects of the theory. There’s nothing wrong with trying to combine aspects of various theories, but when you do so you need to be very careful that you don’t put their contradictory aspects — the reasons that they are generally considered separate theories — into the same theory.
I’m going to try to build a theory that I think works better for you in a while. This may seem arrogant, but it has the benefit that I’m going to explain why it seems to work better for you AND it isn’t a theory I myself subscribe to, so I’m not trying to redefine your position into something I agree with. First, though, I think I need to highlight why I think you have incompatible or wrong elements in your theory and what that means.
Now, I’m not going to take this next part as seriously as it might sound, but it’s a good framing device for this:
So, despite the fact that I’ve directly studied and discussed emotivist theories in formal courses and had to show that I understood it both to people who knew them really, really well and even to people who HELD those positions, and did that well enough that they agreed that I understood the position, somehow the problem between us is that _I_ don’t understand emotivist theories or can’t understand them outside of an objectivist outlook? That … doesn’t seem credible [grin].
Since I’ve been doing this formally for years, I have responses for most of the typical moral theories. Please don’t try to respond to them, just to the reason I bring them up (which I’ll get to at the end). The content of them would be useful for you to know, though.
For emotivism, I tend to argue that if they want moral judgements to be taken seriously emotions can’t ground that. Not only are our emotional reactions often incorrect wrt the facts, even our specific moral emotions often recommend that we do things that viewed rationally we consider horribly immoral. So emotions only work if they are in accordance with reason, at which point we might as well just abandon them and go with reason as directly as we can. And if they just go for a straight descriptivist/subjectivist line — they are what they are — then I aim my response at subjectivism itself.
For error theories, I take the same tack you take wrt hard determinists about free will: you can eliminate or make the moral statements meaningless if you want to, but we are still going to need terms for the behaviours of what we call “morality” and need something to fill in the gaps that removing morality would create. At this point, you’d pretty much be nitpicking over terms unless you can show a significant difference in meaning or behaviour for the replacement terms.
For evolutionists, I point out that evolved sensibilities can be maladaptive, so they still need some kind of criteria for determining both what counts as a moral term and when we need to update it. So pointing to evolution can’t justify the terms, which is about the only reason to appeal to it. And if they simply want to be descriptivist, then we have to argue over that specifically.
For Social Contract theorists … well, I don’t really argue against them because I’m sympathetic to the position, and certainly think that’s how things like laws and social conventions work. The most I do is point out that laws and social conventions are not morals to pretty much anyone, and so if you want to use that as a moral position you need to be able to make the link from those things to something properly called moral … ie you need to justify the distinction between the moral and the conventional, unless you want morals to have no meaning, which means that you aren’t a Social Contract theorist anymore.
For Egoists, I tend to point out that most of morality seems to be aimed at limiting considerations of personal self-interest, and that if they make that move then they collapse morality and self-interest at which point there is no point in talking about moral reasoning at all, and instead sticking strictly to practical reasoning.
And for subjectivists, I tend to respond that if that’s the case then there is no reason for me to care about their moral judgements of me, and then ask what the point of morality at all is if that’s true (some of them have some answers, BTW).
Now, what’s interesting about this? These are all RADICALLY different and sometimes completely incompatible theories, and yet against your view I have used EVERY SINGLE ONE of these rebuttals, and all of these in response to something that you have directly said and even agreed to that put you into their camp. This really does suggest that you’re mixing theories, which again isn’t a bad thing. But it makes your theory hard to understand, especially if you aren’t careful to separate the contradictory aspects.
So, let me move on to talking about some of those aspects:
1) You claim that in your view moral statements are aesthetic judgements, just like any other. However, aesthetic judgements and statements have a specific content: they are meant to express that the person is having a specific internal experience. We INFER from that statement and from what that experience means to us whether or not they like or dislike the experience. So if someone says that the painting is beautiful, we infer that they are having a beauty experience, we know that in us a beauty experience is pleasant, and so we conclude that they liked it. For morality, this would lead us to conclude that they had a specific moral-emotion experience, that the pleasant moral emotions mean they liked it, and that the unpleasant ones mean that they didn’t like it. This would be an emotivist theory AND, in fact, has to accept that moral statements express a specific and unique content, even if it is similar to or in the same family as other aesthetic expressions.
2) You deny that moral statements have any real or meaningful content. This is an Error Theorist position, but also would mean that moral statements CAN’T be aesthetic statements.
3) You argue that these moral sentiments evolved because they benefited us, and use Darwin’s claim about hijacking the existing mechanisms to justify their existence. The issue here is that this pushes us towards accepting that any parts that were required for it to evolve have to be properly considered a key part of the definition of morality, which would include that they are objective.
4) Your claim is that the big benefit of morality is that it provides for society and communities. This is a Social Contract Theory, but also generally implies that it has to be considered objective or else it loses its power.
5) You claim that the reason to follow it is that it’s ultimately in our own self-interest to do so, because of the benefits to society. This is, at bottom, an Egoist view.
6) And finally, you insist that morality is subjective, meaning that there is no set, hard-coded answer to any moral question (it’s vague how far you need or want to go with this, so this is about as far as I’m willing to take it here).
So, you’re definitely mixing theories. Let me now try to resolve them simply by putting emphasis in different places than you generally tend to:
When we say “X is immoral”, what we are really trying to convey is “I believe X violates the Social Contract to an unacceptable degree”. Everyone cares about the Social Contract because it benefits everyone so much, and thus from that we can conclude the less important but still true statement that the person dislikes it and, more importantly, will be willing to take actions to prevent you from doing so. Evolution would select for this because it is necessary for any society or community to thrive, as it would be especially important that the person continue to act according to the Contract when people aren’t directly looking (hence the emotion of guilt). It would do so by piggybacking on the aesthetic systems already established: not “beauty/ugly” but instead “Appealing/Repugnant” and those emotional mechanisms. Thus, moral emotions would be useful guides to violations of the Social Contract.

Since every society is different and every individual’s place in society is different, there’s no overarching moral principle that one can rationally use to justify what is and isn’t moral — ie breaking the Social Contract — in any society. There are some things that no society could survive if it accepted them — murder being the obvious example — but outside of that it all depends on the details of the people and the society itself. This also means that people can argue over whether an accused violation really is one, and even be in some sense right or wrong about that, without having to accept some kind of fully rational moral principle that settles the issue unequivocally for everyone in all cases.

It allows us to justify having a separation between social conventions and morals by pointing out that social conventions can be violated without risking breaking the Social Contract but morals can’t. It also allows us to admit that we can have non-morally relevant dislikes like, say, a dislike of lima beans. But what we consider a violation is, again, going to depend on our own values, because they will help define our place in a society. This also would let you ground rights as being things that must be in place for the Social Contract to be maintained, without having to have any kind of objective and separate notion of them, and yet still leave room for them to be argued over.
This isn’t where your emphasis is, of course, but it seems to me that it captures what really is most important to you and what most of your defenses ultimately depend upon: the idea that morality is important and crucial because of its relation to societal harmony. It maintains the link to aesthetic properties and evolution that you insist upon, but cleans up some of the inconsistencies between the views. Yes, you have to drop the Error Theory claim, but you seem to still want moral statements to have weight and that pretty much eliminates Error Theory out of hand (and you don’t seem to use that for much except browbeating opponents, so it shouldn’t be that big a loss). It’s still subjectivist/relativist — or, at least, in the way you need it to be — because it is determined by the interaction of the individual and the society they are a member of. So I think it captures everything important about your view and, more importantly, all the FACTS you had supporting your view. If you think this is incorrect or unacceptable, then please let me know and we can work from this as a basis to determine what your theory really is or requires.
I’ll clean up some of the small leftovers:
What this entire point was aiming at was that the only empirical evidence you have for your position amounts to what people think morality is. So you appeal to what people think morality is when it supports your position and deny it when it doesn’t. Thus, there is lots and lots and lots of empirical evidence that contradicts your view, but you dismiss it because it doesn’t fit your theory. They’re just wrong about morality, you continually argue. And they may be. But you have no other argument for why that empirical evidence should be ignored other than that, by your theory, that’s just the mistake they are making. No one who does not accept your theory will accept that as an argument, and if you don’t have better arguments for dismissing it YOU SHOULDN’T EITHER. Not if you want to claim to be doing science, since doing what you’re doing here amounts to ignoring empirical counter-evidence, which is verboten in any reasonable, rational science anyone should practice.
Well, here are the problems with it:
1) You aren’t, in fact, generally clear about how you are using the terms in the discussions when they come up. The vast majority of the time, you use them without any explanation and then we end up in a meta-ethical debate once it becomes clear that you don’t mean it the same way as everyone else does … usually when someone demands that you justify your moral position objectively and you then insist that that can’t be done. So, no, you DO use it in a confusing way.
2) Since you don’t conform to the standard emotivist position either, even someone familiar with emotivist positions — Hello! — is not going to come up with the idea that that is what you are trying to express, and probably won’t even get what you are saying AFTER you tell them you’re an emotivist, as we’ve seen in this entire discussion.
3) You could easily short-circuit the entire debate by appealing directly to the two pillars of your response once it gets down to brass tacks: societal impact or personal self-interest. So you don’t really ever have to use moral terms when in general you end up appealing to one of those two things anyway. All starting from morality does, in general, is make for a longer and more annoying discussion before you get to that point.
So there seems to be no real reason for you to start from moral claims. Why is it so important for you, then, to keep using them?
Note that this is another advantage of my altered theory for you: it gives you a reason to keep using moral language even when we understand your theory properly.
The point here was that you keep insisting that there is no way to distinguish morals at all, at least in general, and yet the group that actually does that is the one that seems to completely lack an understanding of what morality is. So either you have to concede that the empirical evidence suggests that there IS such a distinction — which is what you want to deny — or else you have to concede that they, in fact, get that distinction right and thus have a BETTER understanding of morality than most people. It’s not so much a gotcha as a “There are grave consequences from making such a blithe dismissal, and it doesn’t seem like you need to do that for your theory to work”.
Again, an advantage of my reworked theory is that you can accept the moral/conventional distinction without having to give up any other part of your theory, even its subjectivity.
Odd, because doing so is, in fact, a common staple of at least the old cartoons, where an evil character finds beauty repulsive. Conceptually it holds, even if in us it rarely happens. But you need it to be impossible conceptually to make your case, and again even little children can grasp that idea at a conceptual level.
No, but they have a consensus that things aren’t as easy as your view here implies [grin]. That’s all I was talking about here: Philosophers are aware of the theory and the consensus is that it’s not enough to settle the question.
Hi verbose,
Question: Why are there many different meta-ethical models that have been proposed by philosophers? Answer: because all of them capture some aspects of the truth; that is, some aspects of morality as it really is in the real world. Second question: why have philosophers not arrived at a consensus on meta-ethics; why are all these different models advocated by at least some philosophers (e.g. the Bourget/Chalmers survey says that professional philosophers are split roughly 50:50 between realist and anti-realist stances)? Answer (I suggest): because none of the models fully capture morality; they are all partial answers but not the whole answer, and all have faults.
From the above two answers, it follows that a true account of morality *must* take aspects from many philosophical models. Further, I submit, it may be that the whole way philosophers address these questions is sub-optimal.
You suggest that in my scheme I’ve “cobbled together bits and pieces of models”. Well, actually, I’ve not arrived at my scheme from considering philosophical models, I’ve arrived at it from a different direction, from evolutionary biology and human psychology.
From the viewpoint of philosophical models my scheme may well appear to be a Frankenstein’s monster, but from the viewpoint of evolutionary biology it is actually pretty straightforward and natural. You, being a philosopher, evaluate my model in terms of the array of philosophical models; however, I am suggesting that those schemes are all flawed. The best way of understanding my scheme is not in terms of philosophical models, but in terms of evolutionary biology.
Your replies seem to me to usually be along the lines of: “Your scheme adopts Aspect A from Model 1; but Model 1 also entails Aspect B, and Aspect B implies . . . which makes things inconsistent”.
Which would be fair if I were adopting all of Model 1, but I’m not; all I’m doing is adopting Aspect A.
I think I agree with all of that. I would suggest that, if we hear that someone we know cheated on his friends for selfish advantage, then we do have a “specific moral emotion experience”, and that if we hear that someone refused to be disloyal to his friends, even at personal cost, then we also have a (different) specific moral emotion experience.
Well no, moral statements do have meaningful content — as reports and expressions of our values, what we like and dislike. If someone says “murder is immoral”, that statement has the real and meaningful content: “I dislike and deplore murder”.
What I deny is that moral statements have any meaningful content beyond that just stated. Any suggestion that they reflect objective standards of “shouldness” is mistaken.
So what is the philosophical term for: to the extent that moral statements purport to be about objective truths, they are in error (since there are no such objective truths, and thus no meaningful content along those lines), but interpreted as being reports of the speaker’s values they do have meaningful content?
I don’t understand your argument here. Aesthetic preferences also evolved because they benefit us. We have evolved to like food because it is good for us; we dislike rotten meat and excrement because they are harmful; et cetera. And yet, aesthetic preferences are accepted as the epitome of things that are subjective. (There will of course be objective statements about human nature and human psychology, and thus objective descriptions of what we like and dislike.)
I disagree that such societal agreements have to be considered objective to have power. Yes, they might (de facto) be more powerful if people consider them to be objective, but they don’t have zero power without that.
Let’s give an example. The US electoral college is a bodge, no-one could claim that such a mechanism has “objective” validity. Its only standing is as a collective agreement. Yet people think they “should” accept Trump as President, rather than Clinton, even though Clinton won the popular vote, because the collective agreement is to accept the electoral college.
So collective agreements have power, even if they are accepted as nothing more than collective agreements.
My meta-ethics does not arrive at any ethical imperatives, no statements about what you “should do”. It is a fact, however, that people who behave in ways that their peers regard as immoral will likely not prosper, because their peers will likely sanction them. But that’s just a descriptive statement.
Agreed.
Now on to your reformulation:
As per my earlier comments, this Social Contract Theory captures part of the truth. It is indeed true that humans arrive at social contracts, and that such contracts then influence how people think about how people “should” behave. So, when someone says “X is immoral”, part of what they are trying to convey might indeed be that it violates the social contract.
However, that is obviously not a full and true account of human morality. People can regard things as immoral even if they know full well that the social contract accepts them. When William Wilberforce argued that slavery is immoral, he was not saying that it violated the social contract, he was saying that he disliked the social contract and wanted to change it. If someone is campaigning against FGM in Sudan, they are not saying that FGM violates accepted norms, they are arguing against accepted norms.
This illustrates that likes and dislikes are not less important than the social contract, they are prior to the social contract, and thus more foundational.
Another question is where does the social contract come from anyway? It can’t come from robots who lack any desire or any value (however rational they are), it can only come from people advocating for what they want, what sort of society they want to live in. So, again, our values are prior to (and more fundamental than) any social contract; rather, the social contract is the product of people’s values.
That morality is crucial for societal harmony explains why moral sentiments evolved. But that’s not the same as societal harmony being the definition of, nor the justification for, morality. Why a mechanism evolved is different from the mechanism itself. For example, our liking for sweet, high-fat foods evolved because they benefitted us on the Pleistocene savanna. But, nowadays, with well-stocked supermarkets on every street corner, our liking for sweet, high-fat foods can be detrimental. This illustrates that we need to distinguish between the mechanism (aesthetic preferences) and the reason why those preferences evolved.
As I see it, the main evidence for my position is that it’s the only position that makes any sense from an evolutionary biology point of view — and evolutionary biology is, after all, the explanation of what we are and why we are like we are. (I would also suggest that the reason philosophers are not arriving at a consensus on meta-ethics is that most of them, not being scientists, don’t even consider the matter as being one of evolutionary biology.)
But people thinking that morality is objective is not evidence that morality is objective (it is only evidence that they *think* it is objective). If I can explain that adequately then it is not evidence against my view.
Indeed, my model explains why people think that morality is objective! No other model actually does that. (Even if morality were objective, a model would still need to explain why people *think* that, and since objective morality isn’t a direct observable, that still needs explaining.)
Hi Coel,
Your last response to Verbosestoic indicates your position is grounded in science (evolutionary biology). But science is silent about imperative oughts of the usual kind in moral philosophy. So how can science tell us anything definitive about the objective reality of imperative oughts?
I argue science is well within its domain in telling us what the function (the primary reason they exist) ‘is’ of our moral sense and cultural moral codes and people would find it much to their advantage to use that information. But haven’t you left the science reservation with any claim of the form “science informs us there are no objective imperative oughts”? (Of course, this is not an attempt to quote you, but merely to summarize your position as I interpret it.)
Hi Mark,
I don’t accept that anything is outside the purview of science (sorry, I’m a radical scientismist!). So if science doesn’t tell us about imperative oughts, maybe there are no such things? As I see it, the only kind of oughts that exist are instrumental oughts, deriving ultimately from our aims, goals and values.
I would suggest that science is the only thing that can tell us about the objective reality of anything.
Where science is silent about something, such as “objective imperative oughts”, surely the onus is on those advocating such things to present the evidence for them, starting with an account of what they actually are (as I’ve previously said, I am unable to conceive of what such a thing would even be)? From there, the defenders of objective imperative oughts need to explain why they are not accessible to science, but are accessible in some other way.
Just pointing to the fact that science does not tell us about objective imperative oughts is not sufficient to claim that science cannot deal with them, since that can also be explained by their non-existence.
Hi Coel,
Thanks for replying in the midst of an intense discussion with Verbosestoic.
You are right, the onus is on those advocating for objective, imperative oughts to show how such strange things are real, not on science to prove they do not exist.
I am highly sympathetic to your view that:
“… if science doesn’t tell us about imperative oughts, maybe there are no such things? As I see it, the only kind of oughts that exist are instrumental oughts, deriving ultimately from our aims, goals and values.”
What I am trying to get at is that your claim “morality is subjective”, referring to morality only as those strange objective imperative moral oughts, goes beyond what science can tell us about imperative oughts.
“Morality is subjective” (referring to morality only as those strange objective imperative moral oughts) seems to be your personal belief rather than a fact from science.
Hi Mark,
I’m not sure I follow. If “morality” is taken to refer narrowly to “objective imperative moral oughts” then I would not say that “morality is subjective”, I would say that morality does not exist.
In saying that “morality is subjective” I’m referring to the only sort of morality (as I see it) that does exist, namely our human feelings about how people treat each other.
Hi Coel,
When you say “morality does not exist” (If “morality” is taken to refer narrowly to “objective imperative moral oughts”) you still seem to be speaking on a subject science is silent on and therefore speaking about your beliefs, not facts from science.
Of course, if your claim about “morality is subjective” is NOT about the non-existence of “objective, imperative moral oughts” then modern science appears to contradict you. Modern science shows that the objective function of morality is to increase the benefits of cooperation in groups. This is an objective and highly culturally useful aspect of morality. It is not a subjective claim.
Regardless of your reasoning for saying “morality is subjective”, you might consider the social harm such a bare claim makes by obscuring the objective nature of morality’s evolutionary function. Perhaps “morality is subjective” is true in the sense you mean it. But that does not mean it is the most culturally useful way to present truth about morality.
For example, my favorite version of the Golden Rule is “Do to others as you would have them do to you” which is the basis of many day to day moral decisions and a wonderful heuristic (a usually reliable, but fallible rule of thumb) for moral behavior. But the Golden Rule is fallible. Simply understanding that the function of morality is to increase the benefits of cooperation tells us specifically when it would be ‘immoral’ to follow the Golden Rule – when doing so would decrease the benefits of cooperation. Cultures commonly abandon the Golden Rule in just such cases, such as when dealing with criminals, in time of war, and when “tastes differ”.
Too often, abandoning the Golden Rule in such cases leaves people morally adrift and tempted to do terrible things. Simply knowing that the function of morality is to increase the benefits of cooperation can provide substantial moral guidance even when dealing with criminals, in time of war, and when “tastes differ”.
Saying “morality is subjective” (which is not true relative to morality’s objective function) tends to short circuit what otherwise might be useful conversations about how understanding the evolutionary function of morality can be culturally useful.
Hi Mark,
Contrary to oft-made claims, science can indeed come to a conclusion of non-existence, given both silence and an expectation that if something did exist then science would not be silent.
For example, one can say that science is silent about aliens inhabiting the Moon. So is it then unscientific to conclude that “the moon is uninhabited”, because — since science is silent on the topic — there might actually be aliens on the Moon?
No, that would be silly. We would expect that, if there were aliens on the moon, then we’d have seen signs of them. Further, science employs Occam’s Razor all the time, and given no evidence of aliens inhabiting the moon, we discount that possibility and declare that the moon is uninhabited.
It’s the same for objective imperative moral oughts. We have no evidence that they exist (and indeed no conception of how they could exist or what such a thing would mean), and further, everything that needs to be explained can be explained without them, so we discount the possibility. From there we can conclude that morality is subjective and that the moon is uninhabited.
If anyone disagrees on either point the onus is on them to provide the evidence.
Hi Coel,
I see three aspects of morality your claim “morality is subjective” could be denying objectivity to.
These three aspects are: 1) its function (its moral ‘means’), 2) its goals, and 3) its bindingness.
Do you agree that, as a matter of science, “morality’s function is objective”, and “morality’s goals and bindingness are subjective”?
Hi Mark,
Yes, I would agree, though I wouldn’t quite phrase them like that. It is indeed an objective fact why morality evolved and what its evolutionary function is. But “morality” doesn’t have goals, people have goals. Those goals are subjective (meaning, they are thoughts inside people’s brains).
And any obligation to act in a particular way derives instrumentally from people’s goals and values. Thus “morality is subjective” because the status of moral obligation is what the discussion is about. Those arguing that morality is objective are arguing that there is an objective obligation and imperative to act in particular ways.
It seems to me that you admit that, in truth, morality is subjective, but want to find some way in which you can get to label it “objective” because you see that as rhetorically advantageous, you think you can persuade people better to act as you want them to if you can claim that people have an *objective* obligation to act in those ways.
I can see why that is rhetorically advantageous, so I can see why you are advocating this approach, but to my mind it is simply wrong (and, in the end, not actually that helpful).
Hi Coel,
Perhaps we differ in how culturally important we think it is to know that the objective function of “social morality” is to increase the benefits of cooperation. (Here, “social morality” refers to behaviors motivated by our moral sense and advocated by cultural moral codes.) I see it as potentially the most useful advance in human morality in my lifetime.
Perhaps you think the potential cultural usefulness of this knowledge is little to none? If so, I can understand your preference for saying “morality is subjective”.
Here are some examples of how knowing that social morality’s function objectively is to increase the benefits of cooperation could be the most useful advance in human morality in my lifetime.
This knowledge from science reveals as objective truth that following the Golden Rule when it will decrease the benefits of cooperation is objectively immoral (as when dealing with criminals, in time of war, and when “tastes differ”).
Similarly, acting in accordance with simple utilitarianism or Kantianism is objectively immoral when the benefits of cooperation will be reduced.
Knowing that the objective function of social morality is to increase the benefits of cooperation provides the basis of an objective morality for everyone from the beginning of time to the end of time. That morality comes supplied with powerful heuristics such as the Golden Rule (which initiates indirect reciprocity), whose fallibility is readily and easily understood.
Saying “morality is subjective” puts all that potential moral progress at risk.
Hi Mark,
You’re right, I don’t see it as that useful. First, it’s already pretty much accepted. It’s pretty obvious that the reason that morality evolved is to facilitate human cooperation. No-one would be surprised by that. It’s also not new, it was suggested by Darwin.
You are committing the naturalistic fallacy, that the reason why something evolved somehow obliges us to act in particular ways. Let’s consider two examples:
From an evolutionary point of view, our bodies and psychology evolved to have the function of producing as many children as possible. Does that oblige us to have children? Is it immoral (= something we shouldn’t do) not to try for as many children as possible? No. There’s nothing at all wrong with deciding not to. We can decide how we want to live.
Second example: from an evolutionary point of view, our bodies and psychology evolved to make us want to eat more food than we need. The evolutionary function of this is to store up fat for the hard times (in an evolutionary past when hard times were routine, including every winter). Now, does this then oblige us to eat more food than we need and get fat? Is dieting immoral (= something we shouldn’t do)? No, it isn’t. We can decide to moderate our food intake if we prefer remaining slim. But that is contradicting the evolutionary function of our bodies! It is indeed. So?
Now of course you’re too smart to commit the naturalistic fallacy explicitly. So you do it by confusing yourself between several different meanings of “moral” and “objectively moral” and sliding from one meaning to the other. Every time we discuss this I ask you to be fully explicit on what you mean by the word “moral” when you use it, and you usually avoid doing so.
So let’s be clear exactly what is objective: it is an objective fact that the evolutionary purpose of our moral sentiments is to facilitate cooperation. That’s all. It carries no normative connotations at all, placing no obligation on us to act in any particular way. (Any more than knowing the evolutionary function of craving for food obliges us to get fat.)
So what do you actually mean by “… is objectively immoral” as used in that sentence? As a rhetorical sentence, what it seems to be trying to connote is that it: “… is what we, as an objective fact, should not do”. But of course that does not follow, that’s a non-sequitur. That’s the naturalistic fallacy.
The *objective* statements about morality are *descriptions* of the *evolutionary* function of moral sentiments. And that function, as we’ve agreed, is enabling cooperation. So by “objectively immoral” you mean “not in line with the evolutionary function of promoting cooperation”, or, in shorter versions, “is not in line with promoting cooperation” or “doesn’t promote cooperation”.
That then shows that your sentence actually means: “… reveals as objective truth that following the Golden Rule when it will decrease the benefits of cooperation will not promote cooperation”.
Now that, of course, is a lot less rhetorically powerful. It’s a simple tautology. And stated like that, no it is not a “moral advance” of great benefit to us to understand.
You only think that your scheme has any important implications because you mislead yourself by slipping between different connotations of the word “moral” and so commit the naturalistic fallacy.
If you disagree with my analysis then perhaps you could write out your argument with, at every point where you use the words “moral” or “immoral”, also giving an explicit explanation and statement of what you mean by those words in that exact context.
Whoops! I found a glaring example in my last post of thinking one thing and writing another. Could you delete my last post and substitute the following? Thanks
Revised text in caps:
Hi Coel,
Just by coincidence, I have been developing an essay addressing what it means to say that a behavior is moral or immoral because it fulfils or contradicts the evolutionary function of our moral sense and cultural moral codes (the principal reason they exist).
Here are the essay’s main points:
1) To reduce confusion with broader philosophical definitions of “moral” and “immoral” (particularly with any implications of imperative moral ‘oughts’), we can define as “socially moral” the EVOLUTIONARY FUNCTION OF behaviors motivated by our moral sense and advocated by past and present moral codes.
2) Then saying a behavior is “socially moral” or “socially immoral” simply refers to their definitions as either consistent with or contradicting the evolutionary function of our moral sense and cultural moral codes.
3) These definitions are useful first because of their explanatory power for a) why cultural moral principles exist, and b) why following cultural moral principles will be “socially immoral” if the benefits of cooperation will be reduced. For example, following “do to others as you would have them do to you” could be socially immoral if acting on it decreases the benefits of cooperation such as in times of war, when dealing with criminals, and “when tastes differ”.
Unfortunately, abandoning the Golden Rule too often leaves people morally adrift and tempted to do terrible things, particularly in war and when dealing with criminals. However, science’s insight into the universal function of social morality provides firm grounding for what behaviors will be socially moral (behaviors that increase the benefits of cooperation) when cultural moral principles like the Golden Rule must be abandoned.
4) Such definitions are also useful because they are central to defining a kind of objective moral system. That system, adaptable to virtually all societies, would consist of commonly used moral principles constrained by the knowledge that decreasing the benefits of cooperation is socially immoral and increasing those benefits is socially moral.
5) Science doesn’t need to derive imperative ‘oughts’ from what ‘is’ in order a) to provide a test for detecting when moral principles advocate socially immoral behaviors and b) to provide the basis for a kind of objective moral system.
Who might want to conform to such a social morality? Whoever has a goal to act morally.
Why would people have a goal of acting morally, meaning consistently with social morality’s function? Understanding morality as cooperation provides good reasons to consistently act in ways that can be expected to increase the benefits of cooperation and are thus socially moral.
6) By producing a simple test for when common moral principles advocate immoral behavior and defining a kind of objective morality as described above, my “social morality is objective” claim appears to be highly culturally useful. In contrast, I don’t see the utility of focusing on morality’s lack of “imperative bindingness” by saying “morality is subjective”.
Why put the focus on something that does not exist? Wouldn’t it be more productive to focus on what does exist, a kind of time, species, and place independent objective moral system?
There is no naturalistic fallacy involved in any of my claims about the universal function of socially moral behaviors. I have no need for such a dubious source of support.
Hi Mark,
As ever, you are confusing yourself owing to conflating different meanings of “moral”, and thus committing the naturalistic fallacy.
The primary meaning of “moral” is (Oxford Dictionary) “concerned with the principles of right and wrong behaviour”, what we should and should not do. Thus the term “moral” is normative, if you label something “moral” you’re saying we should do it.
It is thus confusing to use the word “moral” for a descriptive account of evolutionary function. Yes, you can define it that way, but it will lead to confusion (confusion of yourself as much as anyone). So you define:
Where that “evolutionary function” is “promoting cooperation”. OK, so then let’s turn to your argument, (5) is the crucial bit:
Which means “… advocate behaviours that do not promote cooperation”. OK, agreed.
What does this mean? All it means is “and provide the basis for a system that promotes cooperation”. Yes, agreed. Science doesn’t need to derive imperative ‘oughts’ to tell us what behaviours promote cooperation and what behaviours do not.
Again, I’ll translate: “Who might want to conform to such a system of promoting cooperation? Whoever has a goal to promote cooperation”. True, but tautological.
Again translating: “Why would people have a goal of promoting cooperation, meaning consistently with the goal of promoting cooperation?”.
Translation: “Understanding cooperation-promoting behaviours as cooperation provides good reasons to consistently act in ways that can be expected to increase the benefits of cooperation and are thus cooperation-promoting”.
Does it? Why? This begs the entire question. Nothing in your analysis gives any actual reasons for promoting cooperation. It only seems to in one of two ways:
1) The naturalistic fallacy: because it is part of our evolutionary programming, it’s what we should do. Or:
2) Because you’ve used the word “moral” you are smuggling in connotations of “it’s what we should do”.
Hi Coel,
Thanks for dropping my whoops post. (Oh, for a post edit button!)
The reason groups and individuals would choose to advocate and enforce cooperation strategies such as “Do to others as you would have them do to you” and “Do not kill, steal, or lie” is the emotional goods (due to our evolutionary history) and material goods (due to the innate power of cooperation in our universe) produced by the cooperation that enforcing such norms produces.
Neither the word “moral”, nor dubious support from any naturalistic fallacy you keep referring to, is needed to make the above statement true.
I am amazed (while not at all doubting your sincerity) that you seem to be saying you were not aware of this.
Use of the word “moral” necessarily enters the process of applying the above to useful cultural ends in two ways.
1) People intuitively recognize (again due to their evolutionary history) that the cultural norms being advocated are “moral norms”, which people previously commonly and confusedly thought were something other than cooperation strategies. (The sources of that confusion are on display in dictionaries, moral philosophy books, and imperative bindingness feelings from our moral sense. When the whole point of the relevant science is that we have misunderstood what morality is, it would be silly to rely on dictionary definitions to understand morality.)
2) Science shows us that increasing the benefits of cooperation is the objective function of “social morality” (the primary reason it exists). Here “social morality” is the behaviors motivated by our moral sense and advocated by past and present moral codes – essentially what the science of morality studies. And this objective function of “social morality” provides the basis for an objective social morality that groups and people could universally understand. As I have said before, they could choose to advocate for and enforce it because they wish to gain cooperation’s emotional goods and material goods benefits.
All the above social benefits of an objective social morality are put at risk by confusing people with the claim that “morality is subjective”. Note that the claim “morality is subjective” is focused on rebutting part of the common misunderstanding of what morality is. That misunderstanding is more effectively rebutted (and much confusion avoided) by the relevant science and its objective social morality.
Yes, agreed. But nothing there depends on your analysis of “moral”. It’s entirely independent of your analysis.
Yes, humans benefit from cooperation (we all agree on that). **But**, cooperation has its limit, we don’t want too much cooperation, we want the right amount. Humans actually dislike and don’t prosper in communist states, where too much cooperation is enforced. What humans actually want (and best prosper from) is an optimum balance between cooperation and individual action.
This is a big flaw in your equation of “moral” with “cooperation”.
I’m not saying I was unaware of it. I’m saying it doesn’t depend at all on accepting your scheme. We can reject your account of morality and still want to cooperate to obtain the benefits of cooperation.
People have indeed mistakenly supposed that there was some objective obligation to act in particular ways regardless of whether it was what humans wanted and regardless of whether it actually benefited humans or fulfilled our desires.
By saying that there is no such objective moral obligation we can focus instead on what we want, on our goals and desires, and can adopt appropriate strategies (including cooperation) to obtain them.
But you can’t redefine “morality” to mean something other than what the dictionary says it means, otherwise you’re not talking about “morality”, you’re talking about something else.
Yes, although more correctly stated the “function” of our programming is to attain a balance between cooperation and individual action. That’s actually what we’re programmed for. As E.O. Wilson said about communism: “nice idea, wrong species”. Too much cooperation is contrary to human nature.
Well that’s not really true. Plenty of moral codes have *not* been about cooperation. Plenty of them are about doing what a supposed god is supposed to want us to do. The moral codes of ISIS are certainly not primarily about cooperation.
So why don’t you just say it straightforwardly? “We are programmed by evolution to be social and cooperative animals, and we’ll likely prosper that way”. Which is true. The only reason for your re-defining the word “moral” is to try to give the impression that there is some objective obligation to cooperate. There isn’t, but we can cooperate if we want to, and likely we do want to because it’ll attain many of our goals.
Sure, but we don’t need your redefining “morality” to realise that. And again, what we actually want is the optimal balance of cooperation and individual action. Nothing in your analysis points to that.
Why? It puts the emphasis on what we want, on our goals and values. What’s wrong with that? It’s better than “you must cooperate whether you like it or not”. Why must we cooperate if we don’t like it? Communism — with its maximising of cooperation — is not more moral, it’s less moral than giving people freedom of action and freedom to choose.
Hi Coel,
I’ll wait to reply to your above comment so we can get back to one discussion thread.
On a side note, even if we never come to an agreement, I still see our conversation as beneficial for refining how the cultural utility of morality as cooperation can be best presented.
Just to reinforce the point:
It might well be “useful” rhetorically. Plenty of people have mulled over: “I advocate that people act in particular ways; hmm, how can I persuade them? I know, I’ll say that it is an objective fact that they should behave like that”.
The traditional appeal is then to invent a god: “You *should* behave like that because God wants you to act like that”. Your appeal is slightly different: “Science tells us that you should behave like that”.
But, you can’t quite say that openly, because you realise there is no shouldness, no imperative oughtness in what science says. So you fudge it, you define “moral” without any shouldness in it, and then arrive at “it’s the moral thing to do” because you know that people will interpret this as meaning “it’s the thing we should do”, and *that* message will be “socially useful” in getting people to act as you want them to.
Subjective morality does exist! People’s values, desires and goals do exist! They’re important to us! I am not advocating any second-rate morality, I’m advocating the only one that actually matters!
Hi Coel,
I replied to your previous comment before reading this one. Let me know if my reply did not adequately address this comment.
I thought of a quick reply.
Yes, people’s values, desires and goals do exist. And they are mostly about ‘ends’ not ‘means’. Science’s objective social morality defines moral ‘means’, not ‘ends’ so they are not necessarily in conflict.
However, if peoples’ values and desires require actions that decrease the benefits of cooperation, then those values and desires are objectively immoral and NOT subjectively moral at all. Can you think of a value or desire that predictably decreases the benefits of cooperation, perhaps by a masochist or power mad egoist, that you would say is ‘moral’ because morality is subjective?
What makes an action objectively moral as a matter of science is its ‘means’, not its ‘end’.
That depends entirely on what you mean by “immoral”.
If by “immoral” you mean “reduces cooperation” then you are right: “if peoples’ values and desires require actions that decrease the benefits of cooperation, then those values and desires do indeed decrease cooperation”.
If by “immoral” you mean “we should not do it”, then your claim is wrong. It is not necessarily the case that: “if peoples’ values and desires require actions that decrease the benefits of cooperation, then those values and desires are wrong”.
Sure, that’s easy. Tom and Fred live in adjacent houses. They are both keen gardeners and get great satisfaction by creating and tending their gardens. Now they could cooperate. One could sow seeds in both gardens; one could weed both gardens. But they prefer not to. They prefer to tend their own gardens, not out of animosity — they are good friends — but simply because they take pleasure in creating something that *they* have created, that *they* have worked on. They even have an entirely friendly rivalry over who can grow the best parsnip for the village fete.
Their values and desires (to work their own gardens) reduce cooperation between them from what it could be. So what? Does that mean they’re wrong to do so? No. Would you regard them as “immoral” for preferring not to cooperate? I wouldn’t.
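To make the choice concrete, here is a toy sketch (entirely my own illustration, with invented numbers; it is not a model that anyone in this exchange has proposed) of how whether pooling the work even counts as a “benefit” depends on how much each gardener values having done the work himself:

```python
# Toy illustration only: all numbers and the "autonomy_weight" parameter are
# invented for this sketch; nothing here is anyone's actual model.

def satisfaction(cooperate: bool, autonomy_weight: float) -> float:
    """Return one gardener's satisfaction with an arrangement.

    cooperate       -- True if the two gardeners split the tasks across both gardens
    autonomy_weight -- how much this gardener values having done the work himself
    """
    time_saved = 1.0 if cooperate else 0.0       # cooperation does save effort...
    own_work_pride = 0.0 if cooperate else 1.0   # ...but forfeits "I grew this myself"
    return time_saved + autonomy_weight * own_work_pride

for weight in (0.5, 2.0):  # a gardener who cares little, versus a lot, about autonomy
    best = "cooperate" if satisfaction(True, weight) > satisfaction(False, weight) else "tend his own garden"
    print(f"autonomy weight {weight}: better off to {best}")
```

With a low weight on autonomy the pooled arrangement comes out ahead; with a high weight it does not. That is the point of the example: what counts as the “benefit” of cooperating here depends on the gardeners’ own preferences.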
Hi Coel,
The evolutionary function of social morality is to increase or maintain the BENEFITS of cooperation. “Benefits” are what selected for these socially moral behaviors, not mere cooperation.
People who “prefer to tend their own gardens” then would be reducing the benefits of cooperation if one seeded and the other weeded both gardens.
Your continued return to common dictionary definitions of morality based on a cultural misunderstanding of what morality is and focusing only on our sense of innate right and wrong (which we agree is an illusion) seems strange to me and needs an explanation.
Do you know why you keep doing that?
I am guessing the reason is the same as why you reject the better-informed definition of moral I proposed – Bernard Gert’s in the SEP – which makes a “morality is subjective” claim far more difficult to defend.
Hi Mark,
This reply torpedoes your scheme in two different ways. First, if what is moral depends on what people prefer then that makes the scheme subjective not objective.
People treat the term “subjective” as a big bad bogey word, as though if morality is ever labelled “subjective” then that means it is unimportant and falls apart. That’s not the case. All it means is that whether something is regarded as “moral” or “immoral” has some dependence on people’s preferences and values.
So if your scheme talks about “benefits”, where what is a “benefit” depends on people’s likings, then that makes your scheme subjective. (Again, there’s nothing wrong with that, it’s not a bogey-word.)
Secondly, your above statement is no longer true. It is not true that the “evolutionary function” of our moral sentiments is to increase what we humans see as a benefit, in other words what we like. That’s not what evolution is about.
At root, evolution selects for maximising reproduction in the next generations. All else is secondary to that. Where cooperating would help an animal leave more descendants, then evolution selects for cooperation. (But, that means the right amount of cooperation, too much cooperation might be as evolutionarily disadvantageous as too little.) And where moral sentiments are needed in order to facilitate cooperative living, then evolution will select for moral sentiments. And it’s fair to state that the “evolutionary function” of moral sentiments is to enable cooperation, and the function of cooperation is to leave more descendants. And, again, moral sentiments might be there to prevent too much (evolutionarily disadvantageous) cooperation as much as too little.
But it’s only true to say that the “evolutionary function” of morality is to attain the “benefits of cooperation” if by “benefits” you mean “increases the likelihood of leaving descendants”. That’s what evolutionary logic is all about and thus what “evolutionary function” is all about.
It’s not true that the “evolutionary function” of morality is any other type of benefit, since that’s not what evolution is about. So you certainly cannot define “benefit” in terms of what humans like and dislike, and still claim that attaining such benefits is the “evolutionary function” of something.
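As a toy sketch of that evolutionary logic (my own illustration; the fitness curve below is invented purely for the example and is not a model from this discussion), a simple selection simulation in which reproductive fitness peaks at an intermediate level of cooperation ends up near that optimum rather than at maximal cooperation:

```python
# Toy sketch: selection towards an intermediate "right amount" of cooperation.
# The fitness function is invented for illustration; it just assumes too little
# cooperation forfeits its benefits and too much is also disadvantageous.
import random

def fitness(c: float) -> float:
    """Reproductive fitness as a function of cooperation level c in [0, 1],
    assumed here to peak at c = 0.6."""
    return 1.0 - (c - 0.6) ** 2

population = [random.random() for _ in range(200)]  # initial cooperation levels

for generation in range(100):
    # Reproduce in proportion to fitness, with small mutations.
    weights = [fitness(c) for c in population]
    parents = random.choices(population, weights=weights, k=len(population))
    population = [min(1.0, max(0.0, c + random.gauss(0, 0.02))) for c in parents]

mean = sum(population) / len(population)
print(f"mean cooperation level after selection: {mean:.2f}")
# Typically ends up close to 0.6, i.e. the assumed optimum, not at 1.0.
```

Nothing normative follows from this, of course; it only illustrates the descriptive claim that evolution can select for a particular, intermediate amount of cooperation.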
You cannot totally redefine words to mean something other than what everyone else means by them! The meaning of “moral” is set by what people mean by it! If you redefine “moral” to mean something else then you are not talking about morality, you’re talking about something else. You can take “morality” as it is commonly defined and give a new and different *explanation* of the concept, but you cannot replace the whole concept with something else.
Hi Coel,
The biology underlying our moral sense was selected for by the reproductive fitness benefits of cooperation. Our cultural moral norms are selected for, as part of the process of cultural evolution, by whatever benefits of cooperation people find attractive – some of them pleasurable emotions triggered by our moral sense’s biology.
You know this. I don’t get why you object to my saying the evolutionary function of social morality is increasing the benefits of cooperation.
The function of morality (our moral sense and cultural moral codes) is objective. The goals of morality and its bindingness are subjective.
You know this. No subjective boogeyman involved. Just the facts.
“You cannot totally redefine words to mean something other than what everyone else means by them!”
Every moral philosopher I have ever heard of thinks we ought to refine the definition of morality to reflect reason and observation. Otherwise, how do you explain the unending variations of utilitarianism, Kantianism, and virtue ethics? My advocacy of social morality defined by its evolutionary function is just one more attempt at correctly understanding morality.
Coel, you reject the Stanford Encyclopedia of Philosophy’s definition of morality. What justifies that rejection?
I do not reject the standard dictionary definitions of morality, but rather explain why morality’s bindingness (about right and wrong) characteristic is misleading and even culturally harmful when it is the sole focus of cultural definitions of morality.
As I may have mentioned, I am writing an essay on the cultural power of understanding the evolutionary function of morality. Here is an extract:
“What about a hermit who interacts with no one? Is he acting immorally? Social morality is about obtaining the benefits of cooperation in groups. Social morality is silent on the morality of isolated individuals who have no contact with anyone.
There will be many moral questions, certainly including moral ultimate goals, and likely including abortion and perhaps even the morality of hermits, that science’s moral guidance is either silent on or can’t resolve. The function of social morality being increasing the benefits of cooperation is not a thing designed for us or invented. Like the rest of science, it just ‘is’. We shouldn’t expect too much from it.
However, defining socially moral and immoral based on increasing or decreasing the benefits of cooperation does provide the basis of a culture, species, and time independent moral system. And that is not a small thing.
In my own life, I am finding moral guidance such as ‘increase the benefits of cooperation without exploiting others’ works well. Perhaps other people would find the same.”
Consider the cultural usefulness of “increase the benefits of cooperation without exploiting others” as moral guidance. The diverse, contradictory, and perhaps even bizarre products of “morality is subjective” concepts of morality are not even remotely competitive for increasing human flourishing and happiness.
If the choice is based on which is the most culturally useful concept of morality, the “morality is subjective” claim is not competitive with the “morality is objective” concept. I remain mystified that you seem to think it is.
Hi Mark,
Agreed.
Hold on, *cultural* evolution is a very different thing from *biological* evolution. If you’re bringing cultural evolution into it that’s very different.
I objected for exactly the reasons I explained last time. And by “evolutionary function of social morality” did you mean biological evolutionary function or cultural evolutionary function? (I’d presumed you meant the former.) These are very different things. So far you’ve not brought cultural evolution into it.
They’re not starting from a very different definition of “morality”, they’re trying to *explain* and justify morality. (And improve it.)
Correctly *understanding* morality (as it is commonly defined) is very different from redefining the term “morality” so that it means something very different from “morality”.
Which definition are you on about?
Reference to dictionaries and common-language understanding of what “morality” means. (And words are the property of people, not the editors of SEP.)
Good. So you accept that by a “moral” thing we mean the right thing to do, the thing we ought to do?
Here’s a suggestion. Every time you use the word “moral”, or one of its variants, in that essay, also put in brackets what you mean by the word in that context (where the “meaning” explanation is not itself allowed to involve the word “moral”).
Hi Coel,
I have found it useful to explore with you what feels like the entire possible intellectual space of ways anyone could possibly misunderstand the cultural implications of the science of our moral sense and cultural moral codes. Really, it has helped.
Thank you for that help! And I appreciate your politeness and patience during these conversations. But I foresee no benefit for either of us in continuing at this time.
Well, good news! You’re going to get an answer a lot faster than normal, mostly because I’m off and have the time to do so.
Unfortunately, another reason for a faster response is the hope that you might be able to follow it better with a shorter turnaround, because you keep responding to something like 10% of what I say and often end up making assumptions about my stance that you’d see were incorrect if you only read what I actually said. I’ll try not to be too annoyed when I point these out, but I’m going to be quite direct over that in those places.
Actually, the second answer is the answer to the first question: we have so many theories because all of them have severe problems with them that imply that they’re not correct. The reason we haven’t converged is because those problems are much tougher to solve than most people expected. This, then, causes frustration when people outside of those systems then wander in, don’t understand the issues or why the theories are separated — they start from incompatible premises focused on solving one or a number of specific issues but which leave them vulnerable to other issues — and then say they have it all solved. Just because you can find theories that seem to have solved all the problems you care about if you could only combine them doesn’t mean that you CAN combine them to make all the problems go away in any meaningful way. I talk about a specific, non-moral case where that’s true (analytic and existentialist) on my blog:
https://verbosestoic.wordpress.com/2013/05/26/analytic-versus-existentialist-kaufmann-on-philosophy/
That only follows if we accept that you have to take aspects from the various theories as opposed to, say, fixing up the existing ones, showing how the problems for a specific theory aren’t actually problems, or coming up with a new theory that handles the facts and doesn’t have any of the issues. Coel, these sorts of things are done all the time in science. Why in the world are you insisting that there’s only one possible approach when it’s philosophy?
But more importantly, you seem to have missed that I explicitly stated that there’s nothing wrong with trying to combine different aspects of different theories. I mean, I can see how you might have missed it, since I a) said it explicitly at least twice, b) said it IMMEDIATELY AFTER the quote you used from me, c) IN THE SAME PARAGRAPH, d) implied it in the quote you used by focusing on them working against each other and e) actually HOLD a hybrid view — Stoic and Kantian — which, since you’ve read my blog, you should be aware of.
So I’m not sure why you’re trying to defend having multiple aspects instead of trying to defend how they aren’t contradictory, as if I hadn’t said what I, in fact, directly said.
And yet when I pleaded with you to do that, my response on reading it was to explicitly ask you why your view should be considered to be from the scientific viewpoint rather than a philosophical one, because it seemed to me to be a standard philosophical argument. You say that you start from this view which is so perfectly clear, and yet to this philosophically and scientifically educated commenter you have never, ever, ever managed to start from the scientific or evolutionary or psychological viewpoint at all to reveal this to me. I was harsh and asked you to do so unless you thought me too ignorant or stupid to understand it, but I have to raise that again because every attempt you make to do so seems dumbed down compared to what we’d need if we were really going to do the science, as you rarely mention ANY of it other than some small generalizations. Which leads to …
… this repeated false and insulting assertion. Evolutionary theories have been in vogue in philosophy for years now. Philosophers and philosophy don’t do what you say they’re doing. So if it was obvious from the shallow level of evolutionary biology that you present, we’d have seen it already. So surely, we’d have to go deeper. And yet, you never do, even when ASKED to present it from that aspect. And then you assert that the only reason I don’t get it is because I don’t work from that perspective. I submit that I can understand things from that perspective if you want to start there, but don’t see how you ever really DO get there. So that doesn’t seem like an issue with me, personally, but with you not being able to decide which sort of explanation you’re really going for. You hint that a lot of times you do that to facilitate communication, but let me say with passion: STOP HELPING ME! It’s NOT working! Express it the way you think clearest and with what you think is the most critical evidence and let ME figure it out! Because if you’ve already done so, I can’t help but see the evidence as being inadequate and the evolutionary biology as being excessively shallow.
This is also insulting because you keep arguing that it’s ME who’s dragging it down to philosophical models despite the facts that a) you started classifying it according to philosophical models and so I was only responding to what you said and taking it seriously, b) you made the accusation in response to you yourself asking me what philosophical model your view aligned with and c) you’ve asked me to do that again in THIS comment. And yet I’M the one who’s locking this into philosophical models, even as I’m pleading with you to stop doing that and stick to the science if it works better for you?
Except as I clearly stated here, the issue isn’t with you taking different aspects. It’s with you taking aspects THAT ARE CONFLICTING. You can’t merely take an aspect from Model A and an aspect from Model B and say that everything will always work. You have to make them consistent. And your response when this has been brought up has not been to make a consistent set of aspects, but instead to defend your position by switching to a different MODEL to respond to different objections. This can’t be done if you want to make a convincing argument. If you’re going to borrow from multiple models, then you are likely not going to be on that model anymore, especially with how many models you borrow from. So you need to have a model — let’s call it Model Coel — that adopts those aspects and makes them all consistent with each other inside that model. I’d ask you to do that, but I already did and we mostly agree on them, so let’s proceed from there.
This ends up being semantic hair-splitting that avoids the real contradiction here: statements about aesthetic preferences/judgements have a content beyond that, and the like/dislike is an inference from that content. Since you concede that moral statements work the same way, then you’d have to accept that same conclusion: the content of moral statements CAN’T BE just like/dislike, or else they CAN’T BE emotions or like aesthetic preferences. That’s a major contradiction in your view.
Given that we can also identify likes and dislikes that aren’t moral, I have no idea why you want to maintain the like/dislike idea except as a misunderstanding of the key issues between objectivity and subjectivity.
At the risk of you accusing me of trying to put everything into philosophical forms again despite you asking me to do so, that sounds like a subjectivist view. You seem hung up on notions of truth and right and wrong, which makes you reject it, but really all that being a subjectivist means — at least critically for the debate — is that if someone says “X is immoral” no one can say them nay; that just IS what’s immoral to them at this point in time and so, by all meaningful accounts, just what IS immoral for them. You don’t need to try to boil it down to all likes and dislikes or all values as you try to do to get there.
The thing is, though, the evolved concept of aesthetic preferences is THAT they are subjective. We evolved to consider them subjective and to find attempts to claim them objective misguided at best. So claims that they are really objective — despite them having some commonalities — seem to be violating that evolved concept in us, so someone wanting to claim that had better have some good arguments and evidence for that assertion.
The opposite is true for our concept of morality. Not only does the evolved concept critically include it being objective, it seems clear that it being objective was one of the critical aspects that allowed it to fulfill the purpose for which it evolved. Us just thinking that moral statements are universally and objectively true means that everyone will follow them and will follow them when no one else is looking, which is critical for the trust necessary to build a society. Thus, if you want to go by the evolved concept, it seems that objectivity is a critical part of that, and so if you want to argue that it is really subjective you will need strong evidence and arguments for that. And your arguments tend to be the one above that you use for aesthetic preferences: they just ARE subjective. In the evolutionary context, though, that’s not true, and we can use the evolutionary perspective to provide all the objectivity we need by appealing to the purpose that they evolved for as our determining factor for what it means to be moral. If we stick with that perspective, of course.
Social Contract theories can be subjective, as you should know because, well, the one I outlined IS that way. As stated above, it’s an implication because we need to be able to rely on people to follow the rules when no one is there to catch them breaking the rules, which means you at least want them to internalize them as objective.
That’s not important here, so don’t get caught up too much on that. What’s important here is that you don’t get what a Social Contract — in caps — really is. It’s not specific laws or social norms or mores. It’s the Contract that underpins all of society and any society at all. In the example you cite, the Electoral College rules AREN’T the rules that relate to the Social Contract. No, the rule there is this one: if you enter into a competition and agree to the rules of that competition, then you can’t decide that the rules don’t apply to you if you happen to lose but would win by other rules. It’s easy to see why this is a critical rule for any society: if you might be entering into a competition with someone who has more influence or power than you and so COULD change the rules to favour themselves at the end, then you can’t win and so have no reason to enter into that competition, and yet those sorts of competitions are necessary to best allocate scarce resources. So, again, we appeal to “She agreed to the rules and so can’t change them now that she’s lost!” rather than “The Electoral College says …” Even people who don’t like the Electoral College can and usually will still say that since it hasn’t been changed yet we have to follow the rules (the arguments against that were, you should note, mostly Utilitarian, based on how bad they thought Trump would be as President).
Well, not quite. You clearly expect that that argument will make an impact on them, and that they will change their views in response to it. If it doesn’t work, it seems to me that you will at best throw your hands up in the air and walk away claiming that they can’t be reasoned with morally a la Sam Harris instead of simply accepting that that is their view and being okay with that. In fact, earlier you were defending FORCING them to do things your way. That’s stronger than a mere descriptive statement, which would be more akin to how we treat actual aesthetic preferences: if someone doesn’t like the same music as we do, we shrug and go on with our lives; we don’t spend time trying to convince them how it will benefit them to like that music, or even force them to listen to the music we like. You definitely treat moral claims as stronger than that, which is why I started my reformulation where I did.
But, again, you are critically ignoring the difference between the Social Contract and the social contract, where the latter reflects laws and social norms. The Social Contract for almost all Social Contract Theories is unconscious, not codified, and encompasses all of the rules that are required for a society to function at all. If you don’t have or don’t respect the rules in the Social Contract, the society will eventually collapse. So, for example, the Social Contract will contain a rule against murder, because no society can survive if murder is allowed. What counts as murder and what counts as self-defense and so not murder and so on and so forth can vary from society to society, but there has to be a rule that says that you can’t be killed for no reason or else there wouldn’t be the trust that is required for a society to function (as an aside, Robert Sheckley’s “The Status Civilization” ably describes a society where murder is purportedly allowed, and yet the rules by which it’s acceptable are as strict as, if not stricter than, those in any other society, and you get huge increases in status if you survive the attempts, which gives the practice a rationale).
On the other hand, social mores like which fork to use for which course are NOT part of the Social Contract. You might have to HAVE rules of etiquette, but which ones are utterly unrelated to the Social Contract, and it says nothing about them. So the existing social rules, laws and norms are NOT what we consider when we consider whether something is violating the Social Contract, but whether or not the violation, if it stood and became common, would destroy the Social Contract.
To take your examples, it is just as easy if not easier to recast “what they really mean” into a Social Contract analysis as it is for you to recast them into your view. For slavery, the rather obvious argument is that the people who are in slavery will not be any better off inside society than outside of it, and so have no reason to participate except by force, which is not a reliable way to enforce overall detrimental societal conditions. Most of the responses to demands to end slavery end up trying to define the slaves as, in fact, being in their rightful place in society, and so as indeed part of it and thus benefiting from the overall social structure, with the counter generally being that there is no reason for them to fill that specific role, and so it is arbitrary and so can’t be used to justify saying that they are properly part of society. Thus, the argument that black people are people like any other and so deserve the same freedoms and opportunities as everyone else. For FGM, again, the idea is that there is no reason to commit this sort of body modification, which has an impact on women’s lives, without their consent; the reply is that there are reasons why it is better for society to do it; and the counter is that those reasons do not hold.
So, it would be easy to argue that in all cases of moral discussion, the argument is over whether or not the action risks destroying the underlying Social Contract, and thus it can be — and usually is — used to challenge existing laws and social mores, which is rather in opposition to your idea that it simply cannot do that.
But they don’t even seem to be foundational but instead seem to be inferences from or caused by moral judgements. As I said for aesthetic preferences — and you ignored — conceptually we can easily imagine people who accept beauty as beauty — ie have the same experiences that we have — but find those experiences repulsive … and, in fact, many children’s shows have relied precisely on that sort of concept for their villains. Well it turns out that those same shows also use the concept that their villains like immorality and despise morality. Yes, they are overly simplified and you are unlikely to ever meet someone who acts or feels that way in real life, but the concept exists and is easily graspable by even small children. Thus, it cannot be the case that like/dislike is the foundation of moral judgements, since conceptually they can come apart. So it seems more likely that the like and dislike follows from or is caused by the moral judgement, but the moral judgement comes first.
As I stated before, values at that level are not the issue here. All moral theories are compatible with that basic level of values. Also, as you yourself said, the purpose for which it evolved does not mean that that is what defines what it is, so again we don’t need to define morality by your nebulous notion of values to be able to use values in them or, in fact, value morality. And, again, it is possible for someone to value immorality and not value morality, so values can’t be foundational in the way you imply here.
But why should we consider morality a mechanism at all?
Again, philosophy has considered the evolutionary biology, and does so far more deeply than you do, as philosophers want to derive morality from evolutionary biology, while all you’ve ever done is try to show how yours is compatible with it. All moral theories that consider morality to facilitate social harmony are as compatible with evolutionary biology as yours is, even my “conceptual truth” model, which you’ve never really addressed.
Two problems:
1) You HAVEN’T explained it adequately. You’ve merely asserted that it was wrong because you couldn’t see how morality COULD be objective, DESPITE never being able to address my conceptual truth model.
2) This also applies to any evidence you might muster against the competing theories, which I have offered repeatedly.
But the big issue is indeed the combination of the two: you dismiss the evidence against you on the basis of your theoretical commitments, but don’t let any of the opposing theories do that for yours. This then leads you to have an invalidly self-selected set of evidence that you are willing to consider. No one who doesn’t already accept your theory will agree that your theory explains all the evidence and theirs doesn’t.
For example, I have no idea what evidence you think refutes my theory, as well as all the others out there.
Um, why wouldn’t they just use the same evolutionary explanation that you rely on?
Moreover, my theory explicitly does this: we recognize it from our experience with it as being a conceptual truth and all of those are objective.
And, finally, most theories don’t try to explain that because they are interested in what morality is, not what people THINK it is per se. If they could prove that morality was objective, figuring out how people figured that out without that proof would be a job for psychology and the like, but would not mean that their proof that morality really is objective was flawed if they don’t offer an explanation for that. Here it looks like you’re jumping on something that isn’t explained and trying to use that to discredit the opposing theories even though it’s not something that they need to explain or have any interest in explaining.
Hi verbose,
I’ll reciprocate by also giving a fast reply!
It’s inevitably necessary, to keep comments from getting way too long, that we each focus on salient points. Of course two people in a discussion will often disagree on which are the most salient points.
Yes I did see that you were saying it was fine to combine elements if done consistently, but I felt it important to explain why what I’m saying doesn’t fit neatly into philosophical categories.
But what you see as me “switching to a different model”, I see as explaining why what I’m saying is not actually inconsistent.
Some philosophers do indeed (and correctly) approach meta-ethics from evolutionary biology (Michael Ruse is one). But many, many philosophical accounts of meta-ethics do not.
I thought I had done so two comments ago. But here it is again. This right here is my account of meta-ethics, quoted from a recent comment:
“As I see it my stance is very simple. Here it is in a nutshell:
“Animals evolved aesthetic preferences. We evolved to like things that are beneficial (e.g. nutritious food) and dislike things that are harmful (e.g. rotten meat). Animals that developed a highly social way of life then needed to evolve aesthetic preferences about how we interact with each other. Thus we like some ways of interacting (e.g. loyalty, generosity) and dislike others (e.g. cheating, stealing). To express such value judgements we use the term “moral” for things we like and “immoral” for things we dislike. That’s all there is to it. Lastly, if anyone thinks that there is more to morality than subjective feeling, and that by moral language they are expressing objective truths, then they are wrong.
“That’s my account of meta-ethics, and it seems to me straightforward and simple to communicate.”
There it is, just above. That contains all the essentials of meta-ethics (though of course all the implications are not fully brought out).
It might be “shallow” and simple, but sometimes things have simple and straightforward answers! As for the evidence for it, well admittedly I’ve not given that at length, but it would be hard to summarise the vast amount of the biological evidence that is relevant.
OK, so here you’re making a substantive critique, that you think there’s a fundamental difference between moral sentiments and aesthetic sentiments. I admit that I don’t follow your argument. So let’s consider. We experience two different things using our senses:
A) Our tongue encounters a bitter herb.
1) Our ears hear “John has betrayed us, he’s told Fred everything”
B) Our brain processes and evaluates the experience.
2) Our brain processes and evaluates the experience.
C) We find the experience unpleasant and disagreeable; this evaluation can be summed up by the phrase “dislike” of the taste of the herb.
3) We find the experience unpleasant and disagreeable; this evaluation can be summed up by the phrase “dislike” of John’s actions.
D) We might express the dislike using aesthetic language: “That’s disgusting!”.
4) We might express the dislike using moral language: “The evil, traitorous rat!”.
I don’t see any contradiction here. Moral sentiments do indeed appear to be the same thing as aesthetic sentiments, though applied to different topics.
That’s not a problem for my stance. Moral sentiments are one type of aesthetic sentiment: they’re a subset of the superclass of aesthetic sentiments.
I want to maintain it because that’s what evolutionary biology tells us about meta-ethics (and humans are, after all, the product of evolutionary biology), and because it solves meta-ethics. And it’s not just my idea, it’s widespread in the relevant sciences, having been originated by Darwin.
So far I don’t find any of your rebuttals of the idea convincing, though to be honest I don’t yet properly understand what your counter-arguments actually are (this could be my fault of course).
I agree that it’s a subjectivist view, though that doesn’t narrow it down much since there seem to be a wide array of subjectivist views.
But you do have to consider what the phrase “that just IS what’s immoral to them” actually means.
Does it mean: “that is what they attach the label “immoral” to”? (If so, then it’s a tautology.)
Or does it mean: “that is what they dislike”?
Or does it mean something referring to an objective standard of morality, in which case it is no longer a subjectivist account?
First, despite the aphorism “beauty is in the eye of the beholder”, I’ve met educated people who seriously suggest that there are objective aesthetic standards. For example, I’ve been in conversations with people arguing that classical music is *objectively* better aesthetically than rock music.
But, agreed, generally we do indeed regard aesthetic judgements as subjective, and moral judgements as objective. So yes, I need to explain that. And, as I’ve proposed, I suggest that we evolved to regard moral judgements as objective because it made our moral sentiments more powerful and effective.
What’s the difference? Well, most aesthetic preferences needn’t be universal. We can eat different foods and listen to different music from our neighbours. But moral sentiments are about how people treat each other in society. Necessarily they need to have wider application, and in trying to promote that wider application it is rhetorically effective to promote them as being *objective* standards, not just subjective ones. Thus, the idea that moral dictates have objective standing won out in evolutionary terms, even though it is false.
Agreed.
Certainly, people **thinking** that they are objective is a critical part of it, but them actually *being* objective is not.
My arguments are (in short):
(1) It’s the only account that makes any sense from an evolutionary point of view.
(2) No-one has been able to make any objective account work (philosophers have tried for 2000 years, and as you say: “these problems are much tougher to solve than most people expected”; I think the truth is that philosophers are on a wild goose chase trying to make objective accounts work, and will never succeed).
(3) I’ve never met any account of objective morality that managed to answer even the most basic questions about what “moral” actually *means*. Yes, “immoral” means we shouldn’t do it. Why shouldn’t we do it? Because it’s immoral! But other than going round that loop, no philosopher has an answer. An account of meta-ethics in terms of aesthetic likes and dislikes is the only one that anchors morality in bedrock and thus solves meta-ethics, as opposed to floating it on arbitrary axioms.
No we can’t! That’s the naturalistic fallacy! An evolutionary purpose or function is not a *human* purpose. We have no moral obligation to align our actions with evolutionary programming.
One can indeed produce objective *descriptions* of human morality, but all moral imperatives and obligations are subjective.
One thing I’ve noticed about philosophers (if you don’t mind me saying!) is that they are keen on making irrelevant distinctions. Social contracts come on all scales, from a group of kids playing in the backyard and inventing rules of the game, up to national laws and international agreements. They are all just groups of humans agreeing how to relate to each other.
Moral codes work the same. Some moral codes can be local (groups of street kids in a South American favela have different moral codes within the group compared to how they relate to outsiders; for example, stealing within the group is forbidden, stealing from outsiders is fine). Other moral codes are national. Some are international.
No I don’t! In that paragraph you’ve *assumed* what attitude I *might* take and then accuse me of being inconsistent!
All I’m doing is explaining why other people’s subjective feelings do matter to us. As I said: “It is a fact, however, that people who behave in ways that their peers regard as immoral will likely not prosper, because their peers will likely sanction them. But that’s just a descriptive statement.”
OK, so I’m not understanding what philosophers mean by “Social Contract”. But all social contracts (capitalised or whatever) are just agreements between people deriving from people’s consent and advocacy. They are thus subjective.
Agreed, and such statements are *descriptive*. And, yes, one can make any number of objective *descriptive* statements about human nature and human morality.
And I would fully agree that the moral norms by which a society operates are necessarily moulded and limited by human nature. Again, that’s a descriptive statement.
But the real question, the real dispute, is about moral imperatives and obligations, and there is nothing external to humans that obliges us to act in particular ways if we don’t want to.
Replying to me: “This illustrates that likes and dislikes are not less important than the social contract, they are prior to the social contract, and thus more foundational.”, you say:
But to me, moral judgements *are* likes and dislikes.
No, actually, we can’t, that’s contradictory! “Beauty” is a value judgement (Oxford dictionaries: “… qualities that give pleasure to the senses”) and “repulsive” is a value judgement (“strong dislike or disgust”).
We can indeed imagine an adult and a child both eating an olive, both having the same sensory experience, and one finding it pleasurable and the other repulsive. Or we can imagine two people disagreeing over the “beauty” of a painting or piece of music, when they are looking at or listening to the same thing.
But no, it is contradictory for the child to think: “while I accept that the olive is delicious, I find it yucky”, or for someone to declare “I accept that that music is sublimely beautiful but I find it trite and nasty”.
Of course they could — without contradiction — think: “I accept that that music has qualities that other people find sublimely beautiful, but I find it trite and nasty”.
All that means is that the villain hates what the children like.
Ditto. The villain merely hates what other people like and likes what other people hate.
I don’t agree that they can come apart.
Again, I deny that that is possible. Of course one could value what other people regard as immoral and not value what other people regard as moral.
And indeed the only thing compatible with it.
What is your conceptual truth model in outline? As I see it, morality is about the real world, about real humans, and about how real humans behave. “Conceptual truths” that aren’t about the real world are not relevant to actual human morality. If they are real-world relevant, then they are real-world truths not conceptual truths.
Or, as I see it, use my theory to show why it isn’t actually evidence against me. I don’t agree, for example, that “people think that morality is objective” is evidence that “morality is objective” if one can produce an as-good alternative account for why they think that.
But if they do that then they’ve more or less conceded the game. My explanation for why people *think* morality is objective doesn’t require morality to be objective, so if people adopt my explanation then they’re left with no reason to suppose that morality is objective. After all, the intuition that there is an objectiveness to morality is really the only reason at all for thinking so.
How do we recognise conceptual truths? From rational analysis. It would seem to me that human morality and human attitudes about morality far preceded any rational analysis of conceptual truths. So this would not explain the origin of people thinking that morality is objective.
Well ok, but you’re offering “people think that morality is objective” as a big data point that needs explaining. So any good model should explain it.
The thing is that as stated, and in line with your comments about philosophy, it came across as you seemingly claiming that I was arguing against using different traits from different models, when I was actually arguing that the traits you’re pulling actually made for an inconsistent set of traits. Hence the strong reminder that mixing different views isn’t the problem, but mixing them so that they’re inconsistent is the problem.
The problem is that you don’t show how the traits make what you’re saying consistent; instead, when I point out what I see as being entailed by your trait from Model A, you simply respond that you CAN’T be saying that because you hold a trait from Model B. That’s what I mean by switching to a new model: if the trait from Model A is causing you problems, you adopt a trait from Model B to refute the problems, but rarely if ever show how the two traits actually follow from your view and are both consistent inside your model.
Ultimately, it seems to me that your model isn’t detailed enough to avoid these issues. You have a simple — and possibly simplistic — statement of it, but when we try to dig down into the details things seem, at least to me, to be at a minimum confusing, and you tend to simply return to the basic statement instead of exploring the deeper details.
It is interesting that the only philosophers you accept as understanding evolutionary biology properly are the ones who agree with you, almost as if agreeing with you is a precondition for your accepting that they aren’t ignoring the science you prefer. The reality is that philosophers AREN’T ignoring those things; they simply don’t think they prove as much as you and people like Ruse think they prove.
Yep, that’s the one that my response to was “Why do you think that’s scientific? It seems perfectly in line with philosophy to me.”. I STILL don’t see how that’s somehow from the scientific viewpoint when it’s a common one given in philosophy classes and seems to be pretty standard philosophy.
What I mean by it being shallow is that it touches on the surface aspects of evolutionary biology, and so when we look deeper at the entire mechanism things aren’t so clean. For example, you say that morality helps with societies, which are good for people, and that’s why it evolved, but you generally dismiss the specific things that drove morality to be selected in that way. As for the evidence, I can see that concern, but for the most part I’ve never seen you give it, even in your other posts. If there’s that much evidence, then I would have expected to actually see some of it, and to see how your theory follows from it, at some point in your discussions of morality. The evidence I HAVE seen, in my opinion, is not particularly indicative, nor does it particularly support your theory, despite your insistence that your theory is derived from the evidence and is the only one consistent with evolutionary biology.
Let me try to clarify this using your example, but note that taste is a bad example of this, for reasons that I’ll get into in a minute.
At this point, we’re having the experience that someone is typically referring to when they make a statement of aesthetic preference/experience. So someone having an experience of beauty could say “That’s beautiful!” at this point without directly referring to or in fact doing any of the rest of the progression. As I said, typically when we talk about things like beauty we are referring directly to the experience, and the listener translates that to the experience that THEY have when they experience beauty, which is a pleasant experience, which they like, at which point they infer that the other person liked it as well. But at this point it is perfectly possible for the speaker to find beautiful experiences unpleasant and so not like them.
Taste is a bad example because there’s a stronger link between the experience being pleasant and the words “delicious” or “tasty”, because if we were referring to the experience itself we’d say “bitter” or “sweet” or whatever. That’s likely because of the selective nature of our taste buds, which provides those experiences directly. We don’t really have that for vision or hearing.
So an emotivist idea of morality would stop here: moral or immoral is a specific type of experience, and we can infer from that, most of the time, whether the person approves of it or not. But that can come apart.
Once you get to this point, you’re making a specific value judgement. Dislike doesn’t just follow from the experience itself, or even the experience itself being unpleasant. Someone may approve of an unpleasant experience if their other values say that it is overall desirable for them. So you can’t directly link from the experience to the overall value judgement itself.
But the real argument here is that if someone says “That’s disgusting!” there is important content in that statement beyond whether or not they like or dislike it. That’s the experience that they’re actually having. So if you start from aesthetic preferences, you have to concede that the experience that they’re having is AT LEAST an important part of the content of their statement, and I submit that it’s the ENTIRE directly expressed content of their statement. That means that the Error Theory portion of your model is in conflict with the aesthetic preferences portion, because the aesthetic preferences portion insists that the statements have the experience as a crucial part of their content, while the Error Theory portion denies that.
But the problem is that if there are non-moral likes and dislikes, then you need to have some criteria for determining which likes and dislikes are moral and which are not. This is what you’ve been decidedly trying to avoid committing to, and it also means that the content of a moral statement CANNOT be merely “They like or dislike it”, because we’d need to identify it as a specifically moral statement first and THEN could conclude that it meant that they liked or disliked it in a moral sense rather than a different sort of sense.
That we tend to like moral things and dislike immoral things is a very common idea and is pretty much in line with all of the evolutionary biology. I fail to see why you need to reduce the ENTIRE MEANING of the statements to that, when so many theories — including evolutionary ones — find it much more interesting to look at what the specific properties of moral and immoral things are that make us like the former and dislike the latter, which your strong statement tends to avoid. And even from what you quoted it seems unlikely that Darwin made the link that strongly. As I stated, the view is almost certainly that morality co-opts our mechanisms for attraction and repulsion to make us automatically prefer the moral and avoid the immoral, but that’s hardly saying that all moral statements mean is that we like or dislike it.
This one. But why is it bad if it’s a tautology? It follows from the definition of subjective: if something is subjective, then it can only be determined by appealing to that state in a specific subject. So for morality, what we can justly say is moral or immoral for a subject is entirely what that subject holds is moral or immoral for them, no matter how that got there. Yes, it could still be created in them through external means and evolution, but if their moral views come apart from those no one can say that their determination of what is moral is incorrect. To do that, you’d need an objective standard (or an argument that their internal moral system is inconsistent).
Whether they like it or not is a separate question.
But recall your argument here. Your argument was to ask why morality can’t be considered subjective: aesthetic preferences evolved to benefit us and they are paradigmatically subjective, so why couldn’t morality be as well? And the answer is that the objectivity of morality, by your own argument, is an important part of the concept of morality and a big reason why it was selected for (even if you argue that it wasn’t necessary for it to be selected, which I dispute). So aesthetic preferences don’t have an issue with objectivity because they never were and never needed to be objective to be selected. Morality, however, had objectivity as a crucial part of it. That, then, means that to deny that you need a really good argument and evidence. Yes, it could be wrong, but it’s not something that can be dismissed blithely.
Any moral theory that says that morality will tend towards society friendly behaviour is entirely consistent with the evolutionary point of view. More on that later.
They’ve also been trying to make subjectivist accounts work for just as long, and evolutionary and various scientific/naturalist accounts for as long as the relevant science has been around, so this hardly counts only against objectivists.
There are NO objectivist moral accounts that define what is moral and immoral as “We should/shouldn’t do it”. What you’re referring to, as we discussed before, is the idea of “moral motivationism”, which is the idea that once someone understands what it means to be moral then that fact alone should motivate them to act morally. But this requires coming up with a meaning for what is moral first and then determining if it is sufficiently motivational. Your view can be seen as the other tack — again, similar to what Richard Carrier does — of starting from what motivates people and then deriving what is moral from that. The issue with this has always been that people can have deep-seated motivations to act in incredibly nasty ways, so you STILL have to curate their motivations towards more morally acceptable behaviour, which usually results in some sort of Enlightened Ethical Egoism. So your demands seemingly end up as you objecting that their morality isn’t motivating enough for you, and so they have no definition of what it means to be moral or immoral, when the real issue is that you are objecting that they fail the moral motivationism requirement.
And this argument, of course, doesn’t work on me because I REJECT moral motivationism, at least in part for these very reasons — it’s too hard to find a moral definition that will motivate everyone — but also because conceptually, if this were true, then you couldn’t have a truly amoral or immoral person, yet such a person certainly seems to be conceptually possible. Thus, moral motivationism is not a requirement for any kind of morality — objectivist or subjectivist — and so what we have is an additional problem of convincing people to act morally. Which we seem to have, so it isn’t really much of an issue for moral systems [grin].
What’s a “human purpose”, then, and how can your view align with it? Without simply tying it to the value system of the individual, and so reducing it to “whatever they think is moral or immoral just is what’s moral or immoral for them”?
Also, yes, this IS an issue for evolutionary theories: why should we accept that the concept of morality we evolved to have is actually right and reasonable and what morality actually is? But this also strikes against ANY evidence you use from evolution to support your claim.
I just finished watching Doctor Who — through Capaldi’s run, if you’ve watched the series — and so I have a good way to put the distinction here: Your argument here is like The Doctor showing up, and you insisting that he isn’t a doctor because doctors have practices, and when he replies that he’s not A Doctor but THE Doctor you saying that that’s an irrelevant distinction. It’s not irrelevant; it’s what we’re talking about [grin].
But the distinction isn’t irrelevant in any case. First, there is strong psychological support for this, in the moral/conventional distinction, which even those who are autistic learn without too many issues, suggesting that it’s a real phenomenon. Second, that’s been part of the philosophical definition from the start, as Hobbes argued that these rules are the rules that you’d allow a sovereign to kill you over if you break them, which is hardly the same as those lower-level social contracts. And finally, it seems to make sense. We don’t consider breaches of etiquette moral breaches generally, but they are part of a social contract of some sort.
Social Contract theory explains all of these results really well, by drawing the quite reasonable distinction between rules that are necessary for the survival of society and those that are useful but not required. It can even justify rules of etiquette being taken more strongly at times based on the idea that rules of etiquette and the like are required for a society to function but the details of them aren’t required, which explains the differing social contracts and details of them from society to society.
Note: Laws are not morals. We tend to believe that we have some sort of moral obligation to follow fair laws, but we can easily see that some laws are not moral. The same thing holds for lower-level social contracts; they aren’t morals per se. Social Contract theory easily explains this: for the former, following morally valid laws is a requirement for a society to function; the latter are based on the idea that if you agree to something you need to follow through on it. In both cases the specific laws or agreements THEMSELVES are not necessarily ones required for society to function, and can in fact violate the rules that are.
We had the explicit argument over whether it could be justified to force someone to conform to your moral views, and you were on the side that it could be (admittedly, that was a while back). For simple disagreements over aesthetics, the idea that it could be justified to do that is ludicrous. We also don’t think that social disapproval is an appropriate response to different aesthetic judgements, and yet you think it is. That’s stronger than a sort of loose disagreement over aesthetics.
So about the only thing you could be disagreeing with here is my characterization that you’d throw your hands in the air and walk away muttering like Sam Harris if they disagreed even after your persuasion, but given your posts on things like rights it seems a reasonable assumption to me.
And the reason I suggested Social Contract theory for you is that it COULD be subjective in the precise sense you needed: everyone has their own values and own place in society and will argue for what counts as violating it based on that. However, it also retains the possibility of telling someone they’re wrong about it actually violating the Social Contract, such as telling religious people that forcing them to conform to a law that violates their religious principles doesn’t violate the Social Contract, in an example that has never been raised in any of your posts anywhere [grin].
Which I’d translate to “Nothing forces us to act morally if we choose not to”. But this, then, has no bearing on the definition of morality itself.
But this would just be you asserting that likes and dislikes are foundational in your theory, which would be a tautology. That doesn’t work as a counter to the arguments I made that likes and dislikes aren’t foundational to morality at all.
So what you’re doing here is tunneling straight down to the specific value judgement of like or dislike and making that the entirety of the statement. As stated above, that’s not what the statements generally mean. The problem is that here you either have a set of completely undifferentiated values at which point talking about it being a beauty or moral value judgement seems meaningless, or else we can differentiate between different types of value judgements but then those different types can clash and so what we end up liking or disliking might not be due to the moral considerations at all (they can come apart).
In a sense, yes, but it is the case that they experience beauty in the same way as others and yet find it unpleasant instead of pleasant, breaking the inference we make from that experience.
The issue here is that even at the value judgement level, the villain agrees with everyone else about what it means for something to be moral and something to be immoral, but decides that they dislike the moral and like the immoral instead. This means that whatever anyone thinks morality is — even you — it is conceivable for them to agree with you about what morality is and still dislike the moral and like the immoral. It’s only if you define it as strict like/dislike that you can make it seem incompatible, but this then does not seem at all like what morality is, and so it seems like you’re really talking about something else at this point.
To use an example, base 10 mathematics is a conceptual truth, or set of conceptual truths. It can be used in the real world, evolution can select for it, it can be selected for based on the convenient traits it has — we have 10 fingers, for example — but it can be determined without having to appeal to real world examples or empirical data (as we see for other bases).
Morality is just like that. There is a concept of morality, and it can be used as a tool in our lives just like anything else. But all of its statements are objectively true, and would be true even if there were no moral agents using it or capable of using it.
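(As an aside, here is a minimal sketch, in Python, of the point about bases; the little helper to_base is just something I’ve made up for this comment, not anything from our discussion. It shows that the same abstract fact, 2 + 3 = 5, falls out of the formal rules in whatever base you care to write it in, with no empirical input needed:)

# Hypothetical illustration only: arithmetic in any base follows from the
# formal rules alone; no appeal to fingers or other empirical data is needed.
def to_base(n, base):
    # Render a non-negative integer in the given base by repeated division.
    digits = []
    while True:
        n, r = divmod(n, base)
        digits.append(str(r))
        if n == 0:
            return "".join(reversed(digits))

for base in (2, 7, 10):
    print("base", base, ":", to_base(2, base), "+", to_base(3, base), "=", to_base(2 + 3, base))

(In base 2 this prints 10 + 11 = 101; in base 7 and base 10 it prints 2 + 3 = 5; the same truth in different notations.)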
If I already have to accept your theory as true to agree that it isn’t evidence against your theory, it doesn’t work as a demonstration of that.
Well, no. The biggest reason is always that once we work out the entailments of any subjectivist theory it starts to look like something so completely unlike morality that it seems like we’re talking about something completely different. For example, it doesn’t seem like we should take it anywhere near as seriously as we do or as was necessary for evolution to select for it. The intuition that morality is objective is actually at the “Humans evolved from apes” level, and your response generally works out to be at the same level as “If humans evolved from apes, how come there are still apes?”.
Sure, but we could do mathematics without a deep understanding of number theory as well.
To get to the idea of conceptual truths, we need two things: language and logic. Language gives us the idea of words that have meanings, and so the idea that we can be using the same word to talk about different things; logic shows that it would be logically inconsistent if something were not true of the concept a word is referencing. These are, of course, fairly early developments in humans.
Well, it’s more that if you want to deny that base intuition, you need to have a good reason for doing so. Objectivist theories can explain why we think morality is objective by simply saying that that’s because it IS, and that we happened to get it right. Why we got it right without the analysis might be interesting for psychology and the like, but it’s not really important for the theory itself.
The same thing applies to you, though: if you could prove that morality had to be subjective, then I wouldn’t care for an explanation for people thinking it was. But so far, as far as I can see and recall, all you’ve ever said about that is that you aren’t satisfied with objectivist theories. That’s … not really good enough [grin].
Hi verbose,
I’m not saying that such philosophers don’t understand evolution, I’m saying they don’t see it as relevant to metaethics, and I think that’s where they start off wrong. As I see it metaethics can only be properly understood from the viewpoint of evolution. (I made the argument more fully here.)
It’s “scientific” because it addresses metaethics from the starting point of evolutionary biology. I’m not saying that it’s *incompatible* with philosophy (as I see it, such philosophy is best done in close connection with science), I’m saying it doesn’t start from typical philosophical analyses (though some philosophers have indeed taken this approach, starting from evolutionary biology).
Well, for example, in the SEP overview article on metaethics, the word “evolution” does not appear once. Nowhere does it even mention the sort of line that I’m taking. Indeed, to me that article is a good illustration of how philosophical-type analyses of metaethics just don’t go anywhere, exactly because they don’t start from the evolutionary perspective which explains what morals actually are and why we have them. The majority of philosophical accounts of metaethics (including all those cited in that SEP article) are similar.
The evidence, as I see it, is of three sorts:
First, a conceptual argument. From the evolutionary perspective, the stance that “moral sentiments are subjective aesthetic sentiments and that’s all there is to it” is the only one that makes sense. It explains what needs to be explained, and given that there is no reason to suppose that there is anything more to metaethics. Again, I made that argument more fully here.
Secondly, the literature on human psychology tells us that moral sentiments function pretty much like aesthetic sentiments. Now, admittedly I’ve not expounded on that evidence, partly because it would be better done by someone much more expert in psychology than I am, but that’s the impression I get.
Thirdly, there is the complete failure of any moral realist or moral objectivist model to attain any proper account of what realist/objective morals actually are. This is what one would expect given my first argument above.
It’s also worth noting that the “moral sentiments as aesthetic sentiments” explanation has a strong pedigree back to such notables as Charles Darwin, David Hume and Adam Smith.
Anyhow, on to the actual comparison of:
“A) Our tongue encounters a bitter herb.”
“1) Our ears hear “John has betrayed us, he’s told Fred everything””
Similarly, the person hearing the above is then having the strong emotional experience that we could label “being upset”, and they could exclaim: “That’s appalling!”.
Here we don’t agree. It is possible for the speaker to find unpleasant something that other people regard as beautiful, but it’s contradictory for him to find unpleasant things that *he* regards as beautiful. (Though it would be possible for someone to see both beautiful and horrific *aspects* in the same thing.)
But you can link from the experience to the *specific* value judgement. You’re right that that might then be over-ruled by other values, so that the overall value judgement is different. But I don’t see why this negates the basic point, all it’s doing is pointing to the complexity of human psychology.
Agreed. And I’m saying that there is exactly the same experiential component of moral sentiments. These moral judgements are not just rational judgements devoid of emotion, they actually are emotional judgements with an experienced emotional component. We only care about morals because we *care* emotionally!
When I suggest that moral sentiments actually are aesthetic/emotional sentiments I really do mean it!
Exactly!
… hmm, well again, human psychology is complex, there can be a lot of inputs and connotations to the statements we make.
This is a good example of where you read into my model things that aren’t there (likely owing to trying to map it onto philosophical categories). The only reference to “error theory” that I make is saying that *if* people think that their moral judgements reflect some objective standard, then they are wrong.
Nothing in my model denies that people are having emotional experiences as part of their moral judgements, indeed the WHOLE POINT of my model is that that is EXACTLY what IS happening. Now, if “error theory” denies that then I’m not an error theorist. Again, I’ve used the term only as in the previous paragraph. If there are other connotations to “error theory” then ok, but I’m not adopting those aspects.
No, actually, I don’t! As I see it, there is no fact of that matter as to what is a “moral” sentiment as opposed to some other sort of aesthetic/emotional sentiment. If I were advocating a moral realist or objectivist position then yes I’d need to clearly demarcate “morals” from other things, but my anti-realist position says there is no clear distinction. This is a feature not a bug.
Evolution would have no way of caring whether an aesthetic/emotional sentiment was a “moral” one or some other type, and thus there would be no distinction that matters. Humans might then find it useful to use different terms for different types of emotion (and I can give a *descriptive* account of how humans use the term “moral”), but there is no clear distinction here, all such judgements are simply patterns of electrical activity whizzing around neurons in an intertwined way.
But I’m not saying people like or dislike something “in a moral sense rather than a different sort of sense”, I’m just saying that people like or dislike something!
(Again, I could give a *descriptive* account of how humans categorise different aesthetic/emotional sentiments; for example it seems true that humans tend to label them as “moral” *if* they are — falsely — under the impression that the sentiment reflects an *objective* standard, whereas they tend to use the label “aesthetic” where they accept that the sentiment is subjective. Again, this is not saying there is *actually* a distinction — the whole point of my argument is that there is not — this is merely a statement about how humans think.)
All such questions would be ones of human psychology, rather than of meta-ethics.
But this is claiming that there is some sense in which things are “moral” or “immoral”, independently of whether we like or dislike them. That’s a moral objectivist stance, and that’s what I reject — it’s also not what Darwin or the evolutionary perspective leads to, which is that moral sentiments really, really are nothing more and nothing less than aesthetic sentiments.
If all that sentence means is “what a subject labels as moral or immoral and regards as moral or immoral is what that subject regards as moral or immoral”, then yes agreed.
My reason for disliking that phrasing is that the phrase “what *is* moral for a subject” suggests more than that, with connotations of some reference to an objective standard and truth values about that standard. Which then gets contradictory. If by “what is moral or immoral for a subject” you mean merely “what that subject holds to be moral or immoral”, then it’s clearer to say the latter.
But I’m rejecting the whole concept of applying truth values to someone’s moral claims and thus labelling them “correct” or “incorrect”.
What I’m trying to do is explain why most people *think* that morality is objective (when far fewer do so about other aesthetic judgements). I do think that people *thinking* that is part of our evolutionary programming.
As in my previous comment, there’s a good reason to suppose that such thinking is an evolutionary adaptation, given that moral codes by necessity need to apply widely, whereas other aesthetic preferences do not.
No moral realist or moral objectivist account is compatible with the evolutionary perspective, since they give no account of (1) how *evolution* knows about that objective morality (which it would have to, if objective morality is part of our evolutionary heritage), and (2) how *people* know about objective morality (which is also something that a moral realist needs to explain).
What I was asking is an even more basic question, which no objectivist account has answered: what does it *mean* to say “X is moral”?
A “human purpose” is simply a purpose that a human has, deriving from our values, goals and desires.
And my reply is: It is misconceived to ask whether the “morality we evolved to have is actually right”. That’s a moral-objectivist question, presuming that there is such a thing as a “right morality” and a “wrong morality”. I reject the whole concept of applying truth values to moral claims.
All there is is aesthetic judgements that humans make, whether they like or dislike something.
There’s a whole continuum of social contracts, and we tend to apply the term “breach of etiquette” to less important matters and “moral breach” to more important ones.
Societies can actually function with a whole lot of things we’d regard as immoral, including slavery, capturing women in war and making them your slaves, selling children into slavery, human sacrifice, et cetera. We know this because a lot of societies have indeed functioned that way. So a “requirement for a society to function” is a pretty ill-defined concept. Function how well?
I doubt I said it was “morally justified” (if that’s what you mean by “justified”) since I reject the whole concept of truth values and objective “justification” applied to morals. However, de facto, it is *possible* to impose one’s views on someone else, and one may *desire* to do so.
As I see it, yes moral sentiments are aesthetic sentiments, but they are aesthetic sentiments about how people treat each other. Because of that, we do want to impose them on society generally.
If we dislike Tom stealing from Fred, and don’t want to live in a society that allows that, then we try to impose that preference on society.
Again, asking for *justification* for this is a red herring. There is none (and we don’t need it), if by that one is asking for *objective* justification.
If by that you mean they have the same experience, but find the experience unpleasant whereas others find it pleasant, then yes, agreed.
But, to me, the concept of “beauty” entails liking it. So your sentence seems to say: “it is the case that they experience beauty in a pleasant way and yet find it unpleasant instead of pleasant”, which is contradictory.
That only makes sense if the villain is (like most people) a moral objectivist, and so thinks that there is something that is objectively moral, and yet dislikes that thing.
My reply is that, like all moral objectivists, the villain is wrong about there being something “objectively moral”. So, this doesn’t pose a problem for my position, part of that position is that moral objectivism is wrong.
No, it’s not conceivable. To “dislike the moral” they need to (as above) have an objectivist conception of morality, and if they do that they’re not agreeing with me.
This could open up a whole new arena of discussion, but, in short, as I see it mathematical axioms are adopted as being “real world true”. The only sense in which maths is “true” is that it models the real world. One cannot even prove self-consistency *within* the mathematical structure itself (Gödel).
So I don’t think that maths can be regarded as a “conceptual truth” independently of the real world. Further, it can only have *relevance* to the real world if the “conceptual truths” correspond to real-world truths.
But if morality were like maths, then it would have no standing or applicability or normative force except to the degree that it mirrored what was already there in the empirical world. (Again, I don’t think that the fact that human-made maths models the empirical world very well is merely a coincidence.) Thus morality cannot have real-world normative force as a mere “conceptual truth”.
To put this another way, in the same way that maths is founded in axioms, I could declare as a conceptual moral axiom that “slavery is moral”. I could, alternatively, declare the axiom “slavery is immoral”. So what? Neither of these gets me anywhere on their own.
You mean that subjectivist theories don’t give you an objective morality, and your intuition tells you that there must be an objective morality? So if it’s not objective you declare that it’s not morality?
To me, the evolutionary-subjectivist account that I’ve been advocating *does* look exactly like morality as it really is in the real world.
Which would imply that we cannot have conceptual-truth morality prior to language and logic. And yet, I’d bet that humans did. Indeed we can see a rudimentary form of morality operating in other social animals such as chimpanzees, and I bet their language and logic is not that advanced.
That’s not sufficient. Something being true is not adequate to explain why we know it is true. There are lots of statements that will be true, yet we don’t know that they are true (did Alexander the Great stroke a dog three days before his eighth birthday? who fired the arrow that killed Harold at Hastings?). And how do we know the conceptual truths of mathematics, other than that we adopted axioms and logic that modelled the real world?
Yes it is, this is a crucial defect for moral objectivist theories. Just saying they are “conceptual” then raises the whole question of how they operate in the real world such that we then know about them.
This is certainly going to be the last comment you get from me before the holidays, so let me wish you happy holidays … whichever ones you celebrate.
But as I have reminded you on many occasions, philosophers don’t reject evolution or science or anything because of some arbitrary starting point. They reject such approaches because they don’t seem to work and have serious problems. Specifically, most evolutionary theories can either use evolution to define what it means to be moral or not. If they use evolution to define what it means to be moral — i.e. our evolved intuitions or concept of morality just is what it means to be moral — then they run into massive issues with that evolved sense and purpose clashing with our actual moral intuitions, and with the question of whether our moral senses could be maladaptive. As you yourself point out, the purpose for which morality evolved is not the same thing as the definition of it. But as soon as you abandon evolution determining the definition of what is moral, then it doesn’t seem like starting from evolution gets us anywhere. If it’s not going to tell us what is moral in and of itself, then all we need is to be loosely consistent with evolution, or at least come up with the traits that evolution selected for. So starting from it might lead to interesting insights, but certainly doesn’t seem to warrant the importance of making or having evolution-based moral theories.
Let me pause here and talk about a couple of things from your post to address your specific argument:
The thing is that since evolution is not sentient or intelligent or in any way volitional, this doesn’t just apply to moral facts, but instead to FACTS THEMSELVES. This then implies that any mechanism that we evolved to determine facts would be equally subjective and equally separated from the actual physical facts. Since, if we recall, Plantinga’s argument is that if evolution is true we have no reason to think our evolved cognitive faculties reliable, the same logic you apply to moral facts would, in fact, end up proving his theory right. I, uh, don’t think that’s what you wanted [grin].
So the better answer — and the one more consistent with evolutionary biology — is that we develop through various means certain capacities, and evolution then selects for those capacities that benefit our survival and reproduction. You can argue for the reliability of our cognitive faculties on the grounds that their being at least mostly reliable is an important part of why they evolved, and so, while evolution does not select for truth or for facts, if our fact-finding faculties didn’t find facts reliably they wouldn’t have facilitated survival. And the same thing can be said about our moral capacities: this capacity developed in us through various means — generally, the starting point would be the ability to be self-sacrificing and to do things for abstract principles rather than for intuitive or conscious self-interest — and evolution selected for it because it facilitates societies. This has the benefit that it doesn’t require evolution to somehow “invent” morality before it was useful enough to be selected, which is always an issue for evolutionary stories.
Thus, just like physical facts, moral facts existed before evolution selected for them, and so are separate from evolution. This also allows you to maintain your idea that the purpose morals evolved for is not their definition, as the two are separate.
Another issue:
The problem is that our evolved sense, in fact, greatly REJECTS this sort of reasoning. It is NOT the case that we think that if our view of what is moral would get us punished by society we should conform to society’s definition. Instead, we are encouraged to dig in our heels and insist that we are right and they are wrong. If the former followed from the evolved sense of morality, then why did evolution program the precise opposite notion into us, the one that would work against our own evolutionary success? But if what we have is a prior capacity that evolution co-opted, this makes perfect sense: that aspect is there because it was there first, and that “defect” was outweighed by the benefits of having it, the most likely benefit being that people will still act on it even when no one is able to punish them if they cheat (which is another nail in the coffin of the “being punished is the important thing” idea).
Objectivist theories can handle this easily: there are just right and wrong answers and morality pushes us to act as if our answers are right. Pure subjectivist theories can explain this: this is just what we think is moral and since we think morality is important we are willing to defend that (maintaining that importance is the issue for those theories). Even the Social Contract theory I outlined can handle this, as it can rely on the person really feeling that from their own perspective the Contract is being violated. But a theory starting from inside evolutionary biology doesn’t seem to be able to handle it anywhere near as well.
Having covered this in a number of classes and readings … it doesn’t. There ARE aspects that align with that, but the problem is that most of them are better explained as the emotional response following FROM the moral judgement rather than producing it. We can easily find entirely reason-based moral theories that don’t have significant emotional content and yet seem like they are properly doing morality — even if we don’t agree that they have the right principles — and we constantly understand that our emotional moral reactions can be wrong and need rational correction. The only way you can make that consistent is to tunnel down to calm passions and specific value judgements using them, but at that point you are talking about something completely different from the aesthetic experiences that you link morality to and so can’t use them as examples anymore. So psychology has some aspects that might align with you, but doesn’t support you as well as you think it does.
But the same people saying that those don’t work have ALSO found similar reasons why the evolutionary arguments don’t work. That you think you can work around or ignore those problems doesn’t mean the problems necessarily go away. As an example, I find your claim that objectivists can’t even say what it means to be moral at a basic level unconvincing, but that doesn’t mean that there isn’t a problem there that needs to be addressed.
I’m … really not sure of the relevance of this here. If someone says “That’s beautiful!” and is only talking about that initial experience itself — and it contains the properties that are associated with beauty — then why is the other person bringing in any idea of being appalled? I talk about the moral aspect later, so this is either early or confusing [grin].
Remember, here we’re only talking about the initial experience. So we have a notion in our heads about what makes an experience beautiful, just like we do for what makes an experience salty or sweet. So, they’d be in a case where they are having an experience and it agrees with their model of beauty that it is a beautiful one. The only way you can insist that they would have to find it pleasant if they find it beautiful is to insist that part of that model always has to include pleasantness, meaning that it is part of the definition of a beautiful experience that it be a pleasant one. But for that to hold it would have to be inconceivable to have a beautiful experience and find it unpleasant, and here is where our villains return: their EXPLICIT concept is that they agree with the heroes on what makes something beautiful, and yet find the experience unpleasant. They are not finding something ugly that others find beautiful; they find it beautiful and find it unpleasant. Could this simple conceptualization be wrong? Certainly. But since it’s a concept that even little children intuitively understand, you’re going to have to do a lot of work if you want to nevertheless argue that it’s inconceivable. Little children conceive of it all the time [grin].
The point is that you have repeatedly claimed that the expression IS just like/dislike. Here, if you want to proceed directly from the experience you CAN’T end with the overall “like/dislike” assessment, because that can clearly come apart, as I argued repeatedly: you can see something beautiful and yet dislike it. If morals work the same way, you can find something immoral and still approve of it as part of the overall value judgement. Thus, you’re going to need to have specifically aesthetic or moral value judgements if you want to maintain your chain and reduction to like/dislike; it’s going to be the case that I like or dislike it in a specifically moral sense, not just in a generic sense.
This clashes with what you say later:
You need that specific moral sense because we have specific experiential contents that align with your specific aesthetic experiences that you are using to make the link here. The initial experience of something beautiful will critically contain identifiable elements that we can use to identify it as a beauty experience and thus at least a beauty value judgement, and the same thing will have to be true for morality if your theory is true. Thus, we WILL be able to judge whether they are judging it in a moral sense as opposed to a beauty sense. This is on top of the issue raised with generic value judgements.
Back to where I left off:
This followed on from where I outlined the components of your view and allocated them to specific theories. Which you agreed with, and which you’ve asked me to do on many an occasion. So, again, you’re accusing me of an invalid move either for doing what you asked me to do or for assessing your views in a way that you agreed accurately captured what you were saying. So this is an invalid example of me “reading in”. You agreed that your view said that there was no content in a moral statement beyond like/dislike, and _I_ identified that as an Error Theory idea because it denied that moral statements contained specifically moral content, which, unless I’m misinterpreting you, you’ve said pretty explicitly in this very comment reply.
Then the content of the statement includes those experiences, which is beyond a mere like/dislike, and since those experiences, to map onto aesthetic experiences, have to be specific to morality just as beauty experiences are specific to beauty, there is specifically moral content in those statements. If you accept this, then we can drop the constant comment you make about there being no content beyond like/dislike and no specific moral content in the statement. But this is what I asked you to drop because I saw no reason for you to keep it, and you insisted it was important. And yet it certainly seems to contradict the aesthetic experiences model that, to be honest, really does seem to be more critical to your view than that aspect does.
The problem here is that this follows from my comment that we can identify likes that are not moral, to which you replied, and I quote:
So this would be an example of you flipping models, on the one hand saying that your statements have moral content and then replying later that they don’t, which can’t both be true. A pure subjectivist model can escape this by saying, as we’ve talked about before, that we can find moral and non-moral values in people, but that the moral ones always follow from what that person considers moral. Since you DO seem to hold that view, then it seems to work, but then you have the issue that if I make all of my moral decisions against any kind of aesthetic experience — I deny they mean anything and ignore them in making my moral value judgements — then that has to be a valid move on my part, but then that contradicts the idea that moral statements are aesthetic preferences. It also runs into the issue that if I intellectually define what is moral but then decide that I like the immoral and dislike the moral, then moral statements, to me, come apart from the like/dislike mechanisms. Again, you’d need this to be invalid by definition, and again the villains who accept the definition of moral that the heroes are using but decide that they dislike it work against that, since again it would have to be inconceivable and even little children can conceive of that scenario.
This is one of the issues with pure subjectivist views, BTW: it’s really hard to nail anything down because all someone has to do is say “No” and you end up having to argue that they know their own minds less well than you do or else are lying, which doesn’t facilitate discussion.
A more standard emotivist view like Prinz’s works better here, because it can trigger on the specific moral emotions themselves and note that these have different triggers in different people. Still, that seems to have more identifiable properties than you generally allow.
Skipping quite a bit here because I’ve either already pretty much covered it or will in a minute.
So let’s say that if we start from evolution this is what we come up with. Why should we think it right? Not in the sense of the evidence — which I’ve touched on — but in the sense of why its following from evolution means that it’s the way to go? Evolution has been wrong a lot with these sorts of things, like with religion. So starting from evolution doesn’t give it any kind of epistemic primacy that means we shouldn’t keep considering very carefully whether or not it got it right.
Well, recall that I’m not a Social Contract theorist, and so am definitely going to agree that it would have problems. However, Social Contract theorists WOULD be able to credibly argue that it’s not simple survival that determines this. A society, for example, could maintain slavery through enormous force, but that would lead to an unstable society that was constantly under the threat of a slave rebellion. So slavery can’t be a fundamental part of the Social Contract due to how it destabilizes societies.
(My personal biggest issue for Social Contract theories is that our concept of morality, again, seems to accept that we should destroy a society if we consider it immoral, which goes against their idea. There are patches for this, but not very good ones [grin].)
That wasn’t what I meant by justified. I mean rationally following from the principles of your argument, and to me it fails there because we can’t rationally justify it for aesthetic preferences in any way, so if moral judgements were just like them and not objective we wouldn’t have that argument either. Only if we think morality is objective in some way could we make that move.
So, still, even accepting it as possibly reasonable means you are treating morality as more important than we treat aesthetic preferences, which was the point there: the Social Contract theory lets you do that without abandoning the form of subjectivism you seemed to want.
This means that we can — and must — identify specifically moral sentiments by their specific morally relevant identifiable properties. Otherwise, this move doesn’t work and you can’t justify treating moral sentiments as stronger than other aesthetic sentiments.
Well, I had two answers to this:
1) If “normative” is a conceptual truth about morality, then it would.
2) It is possible to say that all conceptual truths are normative, in that they are all statements of the way things “ought” to be if something is to be an example of that concept.
Your responses tended to fall back on motivation to disprove that, but again I don’t think that just because a statement is normative that someone will be motivated to do it or will have doing it as their highest motivation (which, yes, does raise the question of why we should care about or act morally if it in and of itself isn’t motivating. I’m not that worried about that at the moment [grin]).
To which my reply is that you are making the mistake, in the maths context, of declaring “2+2=4” as an axiom, when it is really derived FROM the base axioms of base-10 arithmetic: in that case, what a number is, what “+” does, what “=” means, and so on. In the same way, “Slavery is immoral” is NOT an axiom, but has to be derived from the axioms of the moral system. My argument in the Social Contract section about us determining it by what is necessary for a society to survive is an example of this. In my own personal moral system, it would follow from the axiom that morality is about preserving agency, and slavery does not properly preserve agency. And so on.
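To make that concrete (a sketch using the standard Peano-style definitions of the numerals and of addition, which are my choice of illustration rather than anything spelled out in the comment above): write 1 = S(0), 2 = S(1), 3 = S(2), 4 = S(3), and define addition by a + 0 = a and a + S(b) = S(a + b). Then “2+2=4” falls out as a theorem rather than an axiom:

$$2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4.$$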
No, it’s that it seems like we can no longer use it for what we use it for and what it evolved to do, since if it is subjective like taste in movies we don’t use that to define society or to set rules that everyone feels bound to follow even when no one is watching them.
Of course, I’d deny that they are properly moral at all because they can’t take an action specifically because it is moral. Most of those examples rely on empathy, but if we look at mind-reading and simulation theory we can see that they probably unconsciously try to predict what they would do, run a rudimentary simulation of the situation, feel bad, and act out of empathy to stop their own bad feelings. But that’s hardly morality unless you’re an Egoist, since they may not care at all about the animal they are helping. In general, it seems that animals do these things to stop themselves from feeling discomfort, which isn’t really properly moral.
You may disagree, but the evidence doesn’t settle this either way, since we are pretty certain that they don’t understand morality and so we’d be arguing over the philosophical question of whether that’s required or not.
What you’re talking about here are statements that have a truth value but where there’s no clear way for us to determine what that truth value is. If an objectivist could prove that morality was objective, then we’d have that clear way and so this would be irrelevant. So it would only be a historical artifact about why we glommed onto that idea, like the ancient philosophers who got atoms right before they could demonstrate that they existed.
We WOULDN’T have known about them, but once we developed the capacity there are only so many ways you can go with it so we got some things right.
Hi again Verbose,
The idea that there is some standard that amounts to “what it means to be moral” is a moral realist/objectivist one. It’s a wrong question to ask, it’s a category error. It’s like asking “what it means to be beautiful”. One can answer with a wholly *descriptive* account of what humans *regard* as beautiful, but there is no such thing as “being beautiful” in an abstract/objective sense, and it makes no sense to ask whether humans might have “got it wrong” in what they regard as beautiful. It’s the same for moral sentiments.
So, what you point to as supposed “serious problems” for the evolutionary model are simply not problems at all. The flaw is in trying to map it onto moral objectivist ways of thinking and asking moral objectivist questions.
It certainly does not give you moral prescriptions, agreed. But to ask it to is again a moral objectivist question. It supposes that there is an objective moral oracle that can tell us “what we should do”. The whole point of my model is there is no such thing, and that it is a category error to ask that question.
But, while the evolutionary account does not give ethical prescriptions, it does solve meta-ethics (completely and straightforwardly). And that’s what it is trying to do.
Because the idea that there is something that is “moral in and of itself” is just a misunderstanding of what morals are. It’s no more sensible than asking what is beautiful “in and of itself”, since “beauty” is not a property of a thing itself, it’s how someone else regards that thing.
The importance is in then undermining all moral objectivist theories, and thus solving meta-ethics. The problem is that moral objectivists look at evolutionary theories, see that they don’t give moral objectivism, and so conclude that evolutionary theories don’t solve the issue, but they do; the flaw is in supposing that moral objectivism is true and being unwilling to relinquish it.
Not so, there’s a big difference between facts and “moral facts” in this regard. Suppose I’m mistaken about being able to fly off a cliff top, or mistaken about lions and snakes being harmless, or mistaken about needing to store food for winter, or if I think that 2+2=5 and try building aeroplanes. I’m likely to get myself killed, and be less successful in leaving descendants. So, evolution will — to a very large extent — program us to have a model of the real world that does align factually with the real world. This is why Plantinga is wrong.
But “moral facts” are not like that. Suppose I act immorally. What happens? Does a thunderbolt strike me dead? No. Is there any actual force imposing these “moral facts”? No. So there’s no way that evolution has any traction on these moral facts. Even if these “facts” are floating about in some abstract, conceptual Platonic space, in what way do they have any effect on what happens on Earth? And if they don’t then they’re irrelevant to evolutionary programming.
What *does* have an effect on our survival and reproduction is how other people act towards us, and thus what they *think* about morality. Thus people’s *subjective* sense of morality is relevant to evolution, and evolution will have traction on our *subjective* moral sentiments — which is why they could and did evolve.
But “objective moral facts”? There’s a complete disconnect between those and anything that actually happens in the world. So they are evolutionarily irrelevant. So my argument holds for “objective moral facts”, but not for facts in general and not for our subjective moral sense.
Yep.
“Benefitting societies” is a group-selectionist argument. But “group selection” does not work. (This has been discussed at length by evolutionary biologists, and yes there is a minority of biologists who argue for group selection, but the consensus is against them.) Secondly, even if this did work, it would *not* give you an objective morality, it gives you “whatever helps this society to propagate” (there’s an ought/is disconnect between the two).
The moral objectivist can’t just fudge this point. Let’s start from “objective morality” being a “conceptual truth” floating around in Platonic space. How does this then affect who lives and dies, who has children, which societies are successful, etc? No moral objectivist has ever produced a workable account of this.
So how is objective morality “useful”? How would it actually be selected for? What *is* selected for is “what helps people survive and reproduce”, and (if you’re a group selectionist) “what helps societies be successful and propagate” — but unless you just declare by fiat axiom: “we define ‘moral’ as ‘what helps societies be successful and propagate’”, I don’t see how this gets you anywhere at all. It’s what Sam Harris does, but declaring that axiom begs the whole question.
But there’s a huge difference. Physical facts are descriptive, “moral facts” are normative. No-one has given a workable account of how a normative “moral fact” gets selected for. (Unless one goes for the question-begging axiom: whatever gets selected for is a moral fact.)
I have not said that morally “we should conform” to what society regards as moral. The whole point of my stance is to reject such “moral should” prescriptions.
It is, of course, true that conforming to societal moral standards might be tactically advantageous. If you live in a society like Pakistan that thinks it moral to lynch blasphemers, then you might decide it is tactically wise to avoid blaspheming. That is not the same as saying that “morally you should” avoid blaspheming.
How?? How does morality “push” us in any way? Of course we have a conscience, but our conscience is the very essence of something that is *subjective*, it is an account of what *we* think and how *we* feel. So how does one get objective morality having any traction in the real world?
Yes.
It handles it exactly as you’ve just said. We have evolved to have feelings and values about topics that we call “moral” topics. Since they are feelings and values, they are important to us, so we act on them. Being important is what feelings and values *are*!
I suggest that you (and others) analyse things that way because you are an intuitive moral objectivist and so think that that’s how things should be. So, again, all you’re really doing is reporting your intuitive moral objectivism.
It actually works just as well if we regard moral sentiments as primary and moral theories as commentaries on those sentiments. Actually, it works much better, because that explains why none of these moral theories actually works that well when actually analysed!
If these “reason-based moral theories … seem like they are properly doing morality” then how come no philosopher has managed to persuade all the other philosophers of this? Why are there still lots of competing models? Saying “because it’s hard to work out” isn’t a convincing answer. My answer is because the theories are mere commentary, they are rationalisations of underlying moral sentiments, attempting to map them on to moral objectivism. But since moral objectivism is false, the theories only work semi-well.
Why? Why are you now distinguishing between “calm passions” and “aesthetic experiences”?
The only reasons you’ve pointed to so far are that evolutionary arguments don’t give you moral objectivism. You’re right, they don’t. That’s not a flaw of the evolutionary arguments.
The second person is experiencing being appalled in the same way that the first person is experiencing finding something beautiful. That’s the comparison I’m trying to make.
Yes, that’s what I’m maintaining. And it would seem to me that dictionaries back me up (“beautiful”: “Pleasing the senses or mind aesthetically”).
Yes.
Well, I regard that as inconsistent. Of course a pantomime villain can be made to say contradictory things such as “I hate beauty”. (Though it’s not inconsistent if interpreted as “I hate what other people regard as beautiful”, and likely that’s how it is interpreted by most people.)
I’d suggest that the little children are interpreting it as: “I hate what other people regard as beautiful”.
Again, I disagree.
Since I’ve disagreed above I needn’t follow you to this conclusion.
Again, I don’t see that I do.
Well OK, I would, as a matter of psychology and neuroscience, and given a sufficiently advanced brain scanner, need to be able to see differences between primarily visual aesthetic experiences, primarily aural ones, primarily tactile ones, and primarily moral aesthetic experiences. And yes, different brain circuitry will be involved in all of these cases. Given a sufficiently advanced brain scanner you could indeed classify aesthetic experiences (likely in fuzzy-edged, overlapping categories). But I don’t need any neat, clear-cut conceptual classification along any moral objectivist lines.
OK, agreed, I’ve said that moral statements do not contain content other than like/dislike aesthetic content.
But then you said to me: “That means that the Error Theory portion of your model is in conflict with the aesthetic preferences portion, because the aesthetic preferences portion insists that the statements have the experience as a crucial part of their content, while the Error Theory portion denies that.”
It seems to me that I’m being consistent. I said that moral statements: (1) have like/dislike aesthetic content, and (2) don’t have other content. You identified this as Error Theory. But you then said that the aesthetic model and the error-theory model are in conflict, since error theory denies that statements have aesthetic experience as part of their content.
I confess that I don’t understand your argument here (usually I do follow your arguments, so this is unusual). It does seem to me that my statements are consistent: that moral sentiments are aesthetic sentiments, and that moral statements are reports of these.
I’m baffled as to why you’re distinguishing between like/dislike and aesthetic experiences. Surely like/dislike *is* an aesthetic experience?
Well, what I actually said was that there is no OTHER content other than the aesthetic like/dislike experience that amounts to a “moral sentiment”.
As above, there will be specific types of neural activity that amount to different types of aesthetic experience, and so different aesthetic experiences will feel different, but that does not mean that moral statements have OTHER content, because the specific moral aesthetic experience IS the content of the moral statement that I am pointing to.
As above, different aesthetic experiences will involve different brain circuitry and will feel different. Stroking a cat will feel different from putting a hand on a hot stove. In the same way, moral aesthetic experiences will involve particular neural patterns, and with a sufficiently advanced brain scanner we could classify aesthetic experiences into different (fuzzy, overlapping) classes. We could then point to some of these and say: “these are the ones for which humans tend to use the label ‘moral’”.
But so what? Why is any of that a problem for my model?
That seems pretty sensible to me, and in line with what I’m saying. I’ve never asserted that all aesthetic experiences feel exactly the same and are literally indistinguishable.
There is no such thing as “is right” morally! There is no such thing as “it’s [morally] the way to go”! These are moral realist/objectivist questions. They are category errors.
It’s like asking: OK, so evolution has programmed me as a male to find women beautiful, but why would we think that they actually *are* beautiful? It’s not a sensible question. What the heck would that question actually be asking?
And the fact that my model cannot answer such questions is not a defect of my model, it’s a defect of those questions and of the moral-objectivist presumption that such questions are properly posed.
100% correct! Except you then need to go further and realise that asking whether evolution “got it right” is non-sensical. Literally, there is no sensible meaning to that question (and the supposition that there is, is a moral-objectivist delusion).
So the Social Contract theorist needs some metric by which to judge societies. Any such choice would be arbitrary (declared as an axiom by fiat) or subjective (depending on someone’s personal advocacy). Either way, you can’t get to an objective moral scheme down this road, it has all the same defects as utilitarianism.
You are absolutely right. Rationally following from the principles of my argument there is no way in which I can arrive at a “justification” for trying to impose my morals on society.
Is that a defect of my model? Well, if you’re a moral objectivist, you might *want* there to be justification for society’s morals, and you might think that there *should* be such justification. But, as ever, moral-objectivist intuitions are wrong. There is no justification of the sort you’re asking for for any moral scheme, nor can there be.
But does the above stop me advocating for my moral positions? No it doesn’t! I don’t advocate them because they are “justified” (or because I am “rationally justified” in advocating them), I advocate them because I *want* *to*! Because I *like* those morals!
I’m not wanting to treat moral sentiments as stronger than other aesthetic sentiments. Just for example, many parents would regard their love of a child as more important than moral sentiments.
If a parent needed to do something grossly immoral to save the life of their child, would they do it? Most parents would! (Of course they’d then rationalise: “well, under those circumstances, it’s not immoral …”.)
To me these answers are question-begging, in that they declare the normativity by fiat without doing anything to explain why it is there or what it even means.
So, it is a “conceptual truth” that “we should do X”, and the normativity, the shouldness is also a “conceptual truth”.
I — quite literally — have no idea what that concept means. What does it mean to say that “we should not do Y” is a “conceptual truth”? Why should we not do it? Because it violates a normative conceptual truth? But that’s still no answer, it’s just repeating the claim. To me these claims literally have no sensible meaning.
Well, being pedantic, in a logic system like maths, one can choose which are the “axioms” and which the derived theorems, but that’s a side issue.
OK, but I could declare a set of axioms that then entailed “slavery is moral”. Who is to say those axioms are better or worse than the ones leading to “slavery is immoral”?
But societies accepting slavery can and have survived (at least as long as any other sort of society). And anyhow, what’s the justification for choosing “survival” as the metric?
But axioms declared by how *you* feel about things sounds suspiciously like a system that is subjective at root! (And there’s nothing wrong with that!) If you were asked to *justify* those axioms, not in terms of what *you* *want*, but instead in terms of basic principles — and thus arrive at a properly objective system — how would you do it?
The same enterprise regarding maths arrived at the conclusion that it could not be done, and that one could not justify maths ex nihilo. You’ll run into the same problem with morality being a “conceptual truth”.
First, one could say the same about humans! Second, your conception that there is such a thing as a “proper morality” that is not just a more developed version of the above, may be wrong. And third, social animals do indeed care about other individuals in their troop. That’s quite obvious from the way they behave.
Most humans don’t understand morality either! Indeed, I’d suggest that all moral realists don’t understand morality. 🙂 (Others might say the same about non-realists!) But that doesn’t stop us all being moral animals. (Which, by the way, is another argument that morals don’t derive from conceptual and rational analysis, but from our natures, from our values and feelings.)
But this answer then says that all of our evolved nature and intuition is NOT about any supposed moral objectivism, but instead is a development of the Chimpanzee-like social behaviour.
Given that, one is left with very little reason to suppose that there is an objective morality. It certainly seems superfluous and unimportant (especially if it is merely conceptual truths in Platonic space, of little relevance to our actual lives), and all it does is throw up conceptual puzzles that the combined might of philosophers has not solved.
Sigh. This is always one of the biggest stumbling blocks in this discussion, and on pondering it I think it’s just a common confusion, mostly on your part. We really need to make clear the difference between talking about meta-ethics and talking about ethics, which are interestingly terms that YOU use and I don’t.
See, the discussion here is purely at the meta-ethical level. What it means to have a discussion at the meta-ethical level is essentially to ask “You know, when we talk about morality, what the heck do we mean? What are we talking about?”. Then we have meta-ethical conclusions or theories that specify the details of what morality is like at the meta-ethical level. Whether morality is subjective or objective is a meta-ethical question, so at the level of meta-ethics both subjectivists and objectivists have to have ideas on that. Even those who think that moral statements are meaningless have that as part of their meta-ethical theory.
At the meta-ethical level, there are things that are true or false, right or wrong, correct or incorrect, or at least better or worse. Moral subjectivity is a meta-ethical claim, and so cannot be invoked at the meta-ethical level to avoid having to justify meta-ethical claims. And since people don’t agree on those meta-ethical claims, those claims have to be justified with evidence and reasoning.
YOU have a specific meta-ethical theory, and that’s what we are primarily arguing over. You are explicitly saying that some meta-ethical claims are wrong — specifically, objectivism and realism — and think that your meta-ethical theory is the best one for morality. I don’t agree. Thus, we need to settle this with evidence and argumentation and reasoning, and since that’s at the meta-ethical level you can’t retreat to subjectivism to avoid providing that.
And that was the point I was making here wrt evolution. You can start from the evolutionary perspective and try to map that directly to or use it as the main justification for your meta-ethical theory. This would be the cleanest and most beneficial way of using it. The problem is that doing so leads to odd statements and contradictions, so at some point we all end up dropping things from that account to have things make sense. Even YOU do this when you pointed out that the purpose for which morality evolved is not the definition of it (and the definition of it is what meta-ethics is supposed to be doing). But if you accept that we need to drop things from the evolutionary biological account of morality, then if someone disagrees with anything that you want to maintain from it they can rightfully demand that you justify keeping that element while you drop other elements, requiring an independent justification for its inclusion.
As an analogy, it’s a lot like using the Bible to justify claims about what God is supposed to be like. The ideal way to do that is simply to point to the Bible and say “That’s what the definition of God is!”. But doing so leads to problems and contradictions. So then we can pick out some aspects and drop others, but then it is entirely reasonable to demand a standard for what to include and what to exclude, which means that we need to have a pretty good idea of what it would mean to be God or what we mean when we say “God” before doing so, which makes using the Bible to define that pretty much useless.
That’s my view of the problem with starting from evolutionary biology. If we could just go pretty much directly from that to our meta-ethical theory, that would be great. But since that leads to problems and contradictions, we can’t. But then we need an independent way to determine what aspects of that to include or exclude, and so using it as a justification doesn’t really work. Thus, we’re left with treating it like intuitions: if you come up with a meta-ethical theory that violates our intuitions or the evolutionary biological account, you need to a) take a long hard look to see if you’ve made a mistake and b) have a really, really good independent argument for why your theory is right nevertheless.
None of this has anything to do with moral objectivism. Moral objectivism/subjectivism comes later. The problem is that unless someone is willing to simply take evolutionary biology at its word wrt meta-ethical theories it can’t be used as a convincing justification for anything that an opponent would disagree with, since even the person advocating for it has to concede that some aspects of it have to be dropped for a proper meta-ethical theory.
So unless you have really good reason not to — and outline that reason to me — whenever I talk about right or wrong or correct or incorrect or about what it means to be moral you should assume that I’m talking about the meta-ethical level, and you have to be able to justify statements at that level or else this entire discussion would be utterly pointless: no one will accept your meta-ethical THEORY without justification, even if they might follow the ethical statements — like “Slavery is immoral” — that follow from it.
Just to highlight what I said above, I’m saying that it CAN’T solve meta-ethics because in order to do so it would have to justify a meta-ethical theory — even a subjectivist one — and it can’t do that alone because its meta-ethical implications are at times utterly improbable and contradictory. I’d go into that in detail but as I already pointed out you YOURSELF denied that by rejecting the social purpose as being the definition of morality and thus as the meta-ethical theory, which then leads to all the problems I pointed out above.
Even if it did provide sufficient justification to undermine objectivist ethics, that wouldn’t solve meta-ethics because objectivist theories aren’t all there is. You’d still have to settle between subjectivist and Error Theories, for starters. In short, undermining what you see as your main opposition doesn’t, in fact, justify your own theory, as you, as a scientist, should be well aware.
Sure. This was my fix for your error wrt morality: what we have is a capacity, exercising that capacity provides a benefit, and evolution selects for that benefit. It’s possible that we could get that benefit another way — Plantinga’s idea of simply believing that which it does benefit you to believe — but reliable cognitive faculties are easier, even if sometimes we end up believing things where it would be better for us if we believed otherwise. The same thing applies to morality (more on this later).
But you need to remember your own argument. Your own argument was that we know that morality has to be subjective because evolution could not select for morality itself. The problem is, again, that evolution can’t select for truth EITHER. And so if you concede that evolution could select for something other than truth but we could have a truth-finding capability, then you have to concede that it could select for something other than morality and yet we could still have a moral-finding capability. At which point that key argument becomes completely irrelevant; it doesn’t matter if evolution could select for morality when it comes to determining if we have it or what it is, just as that doesn’t matter for truth.
So all we’d need is to find appropriate consequences for not acting morally that evolution could select for. Since you think that morality DID evolve, there have to be some, and in fact you’ve already outlined them, and we don’t need things falling from on high or direct and immediate reactions, any more than we need that for most of the truths our faculties provide.
I didn’t say “Benefitting societies”. I said “Facilitating societies”. And we have both already agreed that the evolutionary benefit of morality is that it aids in forming and maintaining societies, and individuals inside societies out-compete individuals outside of them. So I have NO clue why you are calling YOUR OWN THEORY invalid because it’s group selection [grin]. My suspicion — and this will come into play more when we get into the argument that you don’t get while claiming that you usually understand my arguments — is that for most of this argument you’ve been translating my arguments into something that you’ve seen before and responding to that instead of to what I’ve actually said. This is why you think you understand me most of the time while I can only recall all the times I’ve had to respond in frustration because you didn’t.
Anyway, morality evolved because it facilitates societies and that benefits individuals. Thus, if someone wants to promote a meta-ethical theory that wouldn’t be social-friendly they’d have to have a pretty good reason to hold that regardless. Fortunately, pretty much all meta-ethical theories — even Egoism — are social-friendly.
But this, then, knocks out that main argument of yours: there is no reason for evolution to have to select for morality specifically when it can select for the social-friendly side effects of it. Thus, we can have morality without having to have evolution select specifically for it, just as we can have truth without having to have evolution select specifically for it. And your main premise upon which the rest of the argument rested was that evolution can’t select for morality specifically and that that meant it had to be subjective feelings.
Thus, you have a choice here: either abandon your argument or else accept that truth is also just subjective feelings in the same way.
Actually, whether it’s normative or not is a meta-ethical question. But again normativity is not something that evolution selects for. The argument would be that normativity is a property of morality that is required for it to be social-friendly in the right way. But we’d have to settle the argument over normativity first, wouldn’t we? Do you think that morality is normative? You’ve been … unclear on this at this point.
At any rate, morality’s purported normativity isn’t relevant to the reasons why your argument doesn’t work. In fact, it makes things worse. If we accept that morality is normative, then it seems improbable to say the least that subjective feelings could give us proper normative statements. And if we don’t accept that, then you have no way to differentiate it from truth statements.
So all you’d have is your exceedingly strong take on moral objectivism requiring an exceedingly strong moral realism. Since I have explicitly denied the latter, I need no Platonic realm or any real objects at all, and your argument here, again, risks making mathematical entities like numbers the exact same sort of thing and so requiring there to be real numbers floating around out there. If we don’t need numbers to be real in that way, we don’t need morals to be real in that way either.
Allow me to remind you of your actual argument here:
And my counter was that if this is true, then why would our evolved notion of morality critically include the precise opposite principle? If people will punish me for acting morally when they disagree, presumably that will impact my having descendants in a negative fashion. So why, then, is morality insistent on us continuing to act on our morality in the face of those punishments and fears of punishment? Strictly from the perspective of evolutionary biology, this is problematic, since it doesn’t look like it could have evolved if it has to reject this thing that you think so critical to its evolutionary success. Outside of it, however, it seems reasonable: evolution CO-OPTED an existing capability which is still superior to the alternatives, just like us believing truths that could hurt us is still better than the alternatives. In the case of morality, that insistence on acting morally in the face of punishment ALSO allows us to be confident that everyone will still act morally even when they have no fear of being punished.
But, again, it following from, at least, your evolutionary starting point is too improbable to countenance.
Our moral capability creates these statements as objective statements, and as such we act as if they are just true and false and so don’t give in when people insist that we are wrong. Just like mathematical truths like 2+2=4.
Since we are talking about our psychology here, though, arguing against our intuitions is arguing against our psychology and thus undercutting our own argument. It turns out that at the psychological level and inside psychology moral sentiments DON’T act very much like aesthetic sentiments. Imagine that.
To put it bluntly, you’re just plain wrong about the psychology here. The closest we can get are emotivist theories but they have to toss out a lot of psychological facts to make that work, and they only get to emotions, which are not the same thing as aesthetic sentiments psychologically.
Here you are both confusing meta-ethics and ethics again AND taking my statement as stronger than it actually was. At the strict psychological level, entirely rational theories are possible and compatible with our psychology. They are not ruled out at that level, as the evidence doesn’t strongly support emotivist over rationalist theories. At the meta-ethical level, rationalist theories also seem like reasonable candidates. Sure, emotivists think they’re wrong, but rationalists think emotivists are wrong, and the psychology doesn’t settle it. And my entire point was that contrary to your assertions the psychology doesn’t settle it.
And PLEASE stop relying on the tired “How come no one agrees on this if you’re right?!?” argument. That also applies to all subjectivist theories, all evolutionary theories, all Error Theories and in fact every moral theory everywhere. Given this, the most reasonable conclusion is that this is a really, really hard question and no theory has had any more “success” than any other.
Um, I did so from the beginning. That was my point about not being able to refute Stoicism by appealing at least directly to “calm passions” as passions or emotions and saying that that would undercut their view that emotions aren’t allowed in morality, but instead pointing out that that isn’t what they meant by passions or emotions, and so they can accept the role of those calm passions in reasoning without undercutting their theory. The same thing applies to any rationalist counter to any emotivist theory. Calm passions are not going to settle anything here, especially for those who reject emotivist theories based on moral emotions, since calm passions AREN’T specifically moral emotions (even to Hume).
The point here was that the field that finds objectivist arguments unconvincing finds evolutionary arguments EQUALLY unconvincing. Your theory gets no benefit from appealing to what that field finds unconvincing, and your attempted appeals to internal psychological explanations for them rejecting them are equally unconvincing.
I’m still not sure of the point of it, though, since as appalled is a negative feeling and beauty generally a positive one they are distinguishable, and so aren’t the same thing, and so it doesn’t seem to get you anywhere … especially since in my description — which you have to go along with, at least for the sake of argument, to, well, understand my argument — the beauty experience I’m talking about is higher in the chain than the appalled one would be. For the comparison to work, I’d have to agree with your conception and reject mine, but you’re asking me to do that before even ADDRESSING mine which, well, is not all that likely to work [grin].
You don’t appeal to dictionaries to settle ANY conceptual issue. That’s true in science and equally true in philosophy. Especially since it would be descriptive and I have already conceded that most of the time humans find beauty pleasing. I’ve denied that it is necessarily and conceptually true so much so that it’s logically inconsistent for it to be otherwise, and you need that to be true for your case to hold. And you have never actually addressed that.
Where is your evidence that that is how people interpret it? The shows are often quite clear about the intended interpretation. Skeletor, for example, I believe once describes the feeling of love with the normal traits that we feel and then says that he hates that feeling. It’s not pleasant for him. And everyone understands the idea that that could be possible, and it’s certainly how I’VE always interpreted it. And remember that I introduced it as if that interpretation would be obvious, so you have strong evidence that this is indeed how _I_ conceived of it. And if I can conceive of it that way, then it isn’t obvious that it’s logically inconsistent. And I can stop there, because that’s all I need to obligate you to show that despite it not being obvious that it’s logically inconsistent it nevertheless IS logically inconsistent. Quoting the dictionary and insisting that it is logically inconsistent are not going to cut it.
Except the conclusion here was that you, yourself, had said that the overall value judgement can depend on things that aren’t relevant to morality and so someone can find something immoral and yet, after consulting all their values, approve of it. This was just setting out that basic point that you, again, already accepted before taking it further to my view that you can also find something moral and yet not like it simply because you don’t like the moral.
So this has nothing to do with the statements above it and is something you already agreed with, and yet you are disagreeing with it. See why I get confused [grin]? If you at least acknowledged a potential confusion or fleshed it out this would be better, but simply saying “No” doesn’t facilitate anything here. Sometimes length is desirable [grin].
You do, because even you admit that someone can find something beautiful or moral and yet as part of the overall value assessment disapprove of it. This means that you need separate beauty and moral like and dislike value judgements that you can separate from the overall value assessment of the entire situation, even IF beauty and morality conceptually have to map to A like assessment. Which I deny.
This is a flawed but depressingly common view of what we can do with neuroscience. How are you going to find those differences in the brain scans? Well, what you’re going to do is show them things or get them experiencing/doing things and then seeing what lights up, correct? But you’d have to show them beautiful things and see what lights up, etc, etc. How are you going to determine what those things are? Either you’re going to have a set and proven idea of what it means for something to be beautiful, or you’re going to ask them what it makes them feel. For anything subjective, you’re going to be limited to asking them. But if that’s good enough to do the brain scans, that would be good enough for us to determine what someone is experiencing without bothering with the brain scans. And if it isn’t, then it can’t be used to justify the brain scans either.
Thus, for brain scans to say anything interesting, we have to have a way to settle the differences in the things before doing the brain scans. As such, that’s never going to be how we determine such differences. And for anything subjective, the determining factor is always going to be what THEY feel at the time, at which point their self-report will always be the best evidence you have. If you have done your brain scans, show them something, and they report having a different feeling than what the brain scan would suggest, are you going to deny their self-report and tell them they’re wrong about what they’re feeling?
So neuroscience is a red herring here. All of its knowledge depends on things we already have access to, or at least as much access as we are going to get, and access that is far easier to obtain than by doing brain scans.
1) I never asked for one.
2) I have no idea what clear-cut conceptualization you think moral objectivist theories demand for the difference between different types of aesthetic experiences or between moral value judgements and other types. This is starting to come across more as a standard shot than as any kind of meaningful argument.
Aesthetic statements have the aesthetic experience ITSELF as a critical part of the content of the statement: one of if not the defining purposes of the statement is to express that the person is having that specific experience. Even if whether or not we like it is always part of the statement, you can’t leave out the experience itself. And once we have that experience in the statement, then we can easily differentiate all aesthetic statements from each other. But you tried to deny that with what I called the “Error Theory” part that denies any specific content beyond like/dislike. There is ALWAYS specific aesthetic content in the statement that differentiates all the classes. The same thing, then, would apply to moral statements: whatever it is that makes them moral statements or have a moral aesthetic or whatever would always be in the statement, always be a critical part OF the statement, which would then let me easily distinguish moral statements from other ones. This would make that part of your theory at the very least superfluous, generally confusing, and probably contradictory depending on if you ever wanted to use it as an argument as stated (which you have). So, yeah, either meaningless if you don’t really mean it as stated or contradictory if you do.
Not in any way SURELY. You have a … very odd idea of aesthetic experiences that is going to prevent ANYONE from understanding what you mean. Because a simple like experience isn’t a beauty experience, and we can have like experiences that aren’t beauty experiences. As stated here, we’d end up with like experiences being a different type of aesthetic experience, like beauty experiences are different from taste experiences. But that doesn’t seem to be what you want, as what you seem to want to do is classify beauty experiences AS like experiences, and as a subset of them. But then we can distinguish all of our like and dislike experiences, and saying that the content of a statement is “like/dislike” is just plain wrong, because it doesn’t actually SAY anything interesting if that’s all we were saying. So either way leads to confusion and ultimately to you having to accept distinctions you’ve been mostly denying for the entire discussion.
So, yeah, confusing [grin].
So, by this … the content of the moral statement is, in fact, its moral content. Um, why in the world did you spend so much of this conversation, as far as I can tell, using the “like/dislike” discussion to deny the EXISTENCE OF MORAL CONTENT instead of simply saying that the content of a moral statement is the moral aesthetic experience that the person is having?
Because you spent the entire rest of the discussion denying that such distinctions ACTUALLY EXISTED in your model! That’s why I carved that part out and called it your Error Theory model, at which point you now seem to have abandoned it and declared that you never believed it at all! But the only reason to reduce the statement to like/dislike IS to deny those sorts of distinctions! Again, is it any WONDER that I’m confused?!? What is the point of reducing the statements to like/dislike if you are just going to reintroduce the distinctions in a strong way later?
To be fair, this might relate to your constant insistence on not having clear conceptual distinctions, but again I still don’t understand what you mean by that or what that does for you.
1) All they need is stability/survival, which is neither arbitrary nor subjective, as I already pointed out.
2) I never insisted that it be objective, since one of the reasons I suggested it for you was because it would allow for the sort of subjectivity you seem to want while still having some kind of non-arbitrary features to argue over. It’s not my preferred view, remember?
It’s a problem for your model because you say that moral sentiments are just another sort of aesthetic sentiment but we don’t treat them the same way, and in fact treat them in a way that’s inconceivable for other aesthetic sentiments. Someone attempting to impose their aesthetic sentiments by force on others is critically WRONG about aesthetic sentiments and what they entail. Why, then, would doing that for moral sentiments not be equally critically wrong? Your model needs to explain that or else it seems improbable that moral sentiments could be just another type of aesthetic sentiment.
At least emotivists tie it to strong emotions/passions, which at least are strong judgements demanding action. You don’t do that and so have an issue here.
1) So why do you advocate for them so strongly but not for your other aesthetic sentiments?
2) Knowing this, why should I take any moral pronouncement of yours seriously? Take any specific moral area where we disagree. All you’re saying is that you personally like it or want it a certain way. But you have no power over me. I have no reason, in this model, to care any more about your moral likes than about what you like to eat or what you find sexually attractive or anything else. Thus, I can immediately dismiss any moral argument you make. Thus, if you ever make a post talking about something being immoral, I should immediately comment on it with “I don’t care what you think immoral, and nor should anyone else”. As I have said repeatedly, this would make all moral discussions pointless, with everyone instead appealing to other arguments beyond like/dislike to deal with those issues. An ignominious end for morality, something that was so critical to our evolution and societal development.
And if they do that, on what grounds do you say that their action is not therefore moral because of the rationalization? What more could there be? If they rationalize their love for their child as being moral, isn’t that what it is from their perspective?
Here you seem to be asking me to justify the normative force of a conceptual truth while also saying that you don't know what normativity would be. I see no need to justify the normative force of my conceptual truth until you tell me what it was you MEANT by normative force.
As a strict OUGHT though, it’s pretty well justified: if X is a required trait for a concept, then anything participating in that concept will have to have it, and if it doesn’t then it isn’t participating in that concept. For morality, it would allow us to say that if you don’t have X then you aren’t a moral person. That’s a pretty good simple approximation of what we thought the normative statements of morality did for us.
You’ve drifted back to motivationalism again, while ignoring it in this entire comment. My view does not say that you will be motivated to act morally once you know what it is. It says that once we know what it means to be moral then if you know that and choose not to follow that morality you will be an amoral or immoral person. If someone is okay with that, morality’s not going to do anything else to force them to not be that way.
It’s also not true of base 10 mathematics, so it’s not just pedantic, but is wrong.
But what’s important here is this: it is pointless to define 2+2=4 as an axiom in any mathematical system. If it followed from the base axioms, then it’s superfluous and can be dropped for efficiency. If it doesn’t follow from the base axioms then it either has no place in that mathematical system — the system was defined for a different purpose than producing those statements — or else you would have to define EVERY similar statement, which as there are an infinite number of them would be impossible and pointless, so you should just add some general base axioms to generate those and be done with it. If it contradicts the base axioms, then adding it makes the system inconsistent and so you need to remove it or the base principles to have a consistent concept. Either way, that’s just not the sort of statement you want to make axiomatic.
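To make that "superfluous" point concrete, here is an illustrative sketch (assuming a proof assistant such as Lean 4 with its standard natural numbers; the sketch is not part of either of our actual arguments). A statement like 2+2=4 gets checked as a theorem that follows from the definitions, rather than adopted as an extra axiom:

```lean
-- Illustrative sketch only (Lean 4, built-in natural numbers assumed).
-- 2 + 2 = 4 need not be added as an axiom: it already follows from the
-- definitions of the numerals and of addition, so an extra axiom for it
-- would be superfluous.
example : 2 + 2 = 4 := rfl

-- A statement that conflicts with those definitions cannot be proved;
-- adding it as an axiom would just make the system inconsistent.
example : 2 + 2 ≠ 5 := by decide
```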
The same thing applies to statements like “Slavery is immoral”. We never want to make that axiomatic, but instead start from a bunch of meta-ethical premises that lead to that as a conclusion. And it would be better if they were evidenced in some way and not merely axiomatic, in a similar way to how you try to justify mathematical axioms (although not necessarily empirically).
Finding these sorts of meta-premises is what philosophy is all about.
The Social Contract doesn’t exist outside of a society, so it doesn’t apply there. Thus, Social Contract rules that destabilize society are self-defeating, and thus create a logically inconsistent Social Contract. See the above discussion for why that’s an objectively bad thing to do [grin].
And societies based on slavery — or, at least, the BAD kinds of slavery — will always be bad because the slaves would not have joined that society if they had known, and are arguably better off outside of it than inside of it. Again, conceptually, that violates the very basis of the Social Contract itself and so is self-defeating, and so always wrong. Only if slaves are better off as slaves than outside of society can ANY justification for it work (which usually means some kind of indentured servitude, at most).
You asked why "Slavery is immoral" couldn't be an axiom, and how we got to it if it wasn't. I was giving examples. When I talk about the personal level, it's NOT based on my feelings but on the meta-ethical theory that I currently favour, so it's not the same thing at all, and so this is irrelevant.
I don’t recall anyone coming to that conclusion, especially since pure mathematics doesn’t even HAVE a need for that sort of justification ….
My statement here is this: you can't be said to be acting morally unless you take the action BECAUSE you understand that it is the morally right action to take. Animals are incapable of that, humans ARE capable of that, and any case where you aren't doing it for that reason is at best amoral. This can be controversial though, so the key take-away is that you can't get from animal empathy to morality directly without further argument.
Hi again Verbose,
Agreed.
Yes, agreed.
Agreed, but as I see it I’m not “retreating” to subjectivism, I’m expounding on what a subjectivist view of meta-ethics amounts to.
That is indeed what I’m trying to do.
I don’t see that it leads to contradictions. I do agree that it leads to statements that sound very odd to an objectivist, such that the objectivist intuitively thinks that they can’t be right. As I see it I’m not dropping anything at all from the evolutionary account.
But that's not "dropping" anything. It's just that the evolutionary account doesn't give us a definition of a common-language word (common usage as reported in dictionaries does that).
I’d say that the *explanation* of ethics and morals is what meta-ethics is supposed to be doing. Meta-ethics shouldn’t redefine the terms too far from their common-language meaning.
So far I don’t accept that we need to drop anything from the evolutionary biological account of morality, so I don’t need to justify keeping some parts but dropping others.
I still think that we can! Can you summarise why you think we can’t?
I don’t understand why you say that. The status of morality, whether it is objective or subjective, is surely one of the key issues for meta-ethics?
Agreed. That’s what I’ve been attempting to do.
Please do go into that, because so far I don't accept it! That is, I don't accept that the implications are "contradictory". Counter-intuitive? Yes, I grant you that.
Again, I see the "definition" of morality as simply being the common-language meaning of the word. Meta-ethics is then about the *explanations* that underlie that. So, my meta-ethical theory indeed does say that "social purpose" is why we have evolved to have moral sentiments (= feelings of right and wrong about how we treat each other).
But we can only evolve reliable cognitive faculties if they are selected for, in the sense that errors are punished (you get eaten; or starve; or fail to reproduce). That’s why we have cognitive faculties that are pretty good at negotiating around the world we evolved in.
To make the same claim about *morality*, however, you then need to explain how moral errors are punished, why someone making objective moral errors would be less likely to leave descendants. Why is that?
The answer cannot be along the lines “because other people will dislike it and kill him”, because that is selection dependent on people’s *subjective* *feelings* about morality, not selection dependent on any objective fact about morality.
Yes it can! If the fact is “there is a lion in the bush”, then the error “there is not a lion” can get punished by being eaten. Now give your equivalent account of how knowledge of objective moral truths gets selected for.
Exactly. So how does that work?
Let’s suppose a warlord conquers a tribe, kills all the men, takes the women as concubines, and has lots of children by them. That’s “immoral”, agreed? So how is that behaviour then selected against?
And again, any answer to that question that depends on how other people might act is not sufficient, because they act on their subjective feelings about morality, not on any objective truth about morality.
No, what I’ve outlined is how a subjective-feelings-morality could evolve. That is not how an objective-truth-morality could evolve.
I’m not using a group-selectionist argument, I’m saying that the individual benefits (not just that the group benefits). [“Group selection” implies that the group benefits, even if an individual doesn’t.]
OK, agreed.
That’s exactly it! It does indeed select merely for the “social-friendly side effects”. Except they aren’t “side effects”, they’re all there is.
Evolution is selecting only for the “social-friendly side effects”, which depend entirely on people’s subjective aesthetic feelings! That’s all there is to it.
Evolution does, in addition, also select for correctly spotting lions in bushes, etc.
I’m saying that it can’t select for “objective truth” morality, since it has no traction on that concept. It *can* select for “subjective feelings” morality, since people can dislike and punish you or like and reward you.
A lion eating you is real. A snake biting you is real. A person punishing you is real. Evolution can select for all of those things, and that includes truth — it selects for correct judgements about lions in bushes and it selects for correct judgements about whether you'll be punished for certain acts (where the punishment is determined by people's *subjective* *feelings*).
But nowhere here is an account of how evolution selects for “objective moral truth”.
Exactly.
Go on, then, make the argument.
As I see it the only normativity, the only “shoulds”, are instrumental shoulds deriving from human values, desires and goals. Moral language is a report of our values and desires, and instrumental oughts follow from those.
But there is nothing external to humans that morally obliges us to act in particular ways. There is no objective normativity.
If by that you mean objective normative statements (as opposed to instrumental ones deriving from subjective preferences) then I agree entirely.
Moral statements are not distinguished from truth statements (= descriptions) by being normative, they are distinguished from truth statements by being reports of values. They are declarations of aesthetic judgements.
First, very few people do, in fact, continue to act in the face of punishments and fears of punishment. A few do, yes, but they are very much the exception.
Second, human psychology is complex and multi-faceted. It may be that such stubbornness is selected for to some extent for complex reasons.
Third, as I've accepted, humans *wrongly* suppose that morals are objective (our moral-realist intuition is pretty strong). The error could lead to people clinging to their moral stance even when they will suffer for it. If only a few do that, then it is not in contradiction with the idea that moral intuitions *overall* evolved.
In order for evolution to co-opt any “capability” to do with objective moral truth, it needs to have traction on objective moral truth. I’ve not seen even an outline proposal for how that works.
That explains how an *illusion* of objective morality could evolve. Again, though, that only depends on how people *think*, it does not depend on the actual facts of objective moral truths.
But again, what is the consequence of getting “objective morality” wrong, such that it is selected for?
Getting maths wrong can get you killed. (If you’re attacked by 2 enemies to your left, and 2 to your right, and you kill 3 of them, and then think you’re safe and relax, because you think that 2+2=3, then you could be dead.)
Now give me an equivalent noddy explanation of how getting objective moral truths wrong could get you killed. (Again, any appeal here to subjective human preferences is invalid.)
You’ll need to expound on that a lot to convince me.
You can appeal to dictionaries for what concepts actually mean! And I deny that somebody regards something as “beautiful” if they find it displeasing. That, to me, is a simple contradiction. They might recognise that other people find pleasing what they find displeasing.
I’ve no problem with someone finding displeasing what others find pleasing. This is no different from a child not liking chocolate.
Yes, someone can over-rule one value with another value. So someone might tolerate something they regard as immoral in some circumstance (e.g. they might steal to keep a child from starving).
Yes, they can disapprove of stealing in general, but approve of “stealing to save a starving child”. Or they can approve of loyalty to a friend, but not be willing to lie for a friend who commits murder.
This just means that judgements can be complex. I don’t see that there is any great difficulty for me here. Indeed it’s the moral objectivist, who needs to apply truth values to bald statements, who has more difficulty here.
Yep.
Yep!
The only thing I need this for is to make the point that different aesthetic experiences can be distinguishable.
Primarily visual aesthetic experiences are different from primarily olfactory ones, which are different from moral ones.
Presumably a moral objectivist would need a clear-cut answer to "is this morally salient?". So if we ask whether teenage masturbation is morally wrong, surely the objectivist needs to answer one of "yes", "no", or "it's not morally salient"?
In my scheme, I don’t need to distinguish moral sentiments from other aesthetic sentiments. If someone dislikes something, I don’t need to ask the “but is it morally salient” question, because there is no fact of that matter.
And I’m suggesting that someone experiencing a moral emotion and making a moral statement is also having a specific aesthetic experience.
Is that true? Let’s take two different aesthetic experiences: (1) Sam watching a sunset, and (2) Sam tasting granny’s soup. The two experiences are different and distinguishable, agreed? In both instances Sam says “Beautiful!”.
Nothing in Sam’s statement allows me to distinguish between the two instances, even though the experiences are very different, agreed?
I’d suggest that much of what we experience is not conveyed by the language, and indeed can’t be. How do you convey what it feels like to see the colour red?
Yes, that's exactly what we want. We have different aesthetic experiences, and we can categorise them as visual, olfactory, tactile, etc. And some of these we will like and some we will dislike.
So any aesthetic experience will have several attributes, including, say, “olfactory” and “pleasant”.
And yes, I do think that all "beauty" experiences must (by definition) also have the attribute "pleasant".
The like/dislike could be one of the things conveyed by a statement, but needn’t be the only thing conveyed.
I’m still largely baffled as to why you think there is an issue here. There is a large array of things we would put under the category “aesthetic experience”. There are also differences in different types of aesthetic experience. So what?
No, the *moral* content of a statement is whether we like it or not.
“It is moral” is equivalent to “I like it”.
“It is immoral” is equivalent to “I dislike it”.
There, that’s meta-ethics in two statements.
Aesthetic experiences have multiple attributes. One of those attributes is whether the experience is pleasant or unpleasant. The *moral* content of a statement is simply whether that experience is pleasant or unpleasant.
I still don’t see why this is hard. To me it seems very simple and straightforward.
But the like/dislike is not the only attribute of the experience. If Sam calls the soup “beautiful” he is conveying that he likes it (that is one attribute), there are lots of other aspects of the experience (including the saltiness, the taste of sage, etc).
You keep suggesting that I’m confusing you. I really don’t see anything at all confusing about this!
The *moral* content, the aspect salient for *moral* judgement, is simply the like or dislike. That does not deny that there are all sorts of other aspects of any aesthetic experience.
If the Social Contract theorist is adopting the axiom “what leads to stability/survival is moral” then they are making a by-fiat, subjective declaration.
That's because their choice of stability/survival is subjective; it is based on what is important to *them*.
Many aesthetic sentiments can be satisfied without being widely imposed. We can choose our own music without caring what others are listening to.
Moral aesthetic sentiments are not like that: their subject matter is how people treat each other, so necessarily they can only be fulfilled if they are widely applied. If, say, we have a revulsion toward slavery, that can only be satisfied by imposing the standard on others. That's why we try to impose moral sentiments but not necessarily other aesthetic sentiments.
As per last answer. I like Metallica. I don’t care much whether others listen to Metallica. I *do* care if a child down the street is neglected and abused. Therefore I want to impose standards of child protection but don’t want to impose choice of music.
There’s no reason why you should. Other than the fact that I am one member of society. Any one person thinking something will have little effect. But if the view became prevalent then for pragmatic reasons it might affect you.
Yes, yes and yes.
No, because humans can influence each other. (Don’t ask whether they “should”, as in a moral-realist should, influence each other; as a simple fact, humans *do* influence each other.) And if that influence spreads it can come to dominate. That is, indeed, how things work!
What question are you asking? Are you asking whether I would like or dislike their act (approve or disapprove of it)? That depends on what it was and the fuller circumstances.
If you’re asking how it rates against some objective standard of morality then there is no answer because there is no such standard.
Well hold on, it’s not *me* arguing for any normative force.
But, in order for a "conceptual truth" to be "moral" and not just descriptive (akin to maths), doesn't it need to be normative? And for that, does there need to be a normative force? Or something that gives the normativity substance? It has to matter in the real world, surely, for something to be a "conceptual moral truth" or "conceptual normative truth"? How does it do the "mattering in the real world" if not by some "normative force"?
That doesn’t get you anywhere! I can define the property “twiffing”. Being “twiff” involves the normative instruction that you must balance a pencil on your nose once a day. Why must you? Well, if you don’t, then you’re not “twiff”!
Err, so what if I’m not twiff? There is no normativity there, no requirement to do anything. That would make morality quite literally empty. **Why** — in your conceptual objective scheme — should I be moral? “Because if you’re not then you’re not moral” is not an answer.
Just as, if you’re not twiff, then you’re not twiff.
That makes morality as empty as twiffing. Quite literally. This path does not give you a morality that qualifies as morality, since it provides no reason for being moral.
The point is that you could adopt 2+2=4 and drop one of the other axioms instead. (You could work back from 2+2=4 to derive the dropped axiom, proving that it is not needed.) But this is a side track.
But if you want to require that “Slavery is immoral” is part of the system, then you may as well adopt it as an axiom.
De facto, maths axioms are adopted (and thus “justified”) from the fact that 2+2=4 works in the real world, and so is taken as a starting point, whence one distils down to more “basic” axioms.
Only if we have a goal, a value, of a stable society. It must, ultimately, derive from human goals (and thus be subjective).
That argument depends on human choice being a “good”, which is a subjective value judgement.
The problem here is two-fold:
1) You keep using it to oppose my discussions of “the right answer”, which as I pointed out here is usually used to talk about what the right meta-ethical theory is. So from my perspective you are retreating to subjectivism to avoid justifying your meta-ethical theory, which is PRECISELY why I said we needed to get on the same page there.
2) Subjectivist theories actually DO claim that there are answers, and even right answers, to ethical questions. They just insist that those are subjective. It’s error theorists who deny that there are answers. As I have been pointing out, your view is a mish-mash of various theories and so your expounding on what subjectivist theories amount to is often quite confused.
Let me raise a question here that came up when I was reading other parts of the post but is both a) critical and b) useful to talk about when talking about this confusion. It strikes me that what’s important about any meta-ethical theory is that it can answer one question: What does it mean for a person or an action to be AMORAL? Particularly for agents. Any good meta-ethical theory has to be able to tell me when a person would be acting outside of morality, and most of them have good explanations for that.
For Utilitarian views, a person would be amoral if they either do not understand utility, or don't make their choices on the basis of utility (they ignore it when making appropriate decisions). This is amoral because someone could, entirely by accident, always act in accordance with utility, but they aren't acting morally if they aren't doing it BECAUSE they recognize that that's what utility says to do.
For Virtue Theories, a person is amoral if they either have no concept of virtue or ignore it when making decisions, as per above.
For emotivist theories, they either don’t have moral emotions or else don’t consider them when making decisions.
For subjectivist theories, the person either doesn’t have a moral code, doesn’t understand the moral code at the relevant level (personal, societal, cultural, etc) or doesn’t act on it.
For your view, though, I can’t for the life of me figure out what would make a person AMORAL. So I think it not only critically important that you answer this, but think that it will clarify your position because it will highlight what is important to you. So please, answer this and take care to ensure that your entire theory is consistent with your answer.
You explicitly dropped the objective part, which is fundamental to our evolved intuitions AND is a major reason that morality was selected for. That you can explain it away or justify it doesn’t change the fact that you cannot simply take the evolved intuitions or the evolved process and use that as the meta-ethical theory. Because of that, we are always going to be able to challenge any part of your meta-ethical theory — and, indeed, ANY meta-ethical theory — justified entirely on evolutionary grounds. Once we accept that what evolution built here might not actually be accurate wrt what morality really is, that we might be incorrect or deluded or acting on an illusion that evolution built into us, then a simple justification “This is what evolution gave us!” can never rationally settle challenges, and so cannot justify a meta-ethical theory. So you can’t simply insist that starting from evolution is the right way to go, and that the conclusions of evolutionary biology should take precedence. The thousands of years of moral philosophy take precedence, because any argument from evolutionary biology is going to have to go through them to be accepted.
Remember that we agreed that meta-ethical theories are supposed to tell us what morality actually is, or what the best model of it is, which is what “definition” normally refers to. I was never interested in “common usage”, and so if you were really insisting that the purpose of morality could be used simply as the meta-ethical theory of what morality is but didn’t describe “common usage” then you should be able to understand now why I was confused, because I presumed that we were talking about the same thing and the thing that was important in the discussion.
Of course, this would also contradict your later comments about how we can use the dictionary to determine these sorts of concepts, which I denied. What you’d be saying here is that you can’t use the purpose of morality to determine the actual concept of morality but you can still use it to determine and justify the meta-ethical theory, whose sole purpose is to tell us what that actual concept of morality is. This … seems contradictory [grin].
And it’s important that we settle this, especially since, again, you have a tendency to appeal to evolution and the dictionary when it supports your position and deny it as relevant when it doesn’t. We need to have a consistent standard for what evidence is appropriate that goes beyond “It supports Coel’s theory”.
As I pointed out above, you were, from my perspective, using subjectivism to argue against my demands for an objective justification for your meta-ethical theory being the right one. There is an objective answer to the question of whether our best meta-ethical theory is subjective or not. So when we are discussing meta-ethical theories — which, again, is all we’re doing — there is no subjectivism at that point: it’s all objective. We may objectively conclude — as much as we can objectively conclude anything — that the right meta-ethical theory is a subjectivist one, but it would still be the objectively right meta-ethical theory.
My experience here has been that you have often been responding to my meta-ethical concerns at the ethical level, and insisting that, say, my demands that you justify your meta-ethical theory are just left-overs from my objectivist ethical views. The point of doing this was to clear up that confusion.
Well, I don’t want to go into it in too much detail because it will be ANOTHER rabbit hole for us to dive into, and there’s an example from your own argument to talk about later. The more obvious ones is the cases where we extrapolate from evolution what our answer to a moral question SHOULD be and yet we strongly react as if that’s immoral (usually related to the fact that evolutionary accounts end up strongly relying on either direct personal benefit or explicit societal benefit — that benefits individuals — but when we extrapolate that out we get strong reactions that it’s immoral). The simplest structural issue is that evolution selects for personal benefit, and if we analyze those structures or their suggestions and decide that it doesn’t benefit us we usually declare that suggestion maladaptive or incorrect and ignore it. This, for example, is precisely what we say about the sweet tooth, which clearly evolved and clearly evolved for a benefit, but often suggests things that don’t benefit us and so should be ignored. But to do that for morality seems to completely miss the point of morality. It’s my “Russell from Angel” argument: if the wealthy and powerful vampire Russell can get away with draining and manipulating young starlets without any risk to himself directly or overturning society and those protections that benefit him, why shouldn’t he? But we consider even that sort of consideration utterly immoral in and of itself, which is NOT how we treat any other purely evolved concept. You can work around that, of course, but not without separating what morality really is from how our sense “evolved”, and thus without accepting that something that would follow from the evolutionary history of our sense of morality isn’t something that you can just stick into the meta-ethical theory. So everything that would follow from evolutionary biology about morality needs another argument before we can put that into our meta-ethical theory … but, again, that defeated the purpose of starting from evolutionary biology in the first place.
Thus, my conclusion of: if you create a meta-ethical theory that isn’t compatible with evolutionary biology on morality, you should go look back to see where you’ve gone wrong, but that’s as far as that goes. And any meta-ethical theory that is society-friendly will be compatible with the evolutionary biology (and Social Contract Theory is arguably the easiest one to reconcile with it, but more on that later).
Yes, but evolution doesn’t select for that on the basis of truth. It doesn’t know what truth is, nor does it care about it. It selects for it on the basis of the side effects of truth, which are that it is easier to navigate the world with true beliefs rather than false ones. If someone had a set of false beliefs that worked out, evolution would happily select for it … which, ironically, is pretty much how atheist theories of religion based on evolution work. But again your original argument was that morality couldn’t be objective because evolution cannot select for morality because it is not aware of morality. It’s not aware of truth either, and yet it can select for that, so that argument, at least, is utterly irrelevant, and so you are forced to pivot to a new argument that was not expressed there:
This argument is a pivot, as it moves to saying that the “punishment” morality received was based on subjective rather than objective factors and so morality can’t be objective. This … makes no sense to me, actually, because it would have to accept that morality is separate from the evolutionary selection and yet insist that the basis of that selection determines what morality really is. In short, you’d be accepting that evolution did not create or produce morality but selected for a separate capacity, but then we have to say that that capacity is impacted by the separate side effect that evolution selected for. Again, that’s not true for truth — truth is not importantly self-interested because evolution selected for it on the basis of personal self-interest — so why would it be true for morality?
Isn’t that a question that it is more important for you to answer than it is for me? Remember, you base your morality ON that evolutionary benefit, and I don’t. So I can say that there are cases where acting immorally can provide an evolutionary benefit but, nevertheless, the actions are still immoral, just as I can say that sometimes holding false beliefs can do that but, nevertheless, the beliefs are false. But how do you explain this? If this allows him to reproduce more, how can you ever consider it immoral?
See, this is one of the contradictions that starting strongly from evolutionary biology gets you into: if it meant that he reproduced more than others, why ISN’T this properly moral? Separating morality from evolution allows us to justify the morality or immorality of the action separately from reproductive success or, in fact, how people actually acted. Which is precisely the difference between normative and descriptive theories.
Says who? As long as it produces stable societies, it would be consistent with evolution without having to define itself totally by the purpose for which it evolved, or rather the benefits it was selected for.
I know that. My comment was expressing my exasperation at the fact that when I echoed YOUR OWN THEORY back at you, it suddenly became “group selection”, seemingly because it came from my keyboard and not yours.
But that doesn’t follow from the argument that I was criticizing, and so is pretty much a non sequitur to it. You insisted that for morality to be objective evolution would have to be able to select for morality specifically, and so since it can’t it can’t be. My answer is that as long as any morality has social-friendly side effects, it can be selected for on that basis without impacting what that morality really is. On top of that, this means that, as I said, any social-friendly morality is compatible with evolutionary biology here, holing your argument that your view is the only one compatible with it.
The “subjective feelings” part is, of course, the heart of our debate, so I’ll deal with that more later.
But it has no traction on objective truth as a whole, and yet we accept that truth is objective. Yes, it can select for lions not eating you, but it can’t select for lions not eating you because you have true or false propositions. Plantinga is absolutely right that if you ran away from the lion because you correctly believed it was a threat or because you believed that it was playing a game and you wanted to play with it evolution couldn’t distinguish the two cases. The answer, as I’ve stated, is that it is easier to produce consistent “run from lion” behaviour with cognitive systems that reliably produce true beliefs. It turns out that it’s also easier to produce social-friendly behaviour with objectivist moralities, and so evolution can and will in general select for them. And we know this because morality being seen as objective will both get people to act morally even when no one is looking AND will get them to challenge societal morality when appropriate. You can argue that morality isn’t really objective and that we just think it is — as you have been — but you cannot use evolution NOR your argument that evolution cannot select for morality itself to demonstrate that.
If morality is descriptive, it only describes what a person has, not what a person should have. There’s no justified way to move from that to imposing it on someone who disagrees. And if you explicitly appeal to societal harmony, you aren’t talking about morality anymore. Thus, if you want someone to change their morality, it has to be normative so that you can say that they SHOULD do that. This has two components for something like morality. The first is that morality really is that way (the statement is true about morality). The second is a motivation to BE moral. I’ll talk more about that later when we get into the details of normativity.
But remember that you claimed that evolution selects for the subjective feelings of OTHERS, not myself. But my moral sense is critically internal. So if it depends on subjective feelings that are not mine, then it isn't subjective in any sense that the moral debate means it, nor is it subjective in a sense that most moral objectivists would care about. There is an inconsistency here: do MY feelings about morality determine what is considered moral — as any aesthetic preference view would insist — or do the feelings of OTHERS determine that? And what happens if the societal rules say one thing and each individual's feelings say another? In general, people will get punished according to the societal rules, not any one or even each individual's feelings about the rule.
To be honest, this is a problem I've started noticing here, and it is a common one from people who are overly scientifically oriented (psychology gets this a lot, and it's the driving force behind behaviourism): you constantly talk about things in terms of what can be viewed externally and ignore what's going on internally. But for things like feelings and aesthetic preferences and moral judgements what seems important is the internal process, not the external behaviour. Someone who internally thinks that X is morally correct but refuses to do it because they fear punishment isn't acting morally to most people, but that's a position that your view both seems to imply and that you've seemed to support. But, again, that clashes with your aesthetic preferences view. If I say that something is beautiful only because everyone else says it is but I'm lacking the proper feeling, it seems reasonable to say that I don't really find it beautiful. The same thing should apply to morality, then, but that makes morality personal, like beauty, and not something determined by the subjective feelings of others.
If they're merely descriptive, then they ARE truth statements and nothing more. They may be truth statements about subjective values, but they'd be nothing more than truth statements nonetheless, which means they'd be selected for in the same way as truth statements were and, again, your argument would falter. After all, the statement "VS likes lima beans" is clearly a truth statement and can be true or false (it happens to be false).
They don’t consider that action moral though, unless they rationalize it. So our intuitive and evolved notions of morality would insist that acting merely out of fear of punishment is not moral. Thus, our evolved concept of morality STRONGLY advocates against acting according to what evolution selects for, according to you. Why would evolution ever produce that? Thus, it is more reasonable that that is simply part of morality and our moral capacity, and evolution selected for that capacity because it’s better at producing stable societies — which, again, benefit individuals — than the alternatives available.
Feel free to come up with a better explanation for it than mine. Until then, I have no reason to accept this as a counter to my argument.
Again, the issue is that you’d be insisting that evolution produced an intuition in us that strongly selects against what you yourself claimed is its purpose and what it selects for. There is no reason for evolution to do that, and so again your original argument then doesn’t work.
Sigh. No, as I have said repeatedly. All it needs is for there to BE such a capability and for that capability to produce actions that can be selected for (ie give a reproductive benefit). Just like the capacity to reliably produce true statements, or the ability to do math. Or do you think that evolution produces the ability to do math or produce true statements despite having no traction on numbers or on truth itself?
They throw out our psychological beliefs that we can have dispassionate moral reasoning AND that our moral emotions can be and often are incorrect, and emotions and aesthetic preferences are clearly not the same thing. That should be enough to get past this little hiccup in the debate, even if you don't think my objections are right. They still have to be answered.
No, you can’t. As you yourself stated, dictionaries track usage, not correctness. We can easily use terms in ways that are at best inaccurate and can, in fact, be totally wrong. And, in fact, you rely on that when insisting that our common usages of moral terms are wrong and your meta-ethical theory is right.
The concept is determined by the meta-ethical theory. The meta-ethical theory is not in the dictionary. The same thing applies to anything, like beauty.
You’re not paying attention again. I have pointed out a number of times that we will never actually come across a person who finds something beautiful but finds it unpleasant because in humans beauty produces a pleasant sensation in us. But we know that the two things can come apart conceptually because we can easily conceive of individuals who experience beauty but in them it’s an unpleasant experience rather than a pleasant one. Yes, common usage will conflate the two, but here what’s important is the concept, not the usage or, in fact, what humans commonly do.
For morality, it's even worse, because we can actually have people who can make the judgements of what is or isn't moral apparently in the same or similar ways that we do and yet don't disapprove of the immoral but instead approve of it. Psychopaths, for example, although they are more amoral than immoral. People who revel in vice and in acting against what others (and they themselves) consider moral. Even autistics tend to be missing the emotional cues that we commonly associate with morality (although, as I've written on my blog, they definitely seem to be moral agents). So, you need them to be the same at the conceptual level and they clearly aren't. Your consistent denial of this can only be seen as deliberately ignoring empirical evidence, which is clearly a no-no for you.
I’m not sure how this helps you here. If they have the same initial feeling as people who like chocolate, then that initial taste sensation is separated from the feeling of pleasure, which contradicts your claim. But if they don’t, then you’d have to insist that what they experience is COMPLETELY different from what people who like chocolate experience, which doesn’t seem to be the case. For tastes, it seems like we generally have the same taste experiences but those taste experiences don’t trigger the later experiences of pleasure or displeasure. For example, I can’t stand spicy food. It doesn’t seem to be the case that I experience the spices completely differently than anyone else, but that for whatever reason the link between spicy food and my pleasure centre doesn’t kick off like it does for people who like it. But this implies that there are two experiences there: spicy and pleasure caused by spicy. I have the former but at a minimum lack the latter. Nothing you have said addresses that at all.
And if morality is an aesthetic experience, then it has the same duality. And our concept of villains who like to do evil and hate to do good seems to confirm that.
Okay, so then why wouldn’t I say that when that non-moral value overrules that moral one that despite finding it immoral they like it nevertheless?
The problem here is that you are supposed to be giving examples where non-moral values trump moral ones, but you are using moral values to trump moral values. What would we say if someone says that they disapprove of murder but approve of it when it benefits them, and act on it? Sure, we wouldn’t say that they like murder in general, but surely in the cases where THEY commit it they like it or approve of it, no?
No, because they can easily differentiate between moral judgements and approvals, which is the move you don’t allow yourself here.
But … the only reason I’m bringing this up is because you keep arguing that they AREN’T distinguishable! You keep arguing that moral judgements are aesthetic judgements just like any other and we can’t distinguish them or point to anything or any property that is specifically moral! So how can this move help you? Saying that we can’t do it EXCEPT at the level of the brain would at least allow you to avoid the consequences of folk moral judgements, but, yeah, YOU aren’t the one who benefits from aesthetic judgements being differentiable here [grin].
Great! This means that we know what properties are unique to visual, olfactory and moral aesthetic experiences, which means that you can tell me what those properties are for moral aesthetic experiences so that I can differentiate them. So, what are they?
Which means that that specific moral experience will always be part of any moral statement/declaration and a key part of its content, and so there will always be specifically moral content in any moral statement and we will always be able to identify that. This refutes what I called your error theory portion of the theory which explicitly denies that.
No, what we’d ask is “Is this a moral statement or not?”, which as it turns out you have to ask as well, because you’d have to ask if it is expressing information about a moral aesthetic experience or not. And whether it is doing that or not IS a fact, like it or not. Thus, your comment about there being no fact of the matter about it being a moral statement is inconsistent with your own theory.
The thing is that language in general, as you noted, is often ambiguous, but that doesn't mean that you can leave out the context and say that we can't tell what is really meant and what the statements are referring to. If I say "I'm going down to the bank tomorrow", it's ambiguous whether I mean the building that holds money or the land beside the river, but there is certainly a fact of the matter that we can easily determine given the appropriate context. What we have is two words that sound the same but mean different things, which is in fact precisely the case in your example: the first case is an expression of a specific aesthetic experience — visual, pleasant — while the second is a slang term for "greatly enjoyable". Given the context, we can easily distinguish what is meant, because "beautiful" is NOT a common word that describes taste experiences.
So we could talk about “beautiful” in the context of visual and auditory experiences, because in both cases the same word is used to express the specific experience. But again we can easily distinguish what is meant by looking at the context, and there is a fact of the matter about a) what was meant and b) whether they are really experiencing that at all. And we can explore this by asking them in more detail what specific properties they are experiencing. So with the proper context we CAN easily distinguish them, even if the language is ambiguous. We could solve the ambiguity by adding separate words for each with no conceptual issues at all, but since the ambiguity is so easy to resolve we don’t bother.
The same thing would be true about moral experiences: given the right context, there is a discernible fact of the matter about whether a statement is a moral one or not. So if morals are aesthetic experiences, then your claim that there is no fact of the matter about whether a statement is moral or not is false.
We don’t. We convey that we are seeing red and the listener translates that to their experience. However, we can find many ways to convey the properties of the experience that we are having so that we can judge whether they are having it or not. And, of course, this is a bad example because colour experiences are more basic than beauty ones or moral ones (the latter two involve judgements).
It clashes with what you say right after this:
But if all aesthetic statements express both like preferences AND the specific experience they are having, then this can’t be true. For example, it would be just as valid to say that the “beauty” content of a statement is whether we like it or not, making this useless as a meta-ethical view because it would be equivalent to non-ethical views. Moreover, you’d be insisting that a claim that I dislike lima beans is a MORAL judgement, which is false and something that you already disagreed with besides. So making this move completely contradicts what you just established above.
Which answers this:
Because above you said that beauty experiences, tasty experiences, and moral experiences are all subsets of like experiences, but now are flat-out saying this:
How can moral sentiments be a subset of like — pleasant or unpleasant, one presumes — if any expression of like/dislike is a moral statement in and of itself? You denied that beauty statements and like statements can conceptually come apart, but if the moral content IS that part then conceptually they can, unless beauty and morality ARE THE SAME THING. But since you distinguished them above, that CAN'T be true. So what's actually the case here?
And that’s putting aside that your “simple” statement means that any like or dislike judgement is a moral one, but if I say that I disliked that I had to shovel snow this week that seems utterly incomprehensible as a moral statement.
And on top of this, this would also contradict your statement that morality is about how we treat others. And no, saying that it can express different things doesn't help, because let me remind you of what you said:
“Equivalent” does not mean “one thing that it’s saying”. “Equivalent” means “the same as”. You can escape this trap by saying that the important thing conveyed to other people IS that approval, but remember what I’m after here is not that, but the meta-ethical theory, which tells me what it is. And this circular wandering is what’s confusing, and could be solved quite simply by dropping that error theory part about there being no fact of the matter that really doesn’t seem to be doing anything for you, or at least is not something you’ve been able to explain in any sensible way.
Well, except the Social Contract theorist isn’t saying that because that IS important to them. You could have a Social Contract theorist who thinks that what is moral is defined by the Social Contract but doesn’t care about that and so acts immorally all the time.
The argument is essentially this:
1) The only sensible notion of morality comes from the Social Contract.
2) Moral terms, therefore, have no meaning outside of the Social Contract.
3) But a Social Contract can only exist in a stable society.
4) Therefore, the Social Contract can never advocate for things that destabilize society because that would be self-defeating.
5) But the only things that are moral or immoral are things the Social Contract advocates for or against.
6) Therefore, there can be no moral statement that destabilizes society.
None of this actually requires in any way that anyone CARE about morality.
That’s one of the big problems with your view: you can’t seem to conceive of anyone who just doesn’t CARE about whether an action is moral or not, and so always tunnel down to the motivation without understanding that what morality is and why people care about it can and often are different things.
Why? I can certainly form norms about how _I_ treat other people without others having to do so. For example, my comment policy on my blog that bans swearing, and my own policy that I at least generally don't swear in other comment threads. Why is this bad if others don't follow it as well, in their comment policies and comments (except on my blog, of course [grin])?
What is it, then, about moral preferences that means that everyone has to follow them or else they won't "work"?
Imagine that someone had that inverted, trying to impose Metallica on people and not caring about child protection. Noting that these are two completely different aesthetic experiences here — auditory vs moral — are they in any way wrong to do so? If they are, then by what standard or argument are they wrong? And if they aren’t, then how can you consider moral statements special in that regard?
Which means that I care for pragmatic reasons and not moral ones, meaning that my judgement isn’t moral. And, of course, we already know that things like musical tastes can hit that same sort of pragmatic judgement as people shun those who don’t share them, so again morality doesn’t seem special here, but you claim it is.
I’m asking that if they would generally consider an action immoral but take it anyway by rationalizing it away, why doesn’t that still count as them making a moral judgement, and taking a moral action (which is what subjectivism itself would generally imply) as opposed to an immoral one?
See, this is precisely the problem: you are insisting that somehow “normative force” has to link to “mattering in the real world” and are denying my moral theory on the grounds that it doesn’t have the right “normative force”. But until we agree on what that is that argument can’t get anywhere, and you consistently refuse to define what YOU mean by “normative force” when you claim that my view doesn’t meet it.
This is a good time to post a post I made while waiting for your reply about normativity, because it highlights the issue:
https://verbosestoic.wordpress.com/2019/01/18/pondering-normativity/
The issue here is that there are two kinds of "shoulds" that we are talking about here: Conceptual and Goal-Directed. Conceptual is about what something has to be to be counted as an instance of a concept, and Goal-Directed is about whether being an instance of that concept is something that a goal-seeking agent is motivated to become. So, to take your example, I can't determine if I should want to be a "twiff" until you tell me what a "twiff" is and entails. It's THAT conceptual normativity that I will then use to determine if I have a goal-directed normativity to be one and so to balance a pencil on my nose once a day. But NO goal-directed normativity is possible until I know what concept I'm looking at.
A meta-ethical theory, obviously, is all about defining the Conceptual Normativity, defining what it means to be moral and what that entails, and what is necessitated by that. That’s what we’re discussing. Settling that won’t necessarily create Goal-Directed normativity — or motivation — but it is required first. So you can’t ask me for a Goal-Directed normativity, and I could have the right meta-ethical theory of morality even if it doesn’t motivate you to act morally.
The key here is that the concept we settle on about morality is going to say something about someone, especially if they decide not to be moral. To use the surgeon example, we know that being a surgeon requires cutting people open at times (being incapable of that would mean that they aren’t any kind of surgeon). If someone says that they don’t want to be a surgeon because they can’t stand the sight of blood, that says something about them, just as it does if they say they don’t want to be surgeon because they can make more money in another job.
This will be true about morality as well. One thing that is consistent about morality is that it eschews reasoning on the basis of personal self-interest. If someone rejects morality because they don’t see how that benefits them personally, understanding that Conceptually Normative fact about morality is not going to produce a Goal-Directed Normative fact in them … but they will be revealed as a selfish person nonetheless. And that gives us what we wanted morality for and what it has been used for all along.
You realize I covered all the cases, right, and you just addressed one with an answer that I GAVE, right? Yes, this is a sidetrack, but I’d still appreciate you PAYING ATTENTION TO WHAT I SAY!
But we don’t want to do that. We think it will be, but an argument that produced an accepted and proper meta-ethical theory that entailed that slavery is not immoral couldn’t be rejected because it didn’t contain that axiom, nor would we add it because that would produce a logically inconsistent meta-ethical theory.
Which is not true for any mathematical and geometrical systems that can still be assessed for conceptual truths and yet DO NOT describe the world, but which could be in the same position as our standard mathematical system if we ended up in another world where THEIR rules applied instead of ours. So even if that’s what our standard mathematics is based on or justified by — and, remember, I don’t agree with that — the other ones are still valid and so your argument here is not true for conceptual truths.
Hi Verbose,
I’m finally getting round to responding!
First, in my scheme, there is no "fact of the matter" as to what is or is not "moral" or "immoral", and the same applies to "amoral". Moral language is a way that humans speak about and report their values. We use "moral" as a word of approval, and "immoral" as a term of disapproval. So, the question would not be so much "what would make a person amoral?", as though there were a fact of that matter, but "in what circumstances would people likely label someone as amoral?".
The answer to that is pretty much the one you gave for emotivism (since I think my position is pretty much emotivism). Thus, a person would label someone “amoral” if they were acting without regard to human feelings and values.
The evolutionary account DOES NOT say that morals are objective. So I am NOT dropping that! The evolutionary account says that morals are subjective, because the only thing that matters for evolution is how humans think and feel about morality.
My evolutionary account does also say that, in general, humans tend to THINK that morals are objective, but again, I am NOT dropping that from my account. That humans (generally) think that morals are objective is part of my evolutionary account. I am not dropping anything at all from the evolutionary account, so your critique along these lines doesn’t fly.
It seems to me that you are failing to distinguish between two things: (1) human *intuition* about morality, as programmed into us by evolution, and (2) the explanation of morality that comes out of an understanding of biological evolution.
Yes, the first of these is highly fallible. (Human intuition is fallible about lots of things, the theory of Darwinian evolution itself is pretty counter-intuitive.) The second of these is not that fallible. We do now have a good understanding of evolutionary biology.
Not as I see it. To me the “definition” is more basic than that. In order to know “what X actually is, what the best model of X is”, we first have to agree what we’re talking about, what the “X” we’re referring to is. To me the “definition” simply tells us that. A meta-ethical theory then *explains* (not “defines”) morality.
If we're not agreed on the definition, then one of us could be giving a theory that explains X while the other gives a theory that explains Y.
No, it's not contradictory! We can't even talk about the "purpose of morality" if we're not first agreed on what we mean by the term "morality", and *that's* what the definition is for. We need the dictionary definition of "morality" to tell us what we're talking about, and then the meta-ethical theories tell us where morality comes from, what its purpose is, et cetera.
Well that’s not what I was trying to do. I was trying to *explain* the theory (in terms of subjectivism) in ways that justified the theory as being correct.
Agreed.
The point there is that, as I see it, many of your critiques of my meta-ethical position actually assume objectivism in making the critique, so I’m pointing that out to defend my theory.
Again, let me distinguish two claims that are part of my scheme:
Claim (1): The account from evolutionary theory of why we have morals and what they are is RIGHT. (That account being that morals are subjective human feelings and values.)
Claim (2): Human INTUITION about why we have morals and what they are is mostly WRONG. (Since human intuition tells us that morals are objective.)
So, if your critique is: human intuition about morals conflicts with the evolutionary explanation of morals. YES IT DOES! I agree! But that’s not an inconsistency in my scheme, it is part of my scheme.
All along I’ve emphasized that evolution gave us morals that are, in truth, subjective, but programmed us to *think* that morals are objective.
I entirely agree.
And nor are there “side effects” of objective morality that could cause evolution to select for objective morality.
If you disagree, what are these “side effects” that could lead to selection for “acting in objectively moral ways”?
[Any answer along any lines involving how other humans feel and behave is not an argument here, since other humans feel and behave according to their *subjective* feelings and values, we need a way for evolution to select for *objective* values. What is that mechanism?]
Sorry, I don’t understand this objection. All I’m saying is this. Evolution can select according to how other humans think and feel. If you act in ways that they dislike, leading them to kill you, then acting like that gets selected against. But, since people’s behaviour derives from their *subjective* feelings, this only gives selection according to *subjective* human feelings.
A) Thinking one can fly off a cliff top gets selected against (so in general, false thoughts about the world get selected against).
B) Acting in ways that other humans *think* are highly sinful gets selected against (so, in general, there is selection according to human subjective feelings).
C) Acting in ways that are *objectively* immoral. How does this get selected for or against? Can you explain the mechanism? Use side effects if you like. (And note answering as in B, about how humans *think*, only gets you the subjective morality of B.)
Me: Let’s suppose a warlord conquers a tribe, kills all the men, takes the women as concubines, and has lots of children by them. That’s “immoral”, agreed? So how is that behaviour then selected against? Isn’t that a question that it is more important for you to answer than it is for me?
No, it’s a question for you. If you think that such behaviour is “objectively immoral”, and you think that objective morality is selected for, then how does this work in the above case?
No, no, no! I do not base my morality on evolutionary benefit! That’s the naturalistic fallacy.
But if you concede that, you concede that evolution cannot select for *objective* morality, and thus are conceding one of the main parts of my argument.
In my scheme there is no such thing as it being objectively immoral. So I don’t “consider it immoral” in any objective sense. So if I “consider it immoral” then all that means is that I dislike it.
So your question amounts to: “If this allows him to reproduce more, how can you ever dislike it?”. To which the answer is “very easily”. Why on earth would I be required to like something just because it is selected for? Again, that smells of the naturalistic fallacy.
There is no contradiction at all! There is no contradiction between “it was selected for” and “I dislike it”! Lots of things (human anger, aggression, etc) will have been selected for, yet I often dislike them.
Because there is no such thing as “properly moral”. All there is is what humans like and dislike. And there is no contradiction in my disliking the behaviour of, say, a warlord who left more children as a result of his violence.
See, this is a clear example where you, as an intuitive moral objectivist, ask moral-objectivist questions about my scheme and conclude that there is a contradiction. There is no actual contradiction in my scheme.
The “social friendliness” is all about how humans subjectively *feel* about each other and about society. How does your *objective* morality have any effect in any way that differs from that subjective human-feeling account?
If you’re claiming that your objective morality is in any way different from *subjective* human feelings about society, then how would evolution ever select for the former and not the latter?
Yes it does have traction! It has traction through the side effects, as you’ve accepted. So, how does evolution have traction on “objective morality” in any way that differs from its traction on subjective human feelings about society?
The only traction is on how humans *behave*, and that is derived from how *they* feel and *their* values, and those things are subjective.
Exactly, it has traction on how humans *think* about morality. By definition, an *objective* morality is one independent of and apart from how humans *think*. And it is *that* that evolution has no traction on.
Neither of those two components is actually necessary! If both you and the other person THOUGHT that morality was like that, then that would suffice to affect your behaviour, even if morality was not actually like that.
Again, in my scheme, this is why we evolved to THINK that morality is objective. It does not require that morality actually BE objective.
Another reply is that, actually, de facto, one can influence other people’s behaviour just by expressing preferences, even if everyone accepts that they are simply preferences. That’s because we all have to get along in society, so other people’s preferences do matter to us.
Well no, it selects for the subjective feelings of all people. What it actually selects for is a mixture of genes in the gene pool, and these get into all of us. Evolution doesn’t actually care about individual humans.
Your moral sense is subjective in the sense that it is your internal sense. But yes, it depends on your genes and the genes depend on past selection in other people. Everything about our subjective nature is the product of past facts. That doesn’t stop our subjective feelings being subjective.
That’s a weird phrasing, “what is considered moral” in the abstract, without any reference to who is doing the considering? Again, all there is is people’s feelings on the matter. Your feelings about what is moral/immoral (what you like/dislike) determine what you regard as moral/immoral. Other people’s feelings determine what *they* feel on the matter. There is nothing other than that, there is no “what is considered moral” in the abstract.
[You could fairly talk about what society in general regards as moral/immoral, but that is simply the aggregate of individual opinions.]
Most likely, if no-one supports those societal rules, then those societal rules will evolve.
True, although if no-one supports a rule then it may not get applied.
Agreed.
OK, but so what? Why is this a problem for my scheme? Human psychology is complex. It may indeed be the case that people think that “Someone who internally thinks that X is morally correct but refuses to do it because they fear punishment isn’t acting morally”. So what?
If you’re going to ask something along the lines of, “so is it, in your model, moral or not?”, then you’re asking the wrong question (a moral-objectivist question where one can fairly ask such a question). There is no such thing as being moral or immoral in the abstract, all there is is people’s approval or disapproval.
Agreed.
Agreed.
Agreed!!
Exactly! You’re right! I am *not* saying: “There is such a thing as what is moral or immoral, moral statements do indeed have truth values, and what is moral or immoral is determined by the subjective feelings of others.” (That is a bastard mixture of objectivism and subjectivism that makes no sense to me.)
What I am saying is that there is no such thing as what is moral or immoral in the abstract, moral statements do not have truth values, they are simply reports of the speaker’s feelings.
Correct, as DESCRIPTIVE statements, as reports of someone’s subjective feelings, they DO indeed have truth values. As NORMATIVE statements, as imperatives of what we “should do”, they do not, in the abstract, have truth values. The claim of objective morality is that there are objective NORMATIVE statements that have truth values.
Evolution selects for morality (moral feelings) overall. There can be harmful side-effects (from the evolutionary point of view) so long as they are not that prevalent. De facto, few humans insist on acting in ways they regard as moral in the face of serious punishment.
Evolution does not select for stable societies directly (that’s a group-selectionist view, and group-selection is largely weak and ineffective). Evolution selects genes that prosper within the gene pool. But yes, that means that it selects genes that prosper well alongside other genes, and that has the effect of producing harmonious and cooperative societies.
So, overall, yes, we’re agreed. Evolution does indeed lead to us being socially cooperative mammals that prosper in harmonious societies.
It only “strongly selects against” such behaviour if people insist on acting in ways they regard as moral when knowing that they’ll be heavily punished for it. In practice, very few people actually do. So, no, it won’t be strongly selected against. It might be weakly selected against, but that’s ok so long as the overall effects of morality get selected for.
I cannot conceive of that. To me it’s a straightforward contradiction in that finding the experience pleasant is a necessary component of “experiencing beauty”.
Amoral psychopaths I grant, but they are no problem for my scheme. They are simply not making moral judgements. As for “people who can make the judgements of what is or isn’t moral apparently in the same or similar ways that we do and yet don’t disapprove of the immoral”, well, we’d need to examine a lot more what judgements they are actually making and what they mean by them, before I would accept this as a problem for my scheme.
It is quite easy for a human to have a love-hate relation with something, to both like and dislike something. For example, it’s easy for a human to both like and dislike their addiction to tobacco or alcohol. Human psychology is complex, so if they’re both liking and disliking something that is not a contradiction in my scheme. [It would be a contradiction in an objectivist scheme where there are yes/no answers to whether something is moral.]
Some of the sensations passed to the brain will be similar, yes. Whether the brain likes the sensation or not can be different. Parts of the brain will be evaluating the sensations passed to them, finding them pleasant or unpleasant. I don’t see any problem for my scheme here.
Yes, agreed. One person experiences both “spicy” and “spicy is nice” and the other experiences “spicy” and “spicy is nasty”. Why is this a problem for me?
If another value over-rules a “moral value” then the first value is morally salient and so, necessarily, is also a “moral value”. So this is just one moral value over-ruling another moral value.
I’ve not claimed that “non-moral values” can trump moral ones, only that moral values can trump other moral values.
My general answer to all such questions is that humans have a complex web of values, a web that is not necessarily self-consistent. All you’re doing is pointing to the complexity and inconsistencies of human psychology and the value judgements we make. This is no problem for my scheme. Why would it be?
Saying that moral judgements are aesthetic judgements is not saying that we cannot distinguish between different aesthetic experiences. I can distinguish between, say, tasting a nice dish and listening to great music.
There is no fact of the matter as to what is a “moral” sentiment and what is not. Morality is a human-constructed category. It’s a way of talking about our feelings. Different humans will construct the category differently. Therefore there is no general answer to your question. For example, some people would put using contraception in the “morally salient” category, while others would not.
We can, though, identify some widespread commonalities about how humans use “moral” language, and most often it is used about how humans treat each other in society.
The “moral content” is approval or disapproval.
Here we get back to the unhelpfulness of philosophical terms! I was using “error theory” to mean that people are in error when they think that moral language refers to objective normative truths. I was not intending to claim that moral language has zero content. Obviously it does have content, otherwise people would not bother uttering it. The content it does have is approval or disapproval.
There is no fact of the matter as to whether something is a moral issue or not. There is indeed a fact of the matter as to whether someone approves or disapproves of something. Whether or not that is a “moral” approval or disapproval is not something about which there is a fact of the matter.
I don’t agree. Nothing in my scheme requires that there be such a discernible fact of the matter.
But they don’t. Saying “beautiful!” on tasting granny’s soup conveys approval and indicates that the speaker is having a pleasant experience. It does not convey the taste of the soup, nor other qualia.
I’ve not said that all like/dislike judgements are moral judgements; I have said that all moral judgements are like/dislike judgements. Moral judgements are a subset of like/dislike aesthetic judgements, but there is no general fact of the matter as to how the subset is defined. People will use moral language differently. (Again, human intuition about morality does *not* reflect an underlying reality!)
The point is that there is no fact of the matter as to “whether an action is moral or not”. All there is is how people feel about actions. I can readily conceive of someone who simply doesn’t care about other people and what they think of things.
Yes, you can do that. That’s part of morality, but it’s much wider. If one thinks “As a moral principle, everyone should have access to medical care”, then that means you want other people to adopt this principle, since you yourself cannot provide medical care to everyone on your own.
Any moral preference that relates to the well-being of others *in* *general* must necessarily involve other people beyond yourself.
What do you think my answer to that would be? Can it be that, even after all this time, you still don’t get the very basics of my scheme? Well my answer would be this:
There is no such thing as “wrong” in the abstract, so “are they in any way wrong to do so?” is not a sensible question to ask. It’s a moral-objectivist question.
All there is is people’s preferences. So, if you are asking “would you dislike them doing that?”, the answer is yes I would. If you are asking whether people in general would disapprove then I suspect that yes they would.
It *does* count as them “making a moral judgement”. The “rationalizing it away” amounts to using some values to over-ride other values.
Well it’s a pretty strange “objective morality” that has no normativity to it, and it’s a pretty strange “objective morality” that does not matter in the real world!
It’s up to *you* to propose whatever “normative force” you have! An objective morality needs to be an objectively normative morality (everyone agrees that we can objectively *describe* morality) and any objectively normative morality needs some sort of “normative force” to make it normative. From there it’s entirely up to you to propose whatever “normative force” you’re proposing.
Agreed. Goal-oriented shoulds are instrumental shoulds to achieve a goal. We all agree on that sort of should. All goals are subjective (they derive from our values, aims and feelings) and so all goal-oriented shoulds are subjective shoulds.
I told you what it entails: Being “twiff” involves the normative instruction that you must balance a pencil on your nose once a day. That’s it, there’s nothing more. And why must you? Well, if you don’t, then you’re not “twiff”!
Being twiff is purely conceptual, and twiff-ness has no normative force. Let’s see how your conceptual morality avoids that fate.
Well no you can’t, not if you’re arguing for an *objective* morality, since that requires some normativity, and that requires some “force” or something that motivates you to act morally.
If we don’t have that then we have twiff-ness, where the only threat is “if you don’t act twiff then you’re not twiff!”.
All this will give you is the declaration that if they reject acting morally and are not moral, then they are not moral. Which is equivalent to saying that if they reject acting twiff and are not twiff then they are not twiff.
That gets you nowhere. You need the normativity for your “conceptual morality” to matter in the real world. You can’t have an objective moral scheme that does not matter!
This is one of my big criticisms of attempts at objective morality, no-one has any idea what it even means. Any objective morality needs to be an objective normative morality, and no-one has any idea how that normativity operates or what it amounts to.
As I said in the shorter comment — and, to be honest, in the original comment — you are confusing the meta-ethical and ethical level again. At the meta-ethical level, there is absolutely a “fact of the matter” about what would make something moral, immoral or amoral. That’s rather the point of the theory, to tell us what morality really is and how it all works (more on that specifically later). Now, it might not be the case that there is a “fact of the matter” about whether, say, contraception is a moral issue or not, in the sense that you can’t say that everyone who is a proper moral agent will consider it a moral issue or not. Some might consider it a moral issue, some might not, and no one could be considered correct or incorrect no matter what position they take on it. This, of course, is the heart of all subjectivist theories: to some degree, what the appropriate grouping — which could be as small as one person — thinks it is is what it is, at least for that grouping (and subjectivist theories then deny that moral statements HAVE meaning outside of that grouping).
But there is always a fact of the matter about whether something IS morally relevant or a moral statement or not, even if that fact of the matter is that all statements are moral statements. Your confusion causes you to at various points say that there are statements/values that are not moral ones, that there is no fact of the matter about whether there are statements/values that are not moral ones, and that all statements/values are moral ones. I really wish you’d just pick one [grin].
Most people will label someone as amoral based on their moral intuitions about what it means for something to be moral. You have continually denied that our intuitions are necessarily accurate about morality. So why, then, do you think this is the right question to ask about whether or not something is amoral? I wanted what followed from your meta-ethical theory, not folk amorality, and you keep denying that your view supports any kind of Social Contract theory which WOULD allow for you to say that. So this is at best an entirely meaningless statement by your OWN arguments.
So, throughout this discussion I’ve been commenting on how you don’t understand philosophical terms, below you argue against the person who DOES understand them — myself — using them (error theory specifically) because it doesn’t reflect your actual view, and yet think that you can simply say that your view fits the emotivist one because you consider yourself one? Especially since you, again, get the emotivist position wrong?
The emotivist position does not, in and of itself, care about what people outside of the moral agent are feeling. It’s all about the emotion that the AGENT is feeling. So amorality to an emotivist is someone who either, as I said, does not themselves feel moral emotions, or else does feel them but ignores their judgements. So emotivists will always consider Stoics to be amoral. If they manage to extirpate (eliminate, roughly) moral emotions, then they are incapable of morality. If, though, they follow Seneca and concede that the moral emotions cannot be eliminated but strive to not accept their judgements and instead rely on rational judgements, then they would be ignoring their moral emotions and so would be amoral.
Your response is, really, nothing more than a soundbite. Stoics can act according to the feelings of others. For example, it could easily be considered a Stoic virtue to act in ways to avoid triggering emotional reactions in others. They would still be amoral to emotivists. And all objectivist theories can consider the feelings of others — Utilitarianism, for example, is explicit about this — and so that doesn’t differentiate your view from them either. So your answer doesn’t tell me what it takes for someone to be amoral in your view, which suggests that you don’t actually have any kind of meta-ethical theory at all. Any meta-ethical theory can easily answer that question, even with “It’s impossible because …”
Let me try another way. What kind of person, in your meta-ethical theory, would be INCAPABLE of acting morally, and so would be amoral by definition? I’ve already outlined what other common views would say about that.
That’s what your INTERPRETATION of the evolutionary biology of morality says. It’s not mine, as I have argued for and will argue for again later. But let’s return to the original point. The original point was that the appeal of evolutionary biology was that we could take it and take obvious and clear interpretations and consequences of it and use that as our meta-ethical theory. This always leads us to problems, so we can’t; our interpretations of evolutionary biology will always be meta-ethical theory aware and, in fact, cannot make sense without starting from a meta-ethical theory so that we know what to emphasize and what to de-emphasize. So, sure, you can make your meta-ethical theory CONSISTENT with the evolutionary account to some extent, but so can pretty much every other theory. If I disagree with your meta-ethical theory, you are not going to be able to point to an aspect of evolutionary biology that you emphasize or de-emphasize while I de-emphasize or emphasize it and settle anything. So we need to settle the meta-ethical theory without being able to simply appeal to “Evolution says”. All of us, even you, end up having to say “Evolution, properly interpreted and understood, says”. And that means that we need separate arguments to say that, yes, we’re understanding evolution properly wrt morality.
You have a point that, from an evolutionary standpoint, morality doesn’t HAVE to be objective for evolution to select for it as long as we BELIEVE it is and therefore ACT as if it is. But all that does is allow you to make subjectivist views CONSISTENT with evolution. Objectivists can easily retort that wrt that aspect it is easier to accept that morality itself IS objective and was selected on the basis of that property it already had rather than on some kind of complicated illusion (I’ll get into your arguments later in the comment, so please don’t go into them here and instead focus on the main point here) and that point doesn’t, in and of itself, provide any reason for objectivists to think their view wrong.
I submit that your interpretation of the evolutionary biology is very tightly tied to your meta-ethical theory, and far more than mine is, for the simple reason that you CARE more about them aligning than I do. Thus, I cannot accept your interpretation as being the correct one without accepting your meta-ethical theory, which I don’t because I find it lacking justification.
And that’s not even taking into account that I think your interpretation of the evolutionary evidence is wrong.
The evolutionary biology would, at its base, have as evidence the evolved end result and the selection criteria used to get there. The former are our intuitions. The latter does have objectivity as a key component. Dismissing both of those requires extraordinary evidence, and I can’t see what aspect of evolution itself you could use here to provide that. Even the arguments you make are not aspects that follow from evolution itself, but are instead philosophical/conceptual arguments. So what am I missing?
Well, obviously, we have to agree on what phenomena are picked out by the term “moral” when we want to make a meta-ethical theory explaining it. But dictionaries track USAGE. They track folk or, to put it better, INTUITIVE views. While usage is useful for getting a starting point for what phenomena we’re talking about, it’s not stipulative. It can be wrong.
Let me put it another way. The process of naturalized philosophy — doing philosophy using science’s methods, which you want to do — is to take all the common examples of a thing, see what’s in common with them, and then derive what it really is from that. This may, however, involve dropping some instances even if people CONSIDER them part of the phenomena. But since dictionary definitions track usage, they’d come out of alignment, and they’d come out of alignment precisely because of the consequences of the correct meta-theory.
To be honest, that you’d insist on this is mind-boggling. Any case where someone comes into a specialized field with a term that is also used in common parlance SURELY has to understand that it likely is used differently inside the field than in the common parlance, and so you can’t simply use the terms interchangeably. How would you react if someone came into a scientific field and said that the dictionary said that the term had such-and-such a meaning and so you were wrong? Would you simply accept that, or instead say that they don’t understand the term as used? And you are CERTAINLY coming into a specialized field here.
I don’t really remember how we got here, but let me make this clear: I am not going accept an argument of “The dictionary says this, so that’s what it means even if you disagree!”. If you want me to accept something, you need to show why it makes sense.
No, you’ve made it CONSISTENT with your scheme by explaining it away, or at least potentially doing so. But, remember, our intuitions are the result of that evolutionary process, and are the primary things that we have to suggest that there’s anything like morality at all. If we follow what seems to be the consequence of evolution and yet it results in something that we intuitively consider horribly and unacceptably immoral, then we have to decide which of those are right. In theory, we can pick either, but you can’t simply say that we HAVE to pick the evolutionary outcome there without a justification from a meta-ethical theory. And remember the point of this was simply to show that we cannot simply take the evolutionary theory and MAKE it a meta-ethical theory, because of the contradictions and problems in the evolutionary theory. We need to understand the evolutionary theory in light of a justified meta-ethical theory, which is what we’re working on right now in these comments.
You continually strawman objectivist views by insisting that if there’s ANY subjective component then it must be subjective. NO objectivists hold that. So you can have an objectivist morality that is selected for on the basis that people reject those who don’t accept morality and act on it objectively, even if in some way that relies on acting on “feelings”. Some objectivist views EXPLICITLY INCLUDE DIRECT FEELINGS and yet remain objective. So why would that underpinning here, which does not define what morality is but only talks about what evolution worked with, matter here?
Let me make this simpler: I see no reason why a real objective morality could not evolve for the same reasons that an illusion of objective morality could. Both of them rely on people acting as if morality is objective, so how can an objective morality not be as good at producing those actions as the illusion is? So you need to give me something that addresses the specific behaviour of the agent itself — and not those around it — that shows that an objective morality couldn’t fulfill that. Otherwise, you’ll just be continuing to strawman objective moralities.
You should have read it, then, because the important part is what comes AFTER that (and I’m frustrated here because YOU KEEP DOING THIS, so much so that I have to keep my previous comment open to see what it was you ignored in your reply):
More simply, you’d be saying that we have a capacity that was selected for based entirely on a side effect that that capacity has, but then what it was selected for critically determines what that capacity is. That’s not possible. Either the critical features of the capacity were built into it by evolution, or else it always had them and it’s just that it works better than all the other alternatives that got it selected for by side effect. To put it back into this specific context, if we assume that we started from an ability to be moral which produces more stable societies which benefits individuals, then most of its properties would be properties that it had originally and evolution “found useful”. That means, then, that the process that evolution used to find it useful can only work on the properties it already had, and so can’t change them, so if it was objective originally even if people could only select it on the basis of liking or disliking it it would still be objective. And considering that evolution DID find its objectivity useful it would be a VERY tough argument to make to claim that it didn’t start with it.
That’s why it makes no sense. You’d have to argue that we selected for a property it had but, when that selection was done, it didn’t REALLY have that property because the selection process gave it a contradictory property. But we’re interested in what it had to start and so what it “really is”.
It’s not an issue for me, because selection pressures don’t matter for what morality really is. So immoral behaviours can be selected for and have evolutionary benefit without ceasing to be immoral behaviours. The example seems to support my case: if we can find strong cases where the evolutionary selection would select for a moral conclusion and yet it doesn’t, that suggests that we have a separate moral faculty that can be evolutionarily useful but isn’t determined at all by selection. And if we have a separate faculty, then that faculty can be objective even if selection is based on something “subjective”.
Seriously, can you outline in more detail why you think this is an issue for me? Because to me it really seems like you’re asking me how it is that immoral things can have selection value, but since I argue that it evolved by side effect the answer is obvious: we have a capacity that says X is immoral, and that determination remains even if sometimes doing X can be selected for. What morality says doesn’t change and isn’t based on selection.
That it would change depending on what was selected for sounds more like your theory than mine.
That it would be a fallacy doesn’t mean that you aren’t saying that [grin].
You continually argue that if a proposed property of morality cannot be selected for then it can’t be a property of morality. You also seem to argue — and this would seem to be implied by the first argument — that what evolution selected for just ARE the properties of morality. This SHOULD make the properties selected for and the properties of morality pretty much identical. So, then, how can what makes for a prosperous community be selected against by morality?
(To be fair, for both of us the answer is that the moral approach, in the long run, worked out better. But still, this is more an issue for you who more tightly couples them than I do, so I still can’t see why you thought this mattered for my position).
How? It’s rather the inverse: things are moral or immoral independently of evolution, and evolution selects for it because that capability adds value. Evolution is perfectly capable of selecting for all sorts of other capabilities and things as well. But it selects for morality specifically BECAUSE it is independent of self-interest or evolutionary interest. Otherwise, it would have just selected for them and eliminated the contradiction.
So, then, why don’t we like it, if it works out better for us? Remember, we have to accept that morality evolves because it has evolutionary benefit in the long-run. So what evolutionary benefit would disliking that thing that works have?
Again, my view is immune to this charge because it accepts that the two can come apart. But at this point I have NO IDEA how your system of likes evolves with the specific likes we need to have a stable society and so consistent moral premises.
Yeah, I think I’m hitting the fact that your meta-ethical theory is confusing and somewhat vague when you retreat to “likes and dislikes” in general. I’m so confused at this point that I can’t even ask a good question about it. Your view has to evolve, but how does it evolve? What benefit does your like/dislike system have and how does it facilitate it? I’ve gone into detail on mine, but at least can’t recall the details of yours. Help me out here?
Yet you won’t dislike them morally much of the time. People dislike cold, for example, but don’t consider that moral. The example is specifically a moral dislike, but one that works for the warlord and was part of the evolution of a number of communities. If they disliked it morally when it was going on, then the community trumped morality. If they didn’t, then it wasn’t a moral issue then and morality wasn’t a factor, and so no one’s view is bothered by it. So, still confused [grin].
But if we presume that the warlord and his children liked that behaviour, why didn’t we end up with people who liked it if those who liked it reproduced more than others? The most you can get to is arguing that people had feelings that liked it when they were in the top position but disliked it when they weren’t … but that’s what morality was supposed to control in the first place, so if it was merely the “dislike” shouldn’t it have kicked in earlier?
I think I might drop this for now and wait for your more detailed explanation, hopefully in the next comment.
If you’re asking how an objective morality strongly differs from a strong illusion of an objective morality, the answer is that it doesn’t, just as a Holodeck image of a tree doesn’t differ strongly from a real tree, but nevertheless they are different and one cannot argue that what they are seeing is not a real tree just because we can simulate trees on a Holodeck. If you’re asking how an objective morality would differ from a subjective morality based on social approval/disapproval, the answer is as I said below: an objective morality will have as a principle challenging social approval which should not be present in a morality that is itself based on social approval and disapproval.
As I stated above, yes it is true that thinking morality is objective would select as well as it actually being objective. But that’s not really an objection to objective morality, to say that the illusion would work as well as the reality. And we don’t need anything else to get that objective capability selected for.
You seem to be trying to argue here that its objectivity had to be implanted in it by evolution for it to be there, but that doesn’t seem to be the case for anything else. Again, that truth WAS objective is why it was selected for, but evolution didn’t give that to truth. It was already there, and it works out better for it to be the case that it is.
Perhaps you’re asking how evolution could select for the RIGHT answers to moral questions if being RIGHT about morality didn’t have selection value? Well, it can’t … but our moral intuitions aren’t and can’t be right either, so we can easily see that it DIDN’T. And that is obviously harder to do for morality than for truth — since what is selected for will, yes, be more subjective and depend more on what people believe than on what “really is” — and yet we don’t even have that for objective truth either. So we have capacities that don’t get everything right.
So it seems to me that from that last point the closest you can get is to argue that we can’t say that our evolved senses of morality have or always produce moral answers that are objectively correct. Sure, but all objectivists concede that and, in fact, use that fact to justify morality as being objective, saying — rightly, in my opinion — that you have to say that there’s NO right answer to moral questions whatsoever if you care about that, but this leads to, say, saying that slavery is not necessarily immoral and could even be moral, which is something that should raise red flags and so would need much argument and evidence to accept. And you don’t have evolutionary evidence here because, as I said, objectivists already accepted all of it, but just don’t come to the same conclusion as you.
Non-responsive. You’ve retreated to genes while dropping your actual claim — and what you were badgering me about — that morality depends on what OTHER PEOPLE feel about your actions. If morality depends on that and that’s what makes it subjective, then no objectivist cares. And if it depends on my own feelings about what is or isn’t moral, then that is sufficient to make it subjective and the big huge argument that you’ve gone on and on about is utterly irrelevant, and we could have been arguing about THAT instead of this sidebar.
See why this is confusing? There’s always at least two ways to take your statements. The interesting one is the one you end up denying, and the other one is uninteresting but gets you out of counters. So it really does seem to me like you’re making the classic mistake of working to make your theory bulletproof but making it a confused mess in so doing.
Except I EXPLICITLY made reference to who was doing it in each case, so you’re accusing me of not doing something I did do, which I don’t appreciate. To make it clearer: is it the individual’s consideration or the society’s consideration that determines it?
Then your punishment angle doesn’t matter, because it’s based on societal views, not personal views. What this means is this: whatever I feel is moral is, properly, what is moral for me, even if society would punish me for acting on it. That’s simple and clear. So do you accept that?
It means, from the above, that whether something is moral, immoral or amoral can only be assessed from the first-person perspective, by whether that person themselves thinks that it is. This means that if you say “X is immoral” and use that to imply that you will punish someone socially if they don’t act in accordance with that, then if they give in they would typically at best be acting amorally and perhaps even immorally. Thus, you would be forcing them to act immorally by the only reasonable standard of morality that you can have, and so you’d need to justify that.
It would also deny that there is a fact of the matter about whether an action is moral or immoral or amoral. There is. It’s just defined entirely wrt the individual. Also, it means that if, say, I say that contraception is immoral then from my perspective IT JUST IS IMMORAL.
I won’t go into some (more?) issues with this, but this is what that entails. But these are things that you’ve been denying the entire time, PARTICULARLY the idea that if I act in line with the actions you threaten me with public punishment over that I am, most likely, acting immorally regardless of your feelings. This would indeed eliminate all Social Contract notions, but your previous comments really did seem to lean that way.
As stated in the short comment, moral statements, subjective or objective, are seen as having normative force. What you COULD be saying here is that there is no fact of the matter that applies to all people or necessarily to any individual, but I already knew that about subjectivist moralities, and you kept insisting on there being no fact of the matter even when I WAS careful to ask about it openly enough for it to fit subjectivist moralities.
The problem here is that morality gets selected for because it strongly determines how we act, and here you’re saying that we constantly and consistently act AGAINST its recommendations. It’s hard to get it to evolve in that way if that’s true, isn’t it? Again, it’s strongly telling us to damn the torpedoes and act morally even if we are going to be punished, and your explanation for that is that, well, we don’t do that that often (which is probably not quite true). How does that tendency remain or get into us if we aren’t supposed to ever act on it?
Got an actual ARGUMENT for that? Because I don’t see it and have given many arguments against it. To be fair, you don’t agree with them, but I haven’t seen an argument that I can simply not agree with [grin].
On what basis do you make that claim (you rejected the moral/conventional distinction earlier)?
Note that in order to say this, you would DEFINITELY have to be saying that they are amoral, which means that you’d have to have a criterion for that, which above you said was the wrong question or the wrong way to phrase it.
Well, let’s take some specific examples then.
For a strict subjectivist account as outlined above, imagine someone who grew up religious and accepted that morality was defined by religion. They then become disenchanted with religion and out of rebellion decide to act against the precepts of that religion. Thus, they remain convinced that the religion defines what it means to be moral, but find that repugnant and refuse to act on it. Their personal moral code says “X is moral/X is immoral” but their personal preferences and feelings call it undesirable. Thus, they disapprove of acting morally.
An emotivist account can get around this, by saying that what makes it moral is them feeling moral emotions towards those things, and they no longer do, so despite them BELIEVING that they find the religious precepts moral or immoral they lack the proper specific moral emotions and so don’t really hold that. This, of course, would contradict your insistence that there is no fact of the matter about that because the emotivist account, clearly, allows us to tell others that they are WRONG about what they THEMSELVES think is moral or immoral based on a clear and unequivocal criterion. Additionally, there can be a difference between approval/disapproval and morality here as well. Imagine that they consider an action and have an emotional reaction to it of guilt. However, upon assessing it rationally, they decide that it is in their own best interest to do it, and so decide to take the action regardless. At that point, they approve of the action — that’s why they’re taking it — but their moral emotion flags it as immoral. They are ignoring that in taking the action, and so approve of the action despite their moral emotions. This is how Stoics would always act towards moral emotions, as outlined above, although it would be rational assessment of morality that would be doing the work there.
Even for judgements of others, this can happen, where someone feels righteous anger at someone’s actions but approves of it nonetheless because they accept personal self-interest as a justification for action. The key to the emotivist approach is that it is the presence or lack thereof of the emotion that determines it, and not the actual beliefs of the person themselves.
So it looks like they can come apart without any special psychological “love/hate” relationship. In general, no one has to approve of the moral in any way.
The problem for your scheme is that the ACTUAL assessment of like/dislike or approve/disapprove will be what follows from that overall assessment in the brain, and so a combination of ALL of the factors, of which moral factors are a small part. This clearly means that what it means to say that something is moral or immoral CANNOT simply be “I like/dislike this”, as you keep asserting, and also means that approval/disapproval and moral/immoral can come apart.
Because it means that what everyone else in the world would call the “aesthetic experience” is the former, while the latter is the “like/dislike” experience. To concede this concedes that the important part of the aesthetic experience is the sensory portion, not the approval portion. Now, you CAN try to claim that any aesthetic experience has to be an aesthetic judgement and that an aesthetic judgement always includes approval and disapproval, but then this seems to move far away from what any kind of emotivist view would say about moral emotions, which focus on the emotions — ie the experience — itself and use that SPECIFICALLY to drive the approval or disapproval. In short, you feel the emotion and from the nature of the emotion the agent ought to consider the action moral or immoral. If you add in a separate judgement, it’s hard to see what that judgement consists of, what it’s based on, or how it differs or would differ from a Stoic view that assesses the actual morality of the action separately from the emotion itself.
In short, if these can come apart, the question it raises for your view is where the actual MORAL assessment comes in or ensues. Is it in the initial aesthetic “sensation”? The judgment of it? Both? Neither?
On what grounds do you assert that? You seem to be making a rather unsupported claim here that only other moral values can trump moral values, but not only have you never argued for that it seems quite unlikely if we could EVER have values that are not moral ones, or values that are not morally salient, and you’ve conceded that we can have those. So where is your support for this contention?
Let me restore the context of this discussion.
I said:
You replied:
While you still appealed to a moral example here, you agreed with my explicitly stating that we had agreed that the overall value judgement — approval/disapproval — could depend on non-moral considerations. I don’t want to dig back to prove that you really did agree to that, but here at least there are only two options: you agreed to what you now say you never said, or you weren’t paying attention to the critical parts of my actual comment. Either way, is not good for you [grin].
Besides that, taking this tack forces you to make the ridiculous claim that if I say that I dislike lima beans I’m making a moral judgement, which is, as we’ve discussed before, absurd. So we MUST have values that are non-moral, and from there we have to accept that sometimes those values will win out over moral ones. And the entire history of humanity proves that, yes, we will put non-moral values ahead of moral ones. We don’t all always act morally, even when we are convinced of what is the moral thing to do.
Yes, but we do that by pointing to specific properties that are unique to those types of aesthetic experiences. I’ve been asking for those properties for moral experiences and you’ve been insisting that they DON’T EXIST. But to distinguish moral aesthetic experiences from visual aesthetic experiences they HAVE to exist. And, no, talking about them being directed or about other people or how we interact with them doesn’t work because we have some values about how to deal with other people that are NOT considered moral, like rules of etiquette. So either you can keep dodging describing them by insisting no such properties exist but then having no way to distinguish them, or else you can accept that they CAN be distinguished and thus that such properties exist, and then provide them.
That’s been a tension in your defenses of your meta-ethical theory that’s been very frustrating, where you make morality mysterious when you don’t want to or can’t give specifics, but denying that it’s at all mysterious and that everything’s clear when someone challenges your view for being too vague. This is a common result when someone ends up defending positions from specific challenges without seeing how it fits back into the big picture of their theory.
Again, this conflates the meta-ethical with the ethical. Under any subjectivist view, what propositions are considered morally salient will vary from person to person, but the view will clearly state what conditions have to hold for the statement to be morally salient, even if only to allow that agent themselves to determine it or not. I’m asking for those conditions: how would I tell if _I_ consider contraception to be a moral issue or not?
Also note that this means that, yes, there can be statements/values/whatever that are not morally salient, or else you’d have a clear fact of the matter about it: if it’s a value, then it’s moral.
So if I like the movie “Clue”, that’s a moral judgement?
See, when you make generic statements like this there’s really no other rational conclusion to draw than that what makes a moral statement moral is whether it is one of approval or not. But then you get annoyed when I interpret it that way. So either this statement is horribly wrong or incredibly misleading.
Oh, and you contradict yourself later:
Yeah, from the above, saying that the moral content of a statement just IS approval/disapproval which you’ve been using interchangeably with like/dislike does indeed say that, whether you realize it or not.
Yes, but that’s not relevant, because what I’m calling the “error theory” component of your theory is MY assessment of it, not yours. YOU were wrong about what error theory means, but I’m not, and my assessment of that portion is your continually denying that there is any “fact of the matter” about the moral content of a statement. I THINK now that all you mean by that is that what specific propositions are moral or immoral varies from subject to subject, but your insistence on avoiding any HINT of objectivity is causing issues like this where you have to deny any kind of meaningful content — or, at least, meaningful content that we can discern — which is an error theorist approach that clashes with both emotivism AND aesthetic experiences/judgements because in BOTH cases we have clearly discernible, if subjective, content (emotivism, the emotions; aesthetic experiences, the aesthetic experience).
Non-responsive. If moral statements are about aesthetic experiences and aesthetic experiences of a specific type, as you assert, then I can always ask if that statement was conveying information about that aesthetic experience or not. And that is indeed always a fact, and gives us a proposition that has a truth value.
If morality is about a specific sort of aesthetic experience, then there must be a discernible fact of the matter about whether or not the subject had an experience of that sort. If there isn’t, then it can’t be about aesthetic experiences or judgements. Pick your poison.
Why are you using this equivocation again after you made it a big part of your original comment and I pointed out that it doesn’t mean the same thing and so can’t be used the way you want to use it, a discussion that you ENTIRELY IGNORED? If I didn’t accept it in the previous comment, why would I do so here when you just reasserted it?
To summarize it, of COURSE it doesn’t express an aesthetic experience there, because used in that sense it’s NOT an aesthetic term. It’s a slang term merely stating approval. To use it that way is like you taking the term “Cool!” and saying that, because of that, approval has a temperature, and that the temperature must be below average room temperature, and then getting confused when people use “Hot!” as an expression of approval and thinking that it’s just completely contradictory. It’s not. You’re equivocating. Clearly. Please stop.
Please give examples of how they can differ in the strongest sense you can think of, beyond the fact that if morality is subjective people will have different views. Will some people use moral language to describe aesthetic experiences while others use it to describe rational statements in a personally held moral code?
Yes, but that person could still be having moral emotions and moral aesthetic experiences, and so we could STILL tell whether they were making moral statements or not and there would still be a fact of the matter about that. So unless you want to claim that the defining property of the moral is consideration of OTHER PEOPLE’S feelings — which is neither an emotivist nor an aesthetic experience nor a subjectivist view — this isn’t helping your case.
I thought you were supposed to assume that when I say “wrong” I was referring to the meta-ethical theory, not the ethical theory. Because you didn’t do that here. To rephrase:
If someone wanted to impose their love of Metallica on everyone in the world, and do so by force, would they be correctly understanding the nature of the aesthetic judgement of music? Because pretty much all of us would think that they are misunderstanding it, and we think that because the love of Metallica is subjective and you are not justified in imposing subjective preferences on other people. And there seems no way to carve out moral subjective preferences out from these and from this understanding. So, if you don’t accept that someone is understanding the Metallica subjective preference properly if they imposed it on others, then you need more than “I WANT to impose moral values on others” to justify doing that for them. The person above wants to impose Metallica on others as well, so there’s no difference on that score.
Here’s the original context:
You said:
I replied:
So you started from an actual immoral action and insisted that they “rationalized” it, which usually means that they used reason to consider it something that it wasn’t. Taking the standard subjectivist line, I asked why that rationalization, in and of itself, doesn’t just make it moral. Your reply is that the action that you called immoral actually is moral at any point in time, even before they rationalize it. That’s … horribly confusing, and reveals the tension in your view. You have accepted neither subjectivism nor objectivism nor error theory, but keep trying to mix in elements of each that you repudiate when they don’t get along.
Non-responsive. This was my original comment:
The important part there is my challenge to you over what normative force you claim my view is lacking when you don’t know what normative force is. Shouldn’t we settle that before you can claim in any reasonable way that my view would lack it? And that was what the discussions and that post were aiming at doing. I don’t even know what you MEAN by mattering in the real world here, and you’ve insisted that you, yourself, don’t know either!
Good. Now recall that I’m saying that morality is supposed to have Conceptual Normativity, which you ignored even while asking me to show you what normativity morality had.
So my response to that is this: it doesn’t, and you are clearly deluded if you think “twiffing” involves that. How would you defend yourself from my response? You can’t say that you defined “twiffing” as such and so it must be, because that’s not how concepts work, even in mathematics. You’d need to outline the basic principles of “twiffing” and we’d have to agree before you could just say that.
It’s not even that, because you forgot to provide the CONCEPT. Right now, it’s just a random statement about a meaningless word. You still have to hook it up to a concept for it to be purely conceptual [grin].
Says who? My post and my comment are all about arguing that, no, that doesn’t actually follow. I reject moral motivationalism, remember?
No, it gets me to the fact that they refuse to act morally because they are selfish people, because they reject morality because it is not in their self-interest. If someone accepts that and replies “Well, I’ll be selfish then!” what could I do?
This is different from twiff because you haven’t told me what it means to be twiff at all, in any way, and so if I reject being twiff there’s nothing specifically there I could be rejecting. But if you fleshed out twiffing, then the same thing could be said about it. But you haven’t.
Hi again Verbose,
No, not so! My meta-ethical scheme says that there is no “fact of the matter” about what would make something moral, immoral or amoral.
Yes, agreed, but if the meta-ethical theory says that moral terms such as “moral” or “immoral” are reports of people’s subjective value judgements, then there is no fact of the matter as to what actually “is” moral, immoral or amoral, because there is no such thing as what “is” moral, immoral or amoral. People will make different judgements as to what is moral, immoral or amoral.
In the same way, there is no fact of the matter as to what would make something beautiful. (There might indeed be a fact of the matter as to “what would make John regard something as beautiful”, and there might be a fact of the matter as to “what would make Sue regard something as immoral”, but those facts are about human psychology; they are not facts of meta-ethics or meta-aesthetics.)
I disagree! All along I’ve felt that you’ve never really understood my scheme, and that you’re making objectivist assumptions about it that don’t hold.
There is no fact of the matter as to whether “something is morally relevant”. In the same way, there is no fact of the matter as to whether something is a work of art. Are a 5-yr-old’s doodles a “work of art”? Or the daubings of a chimpanzee? There is no factual issue behind those questions; they are just asking how humans regard things, and different humans regard things differently.
The category “moral values” is a subset of the category “values”, but different people draw the subset differently, so there is no fact of that matter.
In a similar way, the category “teenage females who are old enough to marry” is clearly a subset of “teenage females”, but across the world different people would delineate the subset differently.
The commonality between the above examples is that they (“old enough to marry”, “moral”, “work of art”) are all reports of how people think about things, and different people think about them differently, therefore there is no “fact of the matter”.
Because there is no such thing as “is amoral or not”, there is no fact of that matter! The only question is whether *people* *regard* someone as immoral.
This question is only the “right” question to ask in the sense that it’s the *only* question to ask!
Again, that’s a wrong question to ask, since it assumes that there is a fact of the matter, that there is a concrete category “amoral”. Rather, the word “amoral” is a word used by one person to report how they feel about how another person is acting.
Yes.
I disagree that it leads to problems; it only does so if one tries to map it onto moral-objectivist presumptions.
Well, not even that, it makes it stronger if we do believe that it is objective, but it is not fully necessary (we regard many types of aesthetic judgement as subjective, yet aesthetic judgments evolved).
True, but then the objectivist has to explain how evolution selected for objective morality (which is *not* selection based on how other people think and feel, since those things are by-definition subjective).
Evolutionary biology has a whole evidence base and theoretical framework that is independent of human intuitions and feelings about morality. We are entitled to import all of that evolutionary theory in trying to understand one aspect of humans (morality), just as we would if we were trying to understand human kidneys.
I think what you’re missing is that you’re not approaching this from the point of view of well-established evolutionary theory.
Start by considering a species of non-social, non-cooperative mammals. Suppose they happen to be in an ecological niche with strong selection pressure towards a social and cooperative way of life. What would then happen? (By which I mean, happen in terms of basic evolutionary biology.)
What would happen is this: evolution would then program cooperative behaviour, and it would do that by programming attitudes and feelings in the mammal brains (which are by-definition subjective). So it would program social-behaviour-related attitudes and feelings. And that’s all there is to it. That’s all there is to meta-ethics.
I’d have no problem with a scientific field using a term differently from common parlance, and the scientists saying that what they mean by it and what common parlance means by it are different things. That’s not a problem.
So, if philosophers want to say that when they talk about “morality” they are not talking about what is commonly understood by “morality” but about something else, then fine, no problem, so long as they state that clearly.
But surely the interesting question here is understanding “morality” as that term is commonly understood, and if we’re doing *that* then no we can’t redefine the word to something different.
I don’t agree that there are any contradictions and problems in the evolutionary theory. There only seem to be if you make objectivist presumptions.
How do they, or how does evolution, “know about” the “objective morality”?
First, my scheme does *not* depend on “people acting as if morality is objective”, that’s merely a “second stage” that makes it stronger.
Second, ok then, how does your selection for objective morality get started? Let’s presume that it is objectively the case that “one ought not do X” (whatever that’s supposed to mean). Suppose someone does X. What is it about doing X that then leads them to leaving fewer descendants, and thus selection for this objective morality getting started?
If the answer is “other people would punish him for doing X” then explain how other people know about this objective “one ought not do X”.
Agreed. But evolution can *only* select for “what benefits individuals” (genes, strictly). So yes, **if** you completely identify “what is moral” with “what benefits individuals” *then* evolution can select for this “objective morality”.
But, if your objective morality is in any way different from “what benefits individuals” then evolution *cannot* select for it, it can only select for “what benefits individuals”. So either you have to empty out the concept of “morality” entirely and make it merely a synonym for “what benefits individuals” *or* you have no basis on which evolution can select for it.
Well exactly! But that destroys your whole claim that evolution can select for an objective morality.
To have selection for objective morality you need it to be the case that acting in an objectively moral way leads to leaving more descendants, whereas acting in an objectively immoral way leads to leaving fewer descendants.
How did this “separate moral faculty that isn’t determined at all by selection” arise? This idea seems akin to the theologians’ “sensus divinitatis” (though instead of giving knowledge of god it gives knowledge of objective morality). But how did we get it? The theologians can answer “god put it there”. But if we don’t use that answer, and if it didn’t evolve through Darwinian evolution, how did we come to have it?
Seriously, can you outline in more detail why you think this is an issue for me? … the answer is obvious: we have a capacity that says X is immoral, and that determination remains even if sometimes doing X can be selected for.
Whence the “capacity that says X is [objectively] immoral” if it didn’t evolve?
Yes, or rather they are the only properties of morality that do exist (unlike objective oughtness, which doesn’t exist).
Our like/dislike system enables us to act cooperatively and so exploit a cooperative ecological niche. If we had no (subjective) moral sense, no feelings about proper ways of relating to each other, we’d find it impossible to cooperate.
The answer to that is the theory of an “evolutionarily stable strategy” and the Nash equilibrium, which demands a balance between liking and disliking such behaviour. Neither can be completely dominant. The behaviour “warlord who left more children as a result of his violence” can never come to dominate, simply because you can’t have everyone being victorious over everyone else and you can’t have everyone leaving more children than everyone else!
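(To illustrate the kind of balance I mean, here is a minimal sketch using the textbook Hawk-Dove game rather than anything specific to morality; the payoff values V and C below are just assumed example numbers. Under replicator dynamics neither the aggressive nor the peaceful strategy goes to fixation: the population settles at a mixed equilibrium in which “hawkish” behaviour occurs with frequency V/C.)

```python
# Minimal sketch (illustration only): replicator dynamics for the standard
# Hawk-Dove game. V and C are assumed example values, with C > V so that
# pure aggression is not an evolutionarily stable strategy.

V = 2.0    # value of the contested resource
C = 4.0    # cost of an escalated fight

p = 0.1    # initial frequency of "Hawk" behaviour in the population
dt = 0.01  # step size for the discrete-time update

for _ in range(20000):
    # Expected payoffs against a population with Hawk frequency p
    payoff_hawk = p * (V - C) / 2 + (1 - p) * V
    payoff_dove = (1 - p) * V / 2
    mean_payoff = p * payoff_hawk + (1 - p) * payoff_dove
    # Strategies doing better than the population average increase in frequency
    p += dt * p * (payoff_hawk - mean_payoff)

print(f"Equilibrium Hawk frequency ~ {p:.3f} (analytic value V/C = {V / C:.3f})")
```

The point of the sketch is simply that selection favours a stable mixture of the two behavioural tendencies, not the complete dominance of either.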
So how do you ensure that evolution selects on your “objective morality” and not on the “subjective morality with a strong illusion of objectiveness”?
Yes, that’s exactly what I’m asking.
I’d prefer to say that there is no such thing as “whether something is moral, immoral or amoral”. (There is such a thing as whether people *think* those things.)
Actually, I don’t need to justify it! My meta-ethical stance says that, ultimately, there is no justification for anything. The idea that moral opinions are, or need to be, “justified” is an objectivist presumption.
No, there are merely opinions, only likes and dislikes; it’s mistaken to think that these amount to “facts of the matter”.
To me, the “it just is immoral” isn’t a meaningful thing to say. The only meaningful thing is that you dislike it, which is what declaring it immoral amounts to.
As above, evolution doesn’t select for one particular attitude, it selects for a tensioned equilibrium between an attitude and its opposite. So it selects for some balance between being aggressive and peaceful, between being selfish and selfless, etc. Again, this is standard “evolutionarily stable strategy” theory. It’s entirely in line with the evolutionary account that we have a desire to act morally and a desire to act immorally, and that we’re continually in a tension between the two.
My argument is the dictionary (“beautiful”: “Pleasing the senses or mind aesthetically”) which tells us that most people conceive of “experiencing beauty” such that finding the experience pleasant is a necessary component.
That, to me, is again a contradiction. If their “personal preferences and feelings call it undesirable” then it is no longer the case that “their personal moral code says “X is moral””.
I don’t see any contradiction in people having a mixture of conflicting thoughts about something, such that they both like and dislike it (it’s easy for a smoker to both like and dislike their tobacco habit). So someone can have an “it is immoral” thought which amounts to “I dislike it”, and at the same time have “I like it” thoughts.
Why, of course both parts of the experience are important parts! Yes, if you’re having an enjoyable aesthetic experience eating spicy food then, yes, both the “sensory” aspects and the “approval” aspects are important.
… I suspect that use of the “emotivist” label is getting unhelpful again. If “emotivist” has connotations that I don’t agree with then ok then I’m not advocating emotivism.
There you go bringing notions of “ought” into it again! (I’m pretty sure that emotivism does not entail that; but maybe you know better than me on that.) There are no oughts. There are no oughts. There really are no oughts. (OK, so there are *instrumental* oughts, but that’s all.)
Well anyway, that’s not what I am advocating. All I’m saying is that we *do* like some sensory experiences and we do dislike others. That’s all. “Ought” doesn’t enter into it. Why do you always over-complicate things? Surely the idea that we like some sensory experiences and dislike others is obvious and mundane?
Separate judgement?? Again, why are you over-complicating here? If one person likes coffee with sugar in it and another dislikes sugared coffee, what is the problem? Why can’t we just like or dislike sensory experiences? Why do we have to have some sort of “separate judgement” about it?
The “moral” assessment is simply the approve/disapprove evaluation. Yes, the other aspects of the overall experience are important; but the “moral” evaluation is simply one of liking or disliking. In the same way as liking or disliking coffee depending on how sugary it is.
Yes, you’re right here (I worded the bit you are replying to badly). “Moral values” can be trumped by other things, for example someone can steal out of selfishness even if they regard stealing as “immoral”. I re-iterate that (1) what is or is not a “moral” value is not something about which there is a fact of the matter, different people construe things differently, and (2) human psychology is complex and we can have conflicting likes and dislikes, and conflicting desires.
I don’t agree that such “clearly stated conditions” are necessary (or even possible!). Why do we need such? (Again, this seems to be an objectivist presumption, where it would clearly matter whether something was in the “moral” category or not.)
I don’t think that I myself have a clear idea of some conditions that would, in all cases, correctly discern whether I regard something as being in the “moral” category or not. Similarly, I don’t have a clear set of conditions for whether I regard something as “art” or not. Why would I need such?
Just ask yourself! Do you or not?
Your question seems to presume that morality works by: meta-ethical theory ==> set of rules for implementing morality ==> therefore finding X moral or immoral.
It doesn’t! Values don’t work like that! At the basis of it are moral values, which we either have or don’t have. From there, the “conditions” are *commentaries* that people have arrived at about their moral values. (And most people’s commentaries about their moral values are not necessarily that accurate.)
No, because *moral* aesthetic approval/disapproval judgements are a subset of the wider category of aesthetic approval/disapproval judgements. (And different people construe that subset differently.)
No, the things people regard as “moral” approval/disapproval is a subset of all approval/disapproval judgements.
Ask people to make List 1 “things I approve of” and List 2 “things I *morally* approve of”. List 2 will be shorter, but everything on List 2 will be on List 1. But, again, how people will construe the subset will be subjective.
I’m sorry but I don’t see any contradiction there. The moral content of a statement really, really, really IS just approval/disapproval. But we only use the “moral” language about a subset of the topics about which we make approval/disapproval judgements.
You keep supposing that there must be some extra content that then establishes a fact-of-the-matter as to whether it counts as a *moral* judgement or not. There isn’t!
Yes, the “moral statement” does convey information. It conveys approval or disapproval. It conveys whether the aesthetic experience was agreeable or disagreeable. (Note that there is no fact of the matter about which “specific types” of aesthetic experience qualify for moral language. Therefore the statement does *not*, in itself, convey information about that.)
I’ve not said that morality is about a “specific sort” of aesthetic experience; I’ve said it is about a subset of aesthetic experiences where different people construe that subset differently and thus there is no fact of the matter delineating the subset.
Exactly, it expresses approval of that aesthetic experience.
People will do both of those things. They’ll use moral language in all sorts of weird ways, especially as most people’s meta-ethical conception is wrong. But, in both of those cases, the moral language will express approval or disapproval (whether that is approval of an aesthetic experience or approval of statements in a personally held moral code).
Your statement presumes that there is a fact of the matter about what are “moral emotions” such that other people could discern that fact of the matter. There isn’t.
There is no “defining property of the moral”.
He might or might not be. He might be entirely aware that love of Metallica is a personal and subjective preference, and yet still want to impose it on others.
Your presumption that one needs moral *justification* is a moral objectivist presumption. Ultimately there is no justification for anything (there are only instrumental justifications resting ultimately on our desires). It is also not the case that one needs justification in order to act; actually, we can act regardless of whether we are “justified” in doing so.
Again, you’re presuming that there is some fact of the matter as to whether the act was in the “moral” category or the “immoral” one. There isn’t, so this issue is no problem for me. Yes, the parent will think “Act X is usually regarded as immoral, but in order to save my child I’ll regard it as moral”.
There is no tension in my view, once you consider that there are no facts of these matters; all there is is approval and disapproval by humans. And given that human psychology is complex, we should not expect what humans approve or disapprove of to form neat and distinct categories.
I do know what normative force is, it’s the thing that gives objective morality its oughtness. Morality is not just descriptive (“grass is green”), it says “you ought to do X”.
I deny that there is any normative force because I deny that there is any objective oughtness. The only oughts are instrumental oughts deriving from our values and desires (“I don’t want to get fat, so I ought not have a second slice of cake”). But our values and desires are subjective, so that only gives a subjective morality.
Anyone arguing for objective morality needs some sort of normative force in order to give objective oughts (ones that hold independently of our subjective values and desires). If you don’t have a good account of what the normative force is in your scheme, and whence the objective oughts, then you haven’t even begun to have a scheme for actual objective morality.
Well I do know what “mattering in the real world” means, it means that we, as real human beings in the real world, “ought to abide by” your conceptual moral scheme. And it’s the “normative force” that achieves that somehow.
The thing I don’t have any clue over is what the “normative force” actually is, how it works, why it matters in the real world, and where the objective “oughts” come from.
But that answer simply sticks a label on it, rather than giving any actual account or explanation of why your scheme is normative. Unless you have such an explanation your scheme is a non-starter. It amounts to being like “twiffness”, which is a conceptual scheme with no normativity.
It’s my conceptual scheme so I can define it how I like!
I have outlined the basic principles of “twiffing”! Indeed I’ve explicitly given you the full entirety of twiffing: “Being “twiff” involves the normative instruction that you must balance a pencil on your nose once a day. That’s it, there’s nothing more.”
The concept is: “you must balance a pencil on your nose once a day”. That’s it, there’s nothing more.
Oh yes I have. To be twiff is to balance a pencil on your nose once a day. That’s it, there’s nothing more.
Yes there is something specific that you are rejecting: you are rejecting the instruction to balance a pencil on your nose once a day.
Of course I’ve given you no good reason for being “twiff”, but then there is no good reason to comply with any conceptual scheme unless one of: (1) it has normative force, giving it objective oughtness; or (2) following it would attain our (subjective) aims.
Things have been hectic for a bit which is why I haven’t gotten back to this for a while. Plus, it takes me a while to write these so I have to have time in a block set aside to do it. I’m also going to try to be more focused and zoom in on the critical disputes instead of trying to answer everything.
You’re confusing the meta-ethical and ethical levels again. For you, there is no fact of the matter at the ethical level about what makes something moral, immoral or amoral; but since that’s a consequence of a meta-ethical theory or examination, and you think that consequence is correct, it is a fact at the meta-ethical level that there is no fact of the matter about that at the ethical level.
Whatever you mean by that at the ethical level. See, most cases where someone says flat-out that there is no “fact of the matter” about what makes something moral they are advocating some kind of error theory: there’s no fact of the matter because the terms themselves are meaningless. That doesn’t seem to be your position. You really do seem to be advocating for some sort of subjectivist theory, but they’d usually say that what’s moral, immoral or amoral depends crucially on the appropriate subject. Of course, objectivists would just say that there’s an objective fact about that.
I’ve already outlined what that is for some standard theories, which you’ve ignored. So let me try to say how your views are confusing here. If you hold an aesthetic preference view, then that is similar here to an emotivist view, but to an emotivist there IS a fact of the matter about what makes something moral, immoral or amoral: the presence of specific experiences. This would also hold true if the defining property of the moral was specific aesthetic experiences … whatever you mean by that. Thus, I can in theory tell someone whether or not morality is active in them, just as in theory I can do that for beauty by looking for the behavioural correlates of those internal experiences. So if that’s your view, then saying that there is no fact of the matter about what’s moral or immoral is false; there is, and it’s a fact about the internal experiences of a specific subject. It’s thus still subjective, but we know precisely what internal experiences are associated with morality. So your denial sounds more like a strong subjectivist approach, where what is moral or immoral is determined by what the subject believes it to be, and nothing more. While saying that there’s no fact of the matter about that sort of view is a bit confusing, it’s reasonable because it is entirely subjective. But then if I say that my morality is Stoic, then that’s what it is, and so I can personally reject your aesthetic preferences approach and even insist that morality is always about the things I dislike and there’s no argument you can make about that.
So, given that, you need to decide what approach follows from your meta-ethical theory, and how it does, and if you think you have a valid hybrid you really need to say what that is and how it works. Personally, I think that you are closer to an emotivist — but not one, because you don’t base it on emotion — but are so scared of any sort of objectivism that you react against anything that looks anything at all like it. I think that you need to understand what would be real and so semi-objective without that having to make your ethical view objectivist.
But the meta-ethical question is: what broad process do people use to draw that subset? As noted above, if that process is what is subjective, then your aesthetic preference theory can’t be a proper meta-ethical one, because at the ethical level people could hold a completely contradictory one, for example a Stoic one, and they couldn’t be wrong about that at the ethical level.
If this was actually true, then no one could have understood my essay on psychopaths being amoral, as in being physically incapable of understanding and acting on any kind of moral code because they can’t understand it. So this claim is … dubious, to say the least. Even more so because I had previously pointed out how you can get notions of amorality that follow from subjectivist theories. To remind you of that, for emotivists and any view that relies on the internal experiences of a subject it’s either someone who doesn’t have those experiences or who doesn’t act on them, and for strict subjectivists it would be someone who can’t build a moral code or who doesn’t follow their moral code. Given this, this statement doesn’t seem to be at all true.
Okay, I see the problem now.
What you’re doing is speculating on a way for morality to develop that is consistent with evolutionary biology as a whole. But the point of using evolution to derive our meta-ethical theories was the hope that we could take our evolved sense of morality — which is mainly our intuitions about it — and our actual evolutionary history and just make that our meta-ethical theory. And any attempt to do that leaves a messy meta-ethical theory, mostly because of the clash between objectivism and subjectivism: our intuitions are strongly objectivist, and it seems likely that our actual evolutionary history relied strongly on us considering morality to be objective, but there doesn’t seem to be an objective basis that we can use to justify it, and people’s views of what is and isn’t moral vary strongly. So what you’re doing here is essentially what we’ve both been doing: trying to build a less messy theory while remaining consistent with our evolutionary history. Because you’re a scientismist, you’re trying to stick to evolutionary biology because that gives it more credibility in your eyes, but make no mistake: you’re speculating and philosophizing as much as I am, just inside a different framework. So you don’t get any automatic bonus, because you aren’t using the facts of our evolutionary history any more than I am. At the end of the day, you have a view that you think is consistent with our evolutionary history and whose challenges you feel you’ve patched over, and I have the same. You claim that mine has incompatibilities with our evolutionary history and I claim yours has, we’ve both found arguments that we think patch over those, and neither of us thinks the other’s patch-ups really work. So you can’t declare, as you have, that your view is the only one compatible with our evolutionary history, since we both think our views are compatible with it. So we need to argue that out. Your evolutionary-biological speculations don’t make them what actually happened.
You could argue that we don’t know enough about the evolutionary history of morality and so can only speculate about what could have happened. That’s true, but we still would have our evolved intuitions and it still wouldn’t mean that your speculations are correct either, and so they still wouldn’t get primacy over something that was derived more from our intuitions.
You are walking into a philosophical debate willingly, knowing that it is one, and trying to convince philosophers — one in particular — that your view is correct. It would seem a bare minimum of epistemic humility here is to ensure that you know what the common terms in that debate mean.
But let me cut to the chase: a dictionary definition tracks common usage, and so isn’t an argument. This is especially true if I’ve already conceded the definition but pointed out that it’s incomplete, and given arguments for that. You need arguments, and a dictionary definition doesn’t count.
For example, the definition of “beauty” includes that it’s pleasurable, but I’ve already conceded that, in general, in humans we find that experience pleasurable, so for common usage that’s accurate enough. But here it isn’t because you need it to be logically the case that beauty is pleasurable, and I’ve given arguments showing that conceptually that doesn’t seem to be true. So you need an argument for why mine are wrong and your view is correct.
It potentially could do that, but the latter is actually far more complicated and so would be harder to make work. Essentially, what you’re doing is the same thing as Plantinga asking how we know that our cognitive faculties are producing truths and not useful falsehoods. He’s right that we can trace an evolutionary path where that happens, but that’s over-complicating things, so we have no reason to think it’s true. You can argue that the differences between what people and cultures think is moral provides sufficient reason for your case, but then we can retort with useful falsehoods that DID survive. So all you really have there is your view that morality is subjective, and since I disagree with that, that’s not getting us anywhere.
So let me focus on a big difference in our views. Your view focuses on social punishment, where acting morally is reinforced by others disapproving of us, and this leaves you with an other-focused idea of morality where moral judgements are about criticizing others. This, as I’ve pointed out, clashes with our basic intuitions and behaviour, where we put more emphasis on our own judgements than on those of others. I’ve made the mistake of following you down that path instead of focusing more on the heart of my theory, which is that morality evolved because it promotes cooperation, by promoting reliable selfless behaviour. In short, the key to morality is that it promotes us acting in ways that benefit the society even when no one is watching us, and so we don’t cheat and others can rely on us not cheating, which makes society stronger in a way that benefits everyone in it. The same reasoning applies to why religion evolved: it does the same thing.
Why this theory works is that it predicts and explains these properties of morality:
1) Why morality is selfless rather than selfish, so much so that theories like Egoism seem utterly incompatible with morality.
2) Why morality is so focused on the individual’s views even in opposition to those of society as a whole despite the fact that morality almost certainly evolved because it provides for stable societies.
3) How morality can be selected for: it’s pro-social but does so in a way that makes the individual responsible for their actions and accepting the pro-social aspects. Again, this means that they won’t cheat on it just because they think they can get away with it.
The key problems with your alternative are that people generally accept the punishments for acting against the public consensus if they think it’s the moral thing to do, and that your view would promote the idea that people should act in their own interests when they think they can get away with it, which is both contradictory to morality AND would collapse rather than preserve morality.
You may reply here with “But wouldn’t an illusion of it being objective work here as well?”. It might. That’s why your view isn’t completely contradictory to evolution. But there’s no real reason to think that’s actually the case, and you only accept it because you’re taking the subjectivist tack, which I’m not. An actually objective morality does it just as well without risking people realizing that it’s all an illusion and breaking down the societal bonds that morality provides.
The key to all of this for you is this: evolution can only select for things that impact our behaviour. Any time you suggest that we don’t act the way morality would suggest, you weaken your case that your theory is really describing things as they are. We can indeed deviate in behaviour a bit and have things still evolve, but the less we act, or can act, on those things, the less likely it is that they evolved at all. And you are focusing on the claim that morality evolved.
You keep retreating to this line while refusing to answer how I can’t use that same argument to maintain that I don’t need to justify in any way my belief that God exists. You want us to act rationally right up until the point that it clashes with something you want to do and can’t rationally justify. That’s logically inconsistent at best and hypocritical at worst.
For morality specifically, demanding through social force that someone act immorally — from their perspective, which is the only one that matters for you — in the name of morality is logically inconsistent, and also seems like something you wouldn’t, and haven’t, approved of others doing … again, for example, religious people imposing their religious morality on you.
Again, you declare that declaring something immoral merely means you dislike it while later you insist that not all dislikes are morally relevant ones. You’re making the classic mistake of trying to patch around issues without figuring out how to incorporate that into a clear system. So you really need to tell me in detail how that all fits together, and how I, personally, am supposed to tell if, in myself, I have a moral dislike/aesthetic experience or not. And if you can’t do that, then you’re not even a subjectivist anymore, because in any subjectivist theory if anyone can know what counts for that subject as moral or immoral, it’s the subject itself. If even they can’t do that, then morality is a meaningless term.
(Briefly jumping ahead)
This means that if I decide that I dislike contraception but consider it moral then I’m right about that, which contradicts your entire view. It would also mean that if I decide that contraception is immoral based on feeling-less reason, I have to be right about that as well, undercutting your aesthetic preferences theory. This is exactly why you need to put all of this together into something coherent.
Morality is the selfless side. Pragmatics is the selfish side. So morality itself won’t contain selfish behaviours.
But people like that exist. Not everyone, at least not immediately, converts their view of morality should they reject religion. You can’t have a logical contradiction on something that exists in reality.
This would mean that they dislike it from the moral perspective and like it from other perspectives. Fine. But then, even in your theory, all that saying they find it immoral would be saying is that they dislike it from the moral perspective or, to put it better, that they find it immoral. There’d still be a moral perspective that we can and need to explore to determine what that statement really means.
It does. That’s precisely how meta theories work, and precisely the relation between meta-ethics and ethics. Values come in at the ethical level, not the meta-ethical level.
But doesn’t that imply that they are WRONG to use it that way? You can’t have it both ways. Either morality is only what they think it is, at which point their usages and meta-ethical theory can’t be wrong, or else they can be wrong and there’s a fact of the matter about what the right way to conceive of morality is.
But then you attacking my view and demanding that it provide sufficient normative force is a red herring, as you don’t think such a thing does or can exist and so there is no way that I could EVER provide that to your satisfaction. So we’d have to settle objective oughtness first, and your comments here are distracting from that and allowing you to completely ignore all of my explanations of conceptual normativity, that I’ve explained both here and in an entire post and that you haven’t directly addressed.
At which point, it’s not my conceptual scheme and so we’re talking about completely different things. To extend this to my notion of conceptual truths and morality, to pull this sort of trick for morality would mean that we don’t mean the same thing when we talk about morality, and so we don’t have any kind of meta-ethical theory or model, and the term morality is mostly if not completely meaningless. So that’s not what we’d be doing when defining what follows from the concept of morality. We aren’t just making stuff up, but are teasing out what that common concept of morality is.
Twiff is not a common concept, but surely morality is or else this discussion was pointless from the start.
That’s not a concept. It’s barely a rule. The issue here is that you have no idea what a concept actually is if you think that that would count as a concept and that concepts are just things we make up. Which is doubly sad since I deliberately linked concepts and meta-ethical theories to models so that you could understand them.
Hi Verbose, I’ve finally got round to replying!
Whereas I am saying that there is no “fact of the matter” because moral declarations are value judgements, and value judgements are not objective facts, they are subjective feelings.
Yes, I’m advocating a subjectivist theory. And what people judge to be moral or immoral depends on the person (it’s a subjective value judgement).
I prefer not to say “What IS moral depends on the appropriate subject”, because to me that implies that there is a fact of the matter as to what is moral, but that the fact depends on the person. That, to me, sounds contradictory.
It’s thus much clearer to say: There is no fact as to what is or is not moral; declarations as to what is moral or immoral are simply declarations of what the speaker likes or dislikes, and those declarations will thus differ from person to person.
I’ll avoid classifying that under one of the philosophical moral theories, because every time I do that you tell me that that theory has connotations that I don’t necessarily intend. 🙂
It seems to me that you’re way over-complicating the analysis of a rather simple view. Would you understand what we mean by “Tom likes coffee” or “Dan does not like coffee”? Yes, there are all sorts of “experiences” wrapped up in those judgements, but in the end there’s a simple evaluation that Tom likes the experience of drinking coffee and seeks it out whereas Dan does not like the experience of drinking coffee and so avoids it.
Do we need any more philosophical analysis than that? Do we need an investigation of the various components of the experiences each of them is having? Sure we can do that, but isn’t the basic concept of liking or disliking something clear enough already?
If it is, then my account of morality is simply that “moral” and “immoral” are terms of approval or disapproval in the same way that “like” and “dislike” report approval or disapproval.
I guess so, yes. We could indeed — in principle, if we knew enough — put both Tom and Dan in a brain scanner while drinking coffee and deduce which of them was liking the experience and which disliking it.
No, there would indeed be facts of the matter: “Tom likes coffee” and “Dan dislikes coffee”, but there would be no fact of the matter “coffee tastes nice”.
Similarly, there would be facts of the matter: “Tom regards X as moral” (= “Tom approves of X”) and “Dan regards X as immoral” but there would be no fact of the matter “X is moral” or “X is immoral”.
Agreed, but surely those are facts of the form: “Tom regards X as moral” (= “Tom approves of X”)? Are you then trying to say that, therefore, “X is moral” is the fact of the matter, if we’re considering the “specific subject” of Tom?
I guess you could try to say that, but to me it is weirdly unhelpful and confusing.
Which I’m not saying. My whole stance is that trying to ask “what is moral or immoral” is the wrong question, and that we should only ask about the feelings and values of different people.
I’m quite happy to accept your report of your values and feelings, and thus what moral codes you want to adopt and live by.
I really have tried to state my position as clearly and succinctly as I know how. As above: “my account of morality is simply that “moral” and “immoral” are terms of approval or disapproval in the same way that “like” and “dislike” report approval or disapproval”. And that really is all there is to it.
But I do base it on emotion! So long as, by “emotion”, we mean feelings and values generally. (Which is how I understand “emotivism”.)
Different people use different broad processes. There is no “fact of the matter” as to what is in the “moral value” subset of “values”. The “moral values” subset is a human-constructed category (not a natural category) and different people will construct it differently. For example, some people will construct it using the process “those values that [I think] are mandated by God”, whereas others will not.
I’d readily use the word that way, yes, but there is no fact of the matter as to what an amoral person “is”, since all “moral” categories are human constructed ones, the only facts are about how humans use the word.
And I submit that we can.
And all we need do is say that the objectivist intuitions are merely an illusion that evolution programmed into us to make our moral senses work more effectively, and then we have a straightforward and problem-free meta-ethical theory.
Is that a “messy” theory? Well, in a sense, evolution does indeed cobble things together in an unplanned and “messy” way. The account of the evolution of our kidneys or anything else is also going to have messy aspects.
Why do I need it to be “logically the case that beauty is pleasurable”?
No, it’s not harder work. Evolution’s traction will be on how people act, which derives from how they think and feel. Thus it will necessarily be the case that evolution is selecting on people’s *subjective* sense of morality.
Yet I see no way in which evolution has direct traction on “objective morality”, whatever that is.
It might well do, yes, agreed. That means that (1) we have subjective feelings about things, which are the result of (2) what leads to greater survival/reproduction. This is exactly how a *subjective* morality (one about human feelings) derives from non-moral evolution.
Agreed. And there does not have to be any objective truth behind religion for that to work. Similarly there does not need to be any objective morality for our subjective moral senses to evolve.
I don’t see mine as an alternative to yours; I agree with yours. You’re simply emphasizing different aspects of what evolution will select for.
Or, more to the point, we’ll have been programmed with a variety of both selfish and selfless attitudes. (Though we usually choose to use the term “moral” only for a subset of these attitudes, namely the selfless ones.)
Agreed.
Again, nothing you’ve said is out of line with my scheme. You are effectively saying that a non-moral process has programmed us with subjective feelings about how we behave. Agreed.
No, my scheme does not prescribe “shoulds” like that.
Nothing you have said above about how evolution works is about an *objective* morality! You’ve been expounding on an account of the evolution of a subjective scheme.
Evolution does not program us with an objective scheme! Evolution selects for what does or does not benefit survival and reproduction (that means it is a non-moral process). In some evolutionary niches that will lead to aggressive, solitary animals; in other evolutionary niches it will lead to cooperative, social animals (like us). In the latter case it will program *subjective* moral feelings to promote cooperation.
What you are expounding ha