Another philosopher of science doesn’t understand science

Maybe I’m having a philosopher-bashing week. After disagreeing with Susan Haack’s account of science I then came across an article in the TLS by David Papineau, philosopher of science at King’s College London. He does a good job of persuading me that many philosophers of science don’t know much about science. After all, their “day job” is not studying science itself, but rather studying and responding to the writings of other philosophers of science.

Papineau writes:

No doubt some of the differences between philosophy and science stem from the different methods of investigation that they employ. Where philosophy hinges on analysis and argument, science is devoted to data. When scientists are invited to give research talks, they aren’t allowed simply to stand up and theorize, however interesting that would be. It is a professional requirement that they must present observational findings. If you don’t have any PowerPoint slides displaying your latest experimental results, then you don’t have a talk.

I wonder, has he ever been to a scientific conference? “When scientists are invited to give research talks, they aren’t allowed simply to stand up and theorize, however interesting that would be.” Err, yes they are! This is entirely normal. Scientists who do that are called “theorists”; and yes, they do indeed stand up at conferences and talk only about theoretical concepts and models. Such people are a major part of science. Universities have whole departments of, for example, “theoretical physics”.

How could Papineau have such a gross misconception? I suspect it comes from trying to see philosophy and science as distinct disciplines. The philosopher knows that philosophy is largely about concepts, and also knows that science is about empirical data. So the philosopher then leaps to the suggestion that science is only about empirical data, and not about theorising and concepts. After all, if science were about both empirical data and theories and concepts, then philosophy would not look so distinct and exalted in comparison.

Yet the “not about concepts” claim makes no sense, since science is just as much about theories and models as about data. Without theories science would have only raw, un-interpreted streams of sensory data. In actuality, science is an iteration between theories and models, on the one hand, and empirical data on the other. Both are equally important, and the real virtue of science lies in the iterative interaction of the two.

Papineau further displays his lack of understanding of science:

Scientific theories can themselves be infected by paradox. The quantum wave packet must collapse, but this violates physical law. Altruism can’t possibly evolve, but it does. Here again philosophical methods are called for.

Not so. There is no physical law that prohibits wave-function collapse (which is not the same as saying we have a good understanding of it). And the theory of reciprocal altruism gives a satisfactory account of the evolution of altruism, even in unrelated animals (kin selection explains it readily for close relatives). In neither case have the advances in understanding been driven by philosophers.

But Papineau continues:

We need to figure out where our thinking is leading us astray, to winnow through our theoretical presuppositions and locate the flaws. It should be said that scientists aren’t very good at this kind of thing.

Ah yes, the conceit that only philosophers can do thinking, whereas scientists are not so good at it. This, one presumes, follows from the suggestion that science is merely about data, whereas philosophers deal with the concepts? Again, this is about as wrong as it is possible to get.

The theory of evolution; the theories of special relativity and general relativity; the theory of quantum mechanics and quantum field theory; the standard model of particle physics; the Big Bang model of cosmology; the theories of statistical mechanics and of thermodynamics — these are all the products of science, and are demonstrably highly successful in giving understanding of phenomena in the world, in making predictions about those phenomena, and in enabling us to manipulate the world around us and to develop highly sophisticated technology.

What have philosophers got that is even remotely comparable in terms of demonstrated success? But Papineau wants to suggest that it’s the scientists who are “not very good” at theorising and thinking!

When they are faced with a real theoretical puzzle, most scientists close their eyes and hope it will go away.

He really doesn’t know very much about theoretical physicists, or about scientists in general, does he! He then calls it a “great scandal” that:

Led by Niels Bohr and his obscurantist “Copenhagen interpretation” of quantum mechanics, [physicists] told generations of students that the glaring inconsistencies apparent in the theory were none of their business.

This is just wrong. It’s not that there are “glaring inconsistencies” — quantum mechanics is consistent and works well in accounting for the data — it’s that the interpretation of it is unclear.

“Shut up and calculate” was the typical response to any undergraduate who had the temerity to query the cogency of the theory.

No it wasn’t. Generations of undergraduates have been told about the difficulties of interpretation. Papineau doesn’t realise the degree of whimsy in phrases such as “shut up and calculate”; it is not intended literally! In fact there is no subject that physicists have deliberated and discussed more over the last 100 years than the interpretation of quantum mechanics!

So, after touting the alleged superiority of philosophers when it comes to thinking, how does he then explain away the blatant fact that science has been vastly more successful and makes vastly more progress than philosophy?

He concludes that philosophy is simply harder!

The difficulty of philosophy doesn’t stem from its peculiar subject matter or the inadequacy of its methods, but simply from the fact that it takes on the hard questions.

I beg to differ. I don’t see the questions of philosophy as any harder. Instead the lack of progress is fully down to its methods, and the principal culprit is seeing philosophy as distinct from science, rather than as a part of science. By regarding itself as separate from science, philosophy divorces itself from empirical data, and so from information about the real world. Humans are simply not intelligent enough to get far by thinking alone, without any prompts from nature. Scientists aren’t, and philosophers certainly aren’t.

Papineau finishes by giving one example of what he sees as actual progress in philosophy:

The deficiencies of established views are exposed . . . The “boo-hurrah” account of moral judgements was all the rage in the middle of the last century, but no-one any longer defends this simple-minded emotivism.

But no actual deficiencies of emotivism have been exposed; it may be out of fashion in the philosophical world, but that really is just fashion. Is this really Papineau’s example of progress? It’s as likely that it’s a retrograde step.

Emotivism — the idea that morality is a matter of value judgements, pretty much akin to aesthetic judgements, and amounting to emotional approval or disapproval of certain acts — is a widely held opinion within science. Indeed it is the only account of morals that is consistent with the fact that humans are evolved animals. If philosophers move away from that position, and wander off to explore other conceptual possibilities that don’t relate to how humans actually are, they will be condemning themselves to further irrelevance.

63 thoughts on “Another philosopher of science doesn’t understand science”

  1. richardwein

    Hi Coel. You’ve done a good job of criticising Papineau’s errors regarding science. I thought I’d address some of his other errors.

    “Today’s philosophers still struggle with many of the same issues that exercised the Greeks. What is the basis of morality? How can we define knowledge? Is there a deeper reality behind the world of appearances?”

    Yes, many (not all) philosophers still struggle with those questions, more’s the pity.

    “What is the basis of morality?”

    This question is based on the incorrect assumption that there’s an objective morality that can in some sense have “a basis”.

    “How can we define knowledge?”

    I think we have a good enough understanding of the word “knowledge”. Problems only arise when philosophers look for an artificially precise definition.

    “Is there a deeper reality behind the world of appearances?”

    There’s the reality that physicists are in the business of telling us about. That’s all. The idea of a metaphysical “deeper reality” is mumbo jumbo.

    “Philosophical issues typically have the form of a paradox. People can be influenced by morality, for example, but moral facts are not part of the causal order. Free will is incompatible with determinism, but incompatible with randomness too. We know that we are sitting at a real table, but our evidence doesn’t exclude us sitting in a Matrix-like computer simulation.”

    There are no paradoxes there, just confusions. People can be influenced by their moral values and beliefs, which are part of the “causal order”. The fact that the supposed moral facts are not causal is one very good reason for doubting that they exist. “Free will” is such a confused concept that I say there are no good grounds for saying that it is or isn’t compatible with determinism. If “real table” means (in this context) a table that is not part of a Matrix-like simulation, then it makes no sense to say that we “know” we are sitting at a “real table”.

    “In the face of such conundrums, we need philosophical methods to unravel our thinking.”

    What kind of “philosophical methods” does Papineau have in mind? We certainly aren’t going to resolve those conundrums by means of the “philosophical methods” of Papineau and traditional philosophy. The primary problem with traditional philosophy is that it asks confused questions. There are some philosophers (such as Wittgenstein) who’ve made this point. In addition, I think that many philosophers and non-philosophers alike can sense that there’s something wrong with these “metaphysical” questions, even if they can’t quite put their finger on where the problem lies.

    “The difficulty of philosophy doesn’t stem from its peculiar subject matter or the inadequacy of its methods, but simply from the fact that it takes on the hard questions.”

    The difficulty of traditional philosophy stems (in large part) from our ability to ask questions that _sound_ meaningful without actually _being_ meaningful.

    Reply
  2. Mark Sloan

    Coel,

    I would much prefer philosophers to be coworkers in advancing the usefulness of science rather than adversaries.

    However, “If acceptance of an idea threatens one’s job, it is remarkable how difficult it can become to understand that idea.” I expect this is part of the problem this philosopher of science has – that the philosophy of science is just not nearly as central to science as he would like to think it is.

    In my experience, moral philosophers typically have an even more dysfunctional relationship with the science of morality because it even more centrally threatens their livelihood. Regarding moral philosophy, it seems to me a useful dividing line to demarcate science’s domain as about what ‘is’, and philosophy’s as about what ‘ought’ to be. I don’t know if there is a similar simple division for science in general and the philosophy of science.

    Reply
  3. Anton Szautner

    I was aware there were philosophers who don’t understand science, but until now I never imagined there were any philosophers OF SCIENCE that get it so utterly wrong. It’s downright breathtaking. It seriously makes me wonder if he knows what he’s talking about when he’s talking about his own field – of philosophy. Based on his grasp of science and its method, I sincerely doubt it. I once heard a stern and frequent critic of philosophy refer to the field as “The Science of Pontification”, adding that “it’s mostly about making stuff up while delighting in the sound of one’s voice”. I thought, surely, that must be an exaggeration. Yet here is just an example of one who constructs statements that satisfy an appearance of meaningful communication but are largely bereft of veracity. He can lay claim to that talent, at least: the skill involved isn’t inconsiderable. Thinking clearly – and perhaps sincerely – however, doesn’t seem to be necessary.

    Reply
  4. Phil

    Coel writes, “So, after touting the alleged superiority of philosophers when it comes to thinking, how does he then explain away the blatant fact that science has been vastly more successful and makes vastly more progress than philosophy?”

    Progress towards what? What is the knowledge explosion leading to? Utopia? Collapse? It seems less than superior thinking to assume “more successful” and “more progress” without considering that question. How many hair-trigger nuclear weapons have to be aimed down our own throats before the science clergy will stop blindly assuming an out-of-control knowledge explosion to be a huge success??

    Again, science is great at developing new knowledge. Developing new knowledge does not automatically equal progress and success. The success of the “more is better” relationship with knowledge has created a radically new environment where that paradigm can no longer be assumed to be true.

    Some people get this. Few of them appear to be scientists.

    This is understandable. Why would a scientist inspect and challenge the “more is better” relationship with knowledge when their cultural authority and paychecks depend upon it? Why would a scientist stand back and observe the big picture of the knowledge explosion as a whole when the reductionist nature of science rewards those with a talent for burrowing deeply in to narrow areas of investigation?

    Reply
  5. Phil

    What I see in many posts across the blog, including the last two, is the very human need to belong to something, and for that something to be declared superior, “the answer”.

    We used to express this in the context of religion. Religion has been discredited for many, but this human need remains even after religion is dead, so we go looking for new flags to wave.

    Some of us have chosen reason and science as our new “one true way”. And just like we used to do with religion, there’s a tendency for modern seculars to need enemies to push back against as a mechanism for reinforcing our allegiance to our chosen flag.

    What observing this seemingly universal process as objectively as possible can teach us is that these divisions don’t arise from religion, or science, or any other philosophical perspective. They don’t arise from the content of thought, but from the nature of thought, from the way thought operates. Seeing this clearly tends to unite us, because we are all made of thought and subject to its properties.

    This seems a very worthy topic for scientists and philosophers to study together, for it is when these thought-generated divisions become the most acute that the dangers presented by knowledge can become the most pressing.

    Reply
  6. Phil

    If we’re going to have a smack down competition between philosophers and scientists, here’s where I’d like to see the contest take place.

    Which writers, in any field, are talking intelligently about the assumptions which are the motor driving the knowledge explosion assembly line? As example, who is willing to inspect the “more is better” relationship with knowledge and investigate what the limits to that paradigm might be?

    Scientists might be seen as the factory workers who have built the knowledge explosion assembly line and continually tweaked it into ever-accelerating high performance. Thanks to them, we now have the ability to produce new knowledge at amazing rates. That’s great, applause is indeed warranted, but…

    Who is asking what the appropriate rate for the assembly line should be? I honestly don’t know and would welcome education on that score.

    I’m not really that interested in details about the new products coming off the end of the assembly line (AI, genetic engineering, etc.). Thus, most futurists tend to lose my attention.

    I’m interested in the assembly line which is producing such powers. Which group or individuals can speak the most intelligently to that? Let’s have that competition.

    Reply
    1. Neil Rickert

      Which writers, in any field, are talking intelligently about the assumptions which are the motor driving the knowledge explosion assembly line?

      Science is not an assembly line.

  7. Eric Sotnak

    “it is the only account of morals that is consistent with the fact that humans are evolved animals.”

    I beg to differ on this point. I can think of ways to defend naturalistic versions of consequentialism (especially utilitarianism), contractualism, or virtue ethics.

    Reply
    1. Coel Post author

      But every one of those would have to be rooted in human desires and preferences (and thus emotivism). Otherwise, what standing would (e.g.) utilitarianism have?

    2. verbosestoic

      Coel,

      If it based its morality entirely on a specific human’s desires, it would be Egoism, which it isn’t. But Utilitarianism does indeed base its morality on utility, which is global pain and pleasure. This clashes with some of our moral emotions — loyalty to family, for example — AND its reliance on that causes major issues for it that result in it suggesting strongly counter-intuitive solutions at times, which weakens the idea that our intuitive — and thus, our evolved — idea of morality can boil down to specifically human desires.

    3. Coel Post author

      But Utilitarianism does indeed base its morality on utility, which is global pain and pleasure.

      But any measure of “utility” can only be based on what humans care about. Further, there is no such thing as “global pain and pleasure”, there is only the pain and pleasure of individual humans. Thus Utilitarianism needs some way of aggregating over lots of humans, and yet there is no way of doing that except for people’s opinions about how to do it. And no human values all other humans equally, nor would they agree on who to value. Thus Utilitarianism’s claim to objectivity is spurious and illusory.

      This clashes with some of our moral emotions — loyalty to family, for example … which weakens the idea that our intuitive — and thus, our evolved — idea of morality can boil down to specifically human desires.

      This illustrates that attempts to make morality objective don’t work, but that *supports* (rather than weakens) the idea that all there is is people’s feelings and values.

    4. verbosestoic

      But any measure of “utility” can only be based on what humans care about.

      Done! Everyone pretty much cares about pain and pleasure. Seriously, this reasoning is pretty much the entire starting point for all hedonistic moralities, which includes Utilitarianism.

      Further, there is no such thing as “global pain and pleasure”, there is only the pain and pleasure of individual humans. Thus Utilitarianism needs some way of aggregating over lots of humans …

      Yes, which is why they solve that by … aggregating over all of the involved humans, and thus arriving at their notion of “global pain and pleasure”.

      … and yet there is no way of doing that except for people’s opinions about how to do it.

      So, are you insisting that Utilitarianism has to consider each person’s specific opinion on that before it can put forward its theory — and justifications — for why aggregating the total pain and pleasure of all relevant persons is the way to go? This would be you assuming that they CAN’T have a justification before they even get around to telling you what their justifications are …

      And no human values all other humans equally, nor would they agree on who to value.

      Sure, but Utilitarianism does not say or rely on all people agreeing, but instead insists that this IS the right way to go. And the only way to challenge them on this specific point is to build a morality that argues that it is okay for you to cause pain to someone else because you don’t like or don’t care about them. Any morality that would accept such a premise probably ought not be considered a morality at all (unless it’s explicitly Egoistic). From there, you can talk about the slightly tougher question of whether it is okay to withhold pleasure from someone because you don’t care for or about them, but again it seems pretty reasonable to say that someone who held that as a moral principle isn’t being moral at all. At which point, Utilitarianism seems to be off to a good start unless you can provide some justification for holding those sorts of moral positions, or at least that we ought not care about those implications.

      This illustrates that attempts to make morality objective don’t work, but that *supports* (rather than weakens) the idea that all there is is people’s feelings and values.

      How?

      1) These moral intuitions are SEEN as desire-independent moral conclusions, which is why these are seen as a problem for Utilitarianism.

      2) The point of that is that they EXPLICITLY DENY that calculating human desires and values is the determining factor there, which weakens the idea that morality can just be like that because we react rather violently to the idea that in those cases human desires and values trump morality.

      You can argue that they are wrong about that, but they clearly do not support your position and provide something that you need to explain and justify.

    5. Coel Post author

      Everyone pretty much cares about pain and pleasure. Seriously, this reasoning is pretty much the entire starting point for all hedonistic moralities, which includes Utilitarianism.

      But any scheme in which what is moral depends on people’s subjective feelings doesn’t give you an objective or moral-realist scheme (which is what Utilitarianism purports to be, isn’t it?)

      [“Moral Realism (or Moral Objectivism) is the meta-ethical view (see the section on Ethics) that there exist such things as moral facts and moral values, and that these are objective and independent of our perception of them or our beliefs, feelings or other attitudes towards them.”]

      Yes, which is why they solve that by … aggregating over all of the involved humans, and thus arriving at their notion of “global pain and pleasure”.

      But how do you do the aggregation? Does everyone count equally? If so, what’s your justification for that (if your justification is that it feels right to you, then that’s subjective)?

      So, are you insisting that Utilitarianism has to consider each person’s specific opinion on that before it can put forward its theory — and justifications — for why aggregating the total pain and pleasure of all relevant persons is the way to go?

      They’re very welcome to justify their method of aggregation, so long as it, at no point, depends on people’s subjective opinions on the matter.

      Sure, but Utilitarianism does not say or rely on all people agreeing, but instead insists that this IS the right way to go.

      Sure, but establishing that one way is *the* right way to go is a pretty tough hurdle. (No appealing to human judgement in doing that!) Even explaining what the phrase “the right way to go” means would be hard enough (“right” or “wrong” generally being value judgements, which need a person doing the judging).

      And the only way to challenge them on this specific point is to build a morality that argues that it is okay for you to cause pain to someone else because you don’t like or don’t care about them.

      I only have to do that if I first buy their claim that there is an objective “right way to do it”, and that morality is indeed objective. If I don’t accept those things I can simply tell the Utilitarians that they have not made their case.

      Any morality that would accept such a premise probably ought not be considered a morality at all …

      Is that your personal feeling on the matter? 🙂 Again, such claims depend on us having agreed what “morality” actually is, which is what we have not yet done!

      but again it seems pretty reasonable to say that someone who held that as a moral principle isn’t being moral at all.

      Again, is that an appeal to how people feel about the matter?

      These moral intuitions are SEEN as desire-independent moral conclusions, …

      Again, I’ll readily concede that human *intuition* is moral realist (which is why so many people try to construct moral-realist schemes that work). I just don’t accept that as a strong argument itself.

    6. verbosestoic

      But any scheme in which what is moral depends on people’s subjective feelings doesn’t give you an objective or moral-realist scheme (which is what Utilitarianism purports to be, isn’t it?)

      Sure it can, at least, because you’re mistaking the underlying moral principles for how you determine what is moral in the real world using them. Any morality that doesn’t completely ignore the desires of humans is going to have to consider those at some point when determining what is or isn’t moral, but they will do so appealing to moral principles that are deemed correct no matter what any specific person thinks of them. So, for Utilitarianism, you will always calculate utility on the basis of overall pain and pleasure, which is the objective part, while all specific decisions will involve finding out what specific pain and pleasure all the relevant parties will feel. But if Utilitarianism is right then if someone who is, say, a Stoic denies that and insists that pain and pleasure are irrelevant to morality they will be wrong.

      The subjectivism that objectivists like me worry about are the cases where the proposition “Slavery is morally wrong” depends on what a person or other grouping THINKS is morally right or wrong. That it may vary due to circumstances isn’t much of an issue for most of them, especially the typically objectivist Virtue Theories (who build that into their idea of virtues most of the time).

      But how do you do the aggregation? Does everyone count equally? If so, what’s your justification for that (if your justification is that it feels right to you, then that’s subjective)?

      You count everyone equally in Utilitarianism because all people are moral units and there is no way to justify treating them unequally. This is indeed something that Utilitarianism gets challenged on, because there are arguments that at least sometimes you SHOULD do that. But it’s certainly not just based on feelings, and philosophically an objective justification for that is demanded of Utilitarians because they are expecting it to be objectively justifiable.

      (Also, as a reminder, I am NOT a Utilitarian [grin]. I just know lots about it from my work on moral philosophy).

      They’re very welcome to justify their method of aggregation, so long as it, at no point, depends on people’s subjective opinions on the matter.

      As to whether that is the correct aggregation to use, right? Because you often drift into arguing that if the aggregation is over subjective things that counts as well, which is not correct.

      Even explaining what the phrase “the right way to go” means would be hard enough (“right” or “wrong” generally being value judgements, which need a person doing the judging).

      The “right way to go” here is meant to reference correct, not morally right, which is the OTHER right you are talking about here. If I can’t ever use simple colloquial phrasings without you jumping all over it, we aren’t going to get anywhere [grin].

      I only have to do that if I first buy their claim that there is an objective “right way to do it”, and that morality is indeed objective. If I don’t accept those things I can simply tell the Utilitarians that they have not made their case.

      But to deny that that is a moral implication — and thus that their moral view is incorrect because it holds that — you have to build a valid morality that doesn’t include or imply it. They have no reason to care about your insistence that they haven’t established it if you can’t show how a morality can work without, again, either explicitly stating or implying that statement. And if you can’t, then they have sufficient reason to think that they are on the right track.

      Is that your personal feeling on the matter? 🙂 Again, such claims depend on us having agreed what “morality” actually is, which is what we have not yet done!

      Nope, it’s a consequence of examining it and asking the question “Could we have anything that even remotely resembles ANYTHING like what we think is a morality if it includes the idea that you can hurt someone just because you don’t like them?”. There don’t seem to be any moralities that do that, and it seems for good reason. If you have to accept that idea to attack Utilitarianism, then while it’s certainly not invalid for you to bite the bullet and accept it, you aren’t going to get very far with a morality that does that unless you have a VERY good argument for why that works. Which you haven’t provided, probably because you don’t actually believe it yourself [grin].

      Again, I’ll readily concede that human *intuition* is moral realist (which is why so many people try to construct moral-realist schemes that work). I just don’t accept that as a strong argument itself.

      But since those intuitions are pretty much the only empirical evidence we have of any kind of morality at all, if you want to dismiss them it seems that the burden of proof is on you, not on those who are at least consistent with them most of the time.

    7. Coel Post author

      So, for Utilitarianism, you will always calculate utility on the basis of overall pain and pleasure, which is the objective part, …

      But my argument is that you’ll need a utility function (even if it’s just “utility is maximised if you maximise pleasure and minimise pain”), and that choice of utility function is subjective. You can only get to it by the advocacy of a human, based on their preferences and values.

      The subjectivism that objectivists like me worry about are the cases where the proposition “Slavery is morally wrong” depends on what a person or other grouping THINKS is morally right or wrong.

      But that’s not really subjectivism, it’s a bastard mixture of subjectivism and objectivism that makes no sense. It is effectively saying that propositions such as “slavery is morally wrong” do have truth values, but that the truth value is dependent on someone’s opinion. That seems to me to be incoherent.

      In my form of subjectivism, “Slavery is morally wrong” amounts to the speaker declaring “I dislike slavery”. Someone else can, of course, declare “I like slavery”, but there is no more to it than those likes and dislikes.

      You count everyone equally in Utilitarianism because all people are moral units and there is no way to justify treating them unequally.

      There’s no way to justify treating them *equally*, either, except by human advocacy. There’s no way to determine what is a “moral unit” except by the advocacy of whoever is proposing that version of utilitarianism. There are no “defaults” here.

      As to whether that is the correct aggregation to use, right?

      Yes. The method of aggregation is a subjective choice. Just as is who counts as moral units, whether they all count equally, and what utility function to adopt. None of this can be established a priori; it all derives from the advocacy of the human advocating utilitarianism. That’s why it rests on subjective foundations.

      “Could we have anything that even remotely resembles ANYTHING like what we think is a morality if it includes the idea that you can hurt someone just because you don’t like them?”. There don’t seem to be any moralities that do that, and it seems for good reason.

      The good reason is that we all have a lot of human nature in common, so the moralities that we advocate all have a lot in common.

    8. verbosestoic

      But that’s not really subjectivism, it’s a bastard mixture of subjectivism and objectivism that makes no sense. It is effectively saying that propositions such as “slavery is morally wrong” do have truth values, but that the truth value is dependent on someone’s opinion. That seems to me to be incoherent.

      Except it isn’t. There is a truth value, for example, to the proposition “I like chocolate ice cream”, but it depends entirely on the subjective state of the person being referred to. Or perhaps a statement like “That hurt!” is better. It has a truth value, but its truth value depends on the internal state of that person; whether the thing hurt or not. So such a thing is not incoherent.

      But this is progress: since you reject that sort of subjectivism, you are boxed into a non-cognitivist approach, insisting that statements like “Slavery is immoral” CANNOT have a truth value. If you say they do, then you have to reject the subjectivist line, and so would have to take the objectivist line there. Thus, for you to maintain your specific subjectivism, it must be a non-cognitivist approach.

      There’s no way to justify treating them *equally*, either, except by human advocacy. There’s no way to determine what is a “moral unit” except by the advocacy of whoever is proposing that version of utilitarianism.

      What do you mean by “human advocacy” here? Utilitarians are defining it that way, and arguing for the definition. They are not merely declaring it because they like it better, and if they felt that their reasons were insufficient and so it was only based on personal preference they’d reject Utilitarianism. Thus, you need to deal with the arguments and not merely talk about a vague “human advocacy”, particularly in light of my other comments that someone may accept that X is the moral thing to do while still refusing to do it because they don’t see it as being in their own self-interest. So wanting something to be moral and believing it to be moral are two different things.

      All of this cannot be established a priori, it all derives from the advocacy of the human advocating utilitarianism. That’s why it rests on subjective foundations.

      Who says? You need the second part to be true before we have any reason to accept that the first part is true, making this a circular argument.

      The good reason is that we all have a lot of human nature in common, so the moralities that we advocate all have a lot in common.

      No, it seems like no morality that did that could achieve any of the things that we use morality to do. It’s not a good reason to think that just because we have some common ideas that things they advocate in common therefore have good reasons to be so. See “sweet tooth”, for example.

    9. Coel Post author

      … since you reject that sort of subjectivism, you are boxed into a non-cognitivist approach, insisting that statements like “Slavery is immoral” CANNOT have a truth value.

      Yes, I think that I am indeed taking a non-cognitivist approach. (Though I’m sure that I’m usually using such philosophical terms wrong. 🙂 ). It depends, though, on what one means by “slavery is immoral”. If by “is immoral” one meant “contrary to an agreed code as to how we treat each other” then “slavery is immoral” *would* have a truth value.

      One of the problems with the whole field of meta-ethics is that the moral realists have still not told the rest of us what “is immoral” is supposed to mean.

      What do you mean by “human advocacy” here? Utilitarians are defining it that way, and arguing for the definition. They are not merely declaring it because they like it better, …

      I submit that that is exactly what they are doing! They arrive at a utilitarian framework based on their own subjective values, then they have a gut feeling that there must be some objective justification for that framework, and so look for post-hoc justifications for it.

      Thus, you need to deal with the arguments and not merely talk about a vague “human advocacy”, …

      Agreed, and I’m happy to examine the arguments, though I’m fairly sure that one cannot arrive at a utilitarian framework from a priori reasoning; at some point you need to add in “moral axioms” and those come from people’s subjective value system.

      … particularly in light of my other comments that someone may accept that X is the moral thing to do while still refusing to do it because they don’t see it as being in their own self-interest.

      I’d be interested to ask such a person what they think they mean by “the moral thing to do” when they say “X is the moral thing to do”.

      It’s not a good reason to think that just because we have some common ideas that things they advocate in common therefore have good reasons to be so.

      You always interpret me as claiming some sort of objective justification (“… have good reasons to be so”). I’m not. The whole point is that there is nothing such. All I was doing was pointing to de facto widespread agreement based on widespread commonality in human nature. I was not saying “therefore it is justified that …”.

    10. verbosestoic

      Yes, I think that I am indeed taking a non-cognitivist approach. (Though I’m sure that I’m usually using such philosophical terms wrong. 🙂 ). It depends, though, on what one means by “slavery is immoral”. If by “is immoral” one meant “contrary to an agreed code as to how we treat each other” then “slavery is immoral” *would* have a truth value.

      Well, if you think that the proposition “Slavery is immoral” has a truth value, regardless of what criteria we use to determine what the truth value is, then you are a cognitivist. Which would mean that, as we agreed, you’d be boxed into an objectivist position, because you consider the subjectivist cognitivist position incoherent. And note that Hume’s emotivist position — that you seem to favour — is indeed non-cognitivist.

      So let’s stop just tossing labels around and get into positions. The subjectivist cognitivist position says that moral statements have truth values, but that you can only determine that truth value by referring to a specific group. These, then, generally encompass relativistic theories: the truth of a moral proposition can only be determined relative to the appropriate moral grouping. Individualistic relativistic theories say that that’s relative to the individual, meaning that you have to refer to a specific individual. Thus, the proposition “Slavery is immoral” has a truth value, but since the truth of that is determined by the individual itself, you need to refer to the individual. Thus, “Slavery is immoral” is true if and only if an individual accepts that slavery is immoral. Cultural relativism does the same thing, but the reference is to a culture, not to an individual. Thus, “Slavery is immoral” is true if and only if we are referring to a culture where slavery is considered immoral. And so on.

      Non-cognitivist theories say that asking if those propositions have a truth value is absurd, as they simply don’t have them. Hume’s non-cognitivism is simply saying that we are expressing an emotional reaction or view of them, which is the “Boo”/“Hooray” theory. We would never say that it is true if we mean it a certain way, because we only EVER mean it, really, as that sort of expression. It’s the same thing as applauding or booing a performer; we aren’t really expressing a true statement like “This performer is good”/“This performer is bad”, but instead are merely expressing our personal reaction to them. And just like arguing from those reactions to a true statement about whether the performer is good or bad is an invalid argument, it is invalid to argue from that to the claim that a moral proposition is true or false. Yes, often people DO that, but it is invalid nonetheless.

      Here’s where I think the confusion is coming in, or at least how I think your position shakes out. I think that you are really a cultural relativist: you think that what is moral is defined by the culture/society someone lives in. This is consistent with what you say above and pretty much how you argue for this. But I also think that you agree with a different argument of Hume’s, which is that values are required for any kind of action, and that values or any kind of motivation require an emotional connection, and thus emotions are important for motivation. How that all shakes out can get complicated, but it’s at least a coherent position.

      However, you seem to miss that that applies to ALL actions, not merely moral or even normative ones, and so make that an important part of the definition of, well, any normative claim and especially any moral claim. This is what draws you to emotivism, and therefore to consider yourself an expressivist because emotivism is an expressivist claim. This also explains why you try to make all expressivist claims emotivist ones, because as you yourself say all values reduce to emotions.

      But values aren’t just moral values, and so we want to know what distinguishes moral values from other values. Hume’s move here is about motivations, and so applies to ALL values, including pragmatic ones. If I want to act pragmatically, I’ll need an emotional motivation underneath it, just as I’d need an emotional motivation underneath acting morally. Thus, strongly aligning morality with emotion is not a move you need to make; we can emotionally value and use that as a motivation to act morally without morals being just defined by our moral reactions to them. Again, we can reason ourselves into a moral proposition and that assignment can ITSELF generate the motivating emotion without any contradiction.

      So I think you conflate emotion and morality far too strongly, and in general fit the cultural relativist position better. But if you disagree, we can still use this framework to tease out what your position actually is without having to rely overmuch on labels.

      One of the problems with the whole field of meta-ethics is that the moral realists have still not told the rest of us what “is immoral” is supposed to mean.

      The examinations and the labels that we are talking about are meta-ethics’ attempt to tease out what, in detail, that’s supposed to mean. One of the reasons you keep asking questions like this is, in my opinion, that you don’t have enough knowledge of meta-ethics to see what the various positions imply, including what their problems are. Thus, you ask that like it’s a simple question when, if we’ve learned anything from meta-ethics, it’s that that’s not a simple question [grin].

      I submit that that is exactly what they are doing! They arrive at a utilitarian framework based on their own subjective values, then they have a gut feeling that there must be some objective justification for that framework, and so look for post-hoc justifications for it.

      Except it is clear that that ISN’T what they are doing, given the empirical results:

      1) Many of them come into, say, introductory ethics courses with no idea or with other ideas of what morality is, and are convinced that Utilitarianism is right by the arguments.

      2) Utilitarianism is actually strongly counter-intuitive in a number of scenarios and yet these results rarely get them to change their position.

      3) If you managed to convince them that their acceptance of Utilitarianism was nothing more than that, they’d abandon Utilitarianism.

      So you really need to stop treating them like this is what they are really doing. It is possible that their arguments cannot be justified any other way, but they are clearly not justifying it consciously that way, and think the arguments work. Thus, going beyond the arguments in any other way to insist that they are really basing it merely on intuitions is not going to make any progress in the debate.

      Agreed, and I’m happy to examine the arguments, though I’m fairly sure that one cannot arrive at a utilitarian framework from a priori reasoning; at some point you need to add in “moral axioms” and those come from people’s subjective value system.

      And this would be the first problem, as you see any “moral axiom” as being subjective, and they don’t. So if they ever introduce one — no matter how they support it — you will then claim that it has to be subjective. As we saw above, it’s easy to argue that your conflation of value and moral axiom is the real cause of the issue, but at a minimum that would have to be settled first before you could insist that they can’t do it a priori. In short, you often jump to dismissing their specific positions on the grounds that the axioms must be subjective when the real debate is over whether they could even possibly be objective.

      I’d be interested to ask such a person what they think they mean by “the moral thing to do” when they say “X is the moral thing to do”.

      I have no clue why you think that there’s some kind of interesting vagueness to explore here, but maybe I can clear this up with specific examples of how someone can choose to act counter to their own moral values, using Utilitarianism as the moral code:

      We have two people, Person A and Person B. Both of them are convinced Utilitarianism accurately describes morality. Both of them are put in a position where they can choose to save the life of either a renowned scientist who is close to a breakthrough that will save thousands of lives, or their spouse. Both accept that Utilitarianism says that the action with the most utility is to save the scientist and not their spouse. A decides to save their spouse, knowing that they are committing an immoral act, but not being able to bear letting their spouse die when they could save them. B decides to save their spouse, because they have decided that they not only do not care to act morally, but wish to deliberately flout morality and act immorally as a way of thumbing their nose at moral expectations. Again, both consider the moral thing to do to be what Utilitarianism says, and yet both deliberately go against that.

      In what sense is this puzzling?

      You always interpret me as claiming some sort of objective justification (“… have good reasons to be so”). I’m not. The whole point is that there is nothing such. All I was doing was pointing to de facto widespread agreement based on widespread commonality in human nature. I was not saying “therefore it is justified that …”.

      Well, since you were talking to me, you should have known or at least assumed that when I said “For good reason” I meant an objective one, in this case meaning that it is a conceptual impossibility for something to count as a morality and yet not accept that. You then offered the “commonality” argument as that good reason, which then implies that it would fit into that objective good reason I was looking for. If you didn’t want that implication, you should have said that there IS no such good reason, not offered one [grin].

      Note that since some things that have evolved are now counter-productive — see the sweet tooth — it is even possible to demand a good reason like I did in the quote for an evolved sensibility, so if you accept evolution you have to accept that my demand is still reasonable: how do we know that we still have a good reason to act on that sensibility? It might now be maladaptive.

    11. Coel Post author

      I think that you are really a cultural relativist: you think that what is moral is defined by the culture/society someone lives in.

      Here’s my best attempt to explain my position using your explanation of the terms.

      1) The claim “slavery is wrong” does indeed have a truth value by reference to a particular moral framework or code. Thus, “according to Western moral codes, slavery is immoral” has a truth value. The sentence “according to Western moral codes, slavery is immoral” is a *descriptive* statement about Western moral codes.

      2) Most people are intuitive moral realists. Thus, when a Westerner says: “Slavery is immoral” they are intending to say that it is not just immoral “according to Western moral codes”, but that it is immoral in an objective sense.

      3) But, in thinking that, people are making an error. There is no such thing as “immoral in an objective sense”. “Immoral” is a value judgement and one cannot have a value without a valuer (that is a simple category error). Thus, to my mind, “slavery is immoral” does not have a truth value — in the shortened form its meaning is too unclear to have a truth value — though the longer form “according to Western moral codes, slavery is immoral” does have a truth value.

      Thus I would say that the superficial purport of the language is cognitivist (it *attempts* to make objective moral claims with truth values), but that this is an error [I thus hold to “error theory” about morals].

      4) So where do the moral codes come from? They are reports of people’s value systems. Thus what people are *actually* doing when they say “slavery is immoral” is expressing their value judgement, and effectively saying: “I abhor slavery”. This latter is pretty much emotivism.

      So, attempting to put on labels:

      People saying “slavery is wrong” are purporting to make an objective moral claim with a truth value [the phrase *purports* to be cognitivist]. And if interpreted as meaning: “slavery is against my value system” then it *is* indeed cognitivist. But, the claim of objectivity is an error [error theory]. Thus “slavery is wrong” does not have a truth value [non-cognitivism]. The *underlying* meaning is a report of one’s emotional dislike of slavery [emotivism].

      So this is both cognitivist and non-cognitivist, depending on exactly what phrase one is talking about, where the disjoint between the two is the error that error-theory points to, and where the underlying meaning is emotivist.

      How does that sound to you? Am I misusing terms in the above?

    12. verbosestoic

      So this is both cognitivist and non-cognitivist, depending on exactly what phrase one is talking about, where the disjoint between the two is the error that error-theory points to, and where the underlying meaning is emotivist.
      How does that sound to you? Am I misusing terms in the above?

      Well, yes, so much so that you seem to be holding utterly incompatible positions and I can’t make heads or tails out of what you really think.

      So let me break it down for you with two related questions:

      1) When you use the phrase “X is immoral” what do you mean by it? What do you want me to take away from that and how do you want me to interpret it?

      2) What is the right way to view or use the phrase “X is immoral”? What OUGHT we mean when we use the phrase?

      In meta-ethics, that’s what we’re after: the right way to conceptualize morality. By mixing in so many diverse concepts, you end up with something that is conceptually contradictory, making it incredibly confusing. I’m not interested in what people other than you DO mean when they say that, but what you think they OUGHT to mean, if anything, and that should be consistent with what you mean when you say it if you are not inconsistent. So without using the terms, tell me what you mean by it. That should help clear everything up.

    13. Coel Post author

      When you use the phrase “X is immoral” what do you mean by it?

      I mean by it: “I dislike X”. (In some contexts I might mean: “I dislike X AND I consider that X violates accepted societal norms”.)

      What is the right way to view or use the phrase “X is immoral”? What OUGHT we mean when we use the phrase?

      As above, “X is immoral” indicates that the speaker dislikes X (and, again, in some contexts it could also indicate that the speaker also considers that X violates accepted societal norms).

      Edit: “I dislike X AND I consider that X violates accepted societal norms” could also be phrased: “I dislike X and consider that most people also dislike X”.

    14. Coel Post author

      1) Many of them come into, say, introductory ethics courses with no idea or with other ideas of what morality is, and are convinced that Utilitarianism is right by the arguments.

      That’s because utilitarianism’s tenets comport (superficially!) with a lot of human moral intuitions.

      2) Utilitarianism is actually strongly counter-intuitive in a number of scenarios and yet these results rarely get them to change their position.

      Yes, some of the implications are strongly counter-intuitive, but they don’t persuade people to give up utilitarianism because of (as above) the fact that the tenets of utilitarianism appeal to human intuition, and because they are moral realists and think that something along these lines must be the case, so they instead conclude that they haven’t quite got the right version of utilitarianism, and so then construct all sorts of variants.

      3) If you managed to convince them that their acceptance of Utilitarianism was nothing more than that, they’d abandon Utilitarianism.

      Persuading moral realists against moral realism is hard, isn’t it? (Since moral realism is so intuitive!) Any idea how to do it?

      … but they are clearly not justifying it consciously that way, and think the arguments work.

      Absolutely, they are not *consciously* indulging in post-hoc rationalisation, in an attempt to design something that comports with their value system, but that’s still what they are doing.

      All the axioms of utilitarianism come from human advocacy, from their values (where else could they come from?), and the art of being a utilitarian is to scheme up a set of axioms that best matches one’s moral intuitions (what else would one use to choose axioms?).

      I have no clue why you think that there’s some kind of interesting vagueness to explore here, but maybe I can clear this up with specific examples …

      OK, and indeed your example neatly illustrates the ambiguity:

      We have two people, Person A and Person B. Both of them are convinced Utilitarianism accurately describes morality.

      Interesting phrasing: “… Utilitarianism accurately describes morality”. So there is a thing called “morality” and utilitarianism accurately describes it? I’ll invoke Euthyphro here. Is something “moral” because it comports with utilitarianism, or is utilitarianism “moral” because it comports with “morality”? If the latter, can we then just ditch utilitarianism and discuss actual “morality”?

      A decides to save their spouse, knowing that they are committing an immoral act, …

      OK, so what do they mean by “an immoral act”? If they simply mean “one that does not accord with utilitarianism” then ok, but then we’re back to that phrase that “Utilitarianism accurately describes morality” and I don’t know what that’s supposed to mean.

      Indeed, my basic problem with every meta-ethical system other than emotivism is that if morals are not ultimately about human values and emotions then I don’t know what is meant by the term.

      Again, both consider the moral thing to do to be what Utilitarianism says, and yet both deliberately go against that. In what sense is this puzzling?

      It’s puzzling because I don’t know what they think “the moral thing to do” *means*. If by the “moral thing to do” they mean “the thing that is in accord with Utilitarianism” then all that says is that “the thing that is in accord with Utilitarianism is the thing that is in accord with Utilitarianism”.

      If they don’t mean that, if by “moral” they mean something other than utilitarianism, then why don’t we bypass utilitarianism and discuss actual morality (whatever that’s supposed to be).

    15. verbosestoic

      That’s because utilitarianism’s tenets comport (superficially!) with a lot of human moral intuitions.

      That doesn’t explain the people who come out of the classes as deontologists, Virtue Theorists, or relativists, or those who start with one of these and change their minds as they delve deeper into the issues. It’s much more reasonable to assume that the arguments are resonating with them than that it’s a simple reliance on intuition. Yes, in some sense everyone accepts the argument that makes the most sense to them, which involves intuitions, but that’s what we do for all arguments all the time AND is also what you do when accepting your position. There is no special psychological state here for objectivists that is confusing them but not you.

      Yes, some of the implications are strongly counter-intuitive, but they don’t persuade people to give up utilitarianism because of (as above) the fact that the tenets of utilitarianism appeal to human intuition,

      Or, rather, that they feel the arguments in favour of it outweigh the counter-intuitive conclusions, exactly the same way as it works for you with your position.

      Persuading moral realists against moral realism is hard, isn’t it? (Since moral realism is so intuitive!) Any idea how to do it?

      1) They wouldn’t necessarily abandon moral realism, just Utilitarianism, unless you convinced them that moral realism was false.

      2) Since presumably you started as a moral realist, you should already know how to overcome that, since you did it for yourself.

      3) I’m actually just a moral objectivist, not a moral realist, so the two can come apart (the reason moral objectivists are often moral realists is that they think that there has to be an object out there in the world for them to be hooked up to in order to make an objectivity claim. I think that conceptual truths can be — and always are — objective about concepts without having to have concept thingies floating around in the real world).

      Absolutely, they are not *consciously* indulging in post-hoc rationalisation, in an attempt to design something that comports with their value system, but that’s still what they are doing.

      I was not aware that you had telepathic powers [grin].

      Look, insisting that that is what they are doing because that’s all they COULD be doing is you assuming your conclusion, and I have provided much evidence that they clearly AREN’T doing that, either consciously or subconsciously. You probably should stop psychoanalyzing your opponents and instead work on understanding their arguments, because no one will accept that they are wrong based on your assessment of their subconscious motives when they have the arguments all laid out.

      As for the specific notion about values, let me use the defeater I use against Richard Carrier against you here:

      The thing I value most of all is … being moral. Under your view, this is an impossible and incoherent value, but it seems not only perfectly reasonable but the heart of what it means to be moral. Why, then, is it incoherent if I don’t simply assume that you are correct? After all, your base view reduces down to values, and that seems a valid value that people can and do have. Why isn’t it?

      So there is a thing called “morality” and utilitarianism accurately describes it? I’ll invoke Euthyphro here. Is something “moral” because it comports with utilitarianism, or is utilitarianism “moral” because it comports with “morality”? If the latter, can we then just ditch utilitarianism and discuss actual “morality”?

      Morality is a concept or conceptual framework, and they think Utilitarianism captures it, at least in the practical sense of how to live your life morally. Sort of like how Euclidean geometry is a conceptual framework and you can use its rules to work out math problems using it.

      OK, so what do they mean by “an immoral act”? If they simply mean “one that does not accord with utilitarianism” then ok, but then we’re back to that phrase that “Utilitarianism accurately describes morality” and I don’t know what that’s supposed to mean.

      Again, you skip the examples to focus on something other than the primary point of the examples and discussion up until that point. The point there was that we can consider something to be the moral thing to do and yet not want to do it, which you found contradictory. I gave two examples of that, and for my examples to work all that was required was for them to BELIEVE that the action they chose was immoral. They did believe that, and yet still didn’t want to do the moral thing and wanted to do something else more, meaning that in line with my example above they valued something else over being moral. Do you concede that this is a valid state, or do you want to continue to argue that it is nonsensical or at least problematic?

      If they don’t mean that, if by “moral” they mean something other than utilitarianism, then why don’t we bypass utilitarianism and discuss actual morality (whatever that’s supposed to be).

      Let me invoke Russell’s distinction between morality and ethics here. One of these is used to describe the underlying principles, and the other starts from those principles and determines how to apply them as moral agents. Utilitarianism is a better description of the latter than the former, with “Maximize pleasure and minimize pain” as the principle that drives it. Thus, it doesn’t really make sense to talk about “actual morality”, as they think that their principles capture the essence of the concept of morality and the moral rules for how to act as a moral agent. And they do this by looking at what we think morality is and deriving the simpler principles that let us capture all of those notions, discarding the ones that don’t make sense but don’t seem that important either. Thus, again, there’s no “actual morality” to describe outside of the conceptual framework trying to capture it; we’re just trying to make, say, a conceptual model to describe the concept that we think we know implicitly but which is hard to describe when we think it out.

  8. verbosestoic

    I have a post coming out about your other post on Friday, because it involved getting deeper into some issues that I wouldn’t have room for there, but let me comment on this post here because I shouldn’t need to do that.

    How could Papineau have such a gross misconception? I suspect it comes from trying to see philosophy and science as distinct disciplines.

    I suspect rather that it comes from scientismists who insist that philosophy doesn’t base its claims on empirical data and then assert that that is why it can’t get truth and science can. If those claiming science’s superiority insist that it is because of how it grounds all of its propositions on empirical data, people will eventually start to believe that.

    Ah yes, the conceit that only philosophers can do thinking, whereas scientists are not so good at it. This, one presumes, follows from the suggestion that science is merely about data, whereas philosophers deal with the concepts?

    In my post on whether or not science can be trusted, the two major failings of science that I identified were that, when science went wrong, it was because it didn’t consider potential confounds and/or overgeneralized. These would both be reduced if science did more philosophical style reasoning. In fact, in order to get a science degree do you ever have to take ANY courses that teach you specifically how to analyze and create valid arguments? I looked at the physics program at my old university and there’s nothing like that, and not even, for example, a course on symbolic logic. Surely if science is going to theorize it really ought to put some emphasis on how to build well-structured logical arguments, since that’s what a scientific hypothesis ultimately is.

    What have philosophers got that is even remotely comparable in terms of demonstrated success?

    Well, science, for example. It seems ludicrous to deny that science is a product of philosophy, and look how well it’s doing [grin]. You might argue that science doesn’t need philosophy to justify it, or at least to continue justifying it, but it definitely was produced by philosophy. It’s only if you set up a very biased idea of “product” that you can deny philosophy proper credit for that product.

    By regarding itself as separate from science, philosophy divorces itself from empirical data, and so from information about the real world. Humans are simply not intelligent enough to get far by thinking alone, without any prompts from nature.

    Um, except that philosophy does not divorce itself from empirical data as a presumption or matter of principle or of method. Philosophy not only accepts empirical data in some cases, it in fact even SEEKS IT OUT when it thinks it will help. When philosophy says “Empirical data won’t settle this question”, it is because it has done an analysis and discovered that trying to use empirical data for that question has massive issues and, in general, won’t settle the question. Philosophy does not think that empirical data is “icky” and not worthy of philosophy, but has noted that for a lot of the questions it is looking at empirical data can’t settle them (see the naturalistic fallacy, for example).

    But no actual deficiencies of emotivism have been exposed; it may be out of fashion in the philosophical world, but that really is just fashion. Is this really Papineau’s example of progress? It’s as likely that it’s a retrograde step.

    Except that emotivism DOES have serious problems, and philosophical views don’t go out of fashion unless those problems seem insurmountable. To list a couple of less purely theoretical problems with emotivism (and so ones that you should be concerned about):

    1) If you base your view on evolution, how do you account for the fact that we seem to have intuitive/natural views of what makes a proposition aesthetic and what makes a proposition moral, and we DON’T think that they are the same thing or act the same way? Sure, we can be wrong, but it seems like it will be difficult for you to empirically justify accepting those intuitions that support your claim while denying those intuitions that clash with your claim, and for the most part we think that aesthetic judgements, at the end of the day, need not be justified and don’t have any real truth value, but think that moral judgements do need to be justified and have a specific truth value. To add more empirical evidence to the fire, the group that treats moral views more like aesthetic judgements happens to be psychopaths (see the essay “Fearlessly Immoral” on my blog), who are seen as acting immorally by pretty much everyone.

    2) How do you distinguish your view that morality is determined by us having specifically moral emotions like righteous anger — regular anger is not seen by anyone as being necessarily moral, so it won’t work for your purposes — from the view that moral emotions follow from a judgement — conscious or otherwise — that something is moral or immoral based on what we have developed as our own morality? The empirical evidence even supports this, as it explains why we think moral rules are more than simply aesthetic judgements, as that is baked into our concept first and the emotion comes later.

    Emotivism — the idea that morality is a matter of value judgements, pretty much akin to aesthetic judgements, and amounting to emotional approval or disapproval of certain acts — is a widely held opinion within science.

    No, it isn’t. Science has no official field studying this and has no formal theories making this an opinion in science. Most scientists even in the closest fields don’t have ANY opinion on this. What we have are SOME scientists who think that the empirical data leads to it, but that’s insufficient to make it widely held.

    Indeed it is the only account of morals that is consistent with the fact that humans are evolved animals.

    No, pretty much ALL deontological, consequentialist and Virtue Theories are at least compatible with evolution, and Social Contract theories are a better explanation than evolved emotions since they explain, again, why we think moral rules are so serious and have objective meaning, as well as why they differ from culture to culture but don’t generally differ from individual to individual in the same social grouping. As above, you overstate your case, which is not something that will help the image of science and scientists.

    1. Coel Post author

      I have a post coming out about your other post on Friday, because it involved getting deeper into some issues that I wouldn’t have room for there, …

      I look forward to it!

      … the two major failings of science that I identified were that, when science went wrong, it was because it didn’t consider potential confounds and/or overgeneralized.

      Your major examples of science going wrong are about human health. This area suffers from four big problems: (1) humans are about as complicated as things get; (2) different humans are different; (3) you can’t do properly controlled trials for ethical reasons; and (4) since human health matters, messages are often dumbed down and simplified for public consumption, even when the science is accepted as less clear. It’s thus not surprising that “orthodoxy” gets revised a lot.

      As for Newtonian mechanics, it is not “wrong”: it works very well in nearly all the domains that we need mechanics for. It is still used all the time for a vast array of purposes. Yes, we do now know that in some domains it breaks down, but the point that science can only be trusted to the extent that it has been tested, and thus in domains where it has been tested, is pretty much accepted.

      These would both be reduced if science did more philosophical style reasoning. In fact, in order to get a science degree do you ever have to take ANY courses that teach you specifically how to analyze and create valid arguments?

      No, scientists don’t (in general) take formal courses of this nature. They learn “on the job”. But I’m not convinced that they are any worse than philosophers at creating and analyzing valid arguments, and I’m not convinced that formally studying philosophy would actually help them.

      Surely if science is going to theorize it really ought to put some emphasis on how to build well-structured logical arguments, since that’s what a scientific hypothesis ultimately is.

      Well no, not really. Scientific hypothesizing and theories don’t usually come from “well-structured logical arguments”, they come from all sorts of things, including intuition and guesswork. One then critiques and tests such models in terms of how much explanatory and predictive power they have, and to *test* the theories one should indeed be pretty rigorous.

      It seems ludicrous to deny that science is a product of philosophy, and look how well it’s doing …

      I don’t accept that science is the product of “philosophy”, if by that one means philosophy as understood today, and as distinct from science. Rather, there was a time that what we now call “philosophy” and “science” were pretty much the same thing done by the same people. They would use both empirical methods and conceptual reasoning, as appropriate and as they saw fit. Thus “science” today is really the product of proto-(science+philosophy).

      Science today, *still* uses “both empirical methods and conceptual reasoning, as appropriate and as scientists see fit”. Philosophy, though, has somewhat lost its way by seeing itself as distinct from science and by trying to get places by conceptual analysis alone.

      Philosophy not only accepts empirical data in some cases, it in fact even SEEKS IT OUT when it thinks it will help.

      What do you see, then, as the difference between a scientific approach and a philosophical one?

      Philosophy […] has noted that for a lot of the questions it is looking at empirical data can’t settle them

      Even in science, empirical data *alone* never settles anything; it is always a matter of both empirical data and reason and concepts. You always need the concepts in order to evaluate what the empirical data point to.

      Reply to be continued …

    2. verbosestoic

      Your major examples of science going wrong are about human health. This area suffers from four big problems: (1) humans are about as complicated as things get; (2) different humans are different; …

      You realize that these two reasons are, in fact, the two big reasons I gave FOR specific failures in those areas? However, if you look at my link to a case where I found confounds, that seems to be as objective and universal a claim as you’re going to FIND in psychology … and I still easily found the potential confound — starting from Computer Science — that if the array had a loading time, and if that loading time was sufficiently long, then their experimental numbers would be overwhelmed by it. We can extend the same sort of thinking to the Libet experiments to find confounds with the idea that conscious deliberation is the result of, rather than an active player in, making decisions, by noting that Libet was testing decisions made at random, and in general in computing when you want to do that you set an “alarm”: you ask the random number generator for a delay, wait that long, and when the alarm fires you take the action at that random time and only then learn what the answer was. The Libet experiments also have an issue in that, if we really made decisions subconsciously up to a second before we were aware of what that decision was, then for actions taken immediately we ought to notice that we start the action a second before we realize what decision we made, and so, for example, we would decide to go to the fridge to get a snack and then notice that we are already standing up to go there when we “decided” to have a snack. We’d almost certainly notice that [grin].
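      As a minimal sketch of that computing pattern (the function name, the delay range, and the two “decisions” below are invented purely for illustration), the point is that the program commits to acting at a random time, and the “answer” only exists at the moment the alarm fires:

      ```python
      import random
      import time

      def random_timed_action(min_delay=1.0, max_delay=5.0):
          # Ask the random number generator for a delay, then simply wait;
          # nothing about the eventual "decision" exists yet.
          delay = random.uniform(min_delay, max_delay)
          time.sleep(delay)

          # The "alarm" fires: the action is chosen and taken at that random
          # moment, which is also when we learn what the answer was.
          decision = random.choice(["flex left wrist", "flex right wrist"])
          print(f"after {delay:.2f}s: {decision}")
          return delay, decision

      if __name__ == "__main__":
          random_timed_action()
      ```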

      And these sorts of analyses are precisely what philosophy not only aims for and trains itself for, but is really, really good at. In fact, one of philosophy’s biggest hurdles has ALWAYS been that it continually finds ways in which arguments or theories don’t necessarily prove what they claim to, which it then has to take seriously, while science is far more willing to take the best current theory and run with it.

      Yes, we do now know that in some domains it breaks down, but the point that science can only be trusted to the extent that it has been tested, and thus in domains where it has been tested, is pretty much accepted.

      Doesn’t this greatly impact the utility of science, though, if we can only trust it in cases where it has explicitly tested them? One of the reasons we think science works is that if the THEORY is sufficiently tested, then we can use the implications of that theory to reliably figure out what will happen in the cases that we haven’t tested. Sure, it might get some of them wrong, but it gets enough of them right — if the theories are sufficiently analyzed and tested — for us to rely on those theories in those cases. If science is only trustworthy for the cases that it has explicitly tested, then all of its methodology and particularly its conceptualizing are pointless … or, perhaps, it’s rather that they are only useful for deciding what should be tested.

      Confining knowledge to directly tested, empirically observed statements or phenomena is, in fact, the easy thing to do, and isn’t what most impresses us about science’s success. I’d actually even say that if science is only trustworthy for the things that it has directly tested, then science can no longer be said to “work”.

      But I’m not convinced that they are any worse than philosophers at creating and analyzing valid arguments …

      So, you aren’t certain that the field that has as its entire methodology creating and analyzing arguments and determining their validity and formally teaches every possible method for that is better at it than a field that only does so tangentially? I’m not sure you can hold that view and not be forced to claim by implication that you are anti-philosophy.

      You also won’t want to claim that critical thinking is important, since critical thinking pretty much IS the main philosophical method, and so it’s going to be hard to claim that we should all learn critical thinking while claiming that the field that explicitly teaches it doesn’t do it very well.

      Now, scientists may not benefit from doing much formal philosophy, but I definitely think they ought to learn directly how to construct valid arguments, and if they aren’t willing to do that then they should turn their arguments over to the field whose raison d’être is analyzing arguments for validity … just like if scientists weren’t willing to formally train in mathematics they should let mathematicians check over their math to see if they’re doing it right.

      Scientific hypothesizing and theories don’t usually come from “well-structured logical arguments”, they come from all sorts of things, including intuition and guesswork.

      I didn’t say they came from well-structured logical arguments, but that at the end of the day they ARE well-structured logical arguments. They say that if certain things are true — the premises — then their conclusion is true. They then use these implications to determine what should be tested and what it means if a test fails, which includes noting what premises are broken and what it means for the overall theory if those premises are false. Even confirmation relies on this, because if you have multiple competing theories, confirming one over the others means that you have to make a valid argument saying that if X is seen, then one theory is confirmed. This is vulnerable to confounds, where the observation of X can support more than one theory if certain things are true. As an example, take the argument of “Changes in brain state causing changes in mental state refutes Cartesian dualism”. Since Cartesian dualism is explicitly interactionist, it in fact INSISTS that changes in brain state can cause changes in mental state, so that result doesn’t actually disconfirm it. Proper analysis of hypotheses and theories can find these confounds in advance and thus produce better tests that would make science more reliable.
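      To put the bare structure of that in symbols (just a sketch of the point above, with H the hypothesis, the A_i the auxiliary assumptions that can hide confounds, and O the predicted observation):

      $$ (H \land A_1 \land \dots \land A_n) \rightarrow O, \qquad \neg O \;\vdash\; \neg H \lor \neg A_1 \lor \dots \lor \neg A_n $$

      A failed prediction only tells you that the hypothesis or one of the auxiliaries is false; an unconsidered confound is simply an A_i that was never written down, and an observed O can “confirm” more than one theory if different bundles of H and A_i all imply it.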

      Looking at the “Finish your antibiotics” example, one could analyze the theory — which I think was that the ones left behind would be the more resistant ones, and so if they continued to multiply what you’d have left would be the resistant ones — and balance that against an argument that the longer the antibiotic is in the system and interacting with bacteria the more likely it is that they will develop a stronger resistance, and test accordingly to see what is really the case using that analysis. But this sort of analysis is what philosophy does.

      Science today, *still* uses “both empirical methods and conceptual reasoning, as appropriate and as scientists see fit”. Philosophy, though, has somewhat lost its way by seeing itself as distinct from science and by trying to get places by conceptual analysis alone.

      Except it doesn’t. It uses empirical reasoning and conceptual analysis as appropriate, and always has. Which fields try harder to “naturalize” varies with advances in science and with which problems are being considered, but as I have repeatedly said, when philosophers dismiss some empirical data it isn’t because they are saying “That’s icky empirical data!” but because they are, in general, saying “We tried that already and it didn’t work”.

      Science — and particularly most scientismists — are far, far more dismissive of conceptual analysis than philosophy is of empirical analysis.

      What do you see, then, as the difference between a scientific approach and a philosophical one?

      Other than the differences in method that I have mentioned before — science is more directly reliant on the empirical and less skeptical than philosophy is — the big difference is focus. Philosophy is primarily about conceptual analysis with a side of “We’ll also look at any question that either no one else is looking at or that bridges too many disciplines, since we look at all of them”, while science is primarily about the empirical world. What this means is that philosophy uses empirical data as a tool when it will help it answer conceptual issues, while science will use conceptual analysis as a tool to help it answer issues with the empirical world. And except for a few people — whom I find misguided — both are happy with that division of labour. Science’s focus on the empirical world makes it far better at analyzing it than philosophy is, and philosophy will be more than happy to appropriate anything science discovers that they find interesting. And science is more than happy to do the same for philosophy, like it does for mathematics.

      To be honest, I see the purported divide being more a result of “New Atheism”, where often the scientifically minded members of that movement distrusted philosophy for purportedly providing succor to theology, and then tried to wade into philosophical matters insisting that science would solve all of them … failing miserably and most often simply promoting ideas that philosophy had already considered and ignoring philosophers who pointed out what the issues with those ideas were.

    3. Coel Post author

      Hi verbose,

      And these sorts of analyses are precisely what philosophy not only aims for and trains itself for, but is really, really good at.

      But those analyses could as readily be regarded as psychology rather than philosophy and I’m not aware of any evidence that philosophers in general are any better than scientists at such analyses. It seems to me that philosophers like to assure each other that they are the experts at incisive thinking, but I’m not convinced. In the area of science that I know best, physics and astrophysics, I’m not aware of much in the way of incisive-thinking contributions from philosophers.

      Doesn’t this greatly impact the utility of science, though, if we can only trust it in cases where it has explicitly tested them?

      It’s not so much about *cases* where it hasn’t been tested, as *domains* where it hasn’t been tested. So, if you (say) make the gravitational field 10,000 times stronger than the regimes where you’ve tested the model, you should indeed be cautious. And this does indeed impact the utility of science; but then science isn’t about being perfect, it’s about trying to do as well as we can do.

      So, you aren’t certain that the field that has as its entire methodology creating and analyzing arguments and determining their validity and formally teaches every possible method for that is better at it than a field that only does so tangentially?

      Correct, I’m not. Except that science isn’t about arguments and thinking only tangentially, those things are central to all of science.

      it’s going to be hard to claim that we should all learn critical thinking while claiming that the field that explicitly teaches it doesn’t do it very well.

      So far my argument isn’t that philosophy doesn’t do critical thinking very well, it’s that — in general — it doesn’t do it any better than science.

      I definitely think [scientists] ought to learn directly how to construct valid arguments, …

      You’re presuming that they don’t already, or that science’s “on the job” methods of teaching it are worse than philosophy’s formal approach.

      Science — and particularly most scientismists — are far, far more dismissive of conceptual analysis than philosophy is of empirical analysis.

      Absolutely not, scientists spend all of their time working with concepts and analysing them.

      I see the purported divide being more a result of “New Atheism”, where often the scientifically minded members of that movement distrusted philosophy for purportedly providing succor to theology, and then tried to wade into philosophical matters insisting that science would solve all of them … failing miserably and most often simply promoting ideas that philosophy had already considered and ignoring philosophers who pointed out what the issues with those ideas were.

      That’s rather a strawman about New Atheists!

    4. verbosestoic

      But those analyses could as readily be regarded as psychology rather than philosophy and I’m not aware of any evidence that philosophers in general are any better than scientists at such analyses.

      No, because it’s about the logical validity of the argument and whether there is the possibility of the conclusion being false while the premises are true. If any field is obsessed with that and so should be good at it, it’s philosophy.

      It’s not so much about *cases* where it hasn’t been tested, as *domains* where it hasn’t been tested.

      But how do you know that? Science has made mistakes in both cases and in domains, and only justifies either cases or domains using the overgeneralization of induction. It’s actually easier for science to doubt conclusions across domains than it is across cases, and so science is in general less likely to insist that the conclusion holds across domains without directly testing it than it does across cases, and so is LESS likely to hold a strong conclusion that is wrong about domains than it is about cases. So it seems, again, that it’s the cases part we should be worrying about, and likely where most of the errors crop up.

      Correct, I’m not. Except that science isn’t about arguments and thinking only tangentially, those things are central to all of science.

      Arguments play the same role as — or even a slightly less central role than — mathematics does, and mathematics is tangential to science. Meaning, science uses both frequently because it needs to, but considers both tools, not necessary components.

      So far my argument isn’t that philosophy doesn’t do critical thinking very well, it’s that — in general — it doesn’t do it any better than science.

      What’s your evidence for that? Remember, philosophy explicitly makes that — or at least the key components of it — part of its core curriculum, and science doesn’t. You’re coming dangerously close to making a claim like arguing that philosophers are better at physics than physicists are [grin].

      You’re presuming that they don’t already, or that science’s “on the job” methods of teaching it are worse than philosophy’s formal approach.

      We agreed that it isn’t part of the curriculum, at least not directly, and I think I’m justified in saying that formal training is better than ad hoc training unless you can provide evidence that it isn’t.

      Absolutely not, scientists spend all of their time working with concepts and analysing them.

      And they often dismiss it when it comes to producing anything actually useful or anything true.

      That’s rather a strawman about New Atheists!

      Jerry Coyne once dismissed two philosophers’ opinions about whether an examination would be of interest to philosophy precisely because it involved assuming — for the sake of argument — that a theological point was “true”, so they could examine the conceptual consequences of it. And HE thinks that philosophers are actually good at doing and teaching reasoning, which you deny. And both of you are STILL more accepting of philosophy than a lot of other New Atheists are, outside of the Horsemen.

    5. Coel Post author

      And HE thinks that philosophers are actually good at doing and teaching reasoning, which you deny.

      I’m not suggesting that philosophers are *bad* at reasoning and correct thinking, but I’m not convinced they are generally better at it than others. A lot of fields require correct reasoning, including science, mathematics, engineering, the law, etc.

      The argument that philosophers are better at it because that’s what they particularly study doesn’t convince me. If I wanted evidence that engineers were fairly good at correct reasoning, I wouldn’t point to any formal study of it; I’d point to the fact that the products of engineering generally work. The problem with much of philosophy is that it has no comparable method of validation.

    6. verbosestoic

      The argument that philosophers are better at it because that’s what they particularly study doesn’t convince me.

      If someone tried to sell you on a professional mechanic by telling you they knew as much about cars as most people who tinker with them, wouldn’t you take that as meaning that they aren’t all that great at fixing cars? The reason for that is that a professional mechanic has all of the time to learn about and get experience working on cars, and so should be better at it than someone who doesn’t. Thus, if philosophers do logic all the time but aren’t better at it than those in other fields, shouldn’t we take that as a sign that they aren’t very good at it? Thus, you need to provide evidence that they aren’t better at it than scientists, not merely assert that they aren’t and demand proof that they are.

    7. Coel Post author

      Hi verbosestoic,

      … how do you account for the fact that we seem to have intuitive/natural views of what makes a proposition aesthetic and what makes a proposition moral, and we DON’T think that they are the same thing or act the same way?

      My suggestion is that evolution has programmed us to *think* that morals are objective, and more than “merely” our preferences, since that makes moral intuitions more effective. They seem to matter more so we act on them more. But, anyhow, human intuition is very fallible. I don’t think we should place much store on it unless it can be backed up by solid argument or external verification.

      … we think that aesthetic judgements, at the end of the day, need not be justified and don’t have any real truth value, but think that moral judgements do need to be justified and have a specific truth value.

      I agree that most of us think that. Again, I think we’re wrong to think that, and that evolution has caused us to think like that purely because it makes us regard moral sentiments as more important.

      But no-one has backed up this moral-realist intuition with any reasoned account of what objective morality even amounts to. If you ask a moral realist what it even means to say: “We are morally obligated to do X”, they don’t even know.

      How do you distinguish your view that morality is determined by us having specifically moral emotions like righteous anger …

      Well I wouldn’t say that morality is “determined by” anything, as though righteous anger establishes a truth value. I’d say it is a “matter of …”. But to answer the question:

      … from the view that moral emotions follow from a judgement — conscious or otherwise — that something is moral or immoral based on what we have developed as our own morality?

      I wouldn’t necessarily distinguish those two. We have indeed developed our own sense of morality, but that moral sense is a matter of our moral judgements and emotions.

      But again, your arguments boil down to the fact that most human intuition is moral realist. And that really is the only argument for moral realism. But I don’t see that as carrying much weight unless it can be backed up. If one starts constructing actual moral-realist schemes they do not work (unless they, ultimately, fall back on people’s personal value judgements).

      No, it isn’t. Science has no official field studying this and has no formal theories making this an opinion in science.

      Whenever science deals with morality (such as in psychology or similar fields) de facto it assumes emotivism. That is, it assumes that what science is studying is human values and judgements (indeed human psychology!), and that that’s all there is to the matter.

      No, pretty much ALL deontological, consequentialist and Virtue Theories are at least compatible with evolution …

      No scheme of *objective* morality is compatible with evolution since no such scheme can explain what objective morals actually are, and no such scheme can explain how humans know about objective morals. Human intuitions result from our evolutionary heritage, but what matters for evolution is how we act and how other humans act, and that is determined by our feelings and values and by other people’s feelings and values. Those are what our moral sentiments will be about, not any hypothetical objective scheme.

      … and Social Contract theories are a better explanation than evolved emotions since they explain, …

      Yes, “social contract” is indeed what a lot of morality is about. But social contracts have to derive, ultimately, from the values and judgements of each of us. Thus a social-contract account is entirely in line with emotivism.

      … again, why we think moral rules are so serious and have objective meaning, …

      Social contracts derive their standing from the advocacy of humans. They don’t give an objective moral scheme complete with truth values.

      … as well as why they differ from culture to culture but don’t generally differ from individual to individual in the same social grouping.

      That is entirely in line with emotivism. We are all, of course, affected by our culture. And the fact that morals differ from culture to culture is a problem for any objective scheme in which moral statements have truth values, but is not a problem for emotivism.

    8. verbosestoic

      Sorry about the delay in replying, but things were pretty hectic for the past couple of weeks. Also, I’m not going to refresh myself on all the comments, so some comments may touch on things that come up later.

      Anyway,

      My suggestion is that evolution has programmed us to *think* that morals are objective, and more than “merely” our preferences, since that makes moral intuitions more effective. They seem to matter more so we act on them more.

      But from an evolutionary perspective, isn’t that what gives it its survival value, or at least contributes greatly to it? Anyone taking an evolutionary line can argue that morality only exists BECAUSE it is taken as universal, and thus that from an evolutionary perspective that’s part of what it means for a rule or judgement to be a moral one. Thus, you can’t argue that morality isn’t a universal judgement while still being consistent with evolution, which is important to you. And they can make this objective in the important way by taking the Social Contract line, and arguing that morality allows for stronger social groupings, which is why it exists, and then moving from that to arguing that there are some rules that are required for any society to function and thus are moral or immoral by definition, while others may depend on the specific society, but there is still an objective criterion for determining what is or isn’t moral: does it foster the survival of the society or not? This, contrary to your comment later, is a far cry from emotivism.

      But, anyhow, human intuition is very fallible. I don’t think we should place much store on it unless it can be backed up by solid argument or external verification.

      But if you are willing to jettison evolution and willing to jettison intuition and are suspicious about conceptual analysis (from philosophy), what’s left? Given all of this, it looks like you are just as much asserting your view without evidence as those you oppose.

      I agree that most of us think that. Again, I think we’re wrong to think that, and that evolution has caused us to think like that purely because it makes us regard moral sentiments as more important.

      But since this is what most people accept, why is the burden of proof on those who accept it and not you who argues against it? Most of your arguments boil down to “It’s mistaken to think that way” but I have not seen any strong or convincing argument that your view is right.

      But no-one has backed up this moral-realist intuition with any reasoned account of what objective morality even amounts to.

      We went down this rabbit hole before, and from what I recall it ended up with you demanding a precise and proven specific morality, not merely a notion of objective morality in general. So before I engage you on this, you need to tell me what you want such a reasoned account to be and what questions you want it to answer, otherwise we’ll just be talking past each other again.

      I wouldn’t necessarily distinguish those two. We have indeed developed our own sense of morality, but that moral sense is a matter of our moral judgements and emotions.

      My question goes beyond that, because it talks about causation. Do we feel a moral emotion and thus “read” our moral judgements from that, or do we form a moral judgement which causes us to feel a moral emotion? Your position insists on the former, while mine is consistent with the latter.

      To put it another way, is it possible to make a moral judgement dispassionately, without feeling a moral emotion? If it is, your view is wrong, and it is important to examine your empirical evidence to see which theory it is most compatible with (from my perspective, it seems that it is more compatible with the latter case, as the emotions tend to come FROM the things we think moral and don’t tend to themselves change what we think is moral or immoral. But other views may differ).

      Whenever science deals with morality (such as in psychology or similar fields) de facto it assumes emotivism. That is, it assumes that what science is studying is human values and judgements (indeed human psychology!), and that that’s all there is to the matter.

      Emotivism is about emotions, not about values. Psychology, in general, reads off what humans THINK is moral or immoral, sure, but it also tends to assume that we think that there is a right answer to moral questions, and thus that they have a truth value, and thus that humans are not expressivists about morality.

      No scheme of *objective* morality is compatible with evolution since no such scheme can explain what objective morals actually are, and no such scheme can explain how humans know about objective morals.

      Most objective moral views don’t insist that we have direct or inherent access to the truth values of moral statements, and so only argue that we have a capacity to learn morality and act morally. Thus, we could evolve that just as we’ve evolved a capacity to do mathematics. That does not mean that moral propositions can’t have truth values, any more than it means that “2 + 2 = 4 in base 10” can’t have a truth value.

      Yes, “social contract” is indeed what a lot of morality is about. But social contracts have to derive, ultimately, from the values and judgements of each of us.

      As I touched on later, there’s an issue here with moral motivism; you are asking why humans would want to act morally, which has to come down to desires, but are trying to use that to define what morality is. That’s like asking why someone would want to buy a car before determining what it means for something to be a car.

    9. Coel Post author

      Hi verbose,

      Anyone taking an evolutionary line can argue that morality only exists BECAUSE it is taken as universal, …

      Yes, perhaps so. But saying that people survive and reproduce better if they *think* that morality is objective is not the same as morality actually being objective.

      In the same way, people might survive and reproduce better if they think they are well above average in driving ability, leadership ability, likeability and sexual attractiveness, but that isn’t the same as those things being true.

      Thus, you can’t argue that morality isn’t a universal judgement while still being consistent with evolution, which is important to you.

      No! I can argue that there are evolutionary reasons why 70% of students rate themselves above the median on leadership ability and 85% rate themselves above the median on ability to get on well with others, AND that such opinions are largely false.

      And they can make this objective in the important way by taking the Social Contract line, and arguing that morality allows for stronger social groupings, which is why it exists, …

      Agreed, that’s true. And that’s descriptive not normative, and it is only the *description* that is objective. Any normativity comes from human advocacy, so is subjective.

      and then moving from that to arguing that there are some rules that are required for any society to function and thus are moral or immoral by definition, …

      So you are defining “moral” as “enables society to function”? Yes, if you adopt an axiom such as that, *then* morality is objective. But the whole issue is that axiom.

      but there is still an objective criterion for determining what is or isn’t moral: does it foster the survival of the society or not?

      Sure, but that requires the axiom that “moral” means “fosters the survival of the society”.

      This, contrary to your comment later, is a far cry from emotivism.

      No, it’s not far from emotivism. It starts from “I *want* society to function, I *want* to live in a society that functions, and thus *I* *declare* the axiom that ‘moral’ equates to whatever achieves that aim”. And *that* is emotivism: what gets labelled as moral depends on our wants and desires.

      I recall it ended up with you demanding a precise and proven specific morality, not merely a notion of objective morality in general.

      I’d be quite happy with any notion of objective morality that did not, ultimately, rest on human wants and desires. Who is it who wants societies to survive and function? Is it termites? Is it trees? Is it rocks? Is it inhabitants of Alpha Centauri? Or is it humans? Is it *humans* who decide on the utility metrics, based on their values? If so, you have subjective morality not objective morality.

      But there is nothing wrong with that! And note that then making objective *descriptions* of the subjective morality schemes that humans declare does not make the morality objective.

      Do we feel a moral emotion and thus “read” our moral judgements from that, or do we form a moral judgement which causes us to feel a moral emotion? Your position insists on the former, while mine is consistent with the latter.

      I’d say that both are so entwined in the brain (that’s how neural networks work) that it doesn’t make sense to distinguish them.

      To put it another way, is it possible to make a moral judgement dispassionately, without feeling a moral emotion? If it is, your view is wrong, …

      Agreed. Or, to put it another way, it is not possible to make a moral judgement without a value judgement being part of it.

      Emotivism is about emotions, not about values.

      Here I disagree. As I see it, “emotion” (as used in “emotivism”) is a broad term including all human desires, wants and values. These are all subjective, and all pretty much the same thing.

      Most objective moral views […] argue that we have a capacity to learn morality and act morally. Thus, we could evolve that just as we’ve evolved a capacity to do mathematics.

      We have evolved a capacity to do maths because maths maps to the real world. Someone who adopts the model “1+1=2” will be better able to model the real world than someone who adopts “1+1=5”. What features of the real world map to moral realism? Which moral-realist aspects of the real world would evolution have traction on?

      If the answer to that is that our survival is affected by being able to model other humans’ *behaviour* and their *brain states*, then agreed. That’s where evolution programming for morality came from. But what matters for that is what humans THINK about morality, and thus their subjective brain states. That would not be about any objective independent-of-humans morality.

      Which independent-of-humans aspects of the real world give evolution traction on objective morality?

    10. verbosestoic

      Yes, perhaps so. But saying that people survive and reproduce better if they *think* that morality is objective is not the same as morality actually being objective.

      The strong evolutionary argument would be that morals can only mean what they were evolved to mean, particularly if that meaning is required for them to have been selected. Since even you admit that morals need to be considered to be objective for us to have evolved that sense, this means that from evolution morals are objective and there is no way to justify any other meaning of morals and our moral sense. Thus, morals must be objective, at least to any species that evolved a sense of morality where they are considered such and that definition is critically responsible for them evolving and surviving in the first place. Given this strong position, how can you argue that your view is compatible with evolution? What non-evolutionary justification can you provide that trumps the evolutionary definition?

      In the same way, people might survive and reproduce better if they think they are well above average in driving ability, leadership ability, likeability and sexual attractiveness, but that isn’t the same as those things being true.

      All of those have independently — read: objectively — defined meanings that we can appeal to outside of what it means to a specific person. You have denied yourself any such objectively defined definition, and so you can’t use that to say that there is more to what it means to be moral than what we have evolved to think it is.

      No, it’s not far from emotivism. It starts from “I *want* society to function, I *want* to live in a society that functions, and thus *I* *declare* the axiom that ‘moral’ equates to whatever achieves that aim”. And *that* is emotivism: what gets labelled as moral depends on our wants and desires.

      Except, that’s not what they do. They don’t argue that we want it that way, but that the Social Contract requires an objective morality, and morality evolved because it facilitates that. Thus, if someone decided that they didn’t care about the Social Contract, they’d STILL be acting immorally, because that’s all morality is or can be, which makes it objective in the stronger sense. Additionally, it means that there is a right or wrong answer to moral questions — does this facilitate the Social Contract — which emotivism denies as it denies that they have truth values. And we’ve already talked about the problems with how loosely you define well-known terms like “emotivism”, which relates to emotions and not necessarily merely desires (which can, in fact, under some theories have truth values).

      I’d be quite happy with any notion of objective morality that did not, ultimately, rest on human wants and desires.

      And what, precisely, do you mean by that? Recall the issues with motivism: if someone agrees with what is moral but decides that they don’t want to be moral, would you consider that to be morality resting on human wants? Because I’d just consider them to be at best amoral and at worst immoral, but that that would clearly not refute the objectivity of the moral principles being appealed to, and most of your counters tend to come down to arguing about what happens if someone doesn’t want to act morally in those cases (usually on the basis of personal benefit).

      I’d say that both are so entwined in the brain (that’s how neural networks work) that it doesn’t make sense to distinguish them.

      The brain has many faculties that are intertwined and yet can be distinguished, so this view is inconsistent with neuroscience. And as I have pointed out, we can have the judgements separate from the emotions — judging without emotion, and emotion when our judgement is counter to that — which means that we must have some mechanism in the brain that allows them to fire independently. So, to reiterate, is your view that the emotion triggers the judgement or the judgement triggers the emotion? The evidence tends to suggest that it’s the latter.

      Or, to put it another way, it is not possible to make a moral judgement without a value judgement being part of it.

      That’s not the same thing, though. And you have to be very careful about “value judgement”, because using your terms a moral judgement under MY view is ITSELF a value judgement, and thus there is no need for any emotion or anything else whatsoever, but it still remains objective under my view. In short, value judgements are not necessarily subjective.

      Here I disagree. As I see it, “emotion” (as used in “emotivism”) is a broad term including all human desires, wants and values. These are all subjective, and all pretty much the same thing.

      That’s not how it is used in emotivism, let alone how it is used by anyone else. It might be best if you didn’t use terms in such idiosyncratic ways as if it helps to shorten/clarify your position, when it just leads to people assuming standard ideas that you then deny you hold.

      Onto this: is there anything in humanity’s mental life that is NOT subjective and an emotion to you? What about beliefs/facts?

      We have evolved a capacity to do maths because maths maps to the real world.

      That’s a specific evolutionary benefit, but there can be other evolutionary benefits, ones that are not about mapping the real world, that could spawn morality, as we already discussed above. Thus, we don’t need “real world” here, and note that our mathematical ability extends beyond mere descriptions of the world. Thus, we have a capacity to extend beyond the real world that evolved because it had an evolutionary benefit. The same thing, then, could be said for morality, and so we don’t need to have any “moral objects” in the real world to hook up to in order to evolve a capacity to determine objective moral propositions, if that view is correct.

    11. Coel Post author

      The strong evolutionary argument would be that morals can only mean what they were evolved to mean, …

      I don’t see how that follows. To give an example, one can easily see why thinking that one is “god’s gift to women” in looks might be evolutionarily favoured over thinking that one is ugly, regardless of the truth of the thoughts.

      Since even you admit that morals need to be considered to be objective for us to have evolved that sense, …

      No, I only think that “thinking morals are objective” makes them more effective. I think they could still have evolved without that (even if they were somewhat less effective).

      More replies later …

    12. verbosestoic

      I don’t see how that follows. To give an example, one can easily see why thinking that one is “god’s gift to women” in looks might be evolutionarily favoured over thinking that one is ugly, regardless of the truth of the thoughts.

      You are still confusing the objective criterion with the internal impression of it. We have an external definition of “sexually attractive” that we can measure someone against. If they possess the traits, then they are “sexually attractive”, to varying degrees. Then we can note the SECOND category, which is the person’s internal view of their own attractiveness. Yes, it indeed can be beneficial for them to consider themselves extremely attractive even if they aren’t — it might make them more aggressive in pursuit, for example — but there is still something to compare their impression to that is objective and is defined by evolution.

      By your definition, we only have one of these two categories. If we only have the second, then morality is just defined by that impression because there is nothing else to compare it to. And if we only have the first, then it is still the case that what it means to be moral is just what we have evolved to think it means to be moral, just as what it means to be sexually attractive is just what we have evolved to find sexually attractive. There is no separate way to view sexual attractiveness, and so there wouldn’t be any separate way to view or define morality either.

      No, I only think that “thinking morals are objective” makes them more effective. I think they could still have evolved without that (even if they were somewhat less effective).

      If you are going to talk about natural morals as opposed to societally imposed ones, it seems unlikely that you could get people to follow them if they didn’t consider them objective, and it is certainly the case that if they didn’t consider them objective they wouldn’t trust that anyone else was following them either, leading straight to Prisoner’s Dilemma problems, which would make their use as a cohesive force for communities at a minimum extremely problematic [grin].
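      For reference, the standard Prisoner’s Dilemma payoffs (the specific numbers here are purely illustrative, with each cell giving the two players’ payoffs) show why that trust matters:

      $$ \begin{array}{c|cc} & \text{Cooperate} & \text{Defect} \\ \hline \text{Cooperate} & (3,3) & (0,5) \\ \text{Defect} & (5,0) & (1,1) \end{array} $$

      Whatever the other player does, defecting pays each individual more, so without trust that others will keep to the rules both players end up at the mutually worse outcome (1,1) rather than (3,3).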

    13. Coel Post author

      … it is certainly the case that if they didn’t consider them objective they wouldn’t trust that anyone else was following them either, leading straight to Prisoner’s Dilemma problems, which would make their use as a cohesive force for communities at a minimum extremely problematic [grin].

      Which is a comment about the usefulness of moral concepts (and thus about why they might have evolved), not a comment about their truth.

    14. verbosestoic

      Which is a comment about the usefulness of moral concepts (and thus about why they might have evolved), not a comment about their truth.

      See, here is where the discussions get frustrating:

      1) You completely ignored the critical part of the comment: your examples don’t work because you conflate the impression with the thing itself, which means that they have an objective basis that you claim morals lack. And you did so without acknowledging that you agreed with my analysis, meaning that at some point later you are likely to make the argument again without acknowledging that I already addressed it.

      2) This part of the argument was me claiming that without morals being considered objective they WOULDN’T have evolved, which you seem to concede here but then insist that that doesn’t mean that the theory is true … in the context of an argument where I was giving the evolutionary argument saying that morals just are what they evolved to mean, justified by 1). Which was in response to you insisting that your view better aligned with evolution and my saying that it doesn’t.

      So, strike as non-responsive [grin]?

    15. Coel Post author

      This part of the argument was me claiming that without morals being considered objective they WOULDN’T have evolved, which you seem to concede …

      No, I don’t concede it. I think morals could still have evolved when considered as subjective, just in a somewhat less effective way.

      in the context of an argument where I was giving the evolutionary argument saying that morals just are what they evolved to mean, justified by 1).

      I don’t really understand the argument you’re making here. You seem to be arguing that if morals evolved such that we consider them to be objective, then likely they are objective (or even must be objective). I don’t see how that follows.

  9. Phil

    Which scientists or philosophers are willing to examine and challenge the “more is better” relationship with knowledge which is the foundation of science?

    Is there anyone from any side up to this? If not, why are we concerning ourselves with a chest thumping contest between the two groups?

  10. Phil

    Ok then….

    Until such time as someone can provide a list of scientists and/or philosophers who are willing and able to inspect and challenge the “more is better” relationship with knowledge, I propose there is little point in listening to either group, and a contest between the two can be defined as irrelevant to anything other than the egos of those involved.

    This position can be reached with the most elementary logic.

    What would happen if we had a “more is better” relationship with power in regards to our children? As example, what would happen if we handed out loaded handguns, the keys to a car, and a case of booze in a junior high school? It’d only be a matter of time until tragedy struck, right?

    That’s where all these blind “experts” are leading us, off a cliff. Until that’s fixed, nothing they are working on today really matters because it’s all going to be swept away in a coming crash. All those gloriously complex papers by the philosophers, pointless. All those clever experiments by the scientists, leading nowhere. All of it, a waste of time.

    In all of nature a failure to adapt to a changed environment typically leads to extinction. Until we see the list being requested here, we have to assume we are failing to adapt to the new environment being created by the knowledge explosion.

  11. Eric Sotnak

    “But every one of those would have to be rooted in human desires and preferences”

    I agree to a point (though I would leave out “human”). I don’t think any moral theory can lay claim to plausibility unless it accepts that all cases of mattering are cases of “mattering to”.

    But emotivism is just one particular model – a very simplistic one, in my view – of what is involved in mattering.

    1. Coel Post author

      I don’t think any moral theory can lay claim to plausibility unless it accepts that all cases of mattering are cases of “mattering to”.

      Agreed.

      But emotivism is just one particular model – a very simplistic one, in my view – of what is involved in mattering.

      One thing that I’ve learned about philosophical terms, such as “emotivism”, is that they gain very narrow definitions in terms of the precise wording of whoever first used them. Then, other philosophers come along, emphasize things a bit differently, and their version becomes another and distinct -ism. Thus we get emotivism, expressivism, subjectivism, and probably several others, that are all pretty much the same thing.

      It may be that the vast number of terms for different theories of morality serves to confuse the issue more than clarify it.

      There really are only two major possibilities: either morality is about human preferences and values, and about people’s moral judgements based on their feelings and values (=> morality is subjective), or there are supra-human reasons why humans are obliged to act in particular ways (=> morality is objective).

    2. verbosestoic

      “One thing that I’ve learned about scientific terms, such as “electron”, is that they gain very narrow definitions in terms of the precise wording of whoever first used them. Then, other scientists come along, emphasize things a bit differently, and their version becomes another and distinct term. Thus we get electron, subatomic particle, molecule, and probably several others, that are all pretty much the same thing. ”

      Yeah, I have no idea what philosophers you are reading or studying that caused you to “learn” precisely the opposite of what philosophy does. Post modern philosophy — which I don’t subscribe to — deliberately tries to subvert categories and categorization, and analytic philosophy pretty much categorizes the same way science does: it puts things into categories based on useful or important similarities and creates new ones — and even subcategories — based on sufficiently different implications.

      Let’s look at the terms you mention here, starting with emotivism (from Wikipedia):

      Emotivism is a meta-ethical view that claims that ethical sentences do not express propositions but emotional attitudes.[1][2] Hence, it is colloquially known as the hurrah/boo theory. Influenced by the growth of analytic philosophy and logical positivism in the 20th century, the theory was stated vividly by A. J. Ayer in his 1936 book Language, Truth and Logic,[3] but its development owes more to C. L. Stevenson.[4]

      Emotivism can be considered a form of non-cognitivism or expressivism. It stands in opposition to other forms of non-cognitivism (such as quasi-realism and universal prescriptivism), as well as to all forms of cognitivism (including both moral realism and ethical subjectivism).

      It’s a form of expressivism, but there are other expressivist views that are incompatible with it, so you can’t use the terms interchangeably. And what is expressivism (from Wikipedia):

      Expressivism in meta-ethics is a theory about the meaning of moral language. According to expressivism, sentences that employ moral terms – for example, “It is wrong to torture an innocent human being” – are not descriptive or fact-stating; moral terms such as “wrong,” “good,” or “just” do not refer to real, in-the-world properties. The primary function of moral sentences, according to expressivism, is not to assert any matter of fact, but rather to express an evaluative attitude toward an object of evaluation.[1] Because the function of moral language is non-descriptive, moral sentences do not have any truth conditions.[2] Hence, expressivists either do not allow that moral sentences have truth value, or rely on a notion of truth that does not appeal to any descriptive truth conditions being met for moral sentences.

      So then we can compare this to subjectivism (again, from Wikipedia):

      Ethical subjectivism is the meta-ethical view which claims that:

      1) Ethical sentences express propositions.
      2) Some such propositions are true.
      3) The truth or falsity of such propositions is ineliminably dependent on the (actual or hypothetical) attitudes of people.[1]

      This makes ethical subjectivism a form of cognitivism. Ethical subjectivism stands in opposition to moral realism, which claims that moral propositions refer to objective facts, independent of human opinion; to error theory, which denies that any moral propositions are true in any sense; and to non-cognitivism, which denies that moral sentences express propositions at all.

      So it’s not compatible with expressivism because it thinks that moral statements are propositions, but that their truth value depends on what people THINK are moral, which expressivism, as stated, rejects. And it’s important that you understand the differences when you toss terms around, because otherwise it becomes very difficult to understand what your view actually is (for example, point 3) above is compatible with your view, but if I recall correctly you tend to deny that moral statements are propositions. If you claim to be an emotivist, that makes sense, but if you claim to be a subjectivist, that doesn’t make sense at all). It also stops you from losing credibility for tossing around terms that you don’t seem to understand, and thus looking like the equivalent — when it comes to moral philosophy — of someone arguing “Those stupid biologists talking about how we evolved from apes! We’re clearly different races, and so can’t interbreed”. You aren’t at that level, of course, but often you seem to insist on things about philosophy that are, indeed, completely counter to what philosophy actually does or says.

      There really are only two major possibilities: either morality is about human preferences and values, and about people’s moral judgements based on their feelings and values (=> morality is subjective), or there are supra-human reasons why humans are obliged to act in particular ways (=> morality is objective).

      Here I think you make the same mistake that so many people do and are inadvertently changing the question from “What is morality?” to “Why should I act morally?”. The last sentence where you talk about humans being OBLIGED to act in particular ways for supra-human reasons pretty much makes that clear (and is the same move that Richard Carrier makes). And this is common in philosophy as well; there is an assumption that once we understand what morality is we will therefore have sufficient justification and motivation to act morally. What I now think is that that is the wrong way to look at morality. You don’t need any kind of underlying motivation or reason to act morally other than the fact that it is the moral thing to do. And the reason I think this is that the search for an underlying, non-moral reason to act morally leads to this case: you can have two people who agree on what morality is, but one of them says that it isn’t in their best interest to act morally and so they don’t. Under the motivation-based view, this would be a problem for the morality, but when we look at it more carefully we can see that the person who doesn’t want to act morally because it isn’t in their self-interest is actually, at best, AMORAL, because they agree on what it means to be moral but choose other, non-moral priorities over being moral. Since in that case we would conclude that the person accepts a moral premise but acts amorally, you can’t argue that a moral system can’t be correct simply because it wouldn’t provide that motivation; you need to argue that motivation MUST be included, and so the two of them simply COULDN’T be talking about a valid morality. And that’s a claim that most people who argue for this don’t make.

      There are two big distinctions that you — and others — tend to not make here:

      1) Conflating what makes a specific action in a specific situation moral or immoral with what determines what, in general, makes an action moral or immoral. Objectivists have no problem with a specific action being considered moral or immoral because of the personal preferences or desires of a specific person, but argue that what determines when those preferences are to be used, which preferences are to be used, and whose desires are to be used is not dependent on the personal preferences or desires of a person.

      2) Conflating what is moral with the motivation to be moral. Again, someone can be amoral because they are incapable of understanding what is moral, but also because they don’t CARE to be moral, even if they understand and accept what it means to be moral.

    3. Coel Post author

      “One thing that I’ve learned about scientific terms, such as “electron”, is that they gain very narrow definitions in terms of the precise wording of whoever first used them. …”

      But scientists don’t do that. They take a term, such as “electron”, then adapt it and improve it to make it better (a better model of reality). The *original* definition of it gets superseded and loses relevance. In contrast, in philosophy, terms are often taken as tied to whoever first defined them. If you want something different (even if only slightly different) you define a new term.

      So [subjectivism is] not compatible with expressivism because it thinks that moral statements are propositions, but that their truth value depends on what people THINK are moral, which expressivism, as stated, rejects.

      But your quote doesn’t tell us what the “propositions” are. So let’s now quote the Stanford Encyclopedia:

      “The present discussion uses the label “non-objectivism” instead of the simple “subjectivism” since there is an entrenched usage in metaethics for using the latter to denote the thesis that in making a moral judgment one is reporting (as opposed to expressing) one’s own mental attitudes (e.g., “Stealing is wrong” means “I disapprove of stealing”).”

      So it’s not that simple! If “subjectivism” is taken as expressing propositions of the form “I disapprove of stealing” — which does have a truth value! — then subjectivism is indeed compatible with emotivism and expressivism.

      In other words, do the “proposition” and the cognitive status apply to the superficial purport of the language, or to the underlying “translation” of the language?

      And it’s important that you understand the differences when you toss terms around, because otherwise it becomes very difficult to understand what your view actually is

      Agreed. But such terms have often acquired so much baggage, and come in several different versions, that such terminology can often hamper discussion. Given that academic philosophy is pretty split down the middle on moral realism versus anti-realism, and doesn’t seem to be making any sort of progress towards a consensus, I suggest that the way it is approaching the topic is sub-optimal.

      … and are inadvertently changing the question from “What is morality?” to “Why should I act morally?”.

      I agree that those are distinct, but they are closely related, and any moral-realist account (or indeed any meta-ethical account) needs to answer both.

      You don’t need any kind of underlying motivation or reason to act morally other than the fact that it is the moral thing to do.

      OK, but you then need to give an account of what “the moral thing to do” means. I’m not aware of any moral realists answering that in a way that doesn’t just beg the question.

    4. verbosestoic

      But scientists don’t do that. They take a term, such as “electron”, then adapt it and improve it to make it better (a better model of reality). The *original* definition of it gets superseded and loses relevance.

      The point of that was to give you an example of something using your reasoning that looked as ridiculous to you as your original reasoning seemed to me. It looks like I succeeded [grin]!

      In contrast, in philosophy, terms are often taken as tied to whoever first defined them. If you want something different (even if only slightly different) you define a new term.

      This is entirely false. Philosophy does the same sort of conceptual categorization that science does (unless you’re doing post modern philosophy, which is a bit different but has its own reasons for what it does). Things that are interestingly similar enough are grouped into categories, even if that means creating an overarching category to fit them in (see Expressivism as the upper category and emotivism and the others as separate views in it). When views are split, it is because they are interestingly incompatible and have significantly different implications, so that they can’t be the same theory.

      Take Stoicism. Despite the fact that the Greek and Roman Stoics had radically different claims at times — Seneca, for example, seems to have a different idea of what makes something a passion than the Greek Stoics did, as well as whether the indifferents can be desirable in any way or not — they are all still called Stoic views, because they all agree on the key aspects of Stoicism. Another example is that Bentham’s and Mill’s Utilitarianism are still Utilitarian views despite the fact that they radically differ in how to determine utility, and Rule and Act Utilitarianism are also both Utilitarianisms despite them having radically different implications. Substance and property dualisms are both dualisms. Philosophy introduces differences when they are seen to matter to the concepts, not because the two have some small difference that the originator didn’t think of.

      But your quote doesn’t tell us what the “propositions” are.

      But why does that matter? If you think that moral statements ARE ANY KIND of proposition, then you aren’t being an expressivist by definition, and thus aren’t being an emotivist. If you think they are expressing a proposition but that the truth value depends critically on what the person believes, then you fit into the more classical subjectivist line, and so aren’t an emotivist either.

      So it’s not that simple! If “subjectivism” is taken as expressing propositions of the form “I disapprove of stealing” — which does have a truth value! — then subjectivism is indeed compatible with emotivism and expressivism.

      I couldn’t find that quote in the Stanford Encyclopedia, but it looks like the author was doing that to try to avoid a confusion that would come up in the context of that entry, and it seems to me that the Stanford Encyclopedia argues more than it simply describes. So I’m not sure you should take that as far as you have here; it’s a special case where “subjective” is used in two different senses.

      In other words, do the “proposition” and the cognitive status apply to the superficial purport of the language, or to the underlying “translation” of the language?

      Typically, to the statement “X is morally right” or “X is morally wrong”. “I disapprove of X” is not generally considered to be saying that a moral proposition has a truth value, all the way back to Hume, and emotivist positions tend to tie it directly to the experienced emotion, which is even further away.

      Agreed. But such terms have often acquired so much baggage, and come in several different versions, that such terminology can often hamper discussion.

      But other than “objective/subjective”, YOU are the one who tossed that terminology around. If you think it hampers discussion, you can feel free not to use it. Some philosophers will look down on you for it, but I’d rather you simply say what you mean than attempt to tie it to views that have implications that you may not be aware of and don’t accept. I’m more interested in a good discussion than a quick win [grin].

      I agree that those are distinct, but they are closely related, and any moral-realist account (or indeed any meta-ethical account) needs to answer both.

      Not while I’m trying to define what morality is, or what the right morality is, or to determine if morality is objective or subjective, I don’t. You can argue from personal motivation to the claim that morality must be subjective, because motivation is always subjective, but that doesn’t mean that morality has to be subjective in the sense required to make your various cases. To take on your view more directly, you can argue that I’d need to be emotionally invested to act morally, but that doesn’t mean that moral judgements are determined by and follow the emotions that we are having.

      OK, but you then need to give an account of what “the moral thing to do” means. I’m not aware of any moral realists answering that in a way that doesn’t just beg the question.

      I’d need you to clarify what you mean by that, as I said just before. There are some simple ways to do that, but they don’t seem to be what you’re looking for.

    5. Phil

      verbosestoic writes, “Post modern philosophy — which I don’t subscribe to — deliberately tries to subvert categories and categorization, and analytic philosophy pretty much categorizes the same way science does: it puts things into categories based on useful or important similarities and creates new ones — and even subcategories — based on sufficiently different implications.”

      I’d like to read more about post modern vs. analytic philosophy, specifically their relationship with categorization. Why does post modern philosophy deliberately try to subvert categorization?

      Philosophers can contribute by helping us shift some focus from the content of thought to the nature of thought. Categorization seems an example of how thought operates by a process of division. The reductionist structure of science is another example.

      It seems fairly easy to see how this division process is the source of thought’s power, as it allows us to rearrange reality in the form of conceptual objects in our heads, i.e. be creative.

      The price tag for this division-driven power is illusion, distortion. We see division everywhere we look (things, categories, etc.), but the division we perceive is a property of the tool we are using to observe reality (thought), and not of reality itself.

      Point being, as both philosophy and science relentlessly attempt to create better thoughts, it seems quite helpful to at the same time have an awareness that the medium of thought introduces distortion as a waste product of how it operates.

      Do you see the problem here? If the operation of thought generates distortion then it would seem to be impossible to think one’s way to an observation of reality free of such distortion. Having a better thought (philosophy and science) doesn’t transcend the distortion because the better thought is made of thought too, and thus also inherits the same distortion.

      To further complicate matters, we aren’t just using thought; we ourselves are made of thought, psychologically. The thinker is, of course, a thought.

      It seems to me the inherently divisive nature of thought will always be a kind of boundary line for both philosophy and science, restricting both to a limited sphere which will never be able to fully grasp reality, due to the limitations of the medium itself.

      This isn’t an attack upon either discipline, but rather an attempt to use them to see their own limitations, thus perhaps cracking open a door to somehow transcending those limitations.

    1. Phil

      So critique the article and give us the other part of the picture. Your next blog post, on a silver platter. 🙂

  12. Phil

    Objective morality exists in the general sense, but not in the specific sense, which means….

    OBJECTIVE: Morality is a collection of rules which address the fundamental human condition, the sense we all have that we are alone and separate, an illusion created by the inherently divisive nature of what we’re all made of, thought. Moral rules help ease the pain of that illusion by more closely binding us to other living things. Morality is objective in this general sense because it is a response to something beyond personal opinion, the human condition.

    SUBJECTIVE: Morality is subjective in the specific sense, in that different people and groups of people will come up with different moral rule systems. Which moral rules work best for a particular group of people is a matter of opinion.

  13. Phil

    Coel writes, “OK, but you then need to give an account of what “the moral thing to do” means.”

    The moral thing to do is whatever binds you more closely to that which is not you.

  14. Phil

    From my perspective, when discussing morality let’s forget about “good” and “bad”. Instead, let’s talk about what works and what doesn’t work.

    By “works” I mean, whatever addresses the fundamental human condition, 1) the thought generated illusion that we are separate from reality, and 2) the fear/pain/conflict that arises from that illusion.

    As example, here’s some Christianity translated into secular language.

    Jesus says, “Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind.” Replace the word “Lord” with the word “reality” and we see a prescription for healing the illusion of division between “me” and “reality”.

    Jesus says, “Love thy neighbour as thyself.” This is the same prescription aimed at our relationship with our fellow humans.

    Jesus says, “Die to be reborn.” Same message again, just different words. Die to “me” and be reborn into union with everything, that is, transcend the thought generated illusion of division at the heart of the human condition.

    These moral statements are objective, in that they work for pretty much every human being. They are objective morality because they address something that is beyond personal opinion, the fundamental human condition. Even Hitler at least loved his dogs, his own meager effort to escape the tiny prison cell of “me”.

    Sticking the brand name “Christianity” or “Jesus” on these procedures is subjective, a matter of personal preference and opinion. We could alternatively label these procedures effective psychological insights.

    If anyone has become allergic to the word “morality” due to all the clerical guilt-tripping etc. that has been associated with that word, the solution is simply to discard the word morality, and the clerics too, and approach the illusion of division from some other angle one is not allergic to. For example, in the East the focus is often on meditation, which attempts to heal the illusion of division by reducing the volume of thought, that which is generating the illusion.

    The logical person doesn’t spend a lot of time arguing about which approach is best, and is content with a “to each their own” philosophy. The logical person instead focuses their time on addressing the fundamental human condition by whatever method is best suited to them.

    If this typoholic sermon doesn’t actually answer your question, apologies, please try again.

    1. Coel Post author

      I’m simply asking for a straightforward, one-sentence statement of what, in your opinion, the word “moral” means, in a statement such as “it is moral to do X”.

  15. Phil

    I gave you that sentence already above…

    The moral thing to do is whatever binds you more closely to that which is not you.

  16. Phil

    Coel asks, “But what do you mean by “moral” in the phrase: “the moral thing to do”?”

    By “moral” I mean, that which works in addressing the fundamental human needs of the person taking the action.

    Me stealing your money would not be moral because such an act would strengthen my own sense of isolation, my own pain. Me giving you money in a time of need would be moral because as I loosen my tight grip on “my” money, I’m also loosening my tight grip on “me”, the illusion-based concept which divides me from everything else.

    The good/bad judgments typically associated with morality are just a social reward/punishment system designed to guide us towards what is in our own enlightened self interest. As example, if a child is too young to intellectually grasp the dangers of a hot stove, we simplify the conversation to “Good!” and “Bad!”

    Religions tend to rely heavily on the simplistic good/bad, reward/punishment system because most human beings are not sophisticated enough to participate in an in-depth exploration of the fundamental human condition.

    But that’s what morality is at heart, a collection of suggestions designed to help us liberate ourselves from the thought generated illusion of division, separation, isolation.

    1. Coel Post author

      By “moral” I mean, that which works in addressing the fundamental human needs of the person taking the action.

      So things like eating and going to the loo when needed are “moral” acts? That’s somewhat out of line with most people’s conception of the term. It’s also somewhat different from your other definition, that:

      “The moral thing to do is whatever binds you more closely to that which is not you.”

  17. Phil

    I’ve explained what I mean by “fundamental human need” over multiple pages of your blog, including above in this thread. A few posts above I explained it as…

    ….the fundamental human condition, 1) the thought generated illusion that we are separate from reality, and 2) the fear/pain/conflict that arises from that illusion.

    Science attempts to harness the great power of thought; religion, and moral systems more generally, attempt to address the price tag that comes along with that power, the illusion of division. The power and the price tag arise from the same source: thought operating by dividing the single unified reality into conceptual parts.

    That process of conceptual division allows us to rearrange reality in our minds, giving us great power. This same division process also divides us from reality psychologically, making us somewhat nuts. As example, we are brilliant enough to know how to create nuclear weapons, and mad enough to actually do so.

    The fundamental human condition, the power and the price tag, arises out of the nature of what we’re made of, thought. Moral systems are an attempt to manage the price tag on both the social and psychological levels. I’m addressing the psychological level in my comments because what is happening externally is just a mirror of what’s happening internally.

  18. Mark Sloan

    Hi Coel and verbosestoic,

    Your conversation on moral realism versus emotivism parallels several that Coel and I have had on the subject.

    With your forbearance, and based on your interests in moral realism and emotivism, perhaps I could get your reactions to the following?

    Coel said: No scheme of *objective* morality is compatible with evolution since no such scheme can explain what objective morals actually are, and no such scheme can explain how humans know about objective morals.

    If we take *objective* morality to refer to what is universally moral, independent of opinion or emotions, then I do have a candidate.

    We could look for a candidate universal moral principle in the origins and function of ‘moral’ behaviors in biological and cultural evolution. But evolution is only the ‘means’ by which morality was encoded in our biology and cultural norms. Evolution is not morality’s ultimate source.

    That source is the cross-species universal cooperation/exploitation dilemma that must be solved by all beings that form highly cooperative societies: how to obtain the benefits of cooperation without future benefits of cooperation being destroyed by exploitation. Solving this dilemma can be difficult because exploitation is virtually always the “winning” strategy in the short term and can be in the longer term. Fortunately for us, our ancestors chanced across ‘morality’ – the diverse, contradictory, and sometimes strange elements of cooperation strategies encoded in our moral sense and cultural moral norms.
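
    A toy simulation makes the tension concrete (a minimal sketch in Python with invented payoffs, purely illustrative, not a claim about how any real population behaves): in a single encounter exploitation pays best, but over repeated interactions mutual reciprocity vastly out-earns mutual exploitation, which is exactly the benefit that is destroyed if exploitation goes unchecked.

    ```python
    # Iterated Prisoner's Dilemma with invented payoffs (illustrative only).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_exploit(my_history, their_history):
        return "D"  # exploit every round

    def reciprocate(my_history, their_history):
        # Cooperate first, then mirror the other side's last move (tit-for-tat).
        return their_history[-1] if their_history else "C"

    def play(strategy_a, strategy_b, rounds=20):
        hist_a, hist_b, total_a, total_b = [], [], 0, 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)
            b = strategy_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(a, b)]
            hist_a.append(a); hist_b.append(b)
            total_a += pay_a; total_b += pay_b
        return total_a, total_b

    print(play(reciprocate, always_exploit))     # (19, 24): the exploiter wins this pairing
    print(play(always_exploit, always_exploit))  # (20, 20): but mutual exploitation forfeits the gains
    print(play(reciprocate, reciprocate))        # (60, 60): that mutual cooperation delivers
    ```

    Nothing in this sketch settles what “moral” means; it only illustrates why the benefits of cooperation depend on exploitation being held in check.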

    All these cooperation strategies begin by solving the cooperation/exploitation dilemma in an in-group, even when cooperating to exploit or make war on out-groups. To sustainably obtain that cooperation, people in the in-group are not exploited. Solutions to the cooperation/exploitation dilemma thus have a necessary universal component, a universal moral principle: “Solve the cooperation/exploitation dilemma without exploiting others”.

    The cooperation/exploitation dilemma is simultaneously the ultimate source of descriptively moral norms (norms described as moral in one culture, but perhaps not in others) and the universal moral principle – universal because it is a necessary component of all those descriptively moral cooperation strategies.

    Societies may prefer this universal principle for refining their moral codes as an instrumental ought, the moral principle most likely to meet their common needs and preferences. For example, it advocates increased cooperation which increases material goods benefits as well as the emotional rewards triggered by cooperation, in particular cooperation with family and friends. These cooperation strategies are innately harmonious with our moral sense since our moral sense was selected for by the benefits of cooperation it motivates. Also, the principle is practical; common moral heuristics such as the Golden Rule and “Do not kill” are universally moral when they exploit no one and increase the benefits of cooperation, but immoral if benefits are decreased. Finally, this moral principle is a useful objective reference for resolving moral disputes because it accurately tracks what is universally moral as a necessary component of cooperation strategies.

    In moral philosophy, “universally moral” commonly refers to what everyone, everywhere ought to do regardless of their needs and preferences. No such imperative ‘ought’ is generally agreed to exist. However, science’s universal moral principle appears to be culturally useful without claiming any such mysterious, “magic” bindingness. Morality’s universal principle is real: any imperative innate bindingness is an illusion.

    In summary, the universal moral principle is “Solve the cooperation/exploitation dilemma without exploiting others” and applying it to common moral heuristics we can get universally moral norms such as:

    “Increase the benefits of cooperation by ‘Doing to others as you would have them do to you’ without exploiting others”.

    “Increase the benefits of cooperation by following ‘Do not kill’ without exploiting others”.

    Our moral emotions and intuitions are biological heuristics for motivating behaviors that increase the benefits of cooperation. However, they are not always reliable indicators of what is universally moral because 1), as fallible heuristics, they do not always increase the benefits of cooperation and 2), due to our evolutionary history, they can motivate increasing the benefits of cooperation by exploiting others.

    1. Coel Post author

      Hi Mark,

      In summary, the universal moral principle is “Solve the cooperation/exploitation dilemma without exploiting others” …

      What do you mean by “moral” when you label this principle a “moral” principle? What commentary or description are you adding to the principle by labelling it “moral”?
