Revisiting Moral Objectivism with Mathematical Notation


tk102


Once upon a time I said

With no objective way to measure morality, the argument of what is the most moral course of action is word-play for politicians. You cannot pretend that it is anything like a mathematical equation. It does not hold the same "truth".
Well that statement has bothered me for a couple months and I wanted to explore further the idea of moral objectivity with the help of mathematical notation. I hope others can provide insights into these definitions. I admit having to consult Wiki's Naive Set Theory to remember how to write set notation. :p

 

* * * *

 

If x is an action in the set of all possible actions that you can perform, A:

x ∈ A

 

And m(x) is the morality of action x, then set of all possible moral outcomes, M, is:

M := { m(x) : x ∈ A}

 

And the moral person seeks to perform the most moral act:

Mmax = max { m(x) : x ∈ A}

 

So how do we measure the morality of an action? It is inversely proportional to the amount of distress, D, the particular act, x, causes.

m(x) ∝ 1/D(x)

 

For the sake of clarity, let's define our D function such that we can write this as an equality.

m(x) = 1/D(x)

 

So our goal to maximize morality could be rephrased to say we seek to minimize distress.

Mmax = 1/Dmin, where Dmin = min{ D(x) : x ∈ A}
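
For example (with made-up numbers): if action x1 causes total distress D(x1) = 2 and action x2 causes D(x2) = 5, then m(x1) = 1/2 and m(x2) = 1/5. The action with the smallest D is exactly the action with the largest m, so minimizing distress and maximizing morality single out the same act, and Mmax = 1/Dmin = 1/2.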

 

So how do we measure D(x)? It should be the summation of all distress felt by all organisms for the given action. To arrive at that formula we must define its elements.

 

Given an organism capable of feeling distress, y, in the population of all organisms capable of feeling distress, P:

y ∈ P

 

Then the set of relative distresses, δ, over all organisms and all actions is:

δ := { δxy : (x ∈ A) and (y ∈ P) }

 

But how do we compare distresses of one organism to another? We don't weigh distresses of one organism the same as the distresses of another organism. Most would consider it morally right to kill a mosquito that landed on a friend's neck, for example. In order to define D(x), we will need to translate these relative distresses into an absolute scale that can be summed. Let us propose a conversion factor K where:

 

For a given action x, in a given organism y, Ky is the proportion of universal distress, Dx, to relative distress, δxy

Ky = Dx/δxy

 

A mosquito's relative distress at being smashed would be much greater than the discomfort a mosquito bite would cause in a person, but because Kperson>>Kmosquito, Ddon't smash > Dsmash.

 

We can now define D(x) as:

D(x) := ∑ δxyKy : (y ∈ P)

 

(Pardon the notation -- I'm limited by bbcode... read the above as a summation of δ*K for each member of y in P for a given action, x.)

 

 

And the most moral act is therefore

Mmax = 1/Dmin, where Dmin = min { D(x) : x ∈ A} = min { (∑ δxyKy : (y ∈ P) ) : x ∈ A }
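
To make the bookkeeping concrete, here is a minimal sketch in Python. The actions, organisms, δxy values, and Ky values below are entirely made up for illustration; nothing above fixes them.

[code]
# Minimal sketch of the formula above with hypothetical numbers.
# delta[x][y] plays the role of the relative distress δxy, and K[y] the
# conversion factor Ky; all values are invented purely for illustration.

actions = ["smash mosquito", "don't smash"]        # the set A
organisms = ["person", "mosquito"]                  # the set P

delta = {
    "smash mosquito": {"person": 0.0, "mosquito": 10.0},
    "don't smash":    {"person": 2.0, "mosquito": 0.0},
}

K = {"person": 100.0, "mosquito": 0.1}

def D(x):
    """Universal distress of action x: the sum of δxy * Ky over every y in P."""
    return sum(delta[x][y] * K[y] for y in organisms)

# The most moral act is the one that minimizes D(x).
most_moral = min(actions, key=D)

for x in actions:
    print(x, D(x))
print("most moral:", most_moral)   # "smash mosquito" here, since 1.0 < 200.0
[/code]

With these made-up numbers, Dsmash = 1.0 and Ddon't smash = 200.0, which matches the mosquito example above: Kperson >> Kmosquito makes smashing the lower-distress (and therefore more moral) choice.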

 

* * * *

 

Okay, so what do we see here? Mmax is a function of δxy and Ky. It remains up to us to use our faculties to best interpret δxy and Ky as well as to recognize the full sets of A and P. Let's assume A and P are well-defined. δxy could be estimated using faculties of reason, heuristics, and an understanding of organism y. Ky is more difficult to define in an objective manner. Does the size of the organism matter? Do its mental faculties matter? Or maybe its own ability and desire to act morally? Have I oversimplified something?


And the most moral act is therefore

Mmax = 1/Dmin, where Dmin = min { (∑ δxyKy : (y ∈ P) ) : x ∈ A }

In other words, the most moral action is the action that causes the least amount of distress to all other organisms, where the distress of an organism is 'weighted' so that it can be compared to other organisms.

 

Therefore, we have to be able to determine the magnitude of distress an organism will feel from a particular action (δxy), and we have to determine how to weigh that organism's distress in relation to other organisms (Ky). These two determinations are still a bit mysterious, but I suppose that is a different subject for analysis -- this exercise was just an attempt to define what factors need to be evaluated to derive morality objectively.

 

Did I miss anything? Seems oversimplified somehow, but maybe all the difficulty lies in interpreting the variables themselves.


Er, what was the point of adding in mathematical notation? It didn't do much besides adding a layer of confusion, even though your words were enough to say the same. Despite which though, I do find it a fascinating brainstorm.

 

Anyway, you do fall prey to oversimplification in your descriptions, tk102. For instance, what is distress? Is it simply the avoidance of pain? Or is it the avoidance of anything that causes pain? For instance, it would be the difference between finding a bee on a window sill and either avoiding it or squashing it; the former would cause one to be cautious and perhaps retreat to another room, while the latter might compel one to destroy the bee in his vicinity. In other words, despite both options resulting in the same amount of pain to the human, people would still go to the stretch of killing the bee. Why one would act upon killing the bee would depend on a variety of reasons: phobia, paranoia, bad experiences, curiosity, etc. Such emotions, particularly curiosity, do not fall under direct pain and give ambiguity to the notion of distress as the sole moral impetus.

 

(There are other factors besides distress, such as knowledge and ignorance. A little kid could flood an ant farm with no concept of the fact that ants are dying and are in pain, while a teenager could very well do the same with malevolent intent. Despite the fact that both cause the same level of distress to the ants and to the participants, most would say that the kid would be less evil because he had ignorance of his actions. How would you apply that to the equation?)

 

Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence. Without knowing all of the ramifications that actions can have, and without knowing all the possible sides to a given argument or idea, morality is based less on ultimate distress caused and is instead cut off at some arbitrary level depending on the person. For instance, killing a Jew is different to a Christian, to a Nazi, to a Muslim, to a Jew, to a Humanist, ad infinitum. Killing a Jew would depend on the person's past, ideas, point of view, knowledge and ignorance, mental wellbeing and mood, et cetera; such things are inescapable, and so it would be impossible for humans to ever create a totally objective moral system. Therefore, it's pointless to bring up a particular formula for morality, as all variables would be at the whim of the person with the pen.

 

That's just my opinion, though. I could be wrong.


One man's pain is another man's pleasure. The severity of the pain would be on some kind of bell curve, with no pain being zero, and for instance severity of pain increases negatively and pleasure increases positively. The bell curve would be skewed well into the negative, but there'd be a few on the positive who got some kind of pleasure out of the pain. You'd have to add in a probability equation (just to make things more fun).

 

Also, morality is not always inversely proportional to distress. Some medical procedures cause distress, but are not immoral. You'd have to account for those things that cause distress but have little or no bearing on morality, unless you limit your sets to only those things where distress has an impact on morality, and even that will vary to some degree.


Er, what was the point of adding in mathematical notation? It didn't do much besides adding a layer of confusion, even though your words were enough to say the same.
The point was to abstract the concepts with symbology in order to avoid misinterpretation that comes from words. If morality can truly be considered objective, it must be able to conform to logical statements with unambiguous terms. Symbology provides a simple way to keep terminology constant.

Anyway, you do fall prey to oversimplification in your descriptions, tk102. For instance, what is distress? Is it simply the avoidance of pain? Or is it the avoidance of anything that causes pain? For instance, it would be the difference between avoiding being stung by a bee on a window sill and squashing it; the former would cause one to be cautious and perhaps retreat to another room, while the latter might compel one to destroy the bee in his vicinity. In other words, despite both options resulting in the same amount of pain to the victim, people would still go to either option depending on altogether separate reasons. This means that distress is not the sole factor in morality.

Well, if the bee were not considered to be part of set P, then there would be no difference in the morality of the two actions. However, because the bee is an organism that can be distressed, it is a part of P, and so the action of leaving the room causes less universal distress than the act of killing the bee (Dleave < Dkill), and so leaving the room has greater morality.

 

As for the definition of distress, it's true I didn't make any attempt to define it. I have a dictionary that I could quote if it makes any difference. I think each creature knows distress when it feels it. I did suggest that each distress is a local phenomenon that must be converted to some sort of universal scale so that it could be weighed against other organisms' distresses.

Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence. Without knowing all of the ramifications that actions can have, and without knowing all the possible sides to a given argument or idea, morality is based less on ultimate distress caused and instead cut off by some arbitrary level depending on the person.

I wouldn't say that makes morality subjective. It's just that the precision of the variables discerned by different actors may be different. In 1650 BC, an Egyptian scribe named Ahmes declared the number π was 256/81 -- off by roughly 0.6%. In 1997, π was calculated out to 51.5 billion digits. It's still not exactly right, but it's much closer. You wouldn't say that the ratio of a circle's circumference to its diameter is subjective though, right?

Therefore, it's pointless to bring up a particular formula for morality as all variables would be to the whim of the person with the pen. That's just my opinion, though. I could be wrong.

Believe me, I've been wrestling with that too. That's why I started this thread. I appreciate your response. The counter-argument to my analogy with pi would be that unlike morality, the values of pi have been shown to converge with increasing precision. That's why we can believe there is a true number π. There's no way to show this with a given action because we can't freeze all the conditions and perform an analysis to ever greater precisions. :) If there's no convergence, then it's impossible to show there is a true (objective) morality. :confused:

 

 

One man's pain is another man's pleasure. The severity of the pain would be on some kind of bell curve, with no pain being zero, and for instance severity of pain increases negatively and pleasure increases positively. The bell curve would be skewed well into the negative, but there'd be a few on the positive who got some kind of pleasure out of the pain. You'd have to add in a probability equation (just to make things more fun).
Distresses are local to the individual and are inherent to their own dispositions, tolerances, and adaptabilities to external stimuli. The summation notation adds up the distresses felt by each individual, so I think that's covered. I guess you're suggesting to use a probability equation to estimate the unknown distresses of the unknown population P? Okay. :)

 

Also, morality is not always inversely proportional to distress. Some medical procedures cause distress, but are not immoral. You'd have to account for those things that cause distress but have little or no bearing on morality, unless you limit your sets to only those things where distress has an impact on morality, and even that will vary to some degree.
Well I assume a doctor performs a procedure to improve or save a life, and wouldn't undertake a procedure that didn't have some long-run benefit (an overall decrease in distress, with the assumption that death is the ultimate distress.)

Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence.
If something is objective, isn't that to say that it exists outside of human bias? In other words, wouldn't it be reasoned out rather than opined?

 

<snip> and so it would impossible for humans to ever create a totally objective moral system.
Indeed it would probably be impossible for humans to create an objective moral system. However if objective morality does exist, it would not need to be created just as we didn't "create" the principles that govern mathematics or the physical laws of our universe.

 

One man's pain is another man's pleasure.
That is true but that is obviously not a basis for any objective system of morals.

 

Also, morality is not always inversely proportional to distress.
I actually agree with you here. Sometimes following the moral option requires some exposure to pain or distress. Anyone that has children knows that immunization is not a lot of fun for your little ones. It's not uncommon to have to restrain a child so that they can be poked with a big sharp needle and injected with a solution that will cause them to become mildly ill for a few days.

 

If we were to follow a mathematical model for morality, the formula might spit out a result that says the pain and distress caused by immunization would make the process immoral. Since the comparative result (disease or general illness) is not certain, our formula might lead us astray. Then again, math has never been my strong point and my reliance on conceptualization might be doing me a disservice here.

 

Regardless, I'm pretty sure this is why all the ethics courses fall under the philosophy umbrella rather than applied sciences :D

 

Thanks for reading.

 

EDIT for teekay's post above:

 

As for the defintion of distress, it's true I didn't make any attempt to define it. I have a dictionary that I could quote if it makes any difference. I think each creature knows distress when it feels it. I did suggest that each distress is a local phenomenon that must be converted to some sort of universal scale so that it could be weighed against other organisms' distresses.
Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has relatively diminutive capacity for suffering or happiness especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity of suffering and happiness, therefore it would not be moral for an allergic person to randomly kill cats.

 

Also, we would then have to somehow show individual suffering/happiness as compared to social suffering/happiness (e.g. deontology). For instance, is it moral for someone to throw oneself onto a grenade in order to save a group of complete strangers? How about to save a hive of honey bees?

 

Lastly, I'd like to inquire about the component of will. If I'm a doctor and I have 5 patients that will all die within an hour if they do not receive organ transplants and I have a patient that just died that happens to have matching organs, would it be ethical to perform the transplants if the deceased is not an organ donor? What if the potential donor were not dead but in a persistent vegetative state? Unconscious from a head wound, but expected to make a full recovery within hours? Fully alert and there to see about a sprained ankle?

 

I think my point is that you can have a completely objective set of morals that are not represented by one equation. Mathematics is objective, yet we do not try to nail the study down to just one rule, rather we accept that there are a wide variety of rules. Similarly, just as "math" represents arithmetic, algebra, calculus, geometry, finite mathematics, etc, "morality" might not be reducible to a single line of logical statements.

 

Somehow I feel as though I derailed the section that I quoted, but hopefully, I have not.

 

Thanks for reading.


Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has relatively diminutive capacity for suffering or happiness especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity of suffering and happiness, therefore it would not be moral for an allergic person to randomly kill cats.
That is the whole concept of Ky above. A person has a high Ky and a bee has a small Ky. A person allergic to bees would also have a high δy.

 

Also, we would then have to somehow show individual suffering/happiness as compared to social suffering/happiness (e.g. deontology). For instance, is it moral for someone to throw oneself onto a grenade in order to save a group of complete strangers? How about to save a hive of honey bees?
That's where the summation comes in (∑ δxyKy).

 

 

Lastly, I'd like to inquire about the component of will. If I'm a doctor and I have 5 patients that will all die within an hour if they do not receive organ transplants and I have a patient that just died that happens to have matching organs, would it be ethical to perform the transplants if the deceased is not an organ donor?
I think we're talking about the distress that would be caused to society (e.g. families affected, the hospital, etc.) to know that the doctor disregarded the will of the deceased. But what if no one knew and the doctor felt no distress over his action? Hmm, I don't see how the equation would take that into account. There's no factor for something like a categorical imperative. :(

 

What if the potential donor were not dead but in a persistent vegetative state? Unconscious from a head wound, but expected to make a full recovery within hours? Fully alert and there to see about a sprained ankle?

Again, forgiving the lack of categorical imperative... assuming society knew about the doctor's action, there would be an increasing δsociety for each of those scenarios. Since the donor is still alive, there would be some factor δpatient to account for, especially since organ donation would likely result in his death (assumed to be the maximum distress for an organism).

I think my point is that you can have a completely objective set of morals that are not represented by one equation. Mathematics is objective, yet we do not try to nail the study down to just one rule, rather we accept that there are a wide variety of rules.

Similarly, just as "math" represents arithmetic, algebra, calculus, geometry, finite mathematics, etc, "morality" might not be reducible to a single line of logical statements.

The idea that there exists objective morality suggests there should be a definable function of variables that defines morality. The assertion is that morality can be arrived at by logic. If we cannot apply logical symbology to morality then we cannot make this assertion. And if we cannot say morality is based on logic, we cannot say it is objective. If there are many various theories and fields describing the origins of morality, then morality indeed is subjective.

That is the whole concept of Ky above. A person has a high Ky and a bee has a small Ky. A person allergic to bees would also have a high δy.

<snip>

That's where the summation comes in (∑ δxyKy).

Fair enough. Thanks for clarifying.

 

I think we're talking about the distress that would be caused to society (e.g. families affected, the hospital, etc.) to know that the doctor disregarded the will of the deceased. But what if no one knew and the doctor felt no distress over his action?
This seems to assume that the family and/or the doctor should feel distress over his actions (assuming the "dead" scenario and not any of the others).

 

EDIT: I just re-read this section and realize that we may have missed each other slightly here. You appear to be looking at this from the perspective of the negative social consequence for the hospital, the deceased person's family, etc. This assumes that the action itself is inherently immoral and should be viewed negatively. My argument (which was vague) is that such judgments should be the result of the process, rather than part of the process itself.

 

To restate my point without posing it as a question:

 

I believe that it would be absolutely moral for a doctor to "part out" a corpse to save 5 lives because this would maximize social happiness (the other patients, their families, and loved ones, etc.) with absolutely no impact to social (or individual) suffering. The patient is dead, so his or her rights and freedoms aren't violated because they are no longer applicable. In some cultures, the deceased person's family would feel the act is immoral; however, if the deceased was an organ donor, they would not. This tells me that the act itself is not immoral, rather how the act is perceived determines its "morality". In other words, this moral stance is relative, rather than absolute.

 

There, I think I got it all that time. ;)

 

Hmm, I don't see how the equation would take that into account. There's no factor for something like a categorical imperative. :(
The categorical imperative is what I had hoped to highlight. :)

 

Again, forgiving the lack of categorical imperative... assuming society knew about the doctor's action, there would be an increasing δsociety for each of those scenarios. Since the donor is still alive, there would be some factor δpatient to account for, especially since organ donation would likely result in his death (assumed to be the maximum distress for an organism).
Which is tricky. Certainly we would be providing the maximum social benefit with minimum social suffering by parting out the sore ankle guy, but we would still be murdering a healthy man. Thank goodness the categorical imperative prevents us from doing so.

 

However there might be a case for deceased/vegetative state scenarios. The reason that I brought up will is that I think it should be reduced to 0 in some cases and such an equation would need to factor that in (unless this is accounted for/agreed upon in the part that accounts for categorical imperative).

 

The idea that there exists objective morality suggests there should be a definable function of variables that defines morality. The assertion is that morality can be arrived at by logic. If we cannot apply logical symbology to morality then we cannot make this assertion. And if we cannot say morality is based on logic, we cannot say it objective. If there are many various theories and fields describing the origins of morality, then morality indeed is subjective.
No, I agree, but my point was we don't point to one line of logical symbols and claim that it represents all of mathematics (at least I assume that we don't). In other words, there is no one equation that perfectly encapsulates all of the study of mathematics, so why should that be the case for something equally complex?

 

Then again, it might be that I'm not understanding your point as well as I think I am.

 

Thanks for your reply.


I find the hardest thing in getting the "real level of morality" (read: the amount of distress caused to other beings) of an action is that one would have to wait until the end of time to be sure that all "distressing reactions" regarding that action are taken into consideration. Everything else is just assumption about the morality of an action.

 

And, for instance, you could not simply consider killing Hitler a pretty moral action. Killing him in December 1944 would be less moral than if he'd been killed by a drunken driver in a car accident as a young boy, which on the other hand might just increase the chances that Russia takes over the whole of Europe; except if he dies one day later, after he's been pushed down a cliff and has had the chance to tell someone about his plans to take on the world, who will now carry them out instead, with more success, btw.

 

Err, well Spider, lead me words ad absurdum already :p


one man's pain is another man's pleasure

That is true but that is obviously not a basis for any objective system of morals.

So work in the probability stat to cover that.

I actually agree with you here. Sometimes following the moral option requires some exposure to pain or distress.

 

If we were to follow a mathematical model for morality, the formula might spit out a result that says the pain and distress caused by immunization would make the process immoral. Since the comparative result (disease or general illness) is not certain, our formula might lead us astray. Then again, math has never been my strong point and my reliance on conceptualization might be doing me a disservice here.

 

If you work in another equation that compares long-term gain over short-term distress, it would fix that problem.

 

Regardless, I'm pretty sure this is why all the ethics courses fall under the philosophy umbrella rather than applied sciences :D

But it's fun to think about this, anyway. :D

 

I guess you're suggesting to use a probability equation to estimate the unknown distresses of the unknown population P? Okay

I think you'd have to do that, since it would not be a simple coefficient (i.e. the amount of distress of an action is exactly the same for every single person), it would vary within the population.


Actually the probability equation would be an estimate to determine δ for a given populace, which would then be substituted into the above equation. That would be more of an auxiliary equation whereas the above attempts to define the variables at play. I think the categorical imperative should be added but I'm not sure how to define that one yet... I was taking a break from this to let it sink in some more and decide whether this was worth further pursuit.
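
As a rough sketch of that auxiliary step (the population, the distribution, and its parameters here are pure assumptions for illustration), δ for a populace could be estimated as the average of sampled individual distresses:

[code]
# Hypothetical estimate of δ for a populace: treat an individual's relative
# distress for a given action as a random variable (per the bell-curve idea,
# a few individuals may even register negative distress, i.e. pleasure),
# then use the sample mean as the δ that goes into the summation above.

import random

random.seed(0)                      # reproducible illustration

def sample_distress():
    # Invented distribution: mean 3.0, spread 1.5 -- not a real measurement.
    return random.gauss(3.0, 1.5)

sample = [sample_distress() for _ in range(10000)]
delta_estimate = sum(sample) / len(sample)
print("estimated delta for the populace:", round(delta_estimate, 2))
[/code]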

 

 

I was hoping Spider would weigh in at some point, but alas. :-)

 


Once upon a time I said

Well that statement has bothered me for a couple months and I wanted to explore further the idea of moral objectivity with the help of mathematical notation. I hope others can provide insights into these definitions. I admit having to consult Wiki's Naive Set Theory to remember how to write set notation. :p

 

* * * *

 

If x is an action in the set of all possible actions that you can perform, A:

x ∈ A

 

And m(x) is the morality of action x, then set of all possible moral outcomes, M, is:

M := { m(x) : x ∈ A}

 

And the moral person seeks to perform the most moral act:

Mmax = max { m(x) : x ∈ A}

 

So how do we measure the morality of an action? It is inversely proportional to the amount of distress, D, the particular act, x, causes.

m(x) ∝ 1/D(x)

 

What about doing something that negates m(x) with respect to society's moral standards, but that will benefit the individual, if

fear F(x) >> D(x) : x ∈ I

 

where F(x) is the fear function for not doing something that is highly immoral with respect to society's set moral standards, if that immoral action x (an element of I) can lead to the survival of the individual, since survival will have a higher value than M, and D(x) is the distress the specific individual will have in committing an immoral act.

When x is an element (∈) of an immoral set I.

 

F(x) := { if x ∈ Ei then F(x) ∝ Ei/D(s) }, where Ei is the set of biasing emotions i of an individual that will negate the moral objectivity of that individual.

 

That will produce a different set rule: D(x) ∝ UM(s)/Ei if x ∈ I, where s ∈ M

 

where UM(x) is the universal morality function of a society's standards.

 

So a decrease in D(x) (-D(x)) is caused by an increase in Ei.

 

Then this set rule will have to apply.

 

UM(x) @ m(x) if {m(x) ∝ Ei/Dx: x ∈ I}

 

So, to remain on the moral bandwagon of society's moral standards, UM will have to be substituted (@) for the morality-of-action function m(x) to counter the immoral action set I, which will cause the distress D(x) to change in a negative way and make the act of committing the immoral act less distressful to specific individuals with different values of Ei.

 

 

 

Also, Ei will have to be a statistical variable, since the emotions of specific individuals of a specific society are ruled by the uncertainty of probability.

 

 

 

δ := { δxy : (x ∈ A) and (y ∈ P) }

 

Also δxy = KyUM/Fxy and Fxy >> Dx : x ∈ I, I suspect! :)

 

where Fxy is the relative fear for all organisms for all actions x, if P := { m(x) ∝ 1/Fxy : if x ∈ I and y ∈ P }

 

So, if relative distress is to be minimized, the moral standards of society will probably have to take a greater level of value over a specific individual's own moral standards when his actions are greatly influenced by fear of death.

Or, an immoral act may result.

 

But fear, as a biasing emotion, can negate the objectivity of any individual that has emotions. Since all organisms have emotions, society's moral standards probably can't, in the long run, take higher value over the fear of death for some specific individuals.

 

That is, for some, cowardice takes higher value over the relative distresses of individuals in society.

 

So, an immoral act will result for those who have the characteristic of cowardice.

 

Example: in the death camps, some Nazi soldiers didn't kill Jews because, weighing the relative distresses of society with their faculties, doing so would result in an immoral act, obviously.

If they don't commit the immoral act of murder, they will be killed themselves.

So, if Ei ∈ C, where C is a cowardice set that is ∝ F(x), then F(x) >> UM and DPI, where DPI is the relative total distress of society from committing an immoral act when x ∈ I.

 

So, the immoral act of murder may result,

because F ∝ Ei/UM(s) & F(x) >> D(x), where x ∈ I and s ∈ M,

 

and because the emotions Ei are ruled by probability.

 

If UM(s) increases its influence over the fear function F(x) at a constant Ei value, then the fear of death goes down and murder = 0

 

and Ei ∉ the cowardice set C.

 

(∉ means "is not an element of")

 

 

Also, good use of Set Theory to illustrate your thoughts on morality, tk102. ;)


TK, I wonder why you'd want my input? Achilles has already done a good job of covering all the most significant points.

 

I may have a couple of things to add:

 

1. On the idea of representing moral values mathematically in general:

 

As others have noted, your idea is a fine intellectual exercise, very intriguing from the perspective of a puzzle. But there are two questions that immediately spring up:

 

1. Does your equation tell us anything about morality that hasn't already been elucidated in longhand by the major moral philosophers of history?

 

2. More importantly, COULD it tell us anything new about morality in the future?

 

The answer to no.1 would appear to be "no", but that's a minor issue. The answer to number two is "maybe, but only if all moral values were accurately represented in the equation. Perhaps then the numbers could be manipulated mathematically to show us something new and interesting."

 

But of course, to accurately represent all variables would probably be the work of a lifetime. And while your OP shows one way that moral values might be transcribed, it does not cover the whole gamut. (which I'll comment specifically on shortly.)

 

So in short, you've started on what would be a quite serious undertaking. There's no reason you shouldn't be the one to complete it, however. I for one would be most interested to see what you eventually come up with.

 

 

2: On the specifics of your original "moral equation":

 

Originally posted by tk102:

So how do we measure the morality of an action? It is inversely proportional to the amount of distress, D, the particular act, x, causes.

Up to this line, there was little to add, however- as has been noted subsequently- distress is not the only variable in the moral equation. It's an important variable, to be sure. It is central, in fact. But it's not the only one. As well as physical and psychological suffering, there's the concept of "loss". Let me illustrate:

 

If you murder someone it's immoral, whether the method you use is painless or not. You can murder someone painlessly; you can for instance drug them so that they first fall unconscious and then expire. The fact that they did not feel any physical or psychological distress is a factor- I mean, someone who tortures a person for three weeks before killing them has arguably committed a more immoral act- but any killing of this type, painful or painless, results in the loss of the subject's life.

 

Existence- rationally speaking- is all we have. And our time on this earth is all we ever will have. For various reasons which have been discussed elsewhere, we feel a desire to maintain our lives. When you kill a creature (whether it's aware of its impending doom or not) you are literally taking away all it has, and all it ever would have had. To paraphrase a Clint Eastwood line.

 

This is why I have tried to define morality in the past- as you may remember from my response to one of your own questions in the "moral relativism" thread- as the objective, universal standard of behaviour that aims to minimise one's negative impact on other creatures... This heading of "negative impact" encompasses any and all suffering, but also loss of life and also any more minor violations of established rights, etcetera.

 

Therefore any purely mathematical expression of the moral equation would have to incorporate these additional variables, possibly under a heading similar to "negative impact".

 

Originally posted by tk102:

A mosquito's relative distress at being smashed would be much greater than the discomfort a mosquito bite would cause in a person, but because Kperson>>Kmosquito, Ddon't smash > Dsmash.

Secondly, this assumption that the person's proportion of "universal distress" would be "much greater than" the mosquito's, (in essence that the person is intrinsically more "valuable" than the mosquito, because its capacity to suffer is so much larger) begs a certain degree of analysis. Once again, the question of "distress" is by no means the be-all and end-all of morality, but let's address distress alone at this point.

 

It is a general social convention that we humans are "more valuable" than other animals. But let's examine that convention and see whether we can discern the reasoning behind it, and whether this reasoning gives us any insight into the question of how we should classify other organisms on the "distress" or "suffering" scale specifically.

 

First we must define the boundaries of such a scale.

 

Science has made this first step quite easy, by teaching us a lot about the biology of simple life-forms. There are forms of life that have literally zero cognitive ability, literally zero capacity to feel suffering, fear, pain etcetera. It stands to reason that we should not concern ourselves with causing suffering to a creature that is unable to suffer.

 

Once again, let us define the boundaries: We know that higher life-forms show signs of fear and pain, and that their brains are really quite similar to our own, structurally and relatively speaking. Therefore it is reasonable to assume that most of the more complex mammals- including humans- would be highest on the scale of "capacity to suffer".

 

We know that the most simple life-forms (single-celled organisms for instance) have no complex nervous system and no higher reasoning powers, nor any organ that fulfils a similar function to the complex brain of the higher life-forms. Therefore we have successfully (if roughly) classified the positions of several forms of life on the "capacity to suffer" scale. At the one end we have mere "biological robots", those rudimentary animals that operate on a simple set of rules (a few lines of code, one might say) and at the other end we have highly developed mammals.

 

Therefore we have two simple moral rules already:

 

1. Indiscriminate attacks on say... bacteria through artificial means such as disinfectant, are not intrinsically immoral from a "suffering" perspective. (Leaving aside for now the question of whether such robotic life-forms have an intrinsic right to life.)

 

2. any maltreatment of the highest forms- highly developed mammals- is VERY MUCH immoral.

 

However we run into a sticking point in the middle of the scale, pretty much AS SOON as life-forms become complex to any degree. As soon as the rudiments of a nervous system appear in a primitive creature, our previously easily defined boundaries become blurred.

 

As an example, we might dig up an earthworm. A simple invertebrate, it seems to operate on a simple set of rules. At first appearance it would appear to be a biological robot, a form of life too simple to warrant the care and attention afforded to a cat, a dog, a monkey or another person. But of course, research has shown that earthworms do indeed have a nervous system developed enough to pass along information about injuries and to trigger reflex reactions to these stimuli... but research also suggests that the brain of the worm is probably not complex enough to accommodate higher functions that we might define as "distress". (As in emotional responses like fear and horror at the pain one is suffering combined with the fervent desire to live, etcetera...)

 

So if the earthworm does indeed register pain... but cannot interpret it quite the way we do, is it capable of what we call "suffering"? Well in an attempt to answer this difficult question, let's use a hypothetical:

 

Suppose some very advanced alien lifeforms arrive on earth from another distant world. Then suppose that their equivalent of brain functions are so advanced and complex that to them, we seem like mere robots, mere biological automatons. Suppose that they- like us- have some sort of logical standard of moral behaviour that they wish to adhere to. Then suppose that they decide that they can do whatever horrible things they want to us, without danger of being immoral. Because we are simply unable to experience the complex emotional state that they define as "distress".

 

Clearly we would consider this an appallingly unfair and short-sighted decision on the part of the aliens. But from the aliens' perspective, it might seem quite logical. In this respect it's comparable to our routine decision to value humans more highly than other animals. What do humans have that other animals do not? Merely slightly more complex brains.

 

This hypothetical highlights the fact that "rating" other organisms on a scale of intrinsic value purely by the apparent complexity of their cognitive functions may not be a moral thing to do. After all, if taken to its inevitable conclusion, this concept of "intellect as value" would lead us to terrifying consequences. It would perhaps mean that torturing a severely mentally retarded person would be regarded as "more moral" than torturing a college professor... Children's brains don't develop fully for some years, by the above standard it would presumably be regarded as being "more moral" to torture a young schoolkid than it would to torture his or her adult teacher.

 

In short, such a scale probably isn't moral anyway.

 

So, returning finally to the question of the single mosquito... you can't arbitrarily decide that a mosquito's suffering is in some way less intrinsically important than a man's suffering. What you CAN do is note that in some countries mosquitos carry fatal or severely debilitating diseases. Therefore if you're IN one of those countries, you should kill the mosquito as the risk to the human is great indeed. If you're NOT in one of those countries, why not let the critter bite you?

 

Because in my country mosquitos present little or no danger to me, I don't kill mosquitos. If they appear, I let them fly around my house, and they can bite me if they wish. Because a small insect bite that may itch for a couple of hours is a TINY inconvenience to me... It poses no danger to me, it doesn't affect my life in any meaningful way. Therefore it certainly does not warrant the killing of the organism in question.

 

Anyway, in the course of our reasoning, even without addressing any question other than the question of "capacity for suffering", we've arrived at the fairly conservative principle that the moral man must give other creatures the benefit of the doubt whenever possible, in terms of their capacity to suffer. This fairly universal principle would have to be factored into any mathematical "moral equation" of the type you're attempting to construct.

 

I'm afraid that's all I can think of on the topic right now.

 

-

 

Originally posted by Achilles:

Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has relatively diminutive capacity for suffering or happiness especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity of suffering and happiness, therefore it would not be moral for an allergic person to randomly kill cats.

Hmm. On these examples, Achilles: A bee-sting can be fatal to a person allergic to bee-stings. Therefore it might well be moral for this allergic fellow to kill the bee, as it qualifies as self-defence.

 

If the person allergic to cats might ALSO be killed by the cat, (improbable) AND if the killing of the cat efficiently removed the threat (which it probably would not) then the killing of the cat might also be self-defence and therefore moral.

 

Given these variables and the probable circumstances surrounding each example, I personally don't think that the "capacity for suffering vs. happiness" question is addressed by the examples at all. Conversely, I don't think the question is relevant to any discussion of these examples specifically.

 

-

 

Originally posted by Tyrion:

Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence.

I'm afraid that's the same non-sequitur that many people seem to churn out in these debates, Tyrion. Human objectivity (or lack of it) has NOTHING whatsoever to do with moral objectivism. Logic dictates that morality must be universal or it is not morality. Therefore, morality by definition IS objective, and must be applied objectively to be moral.

 

Whether people are CAPABLE of doing this is neither here nor there. It's literally completely irrelevant.

 

In essence, your stance is that: "people aren't objective therefore morality can never be objective". Which is like saying: "people aren't objective therefore mathematics can never be objective". Which is obviously nonsense. There is a right answer to a calculation and there are wrong answers. People may assert that "2 + 2 = 5", but that doesn't MAKE it five. That doesn't MAKE the numbers relative.

 

Numbers are numbers, just as morality is morality. People make mistakes while exploring mathematical calculations, people make mistakes while deciding what is morally right. But that doesn't make "maths relative". It doesn't make "morality relative". It just means people are fallible. "Moral relativism" is an irrational, illogical, and by definition immoral stance.

 

Therefore, your position doesn't make any sense.


@windu6:

I think I understand the direction you're investigating: trying to determine how a person will choose to act given his predisposition and various social factors. That goal is a bit different from what I'm seeking with this thread, which was to determine an objective definition for morality independent of the individual and his society.

 

@Spider AL:

1. Does your equation tell us anything about morality that hasn't already been elucidated in longhand by the major moral philosophers of history?
Oh probably not... but the syntax is concise, with the goal of eliminating all ambiguity in the terms. When you say m(x) and I say m(x), I know we're talking about exactly the same thing, for example.

More importantly, COULD it tell us anything new about morality in the future?
Well I'll admit the goal of mine was less lofty than that. Rather than discover something new about morality, I wanted to gather the known factors of morality and relate them in a form of shorthand. I figured having a formula such as that would be an easy reference guide. :lol: At least on paper, if not in practice. ;)

 

 

I understand what you mean by loss. I tried to cheat it into the D(x) factor assuming death as being the maximum distress for an individual:

...especially since organ donation would likely result in his death (assumed to be the maximum distress for an organism)...
I believe that was the only negative impact you cited that didn't fall under D(x). "Violations of established rights" I believe would qualify as psychological distress. I'm open to suggestions though of how better to define m(x) in terms of physical distress, psychological distress, and death. Maybe each of those should get its own variable instead of being lumped under D(x). :)
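
Just to sketch what that might look like (the component names and the equal weights are my own placeholders, not a claim about how the components should trade off against one another):

[code]
# Hypothetical decomposition of an organism's negative impact into separate
# terms instead of a single lumped distress value. Names, weights, and the
# example numbers are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Impact:
    physical: float       # physical distress caused
    psychological: float  # psychological distress (fear, violated rights, ...)
    loss: float           # loss of life / of future existence

def negative_impact(i, w_phys=1.0, w_psych=1.0, w_loss=1.0):
    """Combine the separate components with placeholder weights."""
    return w_phys * i.physical + w_psych * i.psychological + w_loss * i.loss

# Made-up numbers: a painless killing vs. torture without lasting injury.
painless_murder = Impact(physical=0.0, psychological=0.0, loss=100.0)
torture = Impact(physical=80.0, psychological=90.0, loss=0.0)

print(negative_impact(painless_murder), negative_impact(torture))
[/code]

Keeping the terms separate at least lets a painless killing and an injury-free torture register differently, which a single lumped D(x) value could not do.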

 

 

I'm glad you took issue with Ky. It bothered me too. It seemed quite anthropocentric, but at the same time I am surprised you wouldn't kill a mosquito. If I saw one biting my son, I wouldn't hesitate even without the risk of disease.

 

But getting back, you are suggesting that in order to remain objective, we simplify the equation:

 

D(x) := ∑ δxyKy : (y ∈ P)

 

to

 

D(x) := ∑ δxy : (y ∈ P)

 

That is quite a conservative view, considering it puts the suffering of a mosquito on par with the suffering of a human. It's not unheard of though -- Jainism follows this belief precisely. Plus it does eliminate the rather troublesome Ky value.
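
(In terms of the Python sketch earlier, that simplification is just the special case where Ky = 1 for every organism, so D(x) collapses to the plain, unweighted sum of the δxy over P.)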

 

But why does Ky trouble us? As you suggested with your alien scenario, what appears to an advanced alien as moral may appear to us as cruel and immoral. If we assume the universe has a fixed number of species within it, it is possible there does exist a scale of complexity that could be objectively defined. By inference then, the seemingly subjective Ky would in fact be an objective value. So in that case, perhaps it is true that Khuman << Kalien. Oh yeah, that's why it troubles us. :(

 

 

TK, I wonder why you'd want my input?
Because of the lightbulb. :D

 

 

I still haven't figured out how to represent a categorical imperative into this set of equations. Anyone have any suggestions?


@windu6:

I think I understand the direction you're investigating: trying to determine how a person will choose to act given his predisposition and various social factors. That goal is a bit different from what I'm seeking with this thread, which was to determine an objective definition for morality independent of the individual and his society.

You must mean an A.I. or someone, if they exist on Earth, that has no emotions.

But that human may exist, nothing is impossible. :)

 

Or thinking outside the box, a supreme universal intelligence, for our universe?

 

A universal intelligence that is composed of all the intelligent life in our visible universe.

 

I still haven't figured out how to represent a categorical imperative into this set of equations. Anyone have any suggestions?

 

categorical imperative: The moral principle that behaviour should be determined by duty.

 

Well, this... may be a tough one; I will have to do some structural analysis with Set Theory.

 

I will post if I figure this one out, tk102. :)


  • 3 weeks later...

I'm back. :haw:

 

Originally posted by tk102:

Oh probably not... but the syntax is concise, with the goal of eliminating all ambiguity in the terms. When you say m(x) and I say m(x), I know we're talking about exactly the same thing, for example.

Removing ambiguity is a laudable goal, but since your notation merely directly substitutes symbols for longhand words and concepts, it's not going to be any more nor any less ambiguous than the original terms were. We could certainly quibble over the meaning of "m(x)" to precisely the same extent as we could quibble over a longhand term like "the morality of a given action".

 

Concision is also a laudable goal, but I'm afraid that since every public usage of a "morality equation" will have to be accompanied by a longhand key-code, it'll be slightly less concise than just using the longhand.

 

Don't get me wrong, I still think that it's an interesting exercise that might yield new insights from manipulation of the variables... but I rather think that's all it's useful for.

 

Originally posted by tk102:

I understand what you mean by loss. I tried to cheat it into the D(x) factor assuming death as being the maximum distress for an individual:

 

I believe that was the only negative impact you cited that didn't fall under D(x). "Violations of established rights" I believe would qualify as psychological distress. I'm open to suggestions though of how better to define m(x) in terms of physical distress, psychological distress, and death. Maybe each of those should get its own variable instead of being lumped under D(x).

Absolutely they need their own values. I mean, conflating these values totally oversimplifies the equation. If the values are joined at the hip, there's no way to properly delineate between, say, a painless murder and, on the other end of the scale, torture without physical injury.

 

Originally posted by tk102:

I'm glad you took issue with Ky. It bothered me too. It seemed quite anthropocentric, but at the same time I am surprised you wouldn't kill a mosquito. If I saw one biting my son, I wouldn't hesitate even without the risk of disease.

Why would you kill it? Merely because of parental instinct? If so, I'm sure you'll agree that such protective instincts- while human and understandable- are not means by which we can determine a moral course of action.

 

Originally posted by tk102:

But getting back, you are suggesting that in order to remain objective, we simplify the equation:

 

D(x) := ∑ δxyKy : (y ∈ P)

 

to

 

D(x) := ∑ δxy : (y ∈ P)

 

That is quite a conservative view, considering it puts the suffering of a mosquito on par with the suffering of a human. It's not unheard of though -- Jainism follows this belief precisely. Plus it does eliminate the rather troublesome Ky value.

Which once again highlights this point: In order to be optimally moral, it follows that we must follow the most conservative view. By this, I mean the optimally moral individual would follow a course of action that eliminated even the RISK of behaving immorally.

 

Thus, while I am not a follower of Jainism, I would have a hard time arguing against the assertion that they're being optimally moral in their treatment of other creatures.

 

Originally posted by tk102:

But why does Ky trouble us? As you suggested with your alien scenario, what appears to an advanced alien as moral may appear to us as cruel and immoral. If we assume the universe has a fixed number of species within it, it is possible there does exist a scale of complexity that could be objectively defined. By inference then, the seemingly subjective Ky would in fact be an objective value. So in that case, perhaps it is true that Khuman << Kalien. Oh yeah, that's why it troubles us.

Mmm, bit of a leap there TK. I rather think the reason the alien scenario shows that anthropocentric views are immoral is that we know that we suffer and die, and that suffering and death are negative experiences for us. So regardless of whether the hypothetical aliens are more complex than us in both body and mind, that doesn't make their version of suffering more "valuable" than ours. It doesn't make our lives less "important". After all, empathy is about putting yourself in the shoes of others, not weighing them against your self-image.

 

So I think the question of whether there's an objectively definable scale of complexity on which all higher animal life can fit... is an utter irrelevance to the question of morality and moral treatment of other creatures.

 

Originally posted by tk102:

I still haven't figured out how to represent a categorical imperative into this set of equations. Anyone have any suggestions?

The categorical imperative is essentially a plea for moral universality. Universality is already implied (assumed, actually) in the equation. I don't think you need to fit the CI in anywhere to properly represent the values you're trying to describe. It would take a separate (and prohibitively complex) equation to deal with the question "why should a person be moral?" ;)

  • 3 weeks later...

There are two problems I see with moral objectivism: cultural differences and personal bias. Cultural difference means what is seen as immoral in one part of the world is completely normal in another. For example, women being treated as objects or as ornamental in Asia does not fly well with the Western world. In fact, culture may not have as much to do with it as one's upbringing and beliefs. One person may believe the best way to take action is a confrontational way, another might feel not so, and yet another, all three being from the same culture, may feel it best to just let something slide. With regard to personal bias, I see that someone will feel that their morals are right regardless of what someone else may think. They may feel that it's perfectly acceptable or that they are entitled to act the way they do. They may even go further and claim that it's alright for them to act that way and not others, or simply pick on others for the same faults they refuse to accept in themselves.


  • 2 weeks later...

Your "problems" with objective morality were all addressed in the earlier thread entitled "Moral Relativism".

 

In short, your contentions make no sense. Just because something is regarded as moral in one culture and immoral in another means nothing, except that at least ONE of those cultures has gotten it wrong.

 

Like numbers, objective morality is an abstract objective standard, it remains static whether people perceive it correctly or not.


The standard is easy to define, through a basic application of logic. Go and read through that thread again, it's all explained in great depth by several people.

 

Just as in mathematics, simple logic allows one to examine all variables in the moral equation and extrapolate from them a ruleset by which moral behaviour can be quantified.


"Forcing" others to conform to a moral standard? Well that would depend, wouldn't it Nancy. Laws already "force" people to conform to a societal standard, in a way. I personally wouldn't have a problem "forcing" an axe-murderer to stop his immoral behaviour.

 

What specific example were you thinking of when it comes to "forcing" others to behave morally?



