Can Rightness Be Determined by Value?

Statement

The right thing to do is the one that has the greatest benefit or does the least harm.

Q1 Analysis

This is not a Q1 violation so long as you can define "greatest benefit" and "least harm" and these two criteria don’t come into conflict.

Q2 Analysis

This is a Q2 violation if, for example, you would object to someone allowing you to come to harm for the benefit of others.

Discussion

This is a utilitarian way of looking at morality. In many situations, it leads to what most people would agree is the preferred solution to a problem. For example, most people would agree that, given the question of what a single man should do with leftover Thanksgiving turkey, it would be better for him to give some away than for him to eat what he could and let the rest go to waste. This solution maximizes benefit and would mesh well with essentially all widely accepted moral systems.

Problems begin when it comes to calculating benefit and harm. In order to assign value to the consequences of an action, you need to be able to quantify those consequences. Is making a room full of people laugh a greater or lesser benefit than making one person ecstatically happy? Does a baby’s life have more value than an accomplished old man’s? How many people would Soylent Green have to be able to feed before it was morally allowable to kill people to make it?

If you find a consistent system for assigning value, you will be able to make moral decisions by performing calculations. The question then is whether the results of these calculations are what you would hope them to be. That is, does maximizing benefit while minimizing harm lead you to behave in a way that you feel is moral?
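
To see what making moral decisions by performing calculations might look like in practice, here is a toy sketch in Python. Every utility score in it is an invented assumption, which is precisely where the trouble starts.

    # A toy sketch of utilitarian "moral arithmetic." Every score below is
    # an invented assumption; no agreed-upon way to assign such values exists.

    def net_value(consequences):
        """Sum the hypothetical benefit/harm scores of an action's consequences."""
        return sum(consequences.values())

    # The Thanksgiving turkey example: eat what you can and waste the rest,
    # versus giving the leftovers away.
    actions = {
        "eat and waste": {"own satisfaction": 2, "food wasted": -1},
        "give away":     {"own satisfaction": 2, "others fed": 3},
    }

    # The "right" action is simply the one with the greatest net benefit.
    best = max(actions, key=lambda name: net_value(actions[name]))
    print(best)  # "give away", but only because of the scores we assumed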

It is possible — even likely — that a rigorous system for assigning value for the purposes of making moral decisions does not exist. If that is indeed the case, how will you make decisions when the mathematical values are incalculable or too close to call?

One thing this mathematical approach to morality lacks is any consideration of intentions. If you are looking only at the result of an action, does the intent behind the action make any difference? If so, shouldn’t you quantify it? And if you quantify it, how much harm or benefit can allowably be offset by good intentions?

Consider this situation: you are a general in the air force of a country that is at war. You learn that an enemy general is hiding in an orphanage. There are two ways you might justify bombing the orphanage. You might decide that killing the general will disrupt the enemy, shorten the war, and save thousands of lives, so bombing the building is a net good, even though there are children in it. Or you might decide that killing the children will show the enemy that you will stop at nothing to win, shorten the war, and save thousands of lives, so bombing the building is a net good, because there are children in it.

In both cases, the mathematics of benefit and harm work out the same, but many people would say that whether or not your goal is to kill children is morally important. Many people would also say that killing these children is not morally allowed, no matter how many other people’s lives it might save.
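
As a toy illustration of why the mathematics works out the same, consider this sketch (again with invented numbers). Both justifications count identical consequences, and if intent were quantified it would simply become one more number that benefit could offset:

    # Toy version of the orphanage example, with invented scores. Pure
    # benefit/harm arithmetic counts the same consequences either way.
    consequences = {"general killed, war shortened": 1000,
                    "children killed": -50}

    net_if_goal_is_general = sum(consequences.values())
    net_if_goal_is_children = sum(consequences.values())
    print(net_if_goal_is_general == net_if_goal_is_children)  # True

    # If intent were quantified, it would become just another number,
    # raising the question of how much harm good intentions may offset.
    intent_penalty = {"kill the general": 0, "kill the children": -200}
    print({goal: sum(consequences.values()) + penalty
           for goal, penalty in intent_penalty.items()})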

Even when using the mathematical approach, you might want to consider whether some actions are always wrong, regardless of how many people they benefit. For example, is it moral to:

If you find that there are factors that should sometimes take precedence over mathematical judgement, then perhaps you can use mathematical value as a tie breaker instead of a primary method of making moral decisions. On the other hand, you might prefer to have value as your primary decision-making factor, and argue that although these other considerations may feel as if they are more important, there is no logical reason to treat them as more important.

You are encouraged to leave your answers to the questions posed in this post in the comments section. This post is based on an excerpt from Ask Yourself to be Moral, by D. Cancilla, available at LuLu.com and Amazon.com. See the 2Q system page for details of the philosophical system mentioned in this post.

Posted on February 11, 2011 at 9:59 pm by ideclare
In: 2Q

One Response

  1. Written by Victor on February 12, 2011 at 2:05 am

    I could be wrong, and I am open to other people’s opinions; the moment you believe you are completely right about something is the moment you are truly wrong…

    I think good, universal good is:
    - Balance – Because we just need to look at the Earth and the environment to realize that excess in any way leads to negative consequences.

    - Knowledge – Because it is our main weapon, and because without it we are gullible and allow prejudice to blind us to reality, its wonders, and its dangers. You can tell a man is intelligent by his answers; you can tell a man is wise by his questions.

    - Humility – Greed is a powerful force that can lead entire civilizations to ultimate destruction. We need to realize that things like country, money, and religion are just things we invented, and that they are only blocking us from working together to solve our problems: hunger, plagues, wars, natural disasters, etc. Humanity is one, and the universe, our planet, has enough space for everyone; without humility we will end up killing ourselves before realizing our true potential.

    These three principles shape universal good, and I think they are what we need to base ourselves on to make tough choices like the ones presented here.

    It’s my opinion, and I am open to discussion.
