Can Rightness Be Determined by Value?
The right thing to do is the one that has the greatest benefit or does the least harm.
This is not a Q1 violation so long as you can define "greatest benefit" and "least harm" and these two criteria don’t come into conflict.
This is a Q2 violation if, for example, you would object to someone allowing you to come to harm for the benefit of others.
This is a utilitarian way of looking at morality. In many situations, it leads to what most people would agree is the preferred solution to a problem. For example, most people would agree that, given the question of what a single man should do with leftover Thanksgiving turkey, it would be better for him to give some away than to eat what he could and let the rest go to waste. This solution maximizes benefit and would mesh well with essentially all widely accepted moral systems.
Problems begin when it comes to calculating benefit and harm. In order to assign value to the consequences of an action, you need to be able to quantify those consequences. Is making a room full of people laugh a greater or lesser benefit than making one person ecstatically happy? Does a baby’s life have more value than an accomplished old man’s? How many people would Soylent Green have to be able to feed before it was morally allowable to kill people to make it?
If you find a consistent system for assigning value, you will be able to make moral decisions by performing calculations. Now the question is whether or not the results of these calculations are what you would hope them to be. That is, does maximizing benefit and minimizing harm lead you to behave in a way that you feel is moral?
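To make the idea of "moral decisions by calculation" concrete, here is a minimal sketch of what such a calculation might look like. The action names and the numeric values are invented placeholders, not anything the post asserts; the whole difficulty, as the post notes, is whether numbers like these can be assigned at all.

```python
def net_value(consequences):
    """Sum the benefits (positive numbers) and harms (negative numbers)
    of an action's consequences."""
    return sum(consequences)

def choose_action(options):
    """Pick the action whose consequences have the greatest net value."""
    return max(options, key=lambda name: net_value(options[name]))

# The Thanksgiving-turkey example, with invented magnitudes:
options = {
    "eat what you can, waste the rest": [+3],     # your own enjoyment
    "give the leftovers away": [+3, +5],          # your enjoyment plus others' benefit
}
print(choose_action(options))  # → "give the leftovers away"
```

Note that the procedure is only as good as the numbers fed into it, which is exactly the problem raised above.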
It is possible — even likely — that a rigorous system for assigning value for the purposes of making moral decisions does not exist. If that is indeed the case, how will you make decisions when the mathematical values are incalculable or too close to call?
One thing this mathematical approach to morality lacks is any consideration of intentions. If you are judging an action by its results, does the intent behind the action make any difference? If so, shouldn’t you quantify it? And if you do quantify it, how much harm or benefit can allowably be offset by good intentions?
Consider this situation: you are a general in the air force of a country that is at war. You learn that an enemy general is hiding in an orphanage. There are two ways you might justify bombing the orphanage. You might decide that killing the general will disrupt the enemy, shorten the war, and save thousands of lives, so bombing the building is a net good, even though there are children in it. Or you might decide that killing the children will show the enemy that you will stop at nothing to win, shorten the war, and save thousands of lives, so bombing the building is a net good, because there are children in it.
In both cases, the mathematics of benefit and harm work out the same, but many people would say that whether or not your goal is to kill children is morally important. Many people would also say that killing these children is not morally allowed, no matter how many other people’s lives it might save.
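The orphanage case can be sketched numerically to show why a pure value calculation cannot distinguish the two justifications. The figures and the "intent penalty" below are hypothetical illustrations only; choosing a number for the penalty is itself the quantification problem the post raises.

```python
def net_value(benefit, harm, intent_penalty=0):
    """Net value of an action, optionally docked for bad intent."""
    return benefit - harm - intent_penalty

# Same consequences either way: the war is shortened, children die.
disrupt_enemy   = net_value(benefit=1000, harm=50)
terrorize_enemy = net_value(benefit=1000, harm=50)
print(disrupt_enemy == terrorize_enemy)  # → True: the math cannot tell them apart

# Quantifying intent changes the comparison, but only by
# picking an arbitrary number for the penalty:
terrorize_enemy = net_value(benefit=1000, harm=50, intent_penalty=200)
print(disrupt_enemy > terrorize_enemy)  # → True
```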
Even when using the mathematical approach, you might want to consider whether some actions are always wrong, regardless of how many people they benefit. For example, is it moral to:
- Kill one innocent person to save many innocent people?
- Beat a child for its own good?
- Own slaves that help you grow inexpensive food for everyone?
- Let one person be eaten by lions to entertain thousands in an arena?
- Humiliate one person to entertain hundreds in a comedy club?
- Torture someone to get life-saving information?
If you find that there are factors that should sometimes take precedence over mathematical judgment, then perhaps you can use mathematical value as a tie breaker instead of as your primary method of making moral decisions. On the other hand, you might prefer to keep value as your primary decision-making factor, and argue that although these other considerations may feel more important, there is no logical reason to treat them as such.
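The "tie breaker" arrangement described above can be sketched as a two-stage decision: rule out any action that violates an absolute prohibition first, then maximize value among what remains. The prohibitions and values here are hypothetical placeholders, chosen only to show the structure of the idea.

```python
# Hypothetical absolute prohibitions that override any value calculation:
FORBIDDEN = {"kill an innocent", "own slaves"}

def permissible(action):
    """An action is permissible if no absolute prohibition rules it out."""
    return action not in FORBIDDEN

def decide(options):
    """options maps action names to net values.
    Filter out forbidden actions, then pick the highest-value survivor."""
    allowed = {action: value for action, value in options.items()
               if permissible(action)}
    if not allowed:
        return None  # no permissible action; the system gives no answer
    return max(allowed, key=allowed.get)

options = {"kill an innocent": 100, "do nothing": 0, "negotiate": 10}
print(decide(options))  # → "negotiate", despite its lower raw value
```

Swapping the two stages — maximizing value first and consulting prohibitions only to break ties — gives the opposite arrangement the paragraph above also contemplates.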
You are encouraged to leave your answers to the questions posed in this post in the comments section. This post is based on an excerpt from Ask Yourself to be Moral, by D. Cancilla, available at LuLu.com and Amazon.com. See the 2Q system page for details of the philosophical system mentioned in this post.