As explained in Chapter 1: Introduction to Utilitarianism, the core idea of utilitarianism is that we ought to improve the well-being of everyone by as much as possible. Utilitarian theories generally share four elements: consequentialism, welfarism, impartiality, and aggregationism. Classical utilitarianism is distinctive because it accepts two additional elements: first, hedonism as a theory of well-being; second, the total view of population ethics. There are several further important distinctions between utilitarian theories: we can distinguish scalar from maximizing or satisficing utilitarianism, expectational from objective utilitarianism, multi-level from single-level utilitarianism, and global from hybrid utilitarianism.

The Definition of Utilitarianism

Utilitarianism is not a single viewpoint, but a family of related ethical theories. What these theories have in common are four defining elements:

  1. Consequentialism
  2. Welfarism
  3. Impartiality
  4. Aggregationism
An accurate and comprehensive definition of utilitarianism will include all four elements. We can thus define utilitarianism as follows:

Utilitarianism is the view that one morally ought to promote just the sum total of well-being.1

Sometimes philosophers talk about "welfare" or "utility" rather than "well-being", but they typically mean the same thing.

The Four Elements of Utilitarianism

Consequentialism

Utilitarianism accepts consequentialism, which is defined as follows:

Consequentialism is the view that one morally ought to promote just good outcomes.

On this view, bringing about good outcomes is all that ultimately matters, from a moral perspective. Thus, to evaluate whether to perform an action, we should look at its overall consequences, rather than any of its other features (such as the type of action that it is). For instance, when breaking a promise has bad consequences—as it usually does—consequentialists oppose it. However, breaking a promise is not considered wrong in itself. In exceptional cases, breaking a promise could be the morally best action available, such as when it is necessary to save a life.

Consequentialism's rivals offer alternative accounts of what one morally ought to do that depend on features other than the value of the resulting outcome. For example, according to deontology, morality is about following a system of rules, like "Do Not Lie" or "Do Not Steal". And according to virtue ethics, morality is fundamentally about having a virtuous character. Much of consequentialism's appeal may stem from the conviction that making the world a better place is simply more important than any of these competing moral goals.

Direct and Indirect Consequentialism: Explaining the Difference Between Act Utilitarianism and Rule Utilitarianism

When offering a consequentialist account of rightness, a common distinction in the philosophical literature is between two views called direct consequentialism and indirect consequentialism.

According to the direct view, the rightness of an action (or rule, policy, etc.) depends only on its consequences. On this view, to determine the right action in some set of feasible actions, we should directly evaluate the consequences of the actions to see which has the best consequences. The best known direct view is act utilitarianism (or act consequentialism), which directly assesses the moral rightness of (and only of) actions.

In contrast, according to indirect consequentialism we should evaluate the moral status of an action indirectly, based on its relationship to something else (such as a rule), whose status is itself assessed in terms of its consequences. The most famous indirect view is known as rule utilitarianism (or rule consequentialism). According to rule utilitarianism, what makes an action right is that it conforms to the set of rules that would have the best utilitarian consequences if they were generally accepted or followed. Since an action's morality depends only on its conformity to a rule, rather than its own consequences, rule utilitarianism is a form of indirect consequentialism.

On our definition of consequentialism, only the direct view is a genuinely consequentialist position, and rule utilitarianism/consequentialism, despite the name, is not a type of consequentialism.2 As Brad Hooker, the world's leading rule consequentialist, argues, the most plausible form of rule consequentialism is not motivated solely by the consequentialist commitment to making outcomes as good as possible: rather, he defends rule consequentialism on the grounds that it impartially justifies intuitively plausible moral rules.3 This marks an important difference from foundationally consequentialist theories.

Though act utilitarianism assesses only actions (rather than rules) in terms of "rightness", it nevertheless also recognizes the importance of having strong commitments to familiar moral rules. Rules such as "don't lie" and "don't kill" are regarded as useful decision procedures—guidelines we should almost always follow—but not as standards of moral rightness. For a related discussion that seeks to clarify this point further, see the section on "multi-level utilitarianism" below.

Welfarism

Consequentialists differ regarding what they mean by good consequences. Utilitarians endorse welfarism, which is defined as follows:

Welfarism is the view that only the welfare (also called well-being) of individuals determines the value of an outcome.4

Specifically, from a welfarist perspective, good consequences are those which increase well-being in the world, while bad consequences are those which decrease it. Philosophers use the term well-being to describe everything that is good for a person in itself, as opposed to things that are only instrumentally good for a person. For example, money can buy many useful things and is thus instrumentally good for a person, but it is not a component of their well-being.

Different theories of well-being regard different things as the constituents of well-being. The three most prevalent theories are hedonism, desire theories, and objective list theories.

While every plausible view recognizes that well-being is important, some philosophers reject welfarism on the grounds that other things matter as well. For example, egalitarians may hold that inequality is intrinsically bad, even when it arises in a way that benefits some and harms no one. Others might hold that environmental and aesthetic value must be considered in addition to well-being.

Impartiality and the Equal Consideration of Interests

Utilitarianism is committed to a conception of impartiality that builds in the equal consideration of interests:

Impartiality is the view that the identity of individuals is irrelevant to the value of an outcome. Furthermore, equal weight must be given to the interests of all individuals.

Accepting this conception of impartiality means treating well-being as equally valuable regardless of when, where or to whom it occurs. As utilitarian philosopher Henry Sidgwick states: "the good of any one person is no more important from the point of view (...) of the universe than the good of any other".5 As a consequence, utilitarians value the well-being of all individuals equally, regardless of their nationality, gender, where or when they live, or even their species. According to utilitarianism, in principle you should not even privilege the well-being of yourself or your family over the well-being of distant strangers (though there may be good practical reasons to do so).6

Not all philosophers agree that impartiality is a core feature of morality. They might hold that we are allowed, or even required, to be partial towards a particular group, such as our friends and family. Or they might advance an alternative conception of "impartiality" that does not require the equal consideration of interests. For example, prioritarianism gives extra weight to the interests of the worst-off, whoever they might be.

Aggregationism

According to utilitarianism, the overall value in the world is given by the sum total of well-being in it. This means utilitarians accept aggregationism, which is defined as follows:7

Aggregationism is the view that the value of the world is the sum8 of the values of its parts, where these parts are local phenomena such as experiences, lives, or societies.9

When combined with welfarism and the equal consideration of interests, this view implies that we can meaningfully add up the well-being of different individuals, and use this total to determine which trade-offs are worth making. For example, utilitarianism claims that improving five lives by some amount is better than improving one life by the same amount, and that it is five times better.
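Put a little more formally (the notation here is ours, introduced purely for illustration): if $w_i$ denotes the well-being of individual $i$ in an outcome containing $n$ individuals, welfarism plus aggregationism values that outcome as

$$V = \sum_{i=1}^{n} w_i.$$

On this accounting, improving five lives by some amount $\Delta$ adds $5\Delta$ to the total, which is five times the $\Delta$ added by improving a single life by the same amount.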

Some philosophers deny any form of aggregationism. They may believe, for instance, that small benefits delivered to many people cannot outweigh large benefits to a few people. To illustrate this belief, suppose you face the choice between saving a given person's life or preventing a large group of people from experiencing mild headaches. An anti-aggregationist might hold that saving the life is more morally important than preventing the headaches, regardless of the number of headaches prevented. Utilitarians would reason that if there are enough people whose headaches you can prevent, then the total well-being gained by preventing the headaches is greater than the well-being gained by saving the life, so you are morally required to prevent the headaches.10 The number of headaches we would have to relieve for this to be better than saving a life might, in practice, be extremely high—but utilitarians nonetheless believe there is some number of headaches at which this trade-off should be made.

In practice, many individuals and policymakers appear to endorse these kinds of trade-offs. For example, allowing cars to drive fast on roads increases the number of people who die in accidents. Placing exceedingly low speed limits would save lives at the cost of inconveniencing many drivers. Most people demonstrate an implicit commitment to aggregationism when they judge it worse to impose these many inconveniences for the sake of saving a few lives.

The Two Elements of Classical Utilitarianism

Above we have explained the four elements accepted by all utilitarian theories: consequentialism, welfarism, impartiality, and aggregationism. While this is useful for distinguishing utilitarian from non-utilitarian moral theories, there are also important distinctions between utilitarian theories. Depending on how a utilitarian theory is spelled out, it might have widely differing practical implications and may be more or less compelling.

The oldest and most prominent utilitarian theory is classical utilitarianism, which can be defined as follows:

Classical utilitarianism is the view that one morally ought to promote just the sum total of happiness over suffering.

Classical utilitarianism can be distinguished from the wider utilitarian family of views because it accepts two additional elements: first, hedonism, the view that well-being consists in the balance of positive over negative conscious experiences; and second, the total view of population ethics, on which one outcome is better than another if and only if it contains a greater sum total of well-being, where well-being can be increased either by making existing people better off or by creating new people with good lives.

Theories of Well-Being: Hedonism

→ Main article: Theories of Well-Being

Classical utilitarianism accepts hedonism as a theory of well-being, which is defined as follows:

Hedonism is the view that well-being consists in, and only in, the balance of positive over negative conscious experiences.

Ethical hedonists believe that the only things good in themselves are the experiences of positive conscious states, such as enjoyment and pleasure; and the only things bad in themselves are the experiences of negative conscious states, such as misery and pain. Happiness and suffering are commonly used by philosophers as shorthand for the terms positive conscious experience and negative conscious experience, respectively.

We discuss the arguments for and against hedonism—and its two major rivals, desire theories and objective list theories—in the chapter Theories of Well-Being.

Population Ethics: The Total View

→ Main article: Population Ethics

Utilitarians agree that if the number of people is held constant, we should promote the sum total of well-being in that fixed population.11 But in reality, the population is not fixed. We have the option of bringing more people into existence, such as by having children. If these additional people would have good lives, is that a way of making the world better? This question falls in the domain of population ethics, which deals with the moral problems that arise when our actions affect who and how many people are born and at what quality of life.

Classical utilitarianism accepts a population ethical theory known as the total view, which holds that:

One outcome is better than another if and only if it contains greater total well-being.

Importantly, one population may have greater total well-being than another in virtue of having more people. One way to calculate this total is to multiply the number of individuals by their average quality of life. For example, the total view regards a world with 100 inhabitants at average well-being level 10 as just as good as another world with 200 inhabitants at average well-being level 5—both worlds contain 1,000 units of well-being.
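As a quick arithmetic check (using our own shorthand): if a population contains $N$ individuals at average well-being level $\bar{w}$, the total view assigns it a value of

$$N \times \bar{w},$$

so the two worlds in the example come out exactly equal: $100 \times 10 = 1{,}000 = 200 \times 5$.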

The total view implies that we can improve the world in two ways: either we can improve the quality of life of existing people, or we can increase the number of people living positive lives.12 In practice, there are often trade-offs between making existing people happier and creating additional happy people. On a planet with limited resources, adding more people to an already large population may at some point diminish the quality of life of everyone else severely enough that total well-being decreases.

The total view's foremost practical implication is giving great importance to ensuring the long-term flourishing of civilization. Since the total well-being enjoyed by all future people is potentially enormous, according to the total view, the mitigation of existential risks—which threaten to destroy this immense future value—is one of the principal moral issues facing humanity.

The major alternatives to the total view in population ethics include the average view, variable value theories, critical level (and range) theories, and person-affecting views. We explain and discuss these theories in the chapter on Population Ethics.

Further Distinctions Among Utilitarian Theories

Though we have now explained all the core utilitarian elements, there remain further distinctions within utilitarian theories. After selecting your favored theory of well-being and population ethics view, you should also consider:

  1. how to construct a conception of rightness;
  2. when to focus on actual vs. expected consequences;
  3. the role of simple heuristics, derived from utilitarianism, to guide our actions in everyday life; and
  4. what forms of moral evaluation apply to rules, motives, character, and other objects of moral interest beyond actions.

Reconstructing Rightness: Maximizing, Satisficing, and Scalar Utilitarianism

Utilitarianism is most often stated in its maximizing form: that, within any set of options, the action that produces the most well-being is right, and all other actions are wrong.

Though this is the most common statement of utilitarianism,13 it may be misleading in some respects. Utilitarians agree that you ideally ought to choose whatever action would best promote overall well-being. That's what you have the most moral reason to do. But they do not recommend blaming you every time you fall short of this ideal.14 As a result, many utilitarians consider it misleading to take their claims about what ideally ought to be done as providing an account of moral "rightness" or "obligation" in the ordinary sense.15

To further illustrate this, suppose that Sophie could save no-one, or save 999 people at great personal sacrifice, or save 1,000 people at even greater personal sacrifice. From a utilitarian perspective, the most important thing is that Sophie saves either the 999 people or the 1,000 people rather than saving no-one; the difference between Sophie's saving 999 people and 1,000 people is comparatively small. However, on the maximizing form of utilitarianism, both saving no-one and saving the 999 people would simply be labeled as "wrong". While we might well accept a maximizing account of what agents ideally ought to do, there are further moral claims we may want to make in addition.

Satisficing utilitarianism instead holds that, within any set of options, an action is right if and only if it produces enough well-being.16 This proposal has its own problems and has not yet found wide support.17 In the case given in the previous paragraph, we still want to say there is good reason to save the 1,000 people over the 999 people; labeling both actions as right would risk ignoring the important moral difference between these two options. So while we may be drawn to a satisficing account of what agents are obliged to do in order to meet minimal moral standards,18 this view, too, requires supplementation.

Instead, it is more popular among leading utilitarians today to endorse a form of scalar utilitarianism, which may be defined as follows:

Scalar utilitarianism is the view that moral evaluation is a matter of degree: the more that an act would promote the sum total of well-being, the more moral reason one has to perform that act.19

On this view, there is no fundamental, sharp distinction between 'right' and 'wrong' actions, just a continuous scale from morally better to worse.20

Philosophers have traditionally conceived of maximizing, satisficing, and scalar utilitarianism as competing views. But more recently, it has been suggested that utilitarians could fruitfully accept all three, by constructing multiple different senses of 'should' or 'right'.21 According to this pluralist account, (i) maximizers are correct to hold that Sophie ideally should save all 1,000 people; (ii) satisficers may be correct to hold that saving 999 is minimally acceptable in a way that saving no-one is not; and (iii) scalar utilitarians are correct to hold that it's ultimately a matter of degree, and that the gain from saving 999 rather than zero dwarfs the gain from saving 1,000 rather than 999.

Expectational Utilitarianism Versus Objective Utilitarianism

Given our cognitive and epistemic limitations, we cannot foresee all the consequences of our actions. Many philosophers have held that what we ought to do depends on what we believe at the time of action. The most prominent example of this kind of account is expectational utilitarianism.22

Expectational utilitarianism is the view that we should promote expected well-being.

Expectational utilitarianism states that we should choose the action with the highest expected value.23 The expected value of an action is the sum of the value of each potential outcome multiplied by the probability of that outcome occurring. This approach follows expected utility theory, the widely accepted theory of decision-making under uncertainty in economics. So, for example, according to expectational utilitarianism we should choose a 10% chance of saving 1,000 lives over a 50% chance of saving 150 lives, because the former option saves an expected 100 lives (= 10% * 1,000 lives) whereas the latter option saves an expected 75 lives (= 50% * 150 lives). This provides an account of rational choice from a moral point of view.
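Spelled out (the symbols are ours, for illustration only): if action $a$ has possible outcomes with values $v_1, \dots, v_n$ occurring with probabilities $p_1, \dots, p_n$, its expected value is

$$EV(a) = \sum_{i=1}^{n} p_i \, v_i,$$

and the lifesaving example works out as $0.10 \times 1{,}000 = 100$ expected lives saved versus $0.50 \times 150 = 75$.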

Objective utilitarianism, by contrast, takes the extent to which we ought to perform an action to depend on the well-being it will in fact produce. The contrast between the two views may be illustrated using a thought experiment:

The risky treatment: A patient has a chronic runny nose that will leave her, if untreated, with a mildly lower well-being for the rest of her life. The only treatment for her condition is very risky, with only a 1% chance of success. If successful, the treatment will cure her completely, but otherwise it will lead to her death. Her doctor gives her the treatment, it succeeds and she is cured.

The doctor's action has—as a matter of pure chance and against overwhelming odds—led to the best outcome for the patient, and not treating the patient would have left her worse off. Thus, according to objective utilitarianism, the doctor has acted rightly. However, the action was wrong from the perspective of expectational utilitarianism. The expected consequences of giving the treatment, with its overwhelming odds of killing her, were much worse for the patient than not treating her at all. The doctor's decision turned out to be immensely fortunate, but it was extremely reckless and irrational given their available information.
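To make the expectational verdict concrete, here is a rough calculation with entirely made-up numbers (the original example gives none): suppose the patient's remaining lifetime well-being would be $100$ if cured, $90$ if she simply lives with the runny nose, and $0$ if she dies. Then

$$EV(\text{treat}) = 0.01 \times 100 + 0.99 \times 0 = 1 \quad\text{versus}\quad EV(\text{no treatment}) = 90,$$

so on any remotely similar assignment of values, expectational utilitarianism condemns the treatment, even though it happened to turn out for the best.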

When there is a conflict in this way between which act would be actually best versus which would be expectably best, is there a fact of the matter as to which act is "really" right? Many philosophers are drawn to the view that this is a merely verbal dispute. We can talk about the actually-best option as being "objectively right", and the expectably-best option as "subjectively right", and each of these concepts might have a legitimate theoretical role. For example, we should prefer that the actually-best outcome be realized. But we should also recognize that, given our cognitive limitations, in practice it would be wise to instead be guided by considerations of expected value.

Multi-level Utilitarianism Versus Single-level Utilitarianism

In the literature on utilitarianism, a useful distinction is made between a criterion of rightness and a decision procedure. A criterion of rightness tells us what it takes for an action (or rule, policy, etc.) to be right or wrong. A decision procedure is something that we use when thinking about what to do.24

Utilitarians believe that their moral theory is the correct criterion of rightness (at least in the sense of what "ideally ought" to be done, as discussed above). However, they almost universally discourage using utilitarianism as a decision procedure to guide our everyday actions. This would involve deliberately trying to promote aggregate well-being by constantly calculating the expected consequences of our day-to-day actions. For instance, it would be absurd to figure out what breakfast cereal to buy at the grocery store by thinking through all the possible consequences of buying different cereal brands to determine which one best contributes to overall well-being. The decision is low stakes, and not worth spending a lot of time on.

The view that treats utilitarianism as both a criterion of rightness and a decision procedure is known as single-level utilitarianism. Its alternative is multi-level utilitarianism, which takes utilitarianism to be only a criterion of rightness, not a decision procedure. It is defined as follows:

Multi-level utilitarianism is the view that individuals should usually follow tried-and-tested rules of thumb, or heuristics, rather than trying to calculate which action will produce the most well-being.

According to multi-level utilitarianism we should, under most circumstances, follow a set of simple moral heuristics—do not lie, steal, kill, etc.—expecting that this will lead to the best outcomes overall. Often, we should use the commonsense moral norms and laws of our society as rules of thumb to guide our actions. Following these norms and laws usually leads to good outcomes, because they are based on society's experience of what promotes individual well-being. The fact that honesty, integrity, keeping promises, and sticking to the law generally have good consequences explains why in practice utilitarians value such things highly, and use them to guide their everyday actions.25

In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians.26 Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis.

Sometimes, philosophers claim that multi-level utilitarianism is incoherent. But this is not true. Consider the following metaphor provided by Walter Sinnott-Armstrong: The laws of physics govern the flight of a golf ball, but a golfer does not need to calculate physical forces while planning shots.27 Similarly, multi-level utilitarians regard utilitarianism as governing the rightness of actions, but they do not need to calculate expected consequences to make decisions. To the extent that following the heuristics recommended by multi-level utilitarianism results in better outcomes, the theory succeeds.

A common objection to multi-level utilitarianism is that it is self-effacing. A theory is said to be (partially) self-effacing if it (sometimes) directs its adherents to follow a different theory. Multi-level utilitarianism often forbids using the utilitarian criterion when we make decisions, and instead recommends acting in accordance with non-utilitarian heuristics. However, there is nothing inconsistent about saying that your criterion of moral rightness comes apart from the decision procedure it recommends, and it does not mean that the theory fails.

The Difference Between Multi-Level Utilitarianism and Rule Utilitarianism

Multi-level utilitarianism sounds similar to the position known as rule utilitarianism, which we discussed above, and it is easy to confuse the two. Yet, the two theories are distinct and it is important to understand how they differ.

Multi-level utilitarianism takes utilitarianism to be the criterion of moral rightness. This means that it does not regard the heuristics it recommends as providing the ultimate ethical justification of any action; that justification is determined solely by the action's tendency to increase well-being. In contrast, for rule utilitarianism, conformity to a set of rules is the criterion of moral rightness: the reason an action is right or wrong is that it does or does not conform to the right set of rules.

Insofar as you share the fundamental utilitarian concern with promoting well-being, and you simply worry that deliberate pursuit of this goal would prove counterproductive, this should lead you to accept multi-level utilitarianism rather than any kind of rule utilitarianism.

Global Utilitarianism Versus Hybrid Utilitarianism

Most discussions of utilitarianism revolve around act utilitarianism and its criterion of right action. But it is important to appreciate that utilitarians can just as well consider the tendency of other things—like motives, rules, character traits, policies and social institutions—to promote well-being. Since utilitarianism is fundamentally concerned with promoting well-being, we should not merely want to perform those actions that promote well-being. We should also want the motives, rules, traits, policies, institutions, and so on, that promote well-being.

This aspect of utilitarianism has sometimes been overlooked, so those who seek to highlight its applicability to things besides actions may adopt the label "global utilitarianism" to emphasize this point:28

Global utilitarianism is the view that the utilitarian standards of moral evaluation apply to anything of interest.

Global utilitarianism assesses the moral nature of, for example, a particular character trait, such as kindness or loyalty, based on the consequences that trait has for the well-being of others—just as act utilitarianism morally evaluates actions. This broad focus can help the view to explain or accommodate certain supposedly "non-consequentialist" intuitions. For instance, it captures the understanding that morality is not just about choosing the right acts but is also about following certain rules and developing a virtuous character.

All utilitarians should agree with this much. But there is a further question regarding whether this direct utilitarian evaluation is exhaustive of moral assessment, or whether there is a role for other (albeit less important) kinds of moral evaluation to be made in addition. For example, must utilitarians understand virtue directly as a matter of character traits that tend to promote well-being,29 or could they appeal to a looser but more intuitive connection (such as representing a positive orientation towards the good)?30

A challenge for pure global utilitarianism is that it fails to capture all of the moral evaluations that we intuitively want to be able to make. For example, imagine a world in which moral disapproval was reliably counterproductive: if you blamed someone for doing X, that would just make them stubbornly do X even more in future. Since we only want people to do more good acts, would it follow that only good acts, and not bad ones, were blame-worthy?

Here it is important to distinguish two claims. One is the direct utilitarian assessment that it would be good to blame people for doing good acts, and not for doing bad ones, since that would yield the best results (in the imagined scenario). But a second—distinct—claim is that only bad acts are truly blame-worthy in the sense of intrinsically meriting moral disapproval.31

Importantly, these two claims are compatible. We may hold both that gratuitous torture (for example) warrants moral disapproval, and that it would be a bad idea to express such disapproval (if doing so would just make things worse).

This argument may lead one to endorse a form of hybrid utilitarianism, which we may define as follows:

Hybrid utilitarianism is the view that, while one morally ought to promote just overall well-being, the moral quality of an aim or intention can depend on factors other than whether it promotes overall well-being.

In particular, hybrid utilitarians may understand virtue and praise-worthiness as concerning whether the target individual intends good results, in contrast to global utilitarian evaluation of whether the target's intentions produce good results. When the two come into conflict, we should prefer achieving good results over merely intending them—so in this sense the hybrid utilitarian agrees with much that the global utilitarian wants to say. Hybridists just hold that there is more to say in addition.32 For example, if someone is unwittingly anti-reliable at achieving their goals (that is, they reliably achieve the opposite of what they intend, without realizing it), it would clearly be unfortunate were they to sincerely aim at promoting the general good, and we should stop them from having this aim if we can. But their good intentions may be genuinely virtuous and admirable nonetheless.

Purists may object that hybrid utilitarianism is not "really" a form of utilitarianism. And, indeed, it is a hybrid view, combining utilitarian claims (about what matters and what ought to be done) with claims about virtue, praise- and blame-worthiness that go beyond direct utilitarian evaluation. But so long as these further claims do not conflict with any of the core utilitarian claims about what matters and what ought to be done, there would seem no barrier to combining both kinds of claims into a unified view. This may prove a relief for those otherwise drawn to utilitarianism, but who find pure global utilitarian claims about virtue and blameworthiness to be intuitively implausible or incomplete.

Conclusion

All ethical theories belonging to the utilitarian family share four defining characteristics: they are consequentialist, welfarist, impartial, and aggregationist. As a result, they assign supreme moral importance to promoting the sum total of well-being.

Within this family, there are many variants of utilitarian theories. The most prominent of these is classical utilitarianism. This theory is distinguished by its acceptance of hedonism as a theory of well-being and the total view of population ethics.

There are several further distinctions between utilitarian theories: we can distinguish scalar from maximizing and satisficing utilitarianism, expectational from objective utilitarianism, multi-level from single-level utilitarianism, and hybrid from global utilitarianism.
