Consequentialism
What is Utilitarianism?
The first ethical theory we will look at more closely is consequentialism. Consequentialism claims that whether an action is right or wrong depends on the consequences that it brings about. Any consequentialist ethical theory therefore has to provide an account of how we decide which consequences are good and which are bad. The most famous form of consequentialist ethics is utilitarianism, which was first proposed by Jeremy Bentham and later developed by John Stuart Mill in his 19th-century work ‘Utilitarianism’.
Mill was a hedonist and believed that, above all, we desire happiness and the absence of pain. There are of course other things that we value, but these are simply a means to the ultimate end, which is happiness. Mill also claims that we ought to promote as much happiness as possible. Utilitarians claim that actions are “right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness” and that what is desirable is “pleasure and the freedom from pain” [1]. In other words, the morally right action is the one that produces the greatest amount of happiness or welfare. This is not the greatest amount of happiness for the moral agent but the greatest amount of happiness overall - this is known as the ‘Greatest Happiness Principle’. So let us consider a simplified example. Imagine we are faced with an ethical dilemma in which there are two actions we can take - Action 1 and Action 2:
             Happiness Produced    Pain Produced    Net Happiness
Action 1     +20 units             -10 units        +10 units
Action 2     +15 units               0 units        +15 units
In the above scenario, utilitarianism would claim that the correct option to take is Action 2, as it produces the greatest amount of net happiness overall. In such cases utilitarianism can provide an objective justification for choosing Action 2 - the reasoning can be explained and shown clearly should the moral agent have to justify their decision. However, it can be difficult to calculate the amount of happiness and pain that each action will produce. Critics claim that the moral agent in these cases will be bogged down by a ‘utilitarian calculus.’
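To make the simple arithmetic behind this choice explicit, here is a minimal sketch (in Python, purely for illustration - the action names and unit values are taken from the table above) of the ‘utilitarian calculus’ in this simplified form: compute the net happiness of each action and select the one with the highest value.

```python
# A minimal illustration of the 'utilitarian calculus' described above.
# Each action is listed with the happiness and pain (in arbitrary units)
# it is expected to produce; the values come from the table above.
actions = {
    "Action 1": {"happiness": 20, "pain": 10},
    "Action 2": {"happiness": 15, "pain": 0},
}

def net_happiness(outcome):
    """Net happiness = happiness produced minus pain produced."""
    return outcome["happiness"] - outcome["pain"]

# The Greatest Happiness Principle, in this simplified form, selects the
# action whose consequences yield the highest net happiness overall.
best_action = max(actions, key=lambda name: net_happiness(actions[name]))

for name, outcome in actions.items():
    print(f"{name}: net happiness = {net_happiness(outcome)} units")
print(f"Utilitarianism recommends: {best_action}")  # -> Action 2
```

Of course, in real dilemmas the hard part is not this final comparison but assigning the unit values in the first place - which is exactly the difficulty the critics point to.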
Higher and Lower Order Pleasures
Bentham, Mill’s mentor and teacher, was a quantitative hedonist: he held that what matters is simply the amount of pleasure produced, regardless of its kind. A famous criticism of Bentham’s hedonism was that, on his view, it would be better to be a satisfied pig than to be Socrates, the famous philosopher - the pig, after all, was happier over the course of its life than Socrates, who endured much strife and was ultimately put to death.
Mill addressed this criticism by introducing a qualitative version of utilitarianism, which holds that some pleasures are more valuable than others. He referred to these as higher order pleasures - pleasures such as poetry and philosophy. In contrast, there are lower order pleasures, such as watching TV or sitting idly on one’s own. Mill claims that we should always aim to promote higher order pleasures: no matter how much of a lower order pleasure is on offer, we should choose the higher order pleasure, even if the quantity of the former is greater. He summarises this in Chapter 2 of Utilitarianism:
‘If I am asked what I mean by difference of quality in pleasures, or what makes one pleasure more valuable than another, merely as a pleasure, except its being greater in amount, there is but one possible answer. If one of the two is, by those who are competently acquainted with both, placed so far above the other that they prefer it, even though knowing it to be attended with a greater amount of discontent, and would not resign it for any quantity of the other pleasure which their nature is capable of, we are justified in ascribing to the preferred enjoyment a superiority in quality so far outweighing quantity as to render it, in comparison, of small account.’ [1]
An Example of Utilitarianism in Practice
Now we will look at an example of how utilitarianism may be used in a medical context. To do so, let’s consider the allocation of limited resources, such as ventilators, in hospitals during the Covid-19 outbreak, and how we might decide which patients should receive them. In this example, let us imagine that a hospital with limited capacity to deal with Covid-19 patients has to decide between admitting a 20-year-old patient, who is fit and well, and a 75-year-old patient, who has a history of heart disease, to its last available ventilator.
When deciding what is ethically right or wrong, utilitarians want to maximise the overall amount of happiness or welfare. In this case, they would argue that welfare would be maximised by giving the last ventilator to the 20-year-old, because, all things being equal, they will live longer and may contribute more to society over a longer period of time. This is the thought process guiding the policies of many hospitals and healthcare trusts across the world regarding who should be treated. In Italy in particular, where the death toll and strain on hospitals reached a critical point, there is now a policy of prioritising younger patients and healthcare workers. One doctor speaking to Business Insider claimed:
"If you had, let's say, an ICU that was overwhelmed, you're probably going to try and give some extra attention to healthcare workers because you need them to deliver care," he said. "The rationale isn't that they're more worthy; it's that they can contribute in the longer run to saving more lives." This is of course a contentious rationale. During the Covid-19 pandemic it was interesting to see that many people rejected this kind of justification and found it abhorrent that we would place a predictive value on a life.
Criticisms of Utilitarianism
So far, we have considered utilitarianism and its use in ethical dilemmas. We will now focus on some common objections to utilitarianism:
Objection 1 - The ‘One Thought Too Many’ Objection:
Bernard Williams [2] objects that utilitarian decision making commits us to having ‘one thought too many.’ Imagine Person A, a strong swimmer, who sees their wife drowning in a pond. We would expect Person A to jump straight into the pond and save her. Intuitively, this would be instinctive and would not require reflection on whether saving their wife would produce maximum utility. The ‘one thought too many’ objection therefore raises an interesting debate over the motivational state of the moral agent. Are we to believe that Person A is motivated to save their wife because ‘it is the action which would produce the greatest amount of overall happiness’, or - as we would assume - simply to help the person with whom they have a relationship? Those who defend utilitarianism may claim that this objection does not apply to most everyday ethical decisions such as the one Person A faces. In such cases, intuition will suffice and the answer to the dilemma will be so obvious that a further level of reflection is not required. Calculating the happiness or pain an action may produce is reserved for more complex ethical decisions where further deliberation is required.
Objection 2 - The ‘Survival Lottery Thought Experiment’
Another common objection to utilitarianism is that, by focusing only on promoting the greatest overall good, utilitarians can permit - and indeed require - actions which appear morally wrong. To demonstrate this I will use an adapted version of John Harris’ ‘Survival Lottery’ thought experiment [3]: ‘Let us imagine we have five patients who require organ transplants. On the next ward is Patient A, who has a curable disease and is also a suitable donor for the other patients. Utilitarianism would permit a doctor to allow Patient A to die and for their organs to be used to save the lives of the other five patients. The justification is that this action would produce the most overall welfare.’
Clearly, sacrificing an individual simply because doing so produces the greatest overall utility is an abhorrent thought, and one which conflicts with our intuitions. What this example highlights is that some actions appear wrong regardless of whether they produce the greatest overall happiness - this is the view of duty based theories of ethics, which we discuss in more detail elsewhere.
This criticism is not a silver bullet that discredits utilitarianism entirely. Mill could agree that sacrificing an individual in this way is clearly wrong and appeal to the system of rights he discusses in Chapter 5 of Utilitarianism. This move may hold some merit, but many argue that Mill never makes explicit how his utilitarianism and his principles of justice fit together. Others may claim that rule utilitarianism is another way to escape this criticism.
Objection 3 - Nozick’s Experience Machine:
The final criticism of utilitarianism we will consider is Nozick’s. His claim is that it is not simply happiness that we desire: we desire other things too, things that are independent of happiness and that are not merely means to it. He introduces the thought experiment of the experience machine in his work ‘Anarchy, State, and Utopia’:
‘Suppose there were an experience machine that would give you any experience that you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life's experiences? If you are worried about missing out on desirable experiences, we can suppose that business enterprises have researched thoroughly the lives of many others. You can pick and choose from their large library or smorgasbord of such experiences, selecting your life's experiences for, say, the next two years. After two years have passed, you will have ten minutes or ten hours out of the tank, to select the experiences of your next two years. Of course, while in the tank you won't know that you're there; you'll think it's all actually happening. Others can also plug in to have the experiences they want, so there's no need to stay unplugged to serve them. (Ignore problems such as who will service the machines if everyone plugs in.) Would you plug in? What else can matter to us, other than how our lives feel from the inside? Nor should you refrain because of the few moments of distress between the moment you've decided and the moment you're plugged. What's a few moments of distress compared to a lifetime of bliss (if that's what you choose), and why feel any distress at all if your decision is the best one?’
Nozick claims that most of us would not want to plug into the machine despite the promise of happiness and the absence of pain. He gives three reasons why:
‘First, we want to do certain things, and not just have the experience of doing them. In the case of certain experiences, it is only because first we want to do the actions that we want the experiences of doing them or thinking we've done them.’[4]
‘A second reason for not plugging in is that we want to be a certain way, to be a certain sort of person. Someone floating in a tank is an indeterminate blob. There is no answer to the question of what a person is like who has been long in the tank. Is he courageous, kind, intelligent, witty, loving? It's not merely that it's difficult to tell; there's no way he is. Plugging into the machine is a kind of suicide. It will seem to some, trapped by a picture, that nothing about what we are like can matter except as it gets reflected in our experiences.’ [4]
‘Thirdly, plugging into an experience machine limits us to a man-made reality, to a world no deeper or more important than that which people can construct. There is no actual contact with any deeper reality, though the experience of it can be simulated.’ [4]
References:
[1] Mill, J. S. (2014). Utilitarianism (Cambridge Library Collection - Philosophy). Cambridge: Cambridge University Press.
[2] Smart, J. J. C. & Williams, B. (1973). Utilitarianism: For and Against. Cambridge: Cambridge University Press.
[3] Harris, J. (1975). ‘The Survival Lottery’. Philosophy, 50: 81-87.
[4] Nozick, R. (1974). Anarchy, State, and Utopia. New York: Basic Books.
For more information on consequentialism and utilitarianism follow the links below:
https://plato.stanford.edu/entries/consequentialism/
https://plato.stanford.edu/entries/utilitarianism-history/
https://plato.stanford.edu/entries/mill-moral-political/
https://www.iep.utm.edu/util-a-r/
https://ethics.org.au/ethics-explainer-consequentialism/