
PRIORITARIANISM WITHOUT CONSEQUENTIALISM*

ABSTRACT

According to prioritarianism, an influential theory of distributive justice, we have a stronger (non-egalitarian) reason to benefit people the worse off these people are (Parfit, 2012). Many authors have adopted a consequentialist version of prioritarianism. On this account, we have a consequentialist reason to benefit the worse off because the state of affairs where the worse off gains a given amount of utility is more valuable than the state of affairs where the better off gains roughly the same amount of utility. In this paper, we argue that the consequentialist approach to prioritarianism is problematic. However, it doesn't follow that the prioritarian doctrine per se is groundless. We then suggest that we can make sense of prioritarianism by appeal to a contractualist approach.

Keywords
Prioritarianism; Consequentialism; Contractualism; Distributive Justice


1. Introduction

In many circumstances of distributive justice where we could either benefit the worse off people of a society or give roughly the same benefit to the better off, it seems that we ought to benefit the worse off. Let's consider a simple example:1

(Case 1) Imagine that a young adult, Tom, suffers from slight impairment, which is a condition that renders it difficult for one to walk more than 2 km. Another young adult, Smith, suffers from very severe impairment, which is a condition that leaves one bedridden, save for the fact that one will be able to sit in a chair and be moved around in a wheelchair for part of the day if assisted by others. Suppose that you can either choose the treatment for Smith's very severe impairment, or the treatment for Tom's slight impairment. If you treat Smith's very severe impairment, you could improve his condition up to severe impairment - a condition in which he is no longer bedridden; rather, he is able to sit up on his own for the entire day but requires the assistance of others to move about. If you treat Tom's slight impairment, you would completely eliminate the slight impairment. Suppose each treatment, if effective, would increase the same amount of utility. Suppose also that you cannot choose both treatments. The question is, "As a morally motivated stranger, which treatment ought you to choose?" (Otsuka and Voorhoeve, 2009).

We have a strong intuition that a moral agent ought to treat Smith's very severe impairment instead of Tom's slight impairment. This paper doesn't challenge the reliability of the moral intuition. But it is an intriguing question what principles of distributive justice could ground the intuition. Clearly utilitarianism cannot be used to justify assigning priority to Smith over Tom, since it has been supposed that each treatment, if effective, would increase the same amount of utility.

Some may attempt to justify benefiting the worse off by appealing to egalitarianism, which endorses the intrinsic value of equality. According to this line of reasoning, a moral agent ought to treat Smith's very severe impairment because the inequality gap between Smith and Tom would be reduced. But the egalitarian approach has a downside. If equality were intrinsically valuable, then there would be a (pro tanto) reason for justifying a leveling-down action, which seems counterintuitive to many people.2

Recently, political philosophers have hotly discussed another theory of distributive justice, prioritarianism, for justifying the priority of benefiting the worse off (Crisp, 2011; Broome, 1991; Hirose, 2009; Otsuka and Voorhoeve, 2009; Parfit, 1991, 1997, 2012; Rabinowicz, 2002; Weirich, 1983). Derek Parfit was perhaps the first philosopher to develop the idea of prioritarianism to a sophisticated level. He defines prioritarianism as the view that 1) we have a stronger moral reason to benefit people the worse off these people are; and 2) this is not because equality is intrinsically valuable. Parfit puts it this way:

Benefits to the worse off matter more, but that is only because these people are at a lower absolute level. It is irrelevant that these people are worse off than others. Benefits to them would matter just as much even if there were no others who were better off. The chief difference is, then, this. Egalitarians are concerned with relativities: with how each person's level compares with the level of other people. On the Priority View, we are concerned only with people's absolute levels (Parfit, 1997, p. 214).

On a prioritarian theory, the reason why an agent ought to treat Smith's very severe impairment instead of Tom's slight impairment is that Smith's well-being is located at a lower point on the objective scale and Tom's well-being is located at a higher point. In such cases, reducing inequality is only a by-product of achieving priority. The difference between egalitarianism and prioritarianism will become more evident if we consider a leveling-down case. Since prioritarianism cares only about benefiting the worse off, there are no priority-based reasons in favor of a leveling-down action.

But how can we make sense of the priority view? Many authors have adopted a consequentialist version of prioritarianism (Broome, 1991; Hirose, 2009; Rabinowicz, 2002). On this account, we have a stronger moral reason to benefit the worse off because (1) a state of affairs where the worse off gains a given amount of utility is more valuable than a state of affairs where the better off gains roughly the same amount of utility (we call it "the Diminishing Marginal Value of Utility"); and (2) we morally ought to do whatever will maximize the overall goodness (Consequentialism).3

The article is structured as follows. In Sections 2-3, we attempt to reject the principle of diminishing marginal value of utility by arguing that a given amount of utility gain to the worse off and the same amount of utility gain to the better off are equally valuable. Thus, the reason why we morally ought to benefit the worse off is not that doing this would maximize the overall good - a consequentialist approach to prioritarianism fails. However, it doesn't follow that the prioritarian doctrine per se is groundless. In Sections 4-5, we suggest that we can make sense of the prioritarian doctrine by appeal to a contractualist approach.

2. Diminishing Utility vs. Diminishing Value

On the principle of diminishing marginal value of utility, the value or goodness of a given amount of utility gain decreases as the well-being of the recipient increases.4 The principle of diminishing marginal value of utility shouldn't be confused with the law of diminishing marginal utility of income. Whereas the law of diminishing marginal utility of income is concerned with how the utility of a given amount of income varies as a given recipient becomes richer, the principle of diminishing marginal value of utility is about how the value of a given amount of utility varies as a given recipient becomes better off.

Nevertheless, the phenomenon of diminishing marginal utility probably prompts some prioritarians to propose the principle of diminishing marginal value of utility. On the one hand, following utilitarian economists, these prioritarians still want to take a consequentialist strategy to prioritize the worse off. On the other hand, unlike utilitarians, prioritarians hold that how good an outcome is depends not only upon the amount of a given utility gain, but also upon who receives that utility gain. In Weirich's words, we need a theory that gives some weight to every utility gain, but gives more weight to utility gains for those less well off and so helps them to catch up (Weirich, 1983, p. 424). Just as resources have diminishing marginal utility, so utility has diminishing marginal goodness (Parfit, 1991, p. 105).

Many authors thus simply assume that the principle of diminishing marginal value of utility (plus consequentialism) is the ground of prioritarianism (Broome, 1991; Hirose, 2009; Rabinowicz, 2002). The formal structure of the diminishment-based argument for the priority of benefiting the worse off is quite similar to the formal structure of the utilitarian argument for the priority of benefiting the poorer. According to the diminishment-based argument, the putative truth of prioritarianism is rooted in the following view: a state of affairs <u1, u2, …, un> is at least as good as another state of affairs <u'1, u'2, …, u'n> if and only if g(u1) + g(u2) + … + g(un) ≥ g(u'1) + g(u'2) + … + g(u'n), where g(·) is a strictly increasing and strictly concave function whose argument is a utility level and whose value is the value of a person being at that level. For instance, an outcome <u1, u2>, where Smith's utility level is u1 and Tom's utility level is u2, is better than another outcome <u'1, u'2>, where Smith's utility level is u'1 and Tom's utility level is u'2, if and only if g(u1) + g(u2) > g(u'1) + g(u'2).

Let's turn back to Case 1. For the sake of clarity, we will use a number to indicate the utility level of a certain prospect. Suppose that the utility level of very severe impairment is 1, severe impairment is 3, slight impairment is 4, and a healthy physical state is 6. Notice that the amount of utility gain from very severe impairment to severe impairment, (3 - 1 = 2), equals the amount of utility gain from slight impairment to perfect health, (6 - 4 = 2). Given these assumptions, we have g(3) + g(4) > g(1) + g(6), where g(·) is a strictly concave function. For example, suppose that g(x) = x^(1/2); then g(3) + g(4) = 3^(1/2) + 4^(1/2) ≈ 3.73, which is greater than g(1) + g(6) = 1^(1/2) + 6^(1/2) ≈ 3.45. It thus follows that the outcome where Smith suffers from severe impairment and Tom suffers from slight impairment is better or more valuable than the outcome where Smith suffers from very severe impairment and Tom is perfectly healthy. So a moral agent ought to treat Smith's very severe impairment.
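
To make the concave weighting explicit, here is a minimal sketch of this calculation, assuming Python, the stipulated utility levels, and the illustrative function g(x) = x^(1/2); the function and variable names are ours, not part of the original argument.

```python
import math

def g(x):
    """Illustrative strictly increasing, strictly concave weighting: g(x) = sqrt(x)."""
    return math.sqrt(x)

def prioritarian_value(utilities):
    """Value a consequentialist prioritarian assigns to a state of affairs:
    the sum of g(u_i) over the individual utility levels u_i."""
    return sum(g(u) for u in utilities)

# Assumed utility levels: very severe = 1, severe = 3, slight = 4, healthy = 6.
treat_smith = [3, 4]  # Smith improved to severe impairment, Tom still slightly impaired
treat_tom = [1, 6]    # Smith still very severely impaired, Tom perfectly healthy

print(round(prioritarian_value(treat_smith), 2))  # 3.73
print(round(prioritarian_value(treat_tom), 2))    # 3.45
# Both treatments add 2 units of utility, but the concave weighting ranks
# treating Smith's very severe impairment as the better outcome.
```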

3. Moral vs. Non-moral Cases

In this section, however, we will argue that the principle of diminishing marginal value of utility is problematic - prioritarianism thus should not be based on this principle. To begin with, it is helpful to distinguish "moral cases" from "non-moral cases". A moral case is a case concerning what an agent morally ought to do. By contrast, a non-moral case is a case that is only concerned with non-moral value or goodness of the states of affairs.

Case 1 introduced in Section 1 is a paradigm moral case. Let's briefly restate this case. A young adult, Smith, suffers from very severe impairment; another young adult, Tom, suffers from slight impairment. Suppose that a moral agent can choose either the treatment for Smith's very severe impairment, or the treatment for Tom's slight impairment. But she cannot choose both treatments. Which treatment ought the moral agent to choose? It seems the moral agent ought to treat Smith's very severe impairment rather than Tom's slight impairment.

Now consider the following two non-moral cases:

(Case 2) Marie's physical state varies over time. Consider a sequence of four equal time intervals, t1, t2, t3 and t4. Marie suffered from very severe impairment at t1, severe impairment at t2, and slight impairment at t3. Fortunately, she was perfectly healthy at t4. As we have assumed, the amount of utility gain from very severe impairment to severe impairment, (3 - 1 = 2), equals the amount of utility gain from slight impairment to perfect health, (6 - 4 = 2). Let's use xi to indicate the value of all states of affairs occurring at ti. Compare x1, x2, x3 and x4. It is easy to see that, all other things being equal, x2 is greater than x1 and x4 is greater than x3. The question is, "Is (x2 - x1) greater than (x4 - x3)?"

(Case 3) At an earlier time interval, Smith and Tom were equally well off in all respects except that Smith suffered from very severe impairment while Tom suffered from slight impairment. Thanks to some natural process, at a later time interval their physical conditions both improved. Smith's physical condition improved from very severe impairment up to severe impairment; Tom's physical condition improved from slight impairment up to perfect health. Once again, let's assume that the amount of utility gain from very severe impairment to severe impairment equals the amount of utility gain from slight impairment to perfect health. Compare the following four values: 1) the value of the state of affairs where Smith suffered from very severe impairment; 2) the value of the state of affairs where Smith suffered from severe impairment; 3) the value of the state of affairs where Tom suffered from slight impairment; and 4) the value of the state of affairs where Tom was perfectly healthy. Now use y1, y2, y3 and y4 to indicate the four values respectively. Obviously, all other things being equal, y2 > y1, and y4 > y3. The question is, "Is (y2 - y1) greater than (y4 - y3)?"

Case 2 and Case 3 are both non-moral cases. In the state of affairs S1, a patient suffers from very severe impairment; in S2, a patient suffers from severe impairment; in S3, a patient suffers from slight impairment; and in S4 a patient is perfectly healthy. All other things being equal, is the amount of value gain from S1 to S2 greater than the amount of value gain from S3 to S4? We don't think so. It seems that a given benefit to the worse off and the same amount of benefit to the better off are equally valuable. Cases 2 & 3 serve as crucial thought experiments for testing the principle of diminishing marginal value of utility. We have conducted several surveys among philosophers regarding the two cases. Most of them have the intuition that the amount of value gain from very severe impairment to severe impairment equals the amount of value gain from slight impairment to perfect health. It is reasonable to say that non-moral goodness is determined by utility (at least in person-regarding cases). Since the utility gains are the same in the two cases, non-moral value gains should be the same as well.

It is important to note that even if the marginal value of utility is constant, we could still have a stronger moral reason to benefit the worse off than to give the same benefit to the better off. The constancy of the marginal value of utility can be compatible with prioritizing the worse off. People may confuse moral cases with non-moral cases. Prioritarians share the intuition that we have a stronger moral reason to benefit the worse off. But this intuition, we suspect, may lead some of them to mistakenly think that a given amount of benefit to the worse off would bring about a greater non-moral value than the same amount of benefit to the better off would do.

Moreover, some people who believe that the amounts of value gain are not equal in Cases 2 & 3 may be mistaking a case involving the comparison of the same amount of utility gain for a case involving the comparison of different amounts of utility gain. For example, someone may implicitly consider the utility gain of bringing Smith's very severe impairment up to severe impairment to be greater than the utility gain of bringing Tom's slight impairment up to health, and therefore believe that the amount of value gain from very severe impairment to severe impairment is greater than the amount of value gain from slight impairment to health. But this would not be a case in favor of the principle of diminishing marginal value of utility.

Next, we wish to indicate a theoretical reason why the prioritarian should not accept the principle of diminishing marginal value of utility. The reason is related to the disagreement over leveling down between prioritarianism and egalitarianism. Consider a simple case of two-person distribution. Jane and John are unequal in their utility or well-being. Suppose that Jane has a greater amount of utility than John does - say, Jane has 200 units, whereas John has 100 units (call the unequal situation Sa). Then we take 100 units away from Jane without giving them to John so that each has 100 units (call the leveling-down situation Sb). According to egalitarianism, the leveling-down situation Sb would contain some pro tanto value that is not had by the original unequal situation Sa - in other words, Sb would be better than Sa in some respect. However, contrary to the egalitarians, prioritarians universally maintain that a leveling-down action would not bring about any pro tanto value. In claiming this, they implicitly appeal to the principle that non-moral value is determined by utility (at least in person-regarding cases). Since no one's utility is improved by leveling down, the leveling-down situation Sb does not contain any new value. So, in order for prioritarianism to be distinct from egalitarianism, the prioritarian should believe that non-moral goodness is determined by utility. That is to say, prioritarianism should reject the principle of diminishing marginal value of utility.
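
The contrast can be made concrete with a minimal sketch, assuming a goodness measure that simply sums utilities and, for comparison, a crude egalitarian measure whose inequality penalty (and its weight of 0.5) is entirely our own stipulation rather than anything the authors endorse.

```python
def utility_determined_goodness(distribution):
    """Non-moral goodness when goodness is determined by utility alone."""
    return sum(distribution)

def egalitarian_goodness(distribution, inequality_weight=0.5):
    """Crude illustrative egalitarian measure (our assumption): total utility
    minus a penalty proportional to the gap between best and worst off."""
    return sum(distribution) - inequality_weight * (max(distribution) - min(distribution))

s_a = [200, 100]  # Jane has 200 units, John has 100 (unequal situation Sa)
s_b = [100, 100]  # Jane levelled down to 100 (leveling-down situation Sb)

print(utility_determined_goodness(s_a), utility_determined_goodness(s_b))  # 300 200
print(egalitarian_goodness(s_a), egalitarian_goodness(s_b))                # 250.0 200.0
# On the utility-determined measure, leveling down creates no value anywhere.
# On the egalitarian measure, the inequality penalty vanishes in s_b, which is
# the pro tanto respect in which an egalitarian can call s_b better than s_a.
```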

If goodness is determined by utility, we cannot appeal to consequentialism to justify the priority of benefiting the worse off. Thus, the prioritarian should consider an anti-consequentialist approach. What we morally ought to do doesn't entirely depend on the goodness of the states of affairs our action would bring about. In some circumstances, we morally ought to bring about a state of affairs A rather than another state of affairs B, even if the value of A is the same as, or even less than, the value of B. Consider an example given by Judith Thomson. David is a great transplant surgeon. Five of his patients need new organs: a heart, liver, stomach, spleen, and spinal cord, respectively - but all are of the same, relatively rare, blood type. By chance, David learns of a healthy person with that very blood type. David can kill the healthy person and then use his organs to save the five patients. Or he can refrain from taking the healthy person's parts, letting his patients die. If David kills the healthy person to save the five patients, there will be only one death; if David does nothing, the five patients will die - there will be five deaths. Although five deaths are worse than one death, David still ought not to kill the healthy person (Thomson, 1985, p. 1399).

Or consider another example. Suppose that we can either eliminate one person's blindness or eliminate one billion people's 3-minute tiny headaches, but not both. Suppose further that the state of affairs where one billion people have a 3-minute tiny headache contains a greater amount of badness or disvalue than the state of affairs where one person is blind.5 But it seems that we still morally ought to cure the blind person rather than eliminate one billion people's 3-minute tiny headaches, even though the latter action would maximize the overall goodness (Scanlon, 2008).
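
The aggregation behind footnote 5 is just multiplication; here is a tiny sketch, with disvalue figures that are purely our own stipulations.

```python
# Stipulated (made-up) disvalues: blindness is very bad but not infinitely bad;
# a 3-minute tiny headache is barely bad at all.
DISVALUE_OF_BLINDNESS = 1_000_000.0
DISVALUE_OF_TINY_HEADACHE = 0.01

population = 1_000_000_000  # one billion headache sufferers

total_headache_disvalue = population * DISVALUE_OF_TINY_HEADACHE
print(total_headache_disvalue)                          # 10000000.0
print(total_headache_disvalue > DISVALUE_OF_BLINDNESS)  # True: with a large enough
# population, the aggregated headaches contain more disvalue than one blindness,
# yet intuitively we still ought to cure the blind person.
```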

We have a similar situation here. In Case 1, for example, benefiting the worse off will not maximize the overall goodness, as benefiting the worse off and benefiting the better off would bring about the same amount of utility. But it seems that we have a stronger moral reason to benefit the worse off than to benefit the better off even though the two actions would bring the same amount of non-moral goodness. The prioritarian is faced with two theoretical options: 1) non-moral goodness is detached from utility (the principle of diminishing marginal value of utility), but moral rightness is determined by non-moral goodness (consequentialism); and 2) non-moral goodness is determined by utility, at least in person-regarding cases (the constancy of the marginal value of utility), but moral rightness is separated from non-moral goodness (anti-consequentialism). By considering all theoretical factors, we contend that the prioritarian should adopt the second option.

4. Contractualism and Prioritarianism

If our analysis in the last section is correct, a consequentialist version of prioritarianism doesn't work. Then what's the ground, if any, for prioritarianism? Many philosophers have attempted to justify moral principles by considering what people would agree upon in a suitable contractual scenario (Rawls, 1971, 2001; Harsanyi, 1975). Contractualism is typically a non-consequentialist approach (Watson, 2002). While consequentialism claims that an action is morally right if and only if it maximizes the overall goodness, contractualism maintains that an action is morally right if and only if it is endorsed by principles that would be chosen in an ideal contractual situation.

Inspired by the tradition of social contract theory, in this section we propose a contractualist approach to prioritarianism. Our contractual situation is characterized as follows. Consider a society that consists of two groups of people, "Suffered" and "Annoyed". While people in Suffered all suffer from very severe impairment, people in Annoyed all suffer from slight impairment. People have general knowledge about the society. They know that the society consists of the two groups, Suffered and Annoyed, and that each person has a 50 percent chance of being a member of Suffered (and a 50 percent chance of being a member of Annoyed). They also know that a moral agent (an angel or the government) can only treat one group's impairment, and that each treatment would increase the same amount of utility. But let's suppose that they don't have much particular knowledge about themselves. For example, they don't have knowledge about their age, gender, race, or their social and economic classes, etc.; more relevantly, they don't know which group they belong to.

In Rawls's, Harsanyi's, and our contractual situations, an essential condition is that people in the original position are behind a veil of ignorance. Although they have some general psychological, sociological and physical knowledge, they don't have much specific information about themselves. But how thick the veil of ignorance is varies. Rawls assumes that the original position is a situation where even knowledge of likelihoods is unavailable. That is, people behind the veil of ignorance don't know how likely they are to turn out to lead a particular course of life. For example, they don't know how likely they are to turn out to suffer from any disability. However, Harsanyi argues that a rational decision-maker simply cannot make decisions without appealing to probabilities, even in a situation of complete ignorance; if a person has no empirical information about herself or about the world, then she should act as if each prospect is equally probable (Harsanyi, 1975). If Harsanyi is correct, then people in the Rawlsian original position would have to act as if different prospects of life are equally probable. Like Harsanyi, we assume the availability of knowledge of probabilities in our account. For example, people in our contractual situation know that being a person who suffers from very severe impairment and being a person who suffers from slight impairment are equally probable.

Let's summarize the deliberative circumstance faced by every person in our contractual situation. Everyone has a 50 percent chance of developing very severe impairment and a 50 percent chance of developing slight impairment. She could either choose the treatment for very severe impairment or choose the treatment for slight impairment; but she cannot choose both. Each treatment would increase the same amount of utility. Now people in our contractual situation need to reach an agreement about which treatment they want a moral agent to choose. The question is, "Which treatment ought each person to choose?"

There are two influential rationales for decision-making. One is the principle of expected utility maximization, which many contemporary economists and philosophers apply to cases involving uncertainty. According to expected utility maximization, a rational person ought to prefer a lottery, L1, to another lottery, L2, if and only if her expected utility under L1 is greater than her expected utility under L2. A person's expected utility under a lottery is equal to the sum of the utilities of the possible outcomes multiplied by their probabilities. The other is the maximin rationale, which Rawls appeals to in justifying his two principles of justice. According to the maximin rationale, a rational person would adopt the option the worst outcome of which is superior to the worst outcomes of the other alternatives (Rawls, 1971, pp. 152-154).
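
The two rationales can be written down as simple decision rules over lotteries. The sketch below is ours, assuming lotteries are represented as lists of (probability, utility) pairs; the example numbers are arbitrary and only illustrate how the two rules can disagree.

```python
from typing import Dict, List, Tuple

Lottery = List[Tuple[float, float]]  # (probability, utility) pairs

def expected_utility(lottery: Lottery) -> float:
    """Sum of the utilities of the possible outcomes weighted by their probabilities."""
    return sum(p * u for p, u in lottery)

def worst_outcome(lottery: Lottery) -> float:
    """Utility of the worst possible outcome, which is all maximin looks at."""
    return min(u for _, u in lottery)

def choose_by_expected_utility(options: Dict[str, Lottery]) -> str:
    return max(options, key=lambda name: expected_utility(options[name]))

def choose_by_maximin(options: Dict[str, Lottery]) -> str:
    return max(options, key=lambda name: worst_outcome(options[name]))

# A risky gamble versus a modest sure thing: the two rules pull apart.
options = {"gamble": [(0.5, 0.0), (0.5, 10.0)], "sure thing": [(1.0, 4.0)]}
print(choose_by_expected_utility(options))  # "gamble" (expected utility 5 > 4)
print(choose_by_maximin(options))           # "sure thing" (worst outcome 4 > 0)
```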

Harsanyi insists that the principle of expected utility maximization applies to all cases involving uncertainty, and that it is irrational for a person to follow maximin. He says, "If you took the maximin principle seriously then you could not ever cross a street (after all, you might be hit by a car); you could never drive over a bridge (after all, it might collapse)…" (Harsanyi, 1975, p. 595).6

We agree with Harsanyi that the maximin principle is problematic (at least in cases in which the knowledge of probabilities is available). When expected utilities are different, it seems that we ought to choose the option with greater expected utility, even if the option may lead to the worst outcome. But what if expected utilities are equal? According to the principle of expected utility maximization, a rational person would feel indifferent between two options with equal expected utilities. But this is highly controversial. Here we want to propose a third principle, which takes both expected utility and maximin into consideration:

[The Hybrid Principle] (1) Where the expected utilities of the two options are significantly different, a rational person should prefer the option that brings greater expected utility; and (2) where the expected utilities of the two options are not significantly different, a rational person should adopt the option the worst outcome of which is superior to the worst outcomes of the other alternatives.7
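
Stated as a decision procedure, the Hybrid Principle might look like the following minimal sketch (our formulation; the numeric cutoff standing in for "significantly different" is an assumption the text deliberately leaves vague). The usage example anticipates the second street-crossing scenario described below.

```python
from typing import List, Tuple

Lottery = List[Tuple[float, float]]  # (probability, utility) pairs

def expected_utility(lottery: Lottery) -> float:
    return sum(p * u for p, u in lottery)

def worst_outcome(lottery: Lottery) -> float:
    return min(u for _, u in lottery)

def hybrid_choice(option_a: Lottery, option_b: Lottery, significance: float = 1e-9) -> str:
    """Clause (1): if the expected utilities differ significantly, take the greater one.
    Clause (2): otherwise, take the option whose worst outcome is better.
    'significance' is our stand-in for the text's 'significantly different'."""
    eu_a, eu_b = expected_utility(option_a), expected_utility(option_b)
    if abs(eu_a - eu_b) > significance:
        return "A" if eu_a > eu_b else "B"
    return "A" if worst_outcome(option_a) >= worst_outcome(option_b) else "B"

# Second street-crossing scenario below: equal expected utilities (-499,950),
# but not crossing has the better worst outcome.
crossing = [(0.5, 100.0), (0.5, -1_000_000.0)]
not_crossing = [(1.0, -499_950.0)]  # stipulated to match crossing's expected utility
print(hybrid_choice(not_crossing, crossing))  # "A": the Hybrid Principle picks not crossing
```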

Take street-crossing as an example. Compare the following two scenarios. In the first scenario, a person knows that the probability of getting to the other side safely is much greater than the probability of being hit by a car: the probability of getting to the other side safely is 0.99999, and the probability of being hit is 0.00001. Suppose that the utility of getting to the other side safely is 100 and the utility of being hit is minus 1,000,000. Then the expected utility of crossing the street will be: (100 * 0.99999) + (-1,000,000 * 0.00001) = 89.999. Suppose that the expected utility of not crossing the street is 50. Thus the expected utility of crossing the street is greater than the expected utility of not crossing the street. The first scenario is a standard case in daily life. In this kind of circumstance, as Harsanyi observes, a person should follow the rationale of expected utility maximization.

In the second scenario, a person knows that the probability of getting to the other side and the probability of being hit are each 0.5. Suppose again that the utility of getting to the other side is 100 and the utility of being hit is minus 1,000,000. Then the expected utility of crossing the street will be: (100 * 0.5) + (-1,000,000 * 0.5) = -499,950. Now suppose that the expected utility of not crossing is equal to the expected utility of crossing, i.e. -499,950. And suppose that the worst outcome of not crossing is better than the worst outcome of crossing. Since the expected utility of crossing and that of not crossing are equal, the principle of expected utility maximization would suggest that we toss a coin. But we have a strong intuition that we should choose the option of not crossing the street. Whereas the principle of expected utility maximization is unable to accommodate this kind of case, the Hybrid principle can.

Our contractual situation is similar to the second street-crossing case. Given the numbers that indicate the utility levels of the different prospects, a person's expected utility under the lottery of receiving the treatment for very severe impairment would be: (3 * 0.5) + (4 * 0.5) = 3.5. Her expected utility under the lottery of receiving the treatment for slight impairment would be: (1 * 0.5) + (6 * 0.5) = 3.5. Since the two values are equal, people should apply clause (2) of the Hybrid principle. Agents behind the veil of ignorance should reason as follows: "If I choose the treatment for very severe impairment, then I could either end up with severe impairment or end up with slight impairment. If I choose the treatment for slight impairment, then I could either end up with very severe impairment or end up being perfectly healthy. The worst outcome under the first choice is to suffer from severe impairment; the worst outcome under the second choice is to suffer from very severe impairment. My expected utilities under the two choices are equal. But the worst outcome under the first choice is better than the worst outcome under the second choice. Therefore, I ought to choose the first alternative, i.e. the treatment for very severe impairment."
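
A minimal sketch of the contractors' calculation, using the utility levels stipulated earlier (very severe = 1, severe = 3, slight = 4, healthy = 6); the variable names are ours.

```python
def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

def worst_outcome(lottery):
    return min(u for _, u in lottery)

# Lottery faced if the treatment for very severe impairment is agreed upon:
# 50% chance of ending up with severe impairment (3), 50% with slight impairment (4).
treat_very_severe = [(0.5, 3), (0.5, 4)]
# Lottery faced if the treatment for slight impairment is agreed upon:
# 50% chance of staying very severely impaired (1), 50% of being perfectly healthy (6).
treat_slight = [(0.5, 1), (0.5, 6)]

print(expected_utility(treat_very_severe), expected_utility(treat_slight))  # 3.5 3.5
print(worst_outcome(treat_very_severe), worst_outcome(treat_slight))        # 3 1
# Clause (1) is silent because the expected utilities are equal, so clause (2)
# applies: the worst outcome under the first choice (severe impairment, 3) beats
# the worst outcome under the second (very severe impairment, 1).
```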

Therefore, by following the Hybrid principle, people in our contractual situation would want the moral agent to choose the treatment for very severe impairment over the treatment for slight impairment. According to the contractualist postulate, what people would agree upon in a suitable contractual scenario like the original position determines what one morally ought to do. It then follows that, other things being equal, a moral agent ought to treat the very severe impairment rather than the slight impairment - in other words, we arrive at the prioritarian conclusion that "the worse off matters more".

5. Concluding Remarks

We have argued against a consequentialist approach to prioritarianism by rejecting the principle of diminishing marginal value of utility. Then we have attempted to make sense of prioritarianism by appealing to a contractualist approach. In the final section of this paper, let's give two further remarks.

(a) Consider the following one-patient moral case. Marie has a 50 percent chance of developing very severe impairment, and a 50 percent chance of developing slight impairment. Suppose that a moral agent could choose either the treatment for very severe impairment or the treatment for slight impairment. In order for any treatment to be effective, it must be taken before it is known which impairment she will suffer. Which treatment ought a moral agent to choose?

This is quite similar to the contractual situation we proposed in the previous section. In both cases, a person doesn't know whether she will suffer from very severe impairment or slight impairment; she knows that she has a 50 percent chance of developing either impairment; and she knows that a moral agent can only treat one impairment. It then seems to follow that in both cases, a person wants the moral agent to treat her very severe impairment rather than her slight impairment. That is to say, as with the two-patient moral case, in the one-patient moral case we also have a stronger reason to choose the treatment for the patient's very severe impairment.8 As Parfit says, "benefits to the worse off would matter just as much even if there were no others who were better off" (Parfit, 1997). Our approach seems to provide a unifying reason that can account for both the priority of benefiting the worse off person in inter-personal cases and the priority of benefiting a person's worse off prospect in intra-personal cases.

(b) We want to explain why our contractualist account would reject leveling-down actions. Consider a modified example. Imagine a society, all members of which suffer from the same type of impairment (say, slight impairment). Suppose that a moral agent can either treat half of the people in the society (call it ‘the lucky group') or do nothing. Suppose that people in the contractual scenario have knowledge about the above facts. But due to the veil of ignorance, they don't know whether they belong to the lucky group; all they know is that they have a 50 percent chance to be a member of the lucky group. Moreover, as we have argued, people should adopt the Hybrid principle in their reasoning.

Given all this information, it seems people would want the moral agent to treat half of the people rather than do nothing. The reasoning of each person is as follows: "If the moral agent chooses to treat half of the people, then I would either end up with slight impairment or end up with perfect health. But if the moral agent chooses to do nothing, then I would end up with slight impairment anyway. The expected utilities under the two options are different. I should choose the option that would bring the greater expected utility. So, I should want the moral agent to treat half of the people rather than do nothing."
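
A minimal sketch of this expected-utility comparison, again using the utility levels assumed earlier (slight impairment = 4, perfect health = 6).

```python
def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

treat_half = [(0.5, 4), (0.5, 6)]  # 50% chance of being in the lucky group and cured
do_nothing = [(1.0, 4)]            # slight impairment for certain

print(expected_utility(treat_half), expected_utility(do_nothing))  # 5.0 4.0
# The expected utilities differ, so clause (1) of the Hybrid Principle applies:
# contractors prefer that half of the people be treated; the equal, do-nothing
# (leveling-down-style) option is rejected.
```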

Therefore, the moral agent has a stronger reason to treat half of the people (an unequal situation) instead of doing nothing (a leveling-down equal situation). Now we can see that our contractualist approach to prioritarianism would reject leveling down.

  • *
    This article is supported by a research project on "Contemporary Anglo-American Left-Wing Theories of Distribution" (No.: 17BZX081), General Research Grant of the National Social Science Fund of China.
  • 1
    The basic setup of this example is borrowed from Otsuka and Voorhoeve (2009).
  • 2
    We argue elsewhere that egalitarianism can meet the leveling-down challenge (see Tang and Zhong, 2013). In this paper, we leave it open whether equality is intrinsically valuable or not. But even if equality is not intrinsically valuable, we might still have other reasons (such as a prioritarian reason) to benefit the worse off.
  • 3
    By ‘consequentialism', I mean act consequentialism. It is doubtful that rule consequentialism is a genuine kind of consequentialism.
  • 4
    Some prioritarians talk about the moral goodness of benefiting the worse off. This is just a re-description of the doctrine of prioritarianism per se and thus should be distinguished from the consequentialist version of prioritarianism.
  • 5
    The disvalue of blindness is quite high, but not infinitely high; the disvalue of a 3-minute tiny headache is very low, but not infinitely low. So, as long as the population is large enough (say, 1 billion people), it seems to follow that the state of affairs where one billion people have a 3-minute tiny headache contains a greater amount of disvalue than the state of affairs where one person is blind.
  • 6
    Even Rawls would not advise people in our contractual situation to use maximin. For Rawls, though a rational person would adopt the maximin rationale in some circumstances (e.g. the original position as he defines it), our contractual situation is not a suitable circumstance for using maximin, due to the availability of the knowledge of probabilities. Rawls emphasizes that a suitable situation for maximin is "one in which a knowledge of likelihoods is impossible, or at best extremely insecure" (Rawls, 1971, p. 154).
  • 7
    Let's tolerate the term ‘significantly different'. See also Parfit, 1997, 2012.
  • 8
    According to Otsuka and Voorhoeve (2009), in the one-patient case, it is a matter of indifference whether a moral agent treats the patient's very severe impairment or her slight impairment. We disagree. We have conducted some surveys on the one-patient case. Most of the philosophers surveyed answer that a moral agent has a stronger reason to treat the patient's very severe impairment.

References

  • BROOME, J. "Weighing Goods". Oxford: Blackwell Press, 1991.
  • CRISP, R. "In Defense of the Priority View: A Response to Otsuka and Voorhoeve". Utilitas, Vol. 23, pp. 105-108, 2011.
  • HARSANYI, J. "Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls's Theory". The American Political Science Review, Vol. 69, pp. 594-606, 1975.
  • HIROSE, I. "Reconsidering the Value of Equality". Australasian Journal of Philosophy, Vol. 87, pp. 301-312, 2009.
  • OTSUKA, M., VOORHOEVE, A. "Why It Matters That Some Are Worse off than Others: An Argument against the Priority View". Philosophy and Public Affairs, Vol. 37, pp. 171-199, 2009.
  • PARFIT, D. "Another Defense of the Priority View". Utilitas, Vol. 24, pp. 399-440, 2012.
  • PARFIT, D. "Equality and Priority". Ratio, Vol. 10, pp. 202-221, 1997.
  • PARFIT, D. "Equality or Priority?". In: The Lindley Lecture University of Kansas, 1991.
  • RABINOWICZ, W. "Prioritarianism for Prospects". Utilitas, Vol. 14, pp. 2-21, 2002.
  • RAWLS, J. "A Theory of Justice". Cambridge: Harvard University Press, 1971.
  • RAWLS, J. "Justice as Fairness: A Restatement". Cambridge: Harvard University Press, 2001.
  • SCANLON, T. "Moral Dimensions: Permissibility, Meaning, Blame." Cambridge: Harvard University Press, 2008.
  • TANG, Y., ZHONG, L. "Toward a Demystification of Egalitarianism". Philosophical Forum, Vol. 44, pp. 149-163, 2013.
  • THOMSON, J. "The Trolley Problem". Yale Law Journal, Vol. 94, pp. 1395-1415, 1985.
  • WATSON, G. "Some Considerations in Favor of Contractualism". In: S. Darwall (ed.), Contractarianism/Contractualism. Oxford: Blackwell Press, 2002. pp. 249-269.
  • WEIRICH, P. "Utility Tempered with Equality". Noûs, Vol. 17, pp. 423-439, 1983.

Publication Dates

  • Publication in this collection
    Dec 2018

History

  • Received
    06 Sept 2017
  • Accepted
    09 Nov 2017