This is a famous paradox of expected utility theory that has caused some to question the validity of the theory. Suppose a subject has the following choices under uncertainty:
Gamble A: a 100% chance of receiving $1 million.
Gamble B: a 10% chance of receiving $5 million, an 89% chance of receiving $1 million and a 1% chance of receiving nothing.
Most people choose A over B, even though the expected pecuniary value of B is $1.39 million. Presumably, certainty is preferred. In terms of expected utility they are revealing that
U($1m) > 0.1 U($5m) + 0.89 U($1m) + 0.01 U($0)
and, subtracting 0.89 U($1m) from each side of the inequality, we get
0.11 U($1m) > 0.1 U($5m) + 0.01 U($0).
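The expected pecuniary value of Gamble B quoted above is a one-line computation:

```python
# Expected pecuniary value of Gamble B, in millions of dollars:
# 10% of $5m + 89% of $1m + 1% of $0.
ev_b = 0.10 * 5 + 0.89 * 1 + 0.01 * 0
print(ev_b)  # approximately 1.39 (subject to floating-point rounding)
```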
Now present the same subject with a further two gambles:
Gamble C: an 11% chance of receiving $1 million, and an 89% chance of receiving nothing.
Gamble D: a 10% chance of receiving $5 million, and a 90% chance of receiving nothing.
Most people choose D over C.
In terms of expected utility, they are revealing that
0.1 U($5m) + 0.9 U($0) > 0.11 U($1m) + 0.89 U($0).
Now, as expected utility theory permits, subtract 0.89 U($0) from each side to get
0.1 U($5m) + 0.01 U($0) > 0.11 U($1m), which is the opposite of the preference revealed in the first choice situation. Expected utility theory excludes this possibility: preferring A to B implies preferring C to D. See Maurice Allais (1953), 'Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine', Econometrica, 21, 503-46.
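The algebra behind the contradiction can be checked numerically. The sketch below (the utility values are illustrative assumptions, not from the source) shows that the two comparisons differ only by a common term on each side, so the A-versus-B difference and the C-versus-D difference must coincide for any utility function:

```python
# Allais paradox: under expected utility theory, preferring A to B
# while preferring D to C is inconsistent, because each pair of
# gambles differs only by a common term on both sides.

def eu(lottery):
    # Expected utility: sum of probability * utility over outcomes.
    return sum(p * u for p, u in lottery)

# Illustrative utilities for $0, $1m, $5m (assumed values; any
# assignment leads to the same conclusion).
u0, u1, u5 = 0.0, 1.0, 2.0

a = eu([(1.0, u1)])                          # Gamble A
b = eu([(0.1, u5), (0.89, u1), (0.01, u0)])  # Gamble B
c = eu([(0.11, u1), (0.89, u0)])             # Gamble C
d = eu([(0.1, u5), (0.9, u0)])               # Gamble D

# A vs B and C vs D differ only by a common term (0.89 U($1m) in the
# first pair, 0.89 U($0) in the second), so the differences coincide:
assert abs((a - b) - (c - d)) < 1e-12
# Hence A preferred to B implies C preferred to D, whatever the utilities.
```

Running the assertion with any other choice of u0 < u1 < u5 gives the same result, which is precisely why the commonly observed choice pattern (A over B, D over C) violates the theory.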
Was this article helpful?