Decision Making

When we are confronted by a choice that affects us personally (e.g., going to France or to Spain for a holiday), there are nearly always benefits and costs associated with each choice. How do we decide what to do? A fairly straightforward approach was proposed by von Neumann and Morgenstern (1947). According to their utility theory, we try to maximise utility, which is the subjective value we attach to an outcome. When we need to choose between simple options, we assess the expected utility or expected value of each one by means of the following formula:

Expected utility = (probability of a given outcome) × (utility of the outcome)

In the real world, there will typically be various factors associated with each option. For example, one holiday option may be preferable to a second holiday option, because it is in a more interesting area and the weather is likely to be better. However, the first holiday is more expensive and more of your valuable holiday time would be spent in travelling. In such circumstances, people are supposed to calculate the expected utility or disutility (cost) of each factor in order to work out the overall expected value or utility of each option.
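The weighing of factors described above can be sketched as a short calculation. This is purely illustrative: the option names, probabilities, and utility values below are invented for the example and are not taken from von Neumann and Morgenstern.

```python
# Illustrative sketch of expected-utility calculation for the holiday
# example. All names and numbers are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option.
    Negative utilities represent costs (disutility)."""
    return sum(p * u for p, u in outcomes)

# Factors: interest of the area, good weather (uncertain), price, travel time.
france = [(1.0, 8), (0.7, 5), (1.0, -6), (1.0, -3)]  # nicer, but dearer
spain = [(1.0, 6), (0.9, 4), (1.0, -3), (1.0, -1)]   # cheaper, closer

best = max(("France", france), ("Spain", spain),
           key=lambda option: expected_utility(option[1]))
print(best[0])  # the option with the higher overall expected utility
```

On these invented numbers the cheaper option wins, illustrating how costs can outweigh a more attractive destination in the overall calculation.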

When the choice is an easy one, then people do seem to behave in line with utility theory. For example, if given the choice between two holidays, one of which is in a more interesting place with better weather than the other, and is also cheaper and involves less travelling time, then virtually everyone will choose that holiday. However, as is discussed next, people's choices are often decided by factors other than simply utility. Readers interested in finding out about contemporary versions of utility theory should consult Luce (1996) or Mellers, Schwartz, and Cooke (1998).

Loss aversion

In many situations, people demonstrate a phenomenon known as loss aversion, that is, they are much more sensitive to potential losses than to potential gains. For example, consider a study by Kahneman and Tversky (1984). Participants were given the chance to toss a coin, winning $10 if it came up heads and losing $10 if it came up tails. Most of them declined this offer, even though it is a fair bet. More surprisingly, most participants still refused to bet when they were offered $20 if the coin came up heads, with a loss of only $10 if it came up tails. They showed loss aversion, in that they were unduly concerned about the potential loss. In terms of utility theory, they should have accepted the bet, because it provides an average expected gain of $5 per toss. In real life, however, loss aversion may often be desirable (Trevor Harley, personal communication).
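The expected values of the two bets can be checked directly. This is a minimal sketch of the arithmetic, not part of the original study:

```python
def expected_value(bets):
    """bets: list of (probability, payoff) pairs; a loss is a negative payoff."""
    return sum(p * x for p, x in bets)

fair_bet = [(0.5, 10), (0.5, -10)]    # win $10 on heads, lose $10 on tails
tilted_bet = [(0.5, 20), (0.5, -10)]  # win $20 on heads, lose $10 on tails

print(expected_value(fair_bet))    # 0.0 - a fair bet
print(expected_value(tilted_bet))  # 5.0 - a $5 expected gain per toss
```

Loss-averse participants rejected even the second bet, despite its positive expected value.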

A phenomenon that resembles loss aversion is the sunk-cost effect, in which additional resources are expended to justify some previous commitment. Dawes (1988) discussed a study in which two people had paid a $100 non-refundable deposit for a weekend at a resort. On the way to the resort, both of them became slightly unwell, and felt they would probably have a more pleasurable time at home than at the resort. Should they drive on or turn back? Many participants argued that the two people should drive on to avoid wasting the $100: this is the sunk-cost effect. The implication of this decision is that still more money will be spent on a weekend at the resort, even though it is less preferred than staying at home!

Framing

Many of our decisions are influenced by irrelevant aspects of the situation (e.g., the precise way in which an issue is presented). This phenomenon is known as framing. Tversky and Kahneman (1987) provided an interesting example of framing in the Asian disease problem. The participants were told that there was likely to be an outbreak of an Asian disease in the United States, and that it was expected to kill 600 people. Two programmes of action had been proposed: Programme A would allow 200 people to be saved; programme B would have a 1/3 probability that 600 people would be saved, and a 2/3 probability that none of the 600 would be saved. When the issue was expressed in this form, 72% of the participants favoured programme A, although the two programmes (if implemented several times) would on average both lead to the saving of 200 lives.

Other participants in the study by Tversky and Kahneman (1987) were given the same problem, but this time it was negatively framed. They were told that programme A would lead to 400 people dying, whereas programme B carried a 1/3 probability that nobody would die, and a 2/3 probability that 600 would die. In spite of the fact that the problem was the same, 78% chose programme B. The various findings obtained by Tversky and Kahneman (1987) can be accounted for in terms of loss aversion in the sense of avoiding certain losses.
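The equivalence of the two frames can be verified by working out the expected number of lives saved under each programme. Exact fractions are used below to keep the arithmetic clean; the calculation itself is just a restatement of the problem, not from the original paper:

```python
from fractions import Fraction

total = 600
third = Fraction(1, 3)

# Positive frame: lives saved.
prog_a_saved = 200                              # certain
prog_b_saved = third * total + (1 - third) * 0  # 1/3 chance all 600 saved

# Negative frame: deaths.
prog_a_deaths = 400                              # certain
prog_b_deaths = third * 0 + (1 - third) * total  # 2/3 chance all 600 die

print(prog_a_saved, prog_b_saved)                    # 200 200
print(total - prog_a_deaths, total - prog_b_deaths)  # 200 200
```

All four descriptions yield an expected 200 survivors; only the wording differs, yet the majority choice reverses between frames.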

Wang (1996) carried out a series of studies using different versions of the Asian disease problem in which the total number of people in the patient group varied between 600 and 6. He replicated the findings of Tversky and Kahneman (1987) when the group size was 600. However, the key findings were as follows:

1. There was no framing effect when the size of the patient group was 60 or 6.

2. With the smaller patient groups, there was a clear preference for the probabilistic outcome (1/3 probability that nobody would die, and a 2/3 probability that everyone would die).

What do these findings mean? The most common reason participants gave for choosing the probabilistic outcome was that they wanted to give everyone an equal chance to survive. In other words, they were concerned about fairness, and this concern was greater in a small-group context than in a large-group context.

Wang (1996) then tested the importance of fairness by asking participants to choose between two options: (1) one-third of the group is saved; (2) one-third of the group is selected to be saved. When the group size was 600, 60% of the participants chose the option with the non-selected survivors; this rose to 80% when the group size was 6. These findings strengthen the notion that concerns about fairness are greater when dealing with small groups than with large ones.

In a final study, Wang (1996) asked participants to choose between definite survival of two-thirds of the patients (deterministic option) or a one-third probability of all patients surviving and a two-thirds probability of none surviving (probabilistic option). They were told that the group size was 600, 6, or 3 patients unknown to them, or 6 patients who were close relatives of the participant. According to utility theory, the participants should have chosen the definite survival of two-thirds of the patients. In fact, the decision was greatly affected by group size and by the relationship between the participants and the group members (see Figure 17.3). Presumably the increased percentage of participants choosing the probabilistic option with small group size (especially for relatives) occurred because the social context and psychological factors relating to fairness were regarded as more important in those conditions. Their apparently "irrational" behaviour becomes immediately explicable when one takes account of this social context.
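The prediction from utility theory in this final study follows from a simple comparison of expected survivors, which (unlike the framing problem) favours one option. The sketch below restates that arithmetic; it is not code from Wang's study:

```python
from fractions import Fraction

def expected_survivors(n):
    """Expected survivors under each option for a patient group of size n."""
    deterministic = Fraction(2, 3) * n                       # 2/3 survive for certain
    probabilistic = Fraction(1, 3) * n + Fraction(2, 3) * 0  # all-or-none gamble
    return deterministic, probabilistic

for n in (600, 6, 3):
    det, prob = expected_survivors(n)
    print(n, det, prob)  # the deterministic option always has twice the expectation
```

Since the deterministic option has double the expected survivors at every group size, the shift towards the probabilistic option with small groups (and especially with relatives) cannot be explained by expected utility alone, underlining the role of fairness concerns.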

Perceived justification

When people choose one option over another, they generally want to be able to justify their decision to themselves and to other people. One of the clearest demonstrations of the influence of perceived justification on choice making was reported by Tversky and Shafir (1992). Their participants were asked to
