## Panel 16.3: Oaksford and Chater's probabilistic theory

• The probabilistic theory deals with assessing the optimal hypothesis to select from among n mutually exclusive and exhaustive hypotheses (Hi); in short, with assessing the uncertainty of these hypotheses, I(Hi):

  I(Hi) = Σi P(Hi) log2[1/P(Hi)]

• After some data D have been found (e.g., by turning a card), the probability of each hypothesis, P(Hi), is updated relative to D to give P(Hi|D), yielding a new uncertainty measure I(Hi|D):

  I(Hi|D) = Σi P(Hi|D) log2[1/P(Hi|D)]

• The P(Hi|D) terms are computed from Bayes' theorem:

  P(Hi|D) = P(D|Hi)P(Hi) / Σj P(D|Hj)P(Hj)

which specifies that the posterior probability of a hypothesis Hi, given some data D, can be found from the prior probability of each hypothesis Hi and the likelihood of D given each hypothesis Hj. Choosing the prior probabilities of the hypotheses can be a contentious issue; in a simple case, if there are four possible choices in a situation, then the prior probability of each occurring is .25. In their analysis, Oaksford and Chater make an important assumption about the P and Q cases (the so-called rarity assumption): that their prior probabilities are low.

• Given these formulas, the information gain in a situation is the amount of reduction in uncertainty that follows from the arrival of a new datum:

  Ig = I(Hi) − I(Hi|D)

although in the theory a more precise measure, the Expected Information Gain E(Ig), is computed; this is the prior uncertainty less the uncertainty remaining after each possible choice outcome, weighted by the probability of that outcome:

  E(Ig) = I(Hi) − Σd P(d) I(Hi|d)
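The steps above can be sketched in a few lines of Python. The four equal priors follow the simple four-hypothesis case mentioned above; the likelihood values P(D|Hi) are purely illustrative assumptions:

```python
import math

def uncertainty(probs):
    """Shannon uncertainty I(Hi) = sum_i P(Hi) * log2(1 / P(Hi)), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

def bayes_update(priors, likelihoods):
    """Posterior via Bayes' theorem: P(Hi|D) = P(D|Hi) P(Hi) / sum_j P(D|Hj) P(Hj)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Four mutually exclusive, exhaustive hypotheses with equal priors (.25 each),
# as in the simple case above; the likelihoods P(D|Hi) are illustrative.
priors = [0.25, 0.25, 0.25, 0.25]
likelihoods = [0.9, 0.3, 0.1, 0.1]

posteriors = bayes_update(priors, likelihoods)
ig = uncertainty(priors) - uncertainty(posteriors)  # Ig = I(Hi) - I(Hi|D)

print(uncertainty(priors))  # 2.0 bits: four equiprobable hypotheses
print(ig)                   # positive: the datum reduced uncertainty
```

Note that the prior uncertainty of four equiprobable hypotheses is exactly 2 bits, and that a datum favouring one hypothesis over the others always yields a positive information gain.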

### Probabilistic theory applied to the selection task

The analysis of probabilistic theory can be made clear by showing how it concretely applies to the selection task (Oaksford & Chater, 1994; Oaksford, Chater, Grainger, & Larkin, 1997). Using an example proposed by these theorists, imagine that you are testing hypotheses about eating tripe and getting sick, the hypothesis being "if you eat tripe" (P), "then you feel sick" (Q). To test this hypothesis you might investigate groups of people who have eaten tripe (P), not eaten tripe (not-P), become sick (Q), and not become sick (not-Q); these groups being the equivalent of the four cards in the task. The question is, what are the most informative groups to check? First, asking people who have eaten tripe (P) should provide some information gain, because if you find that they have not been sick then you know the hypothesis is false. Second, asking people who have never eaten tripe (not-P) has little or no information gain, because the hypothesis says nothing about people who have not eaten tripe. Third, in contrast, it makes sense to ask someone who is feeling sick (Q) whether they have eaten tripe; if they have eaten tripe then there is considerable information gain, although if they have not eaten tripe then no conclusion can be drawn. Fourth, asking someone who is not feeling sick (not-Q) may be informative; if they have eaten tripe then the hypothesis is falsified, but if they have not eaten tripe then no conclusion can be drawn. In short, it seems that choosing P is certain to be informative, choosing not-P is definitely not informative, and Q and not-Q are somewhere in between (i.e., they may or may not be informative).

Behind this loose description of information gain is an elaborate Bayesian probability model that specifies exactly how to compute the information gain for the various cards (see Panel 16.3). This computation of probabilities, combined with a further assumption (the rarity assumption, that properties that figure in causal relations are rare), leads to the proposal that the best selections, in order of optimal information gain, are the P, Q, not-Q, and not-P items. In a meta-analysis of research in the area, Oaksford and Chater show that the frequency of occurrence of the different card choices exactly follows this pattern. It is interesting to note that this probabilistic account sanctions the choices of P and Q as being rational, in contrast to the logical account which viewed them as irrational.
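As a rough illustration of how such a computation could run, the sketch below implements a simplified two-hypothesis version of the model: a dependence hypothesis HD ("if P then Q", so P(Q|P) = 1) against an independence hypothesis HI (P and Q unrelated). The marginal probabilities P(P) = 0.1 and P(Q) = 0.2 are illustrative rarity values assumed here, not Oaksford and Chater's fitted parameters; with them, the expected information gains come out in the predicted order P > Q > not-Q > not-P:

```python
import math

def entropy2(p_hd):
    """Uncertainty over the two hypotheses, given P(HD) = p_hd."""
    return sum(p * math.log2(1 / p) for p in (p_hd, 1 - p_hd) if p > 0)

def expected_information_gain(card, a=0.1, b=0.2, prior_hd=0.5):
    """E(Ig) for turning one card, under a simplified two-hypothesis model:
    HD ('if P then Q', so P(Q|P) = 1) vs. HI (P and Q independent),
    with marginals P(P) = a and P(Q) = b (rarity: both small, b > a).
    The parameter values a = 0.1, b = 0.2 are illustrative assumptions."""
    # P(hidden face | visible card) under each hypothesis: (under HD, under HI).
    likelihoods = {
        "P":     {"Q": (1.0, b),               "not-Q": (0.0, 1 - b)},
        "not-P": {"Q": ((b - a) / (1 - a), b), "not-Q": ((1 - b) / (1 - a), 1 - b)},
        "Q":     {"P": (a / b, a),             "not-P": (1 - a / b, 1 - a)},
        "not-Q": {"P": (0.0, a),               "not-P": (1.0, 1 - a)},
    }
    prior_uncertainty = entropy2(prior_hd)
    expected_posterior = 0.0
    for l_hd, l_hi in likelihoods[card].values():
        p_face = prior_hd * l_hd + (1 - prior_hd) * l_hi   # P(face | card)
        if p_face > 0:
            post_hd = prior_hd * l_hd / p_face             # Bayes' theorem
            expected_posterior += p_face * entropy2(post_hd)
    return prior_uncertainty - expected_posterior          # E(Ig), in bits

for card in ("P", "Q", "not-Q", "not-P"):
    print(card, round(expected_information_gain(card), 3))
```

Turning the P card is by far the most informative here, because one of its possible hidden faces (not-Q) settles the question outright, while the not-P card barely shifts the posterior at all; this mirrors the qualitative tripe analysis above.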

With additional assumptions, this model can be used to account for other effects in the selection task, like the effects in deontic versions of the task and the effects of negations (see Oaksford & Chater, 1994, 1996).

### Context effects and conditional inference

This theory has been mainly applied to an analysis of versions of the selection task. Thus far it has not been applied to conditional inference per se, and so it has little to say about the typical inference patterns and suppression effects found there. However, it is clear that the theory should be applicable to manipulations of uncertainty in conditional inference (e.g., Cummins et al., 1991; Stevenson & Over, 1995), and work in this area has been promised for the future.

### Evaluation of probabilistic theory

As a new arrival on the scene in an area where theories are relatively long-standing, the probabilistic theory has attracted a lot of attention and excitement (Almor & Sloman, 1996; Evans & Over, 1996; Klauer, 1999; Laming, 1996). One of the big promises of the theory is that it suggests a way to unite research on reasoning with that on judgement and decision making, bringing two previously disparate fields together.

Having said this, like domain-specific-rule theories, the probabilistic theory has a way to go theoretically and empirically before it really challenges the abstract-rule and mental models theories. Theoretically, it is important to realise that the theory lacks a performance component (Johnson-Laird, 1999). The probabilistic theory is a computational-level theory (see Marr, 1982, and Chapter 1) which specifies what needs to be computed in a reasoning task (namely, factors like information gain). It does not propose a performance mechanism that shows which reasoning processes produce the candidate set of choices/conclusions and which memory processes retrieve past experiences to compute the probabilities for these items. In the absence of such a mechanism, the theory has little to say about reasoning per se (i.e., how specific conclusions are generated from premises). Indeed, it may end up being a dual-process theory which proposes that reasoning is carried out by either abstract rules or mental models over which probabilities are computed.

Empirically, the theory needs to be tested much more extensively. There has been a tendency to carry out meta-analyses on the existing literature rather than to test novel predictions of the theory (but see Oaksford et al., 1997). With meta-analyses there is always the worry that the theory is being shaped to the data in an ad hoc fashion rather than standing on its own predictive legs. For example, in the selection task, the theory's consistency with the typical choices made is carried heavily by the rarity assumption. This assumption is not well founded on grounds other than the fact that it delivers the sorts of results that are typically found in the literature.
