Figure 4.12

The model of face recognition put forward by Bruce and Young (1986).

• Face recognition units: they contain structural information about known faces.

• Person identity nodes: they provide information about individuals (e.g., their occupation, interests).

• Name generation: a person's name is stored separately.

• Cognitive system: this contains additional information (e.g., that actors and actresses tend to have attractive faces); this system also influences which of the other components receive attention.

The recognition of familiar faces depends mainly on structural encoding, face recognition units, person identity nodes, and name generation. In contrast, the processing of unfamiliar faces involves structural encoding, expression analysis, facial speech analysis, and directed visual processing.

Experimental evidence

Bruce and Young (1986) assumed that familiar and unfamiliar faces are processed differently. If it were possible to find patients who showed good recognition of familiar faces but poor recognition of unfamiliar faces, and other patients who showed the opposite pattern, this double dissociation would suggest that the processes involved in the recognition of familiar and unfamiliar faces are different.

Malone et al. (1982) tested one patient who showed reasonable ability to recognise photographs of famous statesmen (14 out of 17 correct), but who was very impaired at matching unfamiliar faces. A second patient performed normally at matching unfamiliar faces, but had great difficulties in recognising photographs of famous people (only 5 out of 22 correct).

According to the model, the name generation component can be accessed only via the appropriate person identity node. As a result, we should never be able to put a name to a face without at the same time having available other information about that person (e.g., his or her occupation). Young, Hay, and Ellis (1985) asked participants to keep a diary record of the specific problems they experienced in face recognition. There were 1008 incidents altogether, but participants never reported putting a name to a face while knowing nothing else about that person. In contrast, there were 190 occasions on which a participant could remember a fair amount of information about a person, but not their name.

Cognitive neuropsychological evidence is also relevant. Practically no brain-damaged patients can put names to faces without knowing anything else about the person, but several patients show the opposite pattern. For example, Flude, Ellis, and Kay (1989) studied a patient, EST, who was able to retrieve the occupations for 85% of very familiar people when presented with their faces, but could recall only 15% of their names.

According to the model, another kind of problem should be fairly common. If the appropriate face recognition unit is activated, but the person identity node is not, there should be a feeling of familiarity coupled with an inability to think of any relevant information about the person. In the set of incidents collected by Young et al. (1985), this was reported on 233 occasions.
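The serial access assumption described above can be expressed as a simple lookup chain. The sketch below is purely illustrative (the names and records are invented, not taken from Bruce and Young, 1986): a name can be retrieved only via the person identity node, so the model permits "familiar but no information" and "information but no name" failures, while ruling out "name but no information".

```python
# Toy sketch of the serial route in the Bruce and Young (1986) model:
# face -> face recognition unit -> person identity node -> name generation.
# All entries below are invented examples for illustration only.

FACE_RECOGNITION_UNITS = {
    "face_A": "person_A",   # fully known person
    "face_B": "person_B",   # familiar-only: no identity information stored
    "face_C": "person_C",   # identity known, but name unavailable (EST pattern)
}
PERSON_IDENTITY_NODES = {
    "person_A": {"occupation": "politician"},
    "person_C": {"occupation": "actor"},
}
NAME_STORE = {"person_A": "A. Smith"}  # reachable only via the identity node

def recognise(face):
    """Return (familiar?, identity info, name) following the serial route."""
    person = FACE_RECOGNITION_UNITS.get(face)   # stage 1: familiarity check
    if person is None:
        return False, None, None                # unfamiliar face
    info = PERSON_IDENTITY_NODES.get(person)    # stage 2: identity information
    if info is None:
        return True, None, None                 # familiar, but nothing known
    name = NAME_STORE.get(person)               # stage 3: name, via the PIN only
    return True, info, name
```

Because name retrieval sits at the end of the chain, no input can yield a name without also yielding identity information, which is exactly the pattern in the diary data of Young et al. (1985).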

Reference back to Figure 4.12 suggests further predictions. When we look at a familiar face, familiarity information from the face recognition unit should be accessed first, followed by information about that person (e.g., occupation) from the person identity node, followed by that person's name from the name generation component. Thus, familiarity decisions about a face should be made faster than decisions based on person identity nodes. As predicted, Young, McWeeny, Hay, and Ellis (1986b) found that the decision as to whether a face was familiar was made faster than the decision as to whether it was the face of a politician.

It also follows from the model that decisions based on person identity nodes should be made faster than those based on the name generation component. Young, McWeeny, Hay, and Ellis (1986a) found that participants were much faster to decide whether a face belonged to a politician than they were to produce the person's name.


Evaluation
The model of Bruce and Young (1986) provides a coherent account of the various kinds of information about faces, and the ways in which these kinds of information are related to each other. Another significant strength is that differences in the processing of familiar and unfamiliar faces are spelled out.

There are various limitations with the model. First, the account given of the processing of unfamiliar faces is much less detailed than that of familiar faces. Second, the cognitive system is vaguely specified. Third, some evidence is inconsistent with the assumption that names can be accessed only via relevant autobiographical information stored at the person identity node. An amnesic patient, ME, could match the faces and names of 88% of famous people for whom she was unable to recall any autobiographical information (de Haan, Young, & Newcombe, 1991).

Fourth, it is important for the theory that some patients show better recognition for familiar faces than unfamiliar faces, whereas others show the opposite pattern. This double dissociation was obtained by

Malone et al. (1982), but has proved difficult to replicate. For example, Young et al. (1993) studied 34 brain-damaged men, and assessed their familiar face identification, unfamiliar face matching, and expression analysis. Five of the patients had a selective impairment of expression analysis, but there was much weaker evidence of selective impairment of familiar or unfamiliar face recognition. Young et al. (1993) argued that previous research may have produced misleading conclusions because of methodological limitations.

Interactive activation and competition model

Burton and Bruce (1993) extended the Bruce and Young (1986) model. Their interactive activation and competition model adopted a connectionist approach (see Figure 4.13). The face recognition units (FRUs) and the name recognition units (NRUs) contain stored information about specific faces and names, respectively. Person identity nodes (PINs) are gateways into semantic information, and can be activated by verbal input about people's names as well as by facial input. As a result, they provide information about the familiarity of individuals based on either verbal or facial information. Finally, the semantic information units (SIUs) contain name and other information about individuals (e.g., occupation; nationality).

Experimental evidence

The model has been applied to associative priming effects that have been found with faces. For example, the time taken to decide whether a face is familiar is reduced when the face of a related person is shown immediately beforehand (e.g., Bruce & Valentine, 1986). According to the model, the first face activates SIUs, which feed back activation to the PIN of that face and related faces. This then speeds up the familiarity decision for the second face. As PINs can be activated by both names and faces, it follows that associative priming for familiarity decisions on faces should be found when the name of a person (e.g., Prince Philip) is followed by the face of a related person (e.g., Queen Elizabeth). Precisely this has been found (e.g., Bruce & Valentine, 1986).
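The priming mechanism can be illustrated with a minimal interactive activation and competition simulation. The sketch below is an invented toy network, not the actual Burton and Bruce (1993) implementation: the two people, the single shared semantic unit, and all weights and parameters are illustrative assumptions. It shows the key idea that a prime face activates an SIU, whose feedback pre-activates the PIN of a related person, so the familiarity threshold is reached in fewer cycles.

```python
# Toy interactive activation and competition (IAC) network: two related
# people sharing one semantic unit. Units pass excitation along links,
# compete within pools via lateral inhibition, and decay toward zero.
# All parameter values are illustrative assumptions.

UNITS = ["FRU_philip", "FRU_elizabeth",   # face recognition units
         "PIN_philip", "PIN_elizabeth",   # person identity nodes
         "SIU_royal"]                     # shared semantic information unit

# Bidirectional excitatory links (FRU <-> PIN, PIN <-> SIU).
EXCITE = [("FRU_philip", "PIN_philip"),
          ("FRU_elizabeth", "PIN_elizabeth"),
          ("PIN_philip", "SIU_royal"),
          ("PIN_elizabeth", "SIU_royal")]

def step(act, inputs, w_exc=0.3, w_inh=0.1, decay=0.2):
    """One synchronous update cycle; activations are clamped to [0, 1]."""
    new = {}
    for u in UNITS:
        net = inputs.get(u, 0.0)
        for a, b in EXCITE:                      # excitation along links
            if u == a:
                net += w_exc * act[b]
            if u == b:
                net += w_exc * act[a]
        pool = u.split("_")[0]                   # lateral inhibition in pool
        for v in UNITS:
            if v != u and v.startswith(pool):
                net -= w_inh * act[v]
        new[u] = min(1.0, max(0.0, act[u] + net - decay * act[u]))
    return new

def cycles_to_threshold(prime=None, threshold=0.9, max_cycles=100):
    """Cycles for PIN_elizabeth to reach threshold, optionally after a prime."""
    act = {u: 0.0 for u in UNITS}
    if prime:                                    # show the prime face first
        for _ in range(10):
            act = step(act, {prime: 0.5})
    for n in range(1, max_cycles + 1):           # then show the target face
        act = step(act, {"FRU_elizabeth": 0.5})
        if act["PIN_elizabeth"] >= threshold:
            return n
    return max_cycles

unprimed = cycles_to_threshold()
primed = cycles_to_threshold(prime="FRU_philip")
print(f"cycles to familiarity decision: unprimed={unprimed}, primed={primed}")
```

In this toy network the prime drives the shared SIU, whose feedback leaves the related PIN partially activated, so the primed familiarity decision is reached in fewer cycles, mirroring the associative priming effect reported by Bruce and Valentine (1986).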

One of the differences between the interactive activation and competition model and Bruce and Young's (1986) model concerns the storage of name and autobiographical information. These kinds of information are both stored in SIUs in the Burton and Bruce (1993) model, whereas name information can only be accessed after autobiographical information in the Bruce and Young (1986) model. The fact that the amnesic patient ME (discussed earlier) could match names to faces in spite of being unable to access autobiographical information is more consistent with the Burton and Bruce (1993) model. In similar fashion, Cohen (1990) found that faces produced better recall of names than of occupations when the names were meaningful and the occupations were meaningless. This could not happen according to the Bruce and Young (1986) model, but poses no problems for the Burton and Bruce (1993) model.

The interactive activation and competition model can also be applied to the findings from patients with prosopagnosia. These findings are discussed later in the chapter.

Configurational information

When we recognise a face shown in a photograph, there are two major kinds of information we might use: (1) information about the individual features (e.g., eye colour); or (2) information about the configuration or overall arrangement of the features. Many approaches to face recognition are based on individual features. For example, police forces often make use of Identikit, or Photofit, to aid face recognition in eyewitnesses.

Photofit involves constructing a face resembling that of the criminal on a feature-by-feature basis (Figure 4.14).

Evidence that the configuration of facial features also needs to be considered was reported by Young, Hellawell, and Hay (1987). They constructed faces from photographs by combining the top halves and bottom halves of different famous faces. When the two halves were closely aligned, participants experienced great difficulty in naming the top halves. However, their performance was much better when the two halves were not closely aligned. Presumably close alignment produced a new configuration which interfered with face recognition.

Searcy and Bartlett (1996) reported convincing evidence that face processing is not solely configurational. Facial distortions in photographs were produced in two different ways:

1. Configural distortions (e.g., moving the eyes up and the mouth down).

2. Component distortions (e.g., blurring the pupils of the eyes to produce cataracts, blackening teeth, and discolouring remaining teeth).

The photographs were then presented upright or inverted, and the participants gave them grotesqueness ratings on a 7-point scale. The findings suggest that component distortions are readily detected in both upright and inverted faces, whereas configural distortions are often not detected in inverted faces (see Figure 4.15). Thus, configurational and component processing can both be used with upright faces, but the processing of inverted faces is largely limited to component processing.

Most research on face recognition has used photographs or other two-dimensional stimuli. There are at least two potential limitations of such research. First, viewing an actual three-dimensional face provides more information for the observer than does viewing a two-dimensional representation. Second, people's faces are normally mobile, registering emotional states, agreement or disagreement with what is being said, and so on. None of these dynamic changes over time is available in a photograph. The importance of these changes was shown by Bruce and Valentine (1988). Small illuminated lights were spread over a face, which was then filmed in the dark so that only the lights could be seen. Participants showed some ability to determine the sex and the identity of each face on the basis of the movements of the lights, and they were very good at identifying expressive movements (such as smiling or frowning).


Patients with prosopagnosia cannot recognise familiar faces even though they can recognise other familiar objects. This might occur simply because more precise discriminations are required to distinguish between one specific face and another specific face than to distinguish between other kinds of objects (e.g., a chair and a table). An alternative view is that there are specific processing mechanisms that are only used for face recognition, and which are not involved in object recognition.

Farah (1994a) obtained evidence that prosopagnosic patients can be good at making precise discriminations for stimuli other than faces. She studied LH, who developed prosopagnosia as a result of a car crash. LH and control participants were presented with various faces and pairs of spectacles, and were then given a recognition-memory test. LH performed at about the same level as the normal controls in terms of recognition performance for pairs of spectacles. However, LH was at a great disadvantage to the controls on the test of face recognition (see Figure 4.16).

The notion that face processing involves specific mechanisms would be strengthened if it were possible to show a double dissociation, with some patients having normal face recognition but visual agnosia for objects.
