Figure 5.6 Performance speed on a detection task as a function of target definition (conjunctive vs. single feature) and display size. Adapted from Treisman and Gelade (1980).

• In the absence of focused attention or relevant stored knowledge, features from different objects will be combined randomly, producing "illusory conjunctions".

Treisman and Gelade (1980) had previously obtained support for this theory. Their participants searched for a target in a visual display with a set or display size of between 1 and 30 items. The target was either an object defined by a combination of features (a green letter T) or a single feature (a blue letter or an S). When the target was a green letter T, all the non-targets shared one feature with it (i.e., they were either brown Ts or green Xs). The prediction was that focused attention would be needed to detect the object target (because it was defined by a combination of features), but would not be required to detect single-feature targets.

The findings were as predicted (see Figure 5.6). Set or display size had a large effect on detection speed when the target was defined by a combination or conjunction of features (i.e., a green letter T), presumably because focused attention was required. However, there was very little effect of display size when the target was defined by a single feature (i.e., a blue letter or an S).

According to feature integration theory, lack of focused attention can produce illusory conjunctions. Treisman and Schmidt (1982) confirmed this prediction. There were numerous illusory conjunctions when attention was widely distributed, but not when the stimuli were presented to focal attention. Patients with Balint's syndrome have general problems with visual attention, especially with locating visual stimuli accurately. Accordingly, it might be expected that they would be liable to illusory conjunctions. Friedman-Hill, Robertson, and Treisman (1995) studied a Balint's patient. He made a remarkably large number of illusory conjunctions, miscombining the shape of one stimulus with the colour of another.

Treisman and Sato (1990) developed feature integration theory. They argued that the degree of similarity between the target and the distractors influences visual search time. They found that visual search for an object target defined by more than one feature was typically limited to those distractors having at least one of the target's features. For example, if you were looking for a blue circle in a display containing blue triangles, red circles, and red triangles, you would ignore red triangles. This contrasts with the views of Treisman and Gelade (1980), who argued that none of the stimuli would be ignored.

Treisman (1993) put forward a more complex version of feature integration theory, in which there are four kinds of attentional selection. First, there is selection by location involving a relatively broad or narrow attention window. Second, there is selection by features. Features are divided into surface-defining features (e.g., colour; brightness; relative motion) and shape-defining features (e.g., orientation; size). Third, there is selection on the basis of object-defined locations. Fourth, there is selection at a late stage of processing which determines the object file that controls the individual's response. Thus, attentional selectivity can operate at various levels depending on the particular demands of the current task.

Guided search theory

Guided search theory was put forward by Wolfe (1998). It represents a substantial refinement of feature integration theory. There is an overall similarity, in that guided search theory assumes that visual search initially involves efficient feature-based processing, followed by less efficient search processes. However, Wolfe (1998) replaced Treisman and Gelade's (1980) assumption that the initial processing is necessarily parallel and subsequent processing is serial with the notion that processes are more or less efficient. He did so because of the diverse findings in the literature: "Results of visual search experiments run from flat to steep RT [reaction time] × set size functions with no evidence of a dichotomous division [division into two]. The continuum of search slopes does make it implausible to think that the search tasks, themselves, can be neatly classified as serial or parallel" (Wolfe, 1998, p. 20). In other words, there should be no effect of set or display size on detection times if parallel processing is used, and a substantial effect of set size if serial processing is used, but most actual findings fall between these two extremes.
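This continuum of search slopes can be illustrated with a toy calculation (the numbers below are illustrative, not Wolfe's):

```python
# Illustrative sketch: reaction time (RT) as a linear function of set size.
# A slope near 0 ms/item mimics "parallel" search, a steep slope mimics
# "serial" search, and most observed tasks fall somewhere in between.

def predicted_rt(set_size, base_ms=400, slope_ms_per_item=0):
    """RT = baseline + per-item cost * number of display items."""
    return base_ms + slope_ms_per_item * set_size

for slope in (0, 10, 25, 50):  # flat through steep, in ms per item
    rts = [predicted_rt(n, slope_ms_per_item=slope) for n in (1, 5, 15, 30)]
    print(f"{slope:2d} ms/item: {rts}")
```

With a slope of 0 ms/item the RT function is flat across display sizes, whereas at 50 ms/item it rises steeply; intermediate slopes correspond to the "more or less efficient" processes Wolfe proposed.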

According to guided search theory, the initial processing of basic features produces an activation map, in which each of the items in the visual display has its own level of activation. Suppose that someone is searching for red, horizontal targets. Feature processing would activate all red objects and all horizontal objects. Attention is then directed towards items on the basis of their level of activation, starting with those with the highest level of activation. This assumption allows us to understand why search times are longer when some of the non-targets share one or more features with the target stimuli (e.g., Duncan & Humphreys, 1989).
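The activation-map idea can be sketched as follows (a minimal toy model of my own, not Wolfe's implementation, with items represented simply as sets of features):

```python
# Toy sketch of guided search: each display item is a set of features, and
# its activation is the number of features it shares with the target
# description. Attention then visits items in descending activation order,
# so distractors sharing no target features are examined last, if at all.

def activation(item, target):
    """Count the feature overlap between an item and the target description."""
    return len(item & target)

def guided_search(display, target):
    """Return the number of items attended before the target is found."""
    ranked = sorted(display, key=lambda item: activation(item, target),
                    reverse=True)
    for visited, item in enumerate(ranked, start=1):
        if item == target:
            return visited
    return None  # target absent from the display

target = frozenset({"red", "horizontal"})
display = [frozenset({"red", "vertical"}),      # shares one target feature
           frozenset({"green", "horizontal"}),  # shares one target feature
           frozenset({"green", "vertical"}),    # shares none: lowest activation
           target]

print(guided_search(display, target))  # → 1: the target is attended first
```

Because the target matches both of its own features, it receives the highest activation and is attended immediately; non-targets sharing one feature would be visited before those sharing none, which is why such distractors lengthen search times.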

A major problem with the original version of feature integration theory is that targets in large displays are typically found faster than would be predicted. The activation-map notion provides a plausible explanation: visual search is made more efficient by ignoring stimuli that share no features with the target stimulus.

What are the basic features in visual search, and how can they be identified? According to Wolfe (1998, p. 23), the answer to the second question is as follows: "If a stimulus supports both efficient search and effortless segmentation [grouping], then it is probably safe to include it in the ranks of basic features." Wolfe (1998, p. 40) provided the following answer to the first question: "There appear to be about eight to ten basic features: colour, orientation, motion, size, curvature, depth, vernier offset [small irregularity in a line segment], gloss, and, perhaps, intersection."

Attentional engagement theory

Duncan and Humphreys (1989, 1992) put forward attentional engagement theory. This was designed in part to explain why visual search is often faster and more efficient than would be expected on the original version of feature integration theory. They made two key predictions:

• Search times will be slower when the similarity between the target and the non-targets is increased.

• Search times will be slower when there is reduced similarity among non-targets. Thus, the slowest search times are obtained when non-targets are dissimilar to each other, but similar to the target.

Evidence that visual search can be very rapid when non-targets are all the same was obtained by Humphreys, Riddoch, and Quinlan (1985). Participants detected inverted T targets against a background of Ts the right way up. Detection speed was hardly affected by the number of non-targets. According to feature integration theory, the fact that the target was defined by a combination or conjunction of features (i.e., a vertical line and a horizontal line) means that visual search should have been greatly affected by the number of non-targets.

Duncan and Humphreys (1989, 1992) made the following theoretical assumptions:

• There is an initial parallel stage of perceptual segmentation and analysis based on all items.

• There is a later stage of processing in which selected information is entered into visual short-term memory; this corresponds to selective attention.

• The speed of visual search depends on how easily the target item enters visual short-term memory.

• Items well matched to the description of the target item are most likely to be selected for visual short-term memory; thus, non-targets that are similar to the target slow the search process.

• Items that are perceptually grouped (e.g., because they are very similar) will be selected (or rejected) together for visual short-term memory. Thus, dissimilar non-targets cannot be rejected together, and this slows the search process.
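These assumptions can be sketched as a toy cost model (my own illustration, not Duncan and Humphreys' formal theory): identical non-targets form a perceptual group that is rejected as a unit, and each rejection is harder when the group resembles the target.

```python
# Toy sketch of attentional engagement theory: search cost rises with
# target/non-target similarity and falls when non-targets group together.
# Identical non-targets form one group and are rejected in a single step,
# regardless of how many items the group contains.

from collections import Counter

def similarity(a, b):
    """Feature overlap as a crude similarity measure."""
    return len(a & b)

def search_cost(display, target):
    """One rejection step per group of identical non-targets, with each
    step made costlier by that group's similarity to the target."""
    groups = Counter(item for item in display if item != target)
    return sum(1 + similarity(item, target) for item in groups)

target = frozenset({"green", "T"})
# Two distinct non-target types, each sharing a feature with the target:
mixed = [frozenset({"brown", "T"}), frozenset({"green", "X"})] * 10
# A single homogeneous non-target type sharing nothing with the target:
uniform = [frozenset({"brown", "X"})] * 20

print(search_cost(mixed, target))    # → 4: two groups, both target-like
print(search_cost(uniform, target))  # → 1: one dissimilar group, rejected at once
```

Note that the uniform display costs the same however many items it contains, echoing Humphreys, Riddoch, and Quinlan's (1985) finding that detection speed was hardly affected by the number of identical non-targets.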

In the study by Treisman and Gelade (1980), there were long search times to detect a green letter T in a display containing brown Ts and green Xs (see Figure 5.6). Treisman and Gelade (1980) argued that this occurred because of the need for focal attention to produce the necessary conjunction of features. In contrast, Duncan and Humphreys (1989, 1992) claimed that the slow performance resulted from the high similarity between the target and non-target stimuli (the latter shared one of the features of the target stimulus) and the dissimilarity among the non-target stimuli (the two different non-targets shared no features).

Humphreys and Müller (1993) produced a connectionist model based on attentional engagement theory. This model, known as SERR (SEarch via Recursive Rejection), was based on the assumption that grouping and search processes operate in a parallel fashion. Müller, Humphreys, and Donnelly (1994) compared the predictions of SERR against those of feature integration theory. The participants had to detect T-type targets as rapidly as possible, with the distractors consisting of Ts at various different orientations. In one condition, there were two or more identical targets, and the participants had to respond as soon as they detected one of them. The time taken to detect targets in this condition was faster than the fastest time taken to detect the target in another condition in which there was only a single target in the display. This finding follows from the SERR model with its emphasis on parallel processing, but is very hard for serial processing theories to explain.

Evaluation

Feature integration theory has influenced theoretical approaches to visual search in various ways. First, it is generally agreed that two successive processes are involved. Second, it is accepted that the first process is fast and efficient, whereas the second process is slower and less efficient. Third, the notion that different visual features are processed independently or separately seems attractive in view of the evidence that distinct areas of the visual cortex are specialised for processing different features (Zeki, 1993, see Chapter 2).

There were four key weaknesses with early versions of feature integration theory. First, as Wolfe (1998) pointed out, the assumption that visual search is either entirely parallel or serial is much too strong and disproved by the evidence. Second, the search for targets consisting of a conjunction or combination of features is faster than predicted by feature integration theory. Some of the factors involved are incorporated into guided search theory and attentional engagement theory. For example, search for conjunctive targets can be speeded up if non-targets can be grouped together or if non-targets share no features with targets.

Third, it was originally assumed within feature integration theory that the effect of set or display size on visual search depends mainly on the nature of the target (single feature or conjunctive feature). In fact, other factors (e.g., grouping of non-targets) also play a role.

Fourth, Treisman and Schmidt (1982) assumed that features are completely "free-floating" in the absence of focused attention. As a result, any features can combine together into illusory conjunctions. In fact, most illusory conjunctions occur between items that are close together rather than far apart (Ashby, Prinzmetal, Ivry, & Maddox, 1996). This led Ashby et al. (1996) to develop location uncertainty theory, according to which illusory conjunctions occur "because of uncertainty about the location of visual features" (p. 165).

Another issue with research on visual search concerns its relevance to the real world. As Wolfe (1998, p. 56) pointed out:

In the real world, distractors are very heterogeneous [diverse]. Stimuli exist in many size scales in a single view. Items are probably defined by conjunctions of many features. You don't get several hundred trials with the same targets and distractors... A truly satisfying model of visual search will need to account for the range of real-world visual behaviour.

Disorders of visual attention

Posner and Petersen (1990) proposed a theoretical framework within which various disorders of visual attention can be understood. They argued that three separate abilities are involved in controlling the attentional spotlight:

• Disengagement of attention from a given visual stimulus.

• Shifting of attention from one target stimulus to another.

• Engaging or locking attention on a new visual stimulus.

These three abilities are all functions of the posterior attention system. In addition, there is an anterior attention system. This is involved in co-ordinating the different aspects of visual attention, and resembles the central executive component of the working memory system (see Chapter 6). According to Posner and Petersen (1990, p. 10), there is "a hierarchy of attentional systems in which the anterior system can pass control to the posterior system when it is not occupied with processing other material."

Posner (1995) developed some of these ideas. The anterior attentional system based in the frontal lobes was regarded as controlling stimulus selection and the allocation of mental resources. The posterior attentional system is influenced by the anterior system and controls lower-level aspects of attention, such as the disengagement of attention. There is some evidence that the anterior attentional system may be more complex than was assumed by Posner (1995). For example, Stuss et al. (1999) found that damage to the left frontal lobe produced a different pattern of disturbance of attention than did damage to the right frontal lobe. These findings suggest that there may be more than one anterior attentional system.

Disengagement of attention

Posner, Walker, Friedrich, and Rafal (1984) presented cues to the locations of forthcoming targets to neglect patients. The patients generally coped fairly well with this task, even when the cue and the target were both presented to the impaired visual field. However, when the cue was presented to the unimpaired visual field and the target was presented to the impaired visual field, the patients' performance was very poor. These findings suggest that the patients found it very hard to disengage their attention from visual stimuli presented to the unimpaired side of visual space. Thus, problems with disengagement play a significant role in producing the symptoms shown by neglect patients.

Patients with neglect have suffered damage to the parietal region of the brain (Posner et al., 1984). A different kind of evidence that the parietal area is important in attention was reported by Petersen, Corbetta, Miezin, and Shulman (1994). PET scans indicated that there was much activation within the parietal area when attention shifted from one spatial location to another.

Problems with disengaging attention are also found in Balint's syndrome patients suffering from simultanagnosia. In this condition (mentioned earlier), only one object (out of two or more) can be seen at any one time, even when the objects are close together. As most of these patients have full visual fields, it seems that the attended visual object exerts a "hold" on attention that makes disengagement difficult. However, neglected stimuli are processed to some extent. Coslett and Saffran (1991) observed strong effects of semantic relatedness between two briefly presented words in a patient with simultanagnosia.

Shifting of attention

Posner, Rafal, Choate, and Vaughan (1985) looked at problems of shifting attention by studying patients suffering from progressive supranuclear palsy. Such patients have damage to the midbrain, so they find it very hard to make voluntary eye movements, especially in the vertical direction. These patients responded to visual targets, and there were sometimes cues to the locations of forthcoming targets. There was a short, intermediate, or long interval between the cue and the target. At all intervals, valid cues (cues providing accurate information about target location) speeded up responding to the targets when the targets were presented to the left or the right of the cue. However, only cues at the long interval aided responding when the targets were presented above or below the cues. Thus, the patients had difficulty in shifting their attention in the vertical direction.

Attentional deficits apparently associated with shifting of attention have been studied in patients with Balint's syndrome. These patients have difficulty in reaching for stimuli using visual guidance. Humphreys and Riddoch (1993) presented two Balint's patients with 32 circles in a display. The circles were either all the same colour, or half were one colour and the other half a different colour. The circles were either close together or spaced, and the task was to decide whether they were all the same colour. On trials where there were circles of two colours, one of the patients (SA) performed much better when the circles were close together than when they were spaced (79% vs. 62%, respectively). The other patient (SP) performed equivalently in both conditions (62% vs. 59%, respectively). Apparently some patients with Balint's syndrome (e.g., SA) find it hard to shift attention within the visual field.

Engaging attention

Rafal and Posner (1987) studied problems of engaging attention in patients with damage to the pulvinar nucleus of the thalamus. These patients were given the task of responding to visual targets that were preceded by cues. The patients responded faster when the cues were valid than when they were invalid, regardless of whether the target stimulus was presented to the same side as the brain damage or to the opposite side. However, they responded rather slowly following both kinds of cues when the target stimulus was presented to the side of the visual field opposite to that of the brain damage. According to Rafal and Posner (1987), these findings reflect a problem the patients have in engaging attention to such stimuli.

Additional evidence that the pulvinar nucleus of the thalamus is involved in controlling focused attention was obtained by LaBerge and Buchsbaum (1990). PET scans indicated increased activation in the pulvinar nucleus when participants were told to ignore a given stimulus. Thus, the pulvinar nucleus is involved in preventing attention from being focused on an unwanted stimulus as well as in directing attention to significant stimuli.

Section summary

As Posner and Petersen (1990, p. 28) pointed out, the findings indicate that "the parietal lobe first disengages attention from its present focus, then the midbrain area acts to move the index of attention to the area of the target, and the pulvinar nucleus is involved in reading out data from the indexed locations". An important implication is that the attentional system is rather complex. As Allport (1989, p. 644) expressed it, "spatial attention is a distributed function in which many functionally differentiated structures participate, rather than a function controlled uniquely by a single centre". This increased understanding of the complexities of attention has arisen in large part because of the study of brain-damaged patients.
