Cognitive Science

Strengths

The computational modelling of psychological theories provides a strong test of their adequacy, because of the need to be explicit about every theoretical assumption in a computational model. Many theories from traditional cognitive psychology have been found to be inadequate, because crucial aspects of the human information-processing system were not spelled out computationally. For example, Marr (1982) found that previous theoretical assumptions about feature detectors in visual perception were oversimplified when he began to construct programs to specify precisely how feature extraction might occur (see Chapter 2). Alternatively, it can be shown that a particular theory is unfeasible in principle, because it proposes processes that would take forever to compute. For example, early analogy models proposed computationally intractable processes to account for analogy-making, even though people complete analogies in a matter of seconds (Veale & Keane, 1998; see Chapter 15).
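To give a flavour of what "computationally intractable" means here, the following sketch counts the one-to-one mappings an exhaustive analogical matcher would have to consider. It is purely illustrative and is not drawn from the models reviewed by Veale and Keane (1998); the element counts are arbitrary.

```python
import math

# Illustrative only: an exhaustive analogical matcher that considers every
# one-to-one pairing of source and target elements must search a space that
# grows factorially with the number of elements.
def count_candidate_mappings(n_source: int, n_target: int) -> int:
    """Number of injective mappings from n_source items into n_target items."""
    if n_source > n_target:
        return 0
    return math.perm(n_target, n_source)

for n in (5, 10, 15, 20):
    print(f"{n} elements each: {count_candidate_mappings(n, n):,} candidate mappings")

# With 20 elements on each side there are already about 2.4 * 10^18 candidate
# mappings -- far too many to search exhaustively in the few seconds people
# take to understand an analogy.
```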

A second advantage of computational modelling is that it supports the development of more complex theories in cognitive psychology. In developing such theories, theorists can rapidly reach a point where the complexity of the theory makes it very hard to say what the theory might predict. If you have developed a computational model, then the predictions can be generated rapidly and easily by simply running the model with the target stimuli. In short, the model can be a significant crutch to the theorist's thinking.

A third advantage of computational modelling is the clarity it can bring to the comparison of theories. First, it is very hard to maintain differences between theories that really do not exist when both theories have been elaborated in computational models. Second, computational models have clarified the distinctions between theories and models. Two theorists commonly present statements about a phenomenon that mix theoretical proposals with specific assertions based on a model. Often, when this situation is clarified, it becomes clear that both theories agree at the computational level (see Marr, 1982, and Chapter 1), but merely differ in the added assumptions made in their specific model instantiating the theory. For example, Keane et al. (1994) showed that what appeared to be three different theories of analogy were in fact three variations on an agreed theoretical scheme.

A fourth advantage is that computational models have been proposed as general cognitive architectures that can be applied to the understanding of all cognitive functioning (see, for example, Newell, 1990; Rumelhart et al., 1986). Such widely applicable theoretical notions have emerged less often from the work of traditional cognitive psychologists and cognitive neuropsychologists. They also offer a remedy to the fragmentation that dogs psychological theorising arising from specific empirical paradigms.

Many theorists have argued that connectionism and models based on parallel-distributed processing offer the prospect of providing better accounts of human cognition than previous approaches within cognitive science (see Chapters 2, 3, and 12). Some of the potential advantages of the newer approach can be seen with reference to the following quotation from Churchland (1989, p. 100):

The brain seems to be a computer with a radically different style. For example, the brain changes as it learns, it appears to store and process information in the same places... Most obviously, the brain is a parallel machine, in which many interactions occur at the same time in many channels.

Connectionist models with their parallel-distributed processing resemble brain functioning more closely than do traditional computer models based on serial processing. Furthermore, with their modifiable connections and distributed representations, such models provide very elegant and convincing models of human learning (see Chapter 1). For instance, when some of these models are "lesioned", they produce systematic errors that parallel the behaviour of brain-damaged patients (e.g., Plaut & Shallice, 1993).
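The "lesioning" idea can be illustrated with a toy simulation: because knowledge in such models is distributed across many weighted connections, removing a random subset of connections tends to degrade performance gradually rather than all at once. The sketch below is a minimal illustration under arbitrary assumptions (random patterns, a single linear layer fitted by least squares); it is not a reimplementation of Plaut and Shallice (1993).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": random input and target patterns, with a single weight matrix
# fitted by least squares standing in for a trained connectionist model.
n_in, n_out, n_patterns = 50, 10, 20
inputs = rng.choice([0.0, 1.0], size=(n_patterns, n_in))
targets = rng.choice([0.0, 1.0], size=(n_patterns, n_out))
weights, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

def accuracy(w: np.ndarray) -> float:
    """Proportion of output units whose thresholded activation matches the target."""
    return float(np.mean((inputs @ w > 0.5) == (targets > 0.5)))

# "Lesion" the model by silencing a random fraction of its connections and
# observe how performance falls off gradually rather than collapsing at once.
for lesion in (0.0, 0.2, 0.4, 0.6):
    w = weights.copy()
    w[rng.random(w.shape) < lesion] = 0.0
    print(f"{int(lesion * 100):>2}% of connections removed -> accuracy {accuracy(w):.2f}")
```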

Limitations

Cognitive science is perhaps the fullest expression of the computational metaphor for the mind (Von Eckardt, 1993). Although some commentators refuse to accept the validity of this metaphor (Still & Costall, 1991), 40 or so years of success in cognitive psychology are hard to ignore given the practicalities of paradigm-driven science. We will not enter the fray on the computational metaphor, but will instead concentrate on the more revealing limitations that arise from within once the metaphor is accepted.

First, it can be argued that cognitive science is just "fancy" cognitive psychology. Originally, cognitive science was touted as a great bringing together of diverse disciplines like philosophy, linguistics, neuroscience, anthropology, and psychology (Gardner, 1985). However, the reality is more limited. Hardcastle (1996) argued that it is just experimental psychology "with bells and whistles". Concrete evidence for Hardcastle's view was forthcoming from an analysis of papers published in the journal Cognitive Science and the Proceedings of the Cognitive Science Society's Annual Conference (Schunn, Crowley, & Okada, 1998). Schunn et al. (1998) found that almost two-thirds of the authors of Cognitive Science articles came from either psychology or computer science departments, with the remainder coming from a sprinkling of linguistics and philosophy schools or from industry. Few published articles combined a true interdisciplinary mix; that is, most articles were straight experimental psychology or straight artificial intelligence, with few combining empirical and computational aspects. Furthermore, while many of these articles referred to papers in other disciplines, like linguistics or philosophy, these disciplines did not reciprocate by referring to work in cognitive science. Thus, it could be argued that the interdisciplinary claims for cognitive science are unfounded. The only plus to counter this argument is Schunn et al.'s (1998) finding that in recent years a rising number of articles come from "departments of cognitive science", suggesting that the area is maturing into a distinct discipline.

Second, it can be argued that computational models are rarely used to make predictions; they are produced as a prop for a theory, but have no real predictive function. For any given theory, there are a huge number of possible models (probably an infinite number), and these variations are rarely explored. To quote Gazzaniga et al. (1998, p. 102), "Unlike experimental work which by its nature is cumulative, modelling research tends to occur in isolation. There may be lots of ways to model a particular phenomenon, but less effort has been devoted to devising critical tests that pit one theory against another." The root of this complaint is that cognitive scientists develop one model of a phenomenon rather than exploring many models, which could then be distinguished by critical empirical testing. However, for some exceptions refer to Hummel and Holyoak (1997), Keane (1997), and Gentner and Markman (in press). In short, models are rarely used to make real predictions.

One of the reasons for this state of affairs may be the lack of any definite methodology for relating a computational model's behaviour to human behaviour. As Costello and Keane (2000) have pointed out, there are many levels of detail at which a model can simulate people. For example, a model can capture the direction of a difference in correct responses between two groups of people in an experiment, the specific correct and error responses of groups, general trends in response times for all response types, response times and types for specific individuals, and so on. Many models operate at the more general end of these possible parallels. This suggests that such models are, and may always be, predictively weak. In some cases, there are in-principle obstacles to them becoming more specific (Costello & Keane, 2000).

A third criticism of cognitive science is that models tend to be limited in diverse ways: they often lack neural plausibility, fail to capture the full scope of cognitive phenomena, and ignore the biological context in which cognition occurs. Connectionist models that are claimed to have neural plausibility do not really resemble the human brain. First, connectionist models typically use thousands or tens of thousands of connected units to model a cognitive task that might be performed by tens of millions of neurons in the brain. Second, there is little evidence that the pattern of connectivity between units and the learning rules (like backprop) used in connectionist models have parallels in the human brain. Third, numerous models can generally be found to "explain" any set of findings. As Carey and Milner (1994, p. 66) pointed out, "any neural net which produces a desired output from a specified input is hugely under-constrained; an infinitely large number of solutions can be found for each problem addressed." In short, these models need to be much more constrained by what has been found at the neurological level.

Computational models often fail to capture the scope of cognitive phenomena. Human cognition is influenced by a multiplicity of potentially conflicting motivational and emotional factors, many of which may be operative at the same time. Most models do not try to capture these wider aspects of cognition. For example, most language processing models tend to focus on the process of understanding a sentence or phrase—for instance, a metaphor like "surgeons are butchers"— without considering its emotional import. However, if you were a surgeon, and this was said to you by a patient, the pejorative implications of the metaphor would be your main concern (see Veale & Keane, 1994). Similar arguments have been made about the failure to capture the moral and social dimensions of cognitive behaviour (Shotter, 1991).

Computational models also tend to model cognition independently of its biological context. Norman (1980) pointed out that human functioning involves an interplay between a cognitive system (which he called the Pure Cognitive System) and a biological system (which he called the Regulatory System). Much of the activity of the Pure Cognitive System is determined by the various needs of the Regulatory System, including the need for survival, for food and water, and for protection of oneself and one's family. Cognitive science, in common with most of the rest of cognitive psychology, focuses on the Pure Cognitive System and virtually ignores the key role played by the Regulatory System.

A fourth limitation of connectionist models was identified by Ellis and Humphreys (1999), who pointed out that most computational models have been designed to simulate human performance on single tasks. That obviously limits what they can tell us about human cognition. It is also limiting in less obvious ways, as was pointed out by Ellis and Humphreys (1999, p. 623): "Double dissociations based on evidence from different tasks... cannot be captured in models that perform only single tasks. To begin to address such evidence, multi-task models are needed."

A fifth limitation of cognitive science is that it may fail to deliver on its greatest promise, namely the provision of a general unified theory of cognition to weld the fragmentary theories of cognitive psychology together. Throughout the 1990s, one of the greatest developments in cognitive science was the proposal and elaboration of unified theories of cognition like Newell's SOAR model (Newell, 1990), Anderson's (1993) ACT* model, and the more amorphous connectionist framework (Shephard, 1989). Quite apart from the practical question of whether researchers in the field actually adopt these models, there has been criticism of their fundamental potential. Cooper and Shallice (1995) have argued that SOAR is no better than the grand theories of the 1930s because it has insecure methodological foundations, an ill-specified computational/psychological theory, and inadequate empirical tests. Ironically, it has been proposed that for SOAR to work as an adequate unified theory it needs to be specified more clearly, in a language closer to that used in everyday life (Cooper et al., 1996).

Cognitive Neuroscience

Strengths

One of the strengths of the cognitive neuroscience approach is that it allows us to obtain detailed information about the brain structures involved in different kinds of cognitive processing. Techniques such as MRI and CAT scans have proved of particular value when used on patients to discover which brain areas are damaged. Previous cognitive neuropsychological research on patients with conditions such as amnesia or various language disorders was hampered by the fact that the precise areas of brain damage could only be established by postmortem examination (see Chapters 7 and 13).

Why is it useful to know precisely which brain areas are damaged in each patient? One important reason is that such information has led to more realistic views of brain organisation. For example, the earlier assumption that any given language function is located in the same brain region for everyone has been replaced by the assumption that there are great individual differences in the localisation of language functions (see Chapter 13).

As was pointed out in Chapter 1, the various neuroimaging techniques differ considerably in their temporal and spatial resolution of cognitive processes. For example, two of the most popular techniques (PET and fMRI) both have fairly poor temporal resolution but good spatial resolution, whereas event-related potentials have good temporal resolution but poor spatial resolution. As Gazzaniga et al. (1998, p. 118) pointed out, the best way of achieving the desirable goal of good temporal and spatial resolution is by combining information from different techniques: "Researchers have opted to combine the temporal resolution of evoked potentials with the spatial resolution of PET or fMRI for a better picture of the physiology and anatomy of cognition."

Clear examples of the value of good temporal resolution are to be found in attention research (see Chapter 5). It has long been known that there are important differences in the processing of attended and unattended stimuli. However, it has proved very hard to distinguish empirically between theories based on the assumption that these differences occur early in processing versus those that assume they occur only late in processing. Studies using event-related potentials have found strong evidence for early processing differences between attended and unattended stimuli in the visual and auditory modalities (see Chapter 5).

A clear example of the value of good spatial resolution comes in research on episodic and semantic memory (see Chapter 7). Most brain-damaged patients with amnesia have poor episodic and semantic memory, which suggests that they both depend on the same brain areas. However, evidence from cognitive neuroscience suggests that episodic and semantic memory depend on adjacent brain structures that are typically both damaged in amnesic patients (see Vargha-Khadem et al., 1997).

Another strength of the cognitive neuroscience approach is that it can help to demonstrate the reality of theoretical distinctions. For example, it has been argued by many theorists (e.g., Roediger, 1990) that implicit memory can be divided into perceptual and conceptual implicit memory. Support for that view has come from PET studies by Schacter et al. (1996) and by Wagner et al. (1997). In these studies, perceptual and conceptual priming tasks affected different areas of the brain.

Limitations

It is very easy to be impressed by the ever-increasing number of neuroimaging techniques available to researchers. However, it is important not to be over-impressed. As Tulving (1998, p. 275) pointed out:

The single most critical piece of equipment is still the researcher's own brain. All the equipment in the world will not help us if we do not know how to use it properly... What is badly needed now, with all those scanners whirring away, is an understanding of exactly what we are observing, and seeing, and measuring.

In most neuroimaging studies, data are collected from several individuals and then averaged. Some concern has been expressed about such averaging because of the existence of significant individual differences. This issue was illustrated by Howard (1997, p. 298) in the following hypothetical (but plausible) example: "If, for instance, some 50% of subjects recognise words in Wernicke's area and show increased rCBF [regional cerebral blood flow] there in a task, while the other 50% show no change, over the whole group there will probably be a significant increase."
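Howard's point is easy to reproduce numerically. The following sketch simulates a group in which only half the participants show any rCBF increase in a region, yet a standard one-sample t-test on the group mean typically comes out significant. The sample size, effect size, and noise level are arbitrary assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical group: half the participants show a clear rCBF increase in the
# region of interest, the other half show no change at all. All values are
# arbitrary and purely illustrative.
n_per_subgroup = 8
responders = rng.normal(loc=2.0, scale=1.0, size=n_per_subgroup)      # clear increase
non_responders = rng.normal(loc=0.0, scale=1.0, size=n_per_subgroup)  # no change
group = np.concatenate([responders, non_responders])

# Standard group-level test of the averaged change against zero.
t, p = stats.ttest_1samp(group, popmean=0.0)
print(f"mean change = {group.mean():.2f}, t = {t:.2f}, p = {p:.4f}")

# With these settings the group-level increase usually reaches conventional
# significance, even though only half the sample shows any change at all.
```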

How serious are the problems associated with averaging neuroimaging data? A reasonable answer to that question was provided by Raichle (1998, p. 115): "One only has to inspect individual human brains to appreciate that they do differ. However, general organising principles emerge that transcend these differences." Such organising principles do not necessarily apply to all the individuals tested.

It is a matter of some concern that findings obtained from neuroimaging studies sometimes seem inconsistent with those obtained by cognitive neuropsychologists with brain-damaged patients. For example, studies on amnesic patients have indicated clearly that the hippocampus is of central importance in declarative or explicit memory (see Chapter 7). In contrast, most neuroimaging studies using PET or fMRI have failed to find evidence of high levels of hippocampal activation in normals performing declarative memory tasks (see Chapter 7).

There are various reasons for differences between neuroimaging studies and studies on brain-damaged patients. First, PET and fMRI show all the areas that are active during a task, including those that are not essential to its performance. In contrast, damage to a given area of the brain will only lead to impaired performance when that area is essential for the task and when non-damaged parts of the brain have not taken over the functions of the damaged area. Second, as Knight (1998, p. 110) pointed out, there are age differences between the participants in the two types of study: "Except for a few aging and lesion studies the metabolic techniques [PET, fMRI] are limited to young normal subjects. Conversely, lesion studies typically involve older populations who now have a superimposed brain lesion."

The techniques for measuring brain activity used by cognitive neuroscientists have become progressively more sophisticated and sensitive. These advances have sometimes had the paradoxical effect of making it harder to decide which areas of the brain are most involved in specific functions. For example, Price, Wise, and Frackowiak (1996) gave their participants the simple task of deciding whether words contained an ascender (a letter such as b, d, or f rising above the body of the word). PET revealed activation in large areas of the brain, including the posterior part of the temporal lobe, the frontal lobe, and the posterior parieto-occipital junction. Such findings make it very hard to identify the brain areas crucially involved in task performance.
