Arthur M. Glenberg, David Havas, Raymond Becker, and Mike Rinck

It has become increasingly clear over the past few years that the symbols used by language become meaningful through grounding. For example, Glenberg and Kaschak (2002) demonstrated that some linguistic constructions are grounded in literal action, and Pecher, Zeelenberg, and Barsalou (2003), as well as Stanfield and Zwaan (2001), showed how language is grounded in perceptual and imaginal states, respectively. In this chapter, we report initial results demonstrating how language may also be grounded in the bodily states that comprise emotions. We begin by discussing the logical and theoretical reasons for supposing that language is grounded in bodily states, and then we move to a brief review of the recent work demonstrating grounding in action and perceptual states. This introductory material is followed by the report of two experiments consistent with the claim that language about emotional states is more completely understood when those states are literally embodied during comprehension.

why language must be grounded outside the linguistic system

To ask how a symbol is grounded is to ask how it becomes meaningful by mapping it onto something else that is already meaningful. One hypothesis is that linguistic symbols (e.g., words and syntactic constructions) become meaningful only when they are mapped to nonlinguistic experiences such as actions and perceptions. A second hypothesis is that linguistic symbols can be grounded in other linguistic symbols. For example, words are often defined in terms of other words: That is exactly how most dictionaries work. In contrast to what is intuitive about the dictionary view of grounding, Searle's (1980) Chinese Room argument and Harnad's (1990) symbolic merry-go-round argument provide the contrary intuition that word meaning cannot result from defining words solely in terms of other words. In Harnad's argument, he asks us to imagine traveling to a country where we do not speak the language (e.g., China), and our only resource is a dictionary written solely in that language. When we arrive, we are confronted with a sign comprised of linguistic characters. Attempting to comprehend the sign, we look up the first symbol in the dictionary, only to be confronted with a definition (Definition 1) comprised of more symbols, none of which we understand. Undaunted, we look up the definition (Definition 2) of the first symbol contained in Definition 1, only to find that the symbols in Definition 2 are also meaningless to us. We continue to look up the definitions of symbols used in other definitions, but none of the definitions provide any help. Apparently, no matter how many symbols we look up, if they are only defined in terms of other symbols, the process will not generate any meaning. That is, if we are to learn the meaning of the Chinese symbols, those symbols must be grounded in something other than additional Chinese symbols.
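Harnad's regress can be made concrete with a toy computation (our illustration, not Harnad's; the four-symbol dictionary is invented). Every lookup returns only more symbols from the same closed set, so no chain of definitions ever bottoms out, unless a single entry points outside the symbol system:

```python
# Toy model of Harnad's dictionary regress: each "definition" is just
# a list of other undefined symbols from the same closed set.
dictionary = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D", "A"],
    "D": ["A", "B"],
}

def grounded(entry, seen=None):
    """Is this entry ultimately anchored to something outside the dictionary?"""
    seen = set() if seen is None else seen
    if entry not in dictionary:   # a non-symbol: grounding achieved
        return True
    if entry in seen:             # we have circled back to a prior symbol
        return False
    seen.add(entry)
    return any(grounded(s, seen) for s in dictionary[entry])

print(grounded("A"))  # False: every chain of definitions circles back

# A single link to something non-symbolic grounds the entire network:
dictionary["D"] = ["A", "<the bodily experience of sitting in a chair>"]
print(grounded("A"))  # True
```

The point of the toy is exactly Harnad's: adding more symbol-to-symbol definitions can never produce grounding, whereas a single mapping onto nonlinguistic experience can.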

Although Harnad's argument seems compelling, traditional and contemporary theories of meaning proposed by cognitive psychologists suggest otherwise: that the meaning of undefined and arbitrary symbols arises from definitions that are themselves comprised of more undefined and arbitrary symbols.1 For example, the Collins and Loftus (1975) theory proposes that conceptual information arises from the pattern of relations among nodes in a network. Here, every node corresponds to an undefined word, and the set of nodes to which a particular node is connected corresponds to the words in the dictionary definition. Similarly, recently proposed theories based on the mathematics of high-dimensional spaces (e.g., Burgess & Lund, 1997; Landauer & Dumais, 1997) suggest that linguistic meaning can arise from defining linguistic symbols in terms of other linguistic symbols. For example, HAL (Burgess & Lund, 1997) is a computer program that combs the Internet for linguistic stimuli. It creates a matrix with both rows and columns corresponding to the words encountered. The entry in the cell defined by the intersection of a particular row and column denotes the frequency with which the two words appear together in pieces of text (e.g., word pairs, triplets, quadruplets, and so on). According to Burgess and Lund, the meaning of a word is given by a vector created by the (approximately) 70,000 numbers in the row corresponding to a particular word combined with the (approximately) 70,000 numbers in the column corresponding to the same word. That is, Burgess and Lund claim that finding enough other words with which a particular word co-occurs is sufficient to ground meaning.
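To make the HAL procedure concrete, here is a minimal sketch (our simplification, not Burgess and Lund's actual system, which used a 10-word moving window over very large corpora and roughly 70,000 word types):

```python
# HAL-style co-occurrence space: a word-by-word matrix is filled by
# sliding a window over text, weighting closer words more heavily; a
# word's "meaning" is its row vector (words that preceded it)
# concatenated with its column vector (words it preceded).

def hal_vectors(tokens, window=10):
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    matrix = [[0] * n for _ in range(n)]

    for pos, word in enumerate(tokens):
        # Look back over the preceding window; nearer words receive
        # larger weights (window ... 1), as in Burgess & Lund (1997).
        for dist in range(1, window + 1):
            if pos - dist < 0:
                break
            context = tokens[pos - dist]
            matrix[index[word]][index[context]] += window - dist + 1

    def meaning(word):
        i = index[word]
        row = matrix[i]                          # preceding context
        col = [matrix[j][i] for j in range(n)]   # following context
        return row + col

    return meaning

meaning = hal_vectors("the dog chased the cat up the tree".split())
print(meaning("dog"))  # a tiny vector here; HAL's had ~140,000 entries
```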

1 The symbols are arbitrary in the sense that there is no natural connection between the symbol and what it represents. For example, the word "chair" does not look, taste, feel, or act like a chair, and the word "eight" is not larger in any sense than the word "seven."

In fairness, many cognitive theorists recognize that during initial language acquisition cognitive symbols are associated with information derived from perception and action. Nonetheless, according to the theories, this perceptual and action information plays virtually no role in mature language use. That is, all thinking and all language comprehension (after initial perceptual processing) are based on the manipulation of arbitrary symbols. It is this claim that is questioned by many theories of embodied cognition that give a primary role to perception and action in linguistic processing.

In contrast to the dictionary-like theories of Collins and Loftus (1975), Burgess and Lund (1997), and many others (e.g., Anderson, Matessa, & Lebiere, 1997; Kintsch, 1988), recent theoretical (e.g., Barsalou, 1999; Glenberg, 1997) and empirical (Glenberg & Kaschak, 2002; Pecher et al., 2003; Stanfield & Zwaan, 2001) work within the embodiment framework has demonstrated how mature language use is grounded in bodily states of action and perception. Consider, for example, Pecher et al. (2003), who demonstrated a perceptual basis to property verification. Participants judged whether or not an object (e.g., a blender) has a particular property (e.g., loud). Pecher et al. found that when the perceptual dimension probed on the current trial (e.g., "LEAVES-rustle" probes the auditory dimension) was the same as the dimension probed on the previous trial (e.g., "BLENDER-loud" also probes the auditory dimension), responding was faster than when the perceptual dimension probed on the previous trial was different (e.g., "CRANBERRIES-tart" probes taste). Apparently, understanding a concept presented linguistically calls on perceptual experience, not just arbitrary nodes.

Zwaan and his associates (e.g., Stanfield & Zwaan, 2001) asked participants to verify that a picture (e.g., of a pencil) depicted an object mentioned in a sentence (e.g., "The pencil is in a cup"). They found that pictures matching the implied orientation of the object (a pencil depicted vertically in this case) were responded to faster than pictures of the object in an orientation that mismatched the orientation implied by the sentence. Thus, understanding the sentence appears to call on experience with real pencils and real cups and the orientations that they can take, rather than just the association of nodes representing pencils and cups.

As another example of grounding in bodily states, Glenberg and Kaschak (2002) asked each participant to judge the sensibility of sentences such as "You gave Andy the pizza" or "Andy gave you the pizza" by moving the hand from a start button to a Yes button. Location of the Yes button required a literal movement either toward the body or away from the body. Responding was faster when the hand movement was consistent with the action implied by the sentence (e.g., moving away from the body to indicate "Yes" for the sentence, "You gave Andy the pizza") than when the literal movement was inconsistent with that implied by the sentence. Apparently, understanding these action sentences called on the same neural and bodily states involved in real action. Thus, in contrast to theories that claim language symbols are grounded solely in other symbols, these results imply that understanding language calls on bodily states involved in perception, imagery, and action.

This brief overview of embodied accounts of language comprehension suggests several questions. First, does the notion of grounding language in bodily states miss the distinction between "sense" and "reference"? Philosophers such as Frege have suggested that the meaning of a symbol is composed of the objects to which the symbol refers (reference), as well as a more formal definition (sense). The work in embodiment seems to demonstrate that people think of referents upon encountering a word (e.g., Stanfield and Zwaan's participants think of an image of a pencil upon reading the word "pencil"), but how can grounding in bodily states provide definitions? Within the embodiment framework, there are alternative ways of addressing this question; here we note just one. Barsalou (1999) has developed the ideas of perceptual symbols and simulators. The perceptual symbol corresponding to an object (e.g., the perceptual symbol for a gasoline engine) is abstracted (by selective attention) from experiences with the object and retains the format of the neural coding generated by the perceptual and action experiences. Thus, the perceptual symbol for how an engine looks uses the same neural format as visual perception, whereas the symbol for the sound of the engine uses the format of audition. Perceptual symbols operate as simulators; that is, they can be modified to fit particular situations. The number of different ways in which a person can use a perceptual symbol in simulations is a measure of that person's knowledge about that object. Thus, a skilled mechanic can simulate the effects of many operations on an automobile engine and how those operations might affect the sound of the engine, its performance, and so on. Most of the rest of us can engage in only a limited number of simulations, such as how to start an automobile engine or how to add oil. Because a perceptual symbol is derived from particular experiences and maintains the format of those experiences, it corresponds, at least roughly, to the philosopher's notion of reference. In contrast, the range of simulations corresponds to the notion of "sense." Thus, the skilled mechanic, by verbally describing her simulations, can provide a detailed definition of an engine.
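As a loose illustration (ours, not Barsalou's formalism, and the engine details are invented), one can caricature a simulator as a store of modality-specific traces (roughly, reference) plus the repertoire of simulations those traces support (roughly, sense):

```python
# Caricature (ours) of a Barsalou-style perceptual symbol: modal
# traces stand in for "reference," and the set of simulations the
# symbol supports stands in for "sense."
class PerceptualSymbol:
    def __init__(self, name):
        self.name = name
        self.traces = {}       # modality -> abstracted experience
        self.simulations = {}  # situation -> simulated outcome

    def knowledge(self):
        # Knowledge of the object ~ how many distinct simulations
        # the person can run with the symbol.
        return len(self.simulations)

engine = PerceptualSymbol("gasoline engine")
engine.traces["vision"] = "block, pistons, belts"
engine.traces["audition"] = "idling rumble"
engine.simulations["start the car"] = "rumble begins"
engine.simulations["add oil"] = "dipstick reads full"
# A skilled mechanic's symbol supports many more simulations, and
# verbally describing them yields a detailed definition of an engine.
print(engine.knowledge())  # 2
```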

A second question is whether all language can be grounded in bodily (that is, perception and action) experience. The data suggest that the answer is a qualified "yes." The qualification arises because some components of language are better characterized as instructions for manipulating (through simulation) grounded representations, rather than as grounded representations themselves. Whereas there are reasons to believe that even these instructions may be grounded in experience (cf. Lakoff, 1987), our current focus is on their role in guiding simulations. For example, Kaschak and Glenberg (2000) tested the notion that verb-argument constructions guide the embodied simulation of sentences. According to construction grammar (e.g., Goldberg, 1995), verb-argument constructions (corresponding to the syntax of simple sentences) convey meanings, and it is these meanings that Kaschak and Glenberg presumed to guide the simulations. For example, the syntax of the double-object construction (subject-verb-object 1-object 2) such as "Mike handed Art the apple" is claimed to carry the meaning that the subject transfers object 2 to object 1 by means of the verb. Kaschak and Glenberg inserted into the construction frame innovative denominal verbs, such as "to crutch," that have no standard verb meaning, hence any verb meaning must be coming from the construction itself. Kaschak and Glenberg demonstrated two effects. First, people readily interpreted sentences such as "Mike crutched Art the apple" as implying that Mike transferred the apple to Art by using the crutch. Second, this meaning only arose when the object named by the verb could be used to effect the transfer; experimental participants would reject as nonsense sentences such as "Mike paper-clipped Art the apple." Apparently, the double-object syntax directs a simulation making use of perceptual symbols in which the goal is to effect transfer. If people cannot figure out how to simulate that transfer (e.g., how a paper clip can be used to transfer an apple), then the sentence is rejected as nonsense.
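The logic of this finding can be caricatured as a constraint check (a toy sketch of ours with invented affordance data, not Kaschak and Glenberg's model): the double-object frame demands a simulable transfer, and the sentence fails when the object named by the denominal verb affords none.

```python
# Toy constraint check (ours): the double-object construction demands
# that the object named by the denominal verb can effect a transfer.
AFFORDS_TRANSFER = {"crutch": True, "paper clip": False}  # invented data

def double_object_sensible(subject, verb_object, recipient, theme):
    """Interpret '<subject> <verb_object>ed <recipient> the <theme>'."""
    if AFFORDS_TRANSFER.get(verb_object, False):
        return (f"{subject} transfers the {theme} to {recipient} "
                f"by means of the {verb_object}.")
    return "Rejected as nonsense: no simulable transfer."

print(double_object_sensible("Mike", "crutch", "Art", "apple"))
print(double_object_sensible("Mike", "paper clip", "Art", "apple"))
```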

A simulation account was also developed by De Vega, Robertson, Glenberg, Kaschak, and Rinck (in press) who investigated the interpretation of temporal adverbs such as "while" and "after." They proposed that temporal adverbs are instructions to the language system regarding the order and timing of simulations. For example, "while," as in "While painting a fence the farmer whistled a tune," is an instruction to simulate the two actions as occurring at the same time. De Vega et al. tested this claim by requiring participants to read sentences describing events that (in reality) are easy or difficult to perform at the same time. Events that are difficult to perform at the same time require the same motor system (e.g., painting a fence and chopping wood), and events that are easy to perform at the same time require different motor systems (e.g., painting a fence and whistling a tune). As predicted, participants rejected as nonsense sentences describing same motor system events conjoined with the temporal adverb "while" (e.g., "While painting a fence the farmer chopped the wood"), whereas the same events were readily accepted when conjoined with the adverb "after" (e.g., "After painting a fence the farmer chopped the wood"). Apparently, these adverbs are not simply symbols that are stuck into a representational structure. Instead, they direct the participant in how to think about, that is, simulate, the perception and action details of the situation, such as whether the motor actions can actually be performed as described.
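In the same spirit, the De Vega et al. constraint can be sketched as a check on whether two events can be simulated concurrently (again a toy of ours, with invented motor-system assignments):

```python
# Toy version (ours) of the temporal-adverb constraint: "while" demands
# a concurrent simulation, which fails when the two events compete for
# the same motor system; "after" orders the simulations, so no conflict.
MOTOR_SYSTEM = {                      # invented assignments
    "painting a fence": "arm/hand",
    "chopping wood": "arm/hand",
    "whistling a tune": "mouth",
}

def sensible(adverb, event_1, event_2):
    if adverb == "while":
        return MOTOR_SYSTEM[event_1] != MOTOR_SYSTEM[event_2]
    return True  # "after": simulate sequentially

print(sensible("while", "painting a fence", "chopping wood"))     # False
print(sensible("while", "painting a fence", "whistling a tune"))  # True
print(sensible("after", "painting a fence", "chopping wood"))     # True
```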

grounding language in emotional states

There are strong connections between language and emotion (see Barsalou, Niedenthal, Barbey, and Ruppert, 2003, for a review of some of this literature from a different perspective). When reading or listening, we often find ourselves becoming sad, angry, afraid, happy, joyous, or aroused depending on the meaning of the language we are processing. It is likely that much of the pleasure we gain in reading and listening to narratives and poetry is directly related to an author's skill in producing these emotional states in us. In fact, language can be a reliable method for inducing emotional changes in experimental situations. To this end, language has been used in the form of explicit verbal instructions (e.g., Velten cards or hypnotic suggestions; see Bower, 1981), and also implicitly by having participants read newspaper reports (Johnson & Tversky, 1983). Furthermore, there is formal evidence for the prominent role of emotions in language processing. For example, Haenggi, Gernsbacher, and Bolliger (1994) demonstrated that emotional inferences can be processed with greater facility than spatial inferences. That is, participants read target sentences that matched the emotional state of a character more quickly than target sentences that matched spatial states. Participants were also faster to notice incongruities between emotional states than between spatial states. Moreover, the emotional states of readers influence their judgments of, and memory for, fictional character traits (Erber, 1991; Laird, Wagener, Halal, & Szegda, 1982).

Thus, connections between language and emotion are strong and well-documented. In addition, the effects of emotions on cognitive processes such as attention, memory, and interpretation have been studied extensively, with induced emotions as well as naturally occurring emotions and clinical emotional disorders (for reviews, see Bower, 1981; or Williams, Watts, MacLeod, & Mathews, 1997). Nonetheless, researchers have rarely addressed the effects of emotional state on higher-level language processes (for exceptions, see Bower, 1981). In this chapter, we begin to address this question: How does emotional state influence language processing?

The embodied account begins to answer this question by making a strong claim regarding the grounding of emotional language. Namely, understanding of language about emotional states requires that those emotional states be simulated, or partially induced, using the same neural and bodily mechanisms as are recruited during emotional experiences. To state this claim differently, language about emotions is grounded in emotional states of the body, and simulating those states is a prerequisite for complete and facile understanding of the language about those states.

At this point, it is worth pausing to consider just what we mean by "emotional states of the body." A major component of these bodily states is current physiology, such as the levels of various hormones, nutrients, and oxygen in the blood, as well as heart rate, vasoconstriction, configuration of the musculature, and so on. We also include brain states such as the balance in activation between the left and right frontal lobes and activation of the amygdala. Our major claim is that language about emotional states is grounded directly in this complex of bodily states rather than in abstract and arbitrary cognitive nodes.

According to the strong embodiment claim, part of understanding emotional language is getting the body into (or moving toward) the appropriate bodily state because that is what gives the words their meaning. Consequently, if bodily systems are already in (or close to) the appropriate states, then understanding should be facilitated, and if the bodily systems are in inappropriate states, those states should interfere with language understanding. More concretely, if we are reading about pleasant events, we should be faster to understand those sentences if we are in a happy state than if we are in an unhappy state. Conversely, if we are reading about unpleasant events, we should be faster to understand those sentences if we are in an unhappy state than if we are in a happy state. Note that these predictions are based on two assumptions. The first is a dimensional assumption, namely that the bodily states corresponding to happiness are farther from those corresponding to unhappiness than from those corresponding to a neutral state. The second assumption is a type of inertia. Because the body is literally a physical and biological system that cannot change states instantaneously, it will take longer to shift from a happy state to an unhappy state (states that are far apart) than to shift from a happy state to a neutral state.
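These two assumptions can be written compactly (our formalization, offered only to fix ideas; the chapter states the assumptions verbally):

$$ t_{\text{shift}}(s \to s') = k \, d(s, s'), \qquad d(\text{happy}, \text{unhappy}) > d(\text{happy}, \text{neutral}) > 0 $$

On this reading, a pleasant sentence read while smiling requires almost no shift ($d$ near zero), whereas the same sentence read while frowning requires traversing the largest distance, and that difference in shift time is what reading times should reveal.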

In seeming opposition to these predictions, Forgas and Bower (1987) demonstrated longer reading times for character descriptions corresponding in evaluative valence with participants' induced moods than for non-corresponding descriptions. However, this result may well reflect the demands of their task, which required participants to make evaluative judgments about the described characters. As Forgas and Bower speculated, this task may have encouraged associative elaboration well beyond that needed to comprehend the sentences.

To test the predictions from the strong embodiment claim, we need a means of reliably shifting the body into happy and unhappy states. Strack, Martin, and Stepper (1988) provide just such a methodology. They noted that holding a pen in one's mouth using only the teeth (and not the lips) forces a partial smile. In contrast, holding the pen using only the lips (and not the teeth) forces a partial frown.2 Strack et al. demonstrated that these facial configurations differentially affected people's felt emotions as well as their emotional assessment of stimuli: Participants rated cartoons as funnier when holding the pen in their teeth (and smiling) than when holding the pen in their lips (and frowning).

2 Note that having the face in a particular configuration is part of the bodily state corresponding to a particular emotion.

table 6.1. Examples of Sentences Used in Experiments 1 and 2

Pleasant sentences

The college president announces your name, and you proudly step onto the stage.

The cold air rushes past you as you race down the snowy slope.

Ready for a snack, you put the coins into the machine and press the buttons.

You and your lover embrace after a long separation.

You laugh as the merry-go-round picks up speed.

Unpleasant sentences

The police car rapidly pulls up behind you, siren blaring.

Your supervisor frowns as he hands you the sealed envelope.

You've made another error at shortstop, bringing the crowd to its feet.

Your debate opponent brings up a challenge you hadn't prepared for.

Your father collapses at the end of the annual road race.

This effect of facial configuration has been replicated numerous times (e.g., Berkowitz & Troccoli, 1990; Ohira & Kurono, 1993; Larsen, Kasimatis, & Frey, 1992; Soussignan, 2002).

In our experiments, we used the Strack et al. (1988) procedure to manipulate bodily state while participants read and understood sentences describing pleasant and unpleasant events. We measured how long it took to read and understand the sentences. On the basis of the strong embodiment claim, we predict an interaction between the pen condition (Teeth or Lips) and the valence of the sentence (Pleasant or Unpleasant). That is, participants should read and judge pleasant sentences more quickly when holding the pen in their teeth than when holding the pen in their lips, and the converse should be found for unpleasant sentences.

Examples of the sentences are given in Table 6.1. The 96 sentences (48 pleasant and 48 unpleasant) were based on those constructed and normed by Ira Fischler (personal communication, 2003), although we made changes in a small number of them.3 Each of the original sentences was rated by approximately 60 participants on a scale from 1 (most unpleasant) to 9 (most pleasant). The mean (and standard deviation) for the unpleasant sentences was 2.92 (.66), and the mean for the pleasant sentences was 6.50 (.58).

In the first experiment, each participant viewed a sentence on a computer screen by pressing the space bar on the computer keyboard (which also initiated a timer). The participant judged the valence of the sentence by pressing the "3" key on the keyboard, which was labeled with the letter "U," for unpleasant, or the "0" key, which was labeled with the letter "P," for pleasant. The left and right index fingers were used to make the Unpleasant and Pleasant responses, respectively. Half of the 96 participants began the experiment in the Teeth condition (holding the pen using only their teeth), and half began in the Lips condition (holding the pen using only their lips).

3 We are very grateful to Ira Fischler who provided us with the stimulus sentences.

table 6.2. Reading Times in Milliseconds and Proportion Consistent Valence Judgments (in Parentheses) from Experiment 1

[Table structure: Pen Condition (Teeth, smiling; Lips, frowning) x Sentence Valence (Pleasant, Unpleasant), reported separately for the First Half and the Second Half of the Experiment. The cell values were not preserved in this copy of the chapter.]

During the experiment, participants switched between holding the pen in their teeth and lips every 12 sentences. In each of these blocks of 12 sentences, half of the sentences were Pleasant and half Unpleasant. Participants were told that the purpose of the pen manipulation was to examine the effects of interfering with the speech articulators during reading.

Overall, the participants were highly accurate, that is, the judgments corresponded to the normative classification 96% of the time (range of 87% to 100%). Because of two worries, we decided to analyze the data including a factor of experiment half (First half or Second half). One of these worries was that the pen manipulation would become onerous and unpleasant in both the Teeth and Lips conditions as the musculature fatigued. The other worry was that the task was so simple (judging the valence of the sentences) that people would learn to make the judgment on the basis of a quick scan of a few words rather than reading and understanding the whole sentence.

The reading (and judgment) times for the two halves of the experiment are presented in Table 6.2. The means were computed only for those sentences judged correctly. Also, reading times were eliminated if they were more than 2.5 standard deviations from a participant's mean in any particular condition.
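The 2.5 standard deviation trimming rule is a conventional one; here is a sketch of the computation as we understand it (our reconstruction, applied separately to each participant's times within each condition):

```python
from statistics import mean, stdev

def trim_outliers(times, criterion=2.5):
    """Drop reaction times more than `criterion` SDs from the mean of
    one participant's times in one condition."""
    if len(times) < 2:
        return times
    m, sd = mean(times), stdev(times)
    return [t for t in times if abs(t - m) <= criterion * sd]

# Applied per participant and per Pen x Valence cell, e.g.:
# clean_rts = trim_outliers(rts_for_one_participant_one_condition)
```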

The critical interaction between Pen condition and Sentence valence was significant, F(1,95) = 5.41, MSe = 80515, p = .02. Although the triple interaction of Half of the experiment with Pen condition and Sentence condition was not significant (p = .10), it is clear from the data in Table 6.2 that the majority of the interaction comes from the first half of the experiment. Note that the form of this interaction is just what was predicted on the basis of the strong embodiment claim. Namely, reading of the pleasant sentences is 122 msec faster when holding the pen in the teeth (and smiling) than when holding the pen in the lips (and frowning), whereas reading of the unpleasant sentences is 45 msec slower when holding the pen in the teeth than when holding the pen in the lips.

Although the data are consistent with the strong embodiment claim, there is a possible problem with the experimental procedure, namely that the judgment draws the participants' attention to the valence of the sentences. Thus, a skeptic might argue that the bodily effects of the pen manipulation emerge only when people must explicitly judge emotional valence, and thus the effects are not reflective of basic understanding processes. To address this problem, we conducted a second experiment in which the task was to judge whether the sentences were easy or hard to understand. We told participants that we had written the sentences to be easy to understand, but that some difficult ones might have snuck through and that we needed their help in finding them. Thus, the great majority of their responses would be "Easy," but there might be a few "Hard" responses. The participants were never informed that the sentences differed in valence. Furthermore, taking a cue from the social psychology literature, we asked participants after the experimental session if they had ever heard of the pen procedure or had suspected that we were attempting to manipulate their moods. We eliminated the data from four of the 42 participants who answered the latter question in the affirmative.

The remaining 38 participants judged most of the sentences as easy to understand (94%, with a range of 60% to 100%). The reading time data are presented in Table 6.3. These data are based solely on sentences classified as "Easy," and reading times more than 2.5 standard deviations from the participant's mean for a particular condition were eliminated.

table 6.3. Reading Times in Milliseconds and Proportion "Easy" Judgments (in Parentheses) from Experiment 2

First Half of the Experiment

Sentence Valence
Pen Condition       Pleasant       Unpleasant
Teeth (smiling)     3442 (.93)     3496 (.94)
Lips (frowning)     3435 (.94)     3372 (.91)

Second Half of the Experiment

Sentence Valence
Pen Condition       Pleasant       Unpleasant
[Cell values for the second half were not preserved in this copy of the chapter.]

Overall, there was a significant interaction between Pen condition and Sentence valence, F(1,37) = 6.63, MSe = 103382, p = .01. However, the same interaction was significant for the judgments, F(1,37) = 4.48, MSe = .003, p = .04, indicating the possibility of some sort of tradeoff between judgments and reading speed. Because the judgments seemed to have stabilized in the second half of the experiment, we also examined the data by halves of the experiment. Considering only the first half of the experiment, the critical interaction was significant for the judgments, F(1,37) = 8.56, MSe = .002, p = .01, but the interaction was not significant for the reading times, p > .3. The opposite pattern was obtained for the second half. Namely, the critical interaction was not significant for the judgments, p > .3, but was significant for the reading times, F(1,37) = 4.89, MSe = 134316, p = .03.4 Thus, if we consider the data from both halves of the experiment or only the data from the second half, there is ample evidence for the critical interaction in the time needed to understand the sentence.
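For readers who wish to run this kind of analysis, a 2 (Pen) x 2 (Valence) repeated-measures ANOVA can be computed as below. This is a sketch, not the authors' analysis code; the data are invented placeholders in a hypothetical long format, and statsmodels' AnovaRM is our choice of tool.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one mean reading time (ms) per
# participant per cell of the 2 (Pen) x 2 (Valence) design.
# All numbers are invented placeholders, not the experiments' data.
data = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "pen":     ["teeth", "teeth", "lips", "lips"] * 3,
    "valence": ["pleasant", "unpleasant"] * 6,
    "rt":      [3442, 3496, 3435, 3372,
                3500, 3550, 3490, 3400,
                3380, 3460, 3410, 3350],
})

# One observation per subject per cell; the pen:valence row of the
# resulting table is the critical interaction.
result = AnovaRM(data, depvar="rt", subject="subject",
                 within=["pen", "valence"]).fit()
print(result)
```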

The data from the two experiments are clear and consistent: When reading and understanding sentences, judgments are facilitated when the suggested mood of the sentence is congruent with the mood induced by the pen manipulation. This finding is consistent with the predictions derived from the strong embodiment claim. As such, the data lend support to the general claim that language is grounded in bodily states, and the data lend support to the specific claim that language about emotions is grounded in emotional states literally produced by the body. Nonetheless, two issues will have to be addressed in further research.

The first issue concerns the prediction derived from the strong embodiment claim that relative to a neutral condition, both facilitation and interference should be observed. Our data might simply reflect facilitation (e.g., smiling speeds processing of pleasant sentences) with no interference for incongruent conditions (e.g., smiling may not interfere with processing of unpleasant sentences). Unfortunately, two pilot experiments designed to induce a neutral condition were failures. In the first pilot experiment, we implemented Strack et al.'s neutral condition of holding the pen in one hand rather than in the mouth. This necessitated a change in the response method from using the left and right index fingers to respond "Easy" and "Hard" respectively (as in Experiments 1 and 2), to using the right index finger to indicate "Hard" and the right middle finger to indicate "Easy."

4 Why are the effects most apparent in the first half of Experiment 1 and the second half of Experiment 2? We think that the task used in Experiment 1 (judging valence of the sentence) was very easy, and eventually participants learned to make those judgments by quickly scanning the sentences for key words, thus obviating the effect in the second half. The task used in Experiment 2 (judging if the sentence was easy or hard to understand) was more difficult (note the slower times compared to Experiment 1). Importantly, there were no surface cues as to whether a sentence should be judged easy or hard, and hence the participants were forced to carefully consider each sentence throughout the experiment.

"Hard" respectively (as in Experiments 1 and 2), to using the right index finger to indicate "Hard" and the right middle finger to indicate "Easy." With this change, however, the critical interaction was no longer significant. We suspect that the interaction disappeared because responding with the index finger was actually easier than responding with the middle finger, leading to a conflict between the labels "Hard" and "Easy" and the difficulty of actually making the response. In the second pilot experiment, we used as a neutral condition holding the pen between the knees so that the two index fingers could once again be used to make the "Easy" and "Hard" responses. We obtained the critical interaction, but to our surprise, the pen-between-the-knees condition seemed to be a super-pleasant condition and not a neutral condition. That is, for the pleasant sentences, responding was fastest in the pen-between-the-knees condition, and for the unpleasant sentences, responding was slowest in this condition!

A second issue calling for caution is that the results portrayed in Tables 6.2 and 6.3 are consistent with other accounts in addition to the strong embodiment claim. Consider, for example, Bower's (1981) theory of the relation between cognition and emotion. In Bower's theory, bodily states influence cognition by activating nodes (e.g., a happy node) that can spread activation to associated nodes representing happy and pleasant words such as, from Table 6.1, "proudly," "lover," and "embrace." Because these nodes are already activated, the corresponding words are read faster, leading to the critical interaction between bodily state and reading time.5 Although the Bower model and the embodied approach make similar predictions for the experiments reported here, there does seem to be an important difference between the accounts. Namely, in the Bower model, bodily states have an effect on cognition (including language comprehension) by activating the presumed emotion nodes which then activate other nodes corresponding to words. In this model, language understanding results from the manipulation of those other nodes. Thus, comprehension of emotional language is only indirectly grounded in the bodily states corresponding to emotions. In contrast, the strong embodiment claim is that language understanding is directly grounded in bodily states, that is, that language understanding requires the appropriate bodily states to derive meaning from the words. We are currently developing experimental procedures to differentiate between these accounts.

In conclusion, our results add an important new finding consistent with the claim that language is grounded in bodily states. Previous work has demonstrated that language is grounded in action, in perceptual states, and in images.

5 It is not clear whether this conjoining of bodily states and AAA (abstract, arbitrary, and amodal) symbol principles will be successful. Indeed, the representation of emotions as nodes in a network, just like any other type of node, has received considerable criticism from emotions researchers (cf. Williams, Watts, MacLeod, & Mathews, 1997).

These new experiments demonstrate how language about emotion-producing situations may well be grounded in bodily states of emotion.


acknowledgments

This work was supported by NSF grants BCS-0315434 and INT-0233175 to Arthur Glenberg and a grant from the German Academic Exchange Service (DAAD) to Mike Rinck. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the NSF. We thank Brianna Buntje, Sheila Simhan, Terina Yip, and Bryan Webster for fantastically efficient data collection and interesting discussion of these results. We are also very grateful to Gordon Bower whose insightful review of an initial version of this chapter resulted in a substantially improved final version. Requests for reprints may be directed to Arthur Glenberg at [email protected].


references

Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A theory of higher level cognition and its relation to visual attention. Human-Computer Interaction 12(4), 439-462.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences 22, 577-660.

Barsalou, L. W., Niedenthal, P. M., Barbey, A., & Ruppert, J. (2003). Social embodiment. In B. Ross (Ed.), The Psychology of Learning and Motivation, Vol. 43. San Diego: Academic Press.

Berkowitz, L., & Troccoli, B. T. (1990). Feelings, direction of attention, and expressed evaluations of others. Cognition and Emotion 4, 305-325.

Bower, G. H. (1981). Mood and memory. American Psychologist 36, 129-148.

Burgess, C., & Lund, K. (1997). Modeling parsing constraints with high-dimensional context space. Language and Cognitive Processes 12, 177-210.

Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review 82, 407-428.

De Vega, M., Robertson, D. A., Glenberg, A. M., Kaschak, M. P., & Rinck, M. (in press). On doing two things at once: Temporal constraints on actions in language comprehension. Memory and Cognition.

Erber, R. (1991). Affective and semantic priming: Effects of mood on category accessibility and inference. Journal of Experimental Social Psychology 27, 79-88.

Fischler, I. S. (2003, July 23). Personal communication.

Forgas, J. P., & Bower, G. H. (1987). Mood effects on person-perception judgments. Journal of Personality and Social Psychology 53, 53-60.

Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences 20, 1-55.

Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review 9, 558-565.

Goldberg, A. E. (1995). Constructions: A construction grammar approach to argument structure. Chicago: University of Chicago Press.

Haenggi, D., Gernsbacher, M. A., & Bolliger, C. M. (1994). Individual differences in situation-based inferencing during narrative text comprehension. In H. van Oostendorp & R. A. Zwaan (Eds.), Naturalistic text comprehension: Vol. LIII. Advances in discourse processing (pp. 79-96). Norwood, NJ: Ablex.

Harnad, S. (1990). The symbol grounding problem. Physica D 42, 335-346.

Johnson, J. D., & Tversky, A. (1983). Affect, generalization, and the perception of risk. Journal of Personality and Social Psychology 45, 20-31.

Kaschak, M. P., & Glenberg, A. M. (2000). Constructing meaning: The role of affordances and grammatical constructions in sentence comprehension. Journal of Memory and Language 43, 508-529.

Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review 95, 163-182.

Laird, J. D., Wagener, J. J., Halal, M., & Szegda, M. (1982). Remembering what you feel: Effects of emotion on memory. Journal of Personality and Social Psychology 42, 646-657.

Lakoff, George (1987). Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.

Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representations of knowledge. Psychological Review 104, 211-240.

Larsen, R. J., Kasimatis, M., & Frey, K. (1992). Facilitating the furrowed brow: An unobtrusive test of the facial feedback hypothesis applied to unpleasant affect. Cognition and Emotion 6(5), 321-338.

Ohira, H., & Kurono, K. (1993). Facial feedback effects on impression formation. Perceptual and Motor Skills 77(3, Pt. 2), 1251-1258.

Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2003). Verifying different-modality properties for concepts produces switching costs. Psychological Science 14, 119-124.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3, 417-457.

Soussignan, R. (2002). Duchenne smile, emotional experience, and autonomic reactivity: A test of the facial feedback hypothesis. Emotion 2(1), 52-74.

Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science 12, 153-156.

Strack, F., Martin, L. L., & Stepper, S. (1988). Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology 54, 768-777.

Williams, J. M. G., Watts, F. N., MacLeod, C., & Mathews, A. (1997). Cognitive Psychology and Emotional Disorders, Second edition. Chichester: John Wiley.
