Exploring brain systems


Despite many years of investment, giving up the work I had been doing in the peripheral nervous system was not that hard. I had gradually come to feel that working at that level would yield diminishing returns, similar to my sense in the 1970s about the future of invertebrate nervous systems as they were then being studied with electrophysiological and anatomical methods. By the mid-1980s, it was hard to ignore the fact that neuroscientists were shifting to what might be described as either a lower or a higher level of interests. The revolution in molecular biology and the powerful methods it provided were attracting many neuroscientists to pursue the organization of the nervous systems of worms, fruit flies, and mice at the molecular genetic level. At the same time, brain imaging techniques based on computer assisted tomography (CAT) and positron emission tomography (PET) were giving birth to a new field that had been dubbed cognitive neuroscience, defined as a wedding of psychology with these and other neuroscientific methods for studying human brain function. If I simply kept going along the path I had been following, I would be stuck somewhere in the middle. Because molecular reductionism had never been appealing to me, the direction I needed to go in seemed inevitably upward.

The growing sense that I should move on to study some aspect of the brain was painfully underscored during a talk I gave at the Cold Spring Harbor Laboratories in the mid-1980s. Three of us neuroscientists had been invited to address the Lab's Winter Meeting, a session attended by the cadre of molecular biologists at the lab that Jim Watson had turned into a major intellectual center since he had taken over as director in 1968. These generally young and frighteningly bright molecular biologists were also looking for new problems to pursue, and they wanted to know the latest developments in neuroscience and how they might tap into them using the new molecular genetic techniques. Chuck Stevens, a synaptic physiologist, spoke first on synaptic noise (the molecular effects of individual transmitter molecules interacting with transmitter receptors discussed in Chapter 3). I followed with a talk on the work we had been doing on synaptic development and maintenance in the autonomic nervous system, and David Hubel ended the meeting with a discussion of the work he and Torsten Wiesel had done in the visual system. As Hubel mounted the stage after the end of my presentation, he turned and said pointedly to the audience, "And now for the real story!" My face must have reddened as the audience tittered at the gratuitous insult, but his point was not lost on me.

To pursue any sort of study at a "higher" level of the nervous system, I needed to define a worthwhile problem in the brain, which was not easy. It might seem odd, but after nearly 20 years as a practicing neuroscientist doing what was considered important research and teaching neurobiology to medical students, graduate students, and undergraduates, I actually knew very little about the brain. I had been taught brain neuroanatomy and pathology in medical school, but I soon forgot this information when it proved to be of no great value during my clinical years as a general surgical house officer, or during the two years I put in as a general practitioner in the Peace Corps. The brain's anatomy is enormously complicated and not at all logical. To make matters worse, neurophysiologists (which I was) generally looked down on neuroanatomy as a field, and the work I had been doing since 1973 had provided little reason to delve into issues that concerned the brain. Although I had learned a lot of information by osmosis over the years, absorbing the relevant detail—Figure 6.1 shows only a few elements of brain structure—requires the sort of dedication that comes from the daily demands of clinical practice in neurology or neurosurgery, or a research career specifically directed at one or more brain systems. As a result, I had managed to avoid all but the most superficial knowledge of brain organization, and I would have had trouble answering the questions on the anatomical part of the exam that we routinely gave the first-year medical students.

If I was going to pursue research on some aspect of the brain, it was clear that I would need a good deal of remediation, and the first order of business was to find a good teacher. I was especially lucky in 1987 when Anthony LaMantia, a newly minted neuroanatomist who had just finished his Ph.D. with Pasko Rakic at Yale, got in touch with me about joining the lab as a postdoctoral fellow. Rakic, who had trained with the people who had taught me neuroanatomy and neuropathology at Harvard, was without much question the most accomplished and imaginative neuroanatomist in the country. I had closely followed his work on the development of the primate brain and very much admired what he had done. His knowledge and talent rubbed off on his trainees, and given my new inclination, LaMantia's matriculation in the lab the following year was a godsend. I learned far more from him during the next few years than he learned from me.

Figure 6.1 The major surface features of the human brain in lateral (A) and medial (B) views. (The medial view is of the right cerebral hemisphere as seen from the midline with the left hemisphere removed.) The labeled structures represent only a few of the surface landmarks that guide neuroscientists working on this complex structure; the internal organization of the brain is far more complex. (After Purves, Augustine, et al., 2008)

Relearning brain anatomy, however, was only preliminary to figuring out a good problem to explore. Typically, investigators extend their research in familiar directions, making an educated guess about an interesting tangent. This is what Anthony and I did, ultimately deciding to tackle a problem in the olfactory system. By 1988, the work on monitoring synapses over time in the peripheral nervous system was winding down for me (although not for Jeff Lichtman, who was just beginning to look directly at synaptic competition on developing muscle fibers in more sophisticated ways). LaMantia and I would have liked to do something similar in some region of the living brain, but there was then no technical way to do so. So we settled on what we thought was the next best thing: monitoring the development and maintenance of brain modules (Figure 6.2).

Figure 6.2 Examples of the modular units found in the mammalian brain. A) The striped pattern of ocular dominance columns in the visual cortex of a rhesus monkey. B) The units—whimsically called blobs—in another region of the visual cortex of monkeys. C) The units called barrels in somatic sensory cortex of rats and mice. D) The units called glomeruli in the olfactory bulb of all mammals. In each case, the units are a millimeter or less in width or diameter, and are revealed by a staining technique that enables them to be seen when looking down on the surface of the brain with a low-power microscope. (Purves, D., D. Riddle, and A. LaMantia. "Iterated Patterns of Brain Circuitry (or How the Cortex Gets Its Spots)." Reprinted from Trends in Neuroscience 15 (1992): 362-368 with permission from Elsevier.)

Applying the term modules to the brain has always been problematic. Many psychologists and mind-brain philosophers use it to refer to the idea that different brain regions are dedicated to particular functions. This concept needs to be carefully qualified because although brain structures and cortical regions are specialized to perform some range of functions, all these structures and regions interact directly or indirectly; even when the interactions are indirect, a surprisingly small number of synaptic linkages connect any one brain region and another. In contrast, neurobiologists use the term modules to refer more specifically to small repeating units within specialized brain regions that comprise local groups of functionally related neurons and their connections (Figure 6.2). These units occur in the brains of many mammals, and their prominence had made them a focus of interest and speculation about cortical function since the 1950s. Spanish neuroanatomist Rafael Lorente de Nó, a protégé of Cajal, first noted in the 1920s that many regions of the cerebral cortex comprise iterated elementary units. ("Cortex" refers to the layer of gray matter that covers the cerebral hemisphere, and much of the higher-order neural processing in the brain occurs in the cortical circuitry; see Figure 6.1.) This cortical modularity remained largely unexplored until the late 1950s, when electrophysiological experiments indicated an arrangement of repeating units in the brains of cats. Vernon Mountcastle, a leading neurophysiologist who spent his career at Johns Hopkins, reported in 1957 that vertical microelectrode penetrations made in the cortical region of the cat brain that processes mechanical stimulation of the body surface encountered neurons that all responded to the same type of stimulus presented at the same body location.
When Mountcastle moved the electrode to nearby locations in this brain area, he again found similar responses of neurons located along a vertical track down through the cortex, although the functional characteristics of the nerve cells were often different from the properties of the neurons along the first track (for example, the nerve cells along one vertical track might respond to touch, and those along another track respond to pressure). A few years after Mountcastle's discovery, Hubel and Wiesel (who were in Kuffler's lab at Johns Hopkins in 1957 and were well aware of this work) discovered a similar arrangement in the visual cortex of the cat, and later in the visual cortex of monkeys. These observations, along with evidence in other cortical regions, led Mountcastle and a number of other investigators to speculate that these repeating units represented a fundamental feature of the mammalian cerebral cortex that might be relevant to brain functions that remained poorly understood, including cognitive abilities and even consciousness. The role, if any, of these iterated patterns for brain function remains uncertain. However, the prevalence of these modules and the ongoing interest in their role gave LaMantia and me a way to assess the stability of brain organization monitored over weeks or months.

The modular units we chose to look at first were the glomeruli in the olfactory bulb of the mouse (see Figure 6.2D). These weren't the most interesting or most talked-about cortical patterns—that prize went to the ocular dominance columns in the visual cortex (see Figure 6.2A). However, glomeruli were practical targets to begin with because mice are cheap (a couple of dollars apiece then, compared with several hundred dollars for a cat and closer to a thousand dollars for a monkey); it was clear we would have to use a lot of animals to work out methods for exposing the brain, staining the units with a nontoxic dye, and repeating the procedure to examine the same region weeks later. We succeeded in doing this during the next year or so, asking whether these units in the mouse brain all developed at the same time as the animals grew up, or whether new units were added among the preexisting ones. No one was waiting impatiently for the answers to these questions about the olfactory system, however, and our finding that some units are demonstrably plugged in among the existing ones as mice mature did not elicit much excitement. The focus of interest in cortical modularity was the visual system, and this was the part of the brain that had stimulated the most ardent debates about the role of cortical modularity.

As a graduate student in Rakic's lab, Anthony had plenty of experience working with rhesus monkeys, so we turned next to the monkey visual cortex. For a variety of technical reasons, it was impractical to do repeated monitoring in the same monkey as we had done while looking at modules in the olfactory system of the mouse. So we again settled for the next best thing. Given what we had found in the mouse olfactory bulb, it seemed reasonable to look at the overall number of modular units in the visual cortex shortly after birth and in maturity in different monkeys; if the average numbers were significantly different, we could assume that units had been added as monkeys matured, much as we had documented in the olfactory brain of the mouse. The easiest units to look at in the monkey visual cortex were the so-called blobs shown in Figure 6.2B. These units had not attracted the same attention as ocular dominance columns, but they were discrete and could be easily counted (ocular dominance columns form more complex patterns and would have been hard to assess quantitatively; see Figure 6.2A). In 1990, LaMantia and I began working to determine the number of blobs present in the primary visual cortex of newborn monkeys compared to the number in adults.

The project was problematic from the outset. Although I had worked on a lot of different species over the years, including small monkeys, adult rhesus monkeys are large and nasty, and dealing with them is distinctly unpleasant. The expense, the character of the animals, and the knowledge that the project was only a stepping stone to a more direct approach made us hurry along and eventually publish a wrong conclusion. Based on the first few animals in which we counted blobs at birth and in maturity, it seemed reasonably clear that these units were being added. Anxious to stake this claim in the monkey visual system and convinced that the results we had seen in the mouse olfactory bulb indicated a general rule, we went ahead and published a short paper to that effect. However, when we completed the study with a larger complement of monkeys, there was no statistically significant difference in the initial and mature number of blobs in the visual cortex. We corrected our mistake in the full report of the project, and no one seemed to have paid much attention to our error, but I realized that I had pushed too hard in the interests of being recognized as a player in my newly adopted field and the community of brain scientists. This minor fiasco left me considerably less confident about making the transition from the peripheral nervous system to the brain. It also left me with the need to determine another research direction. To judge from our work on blobs and other evidence we should have paid more attention to, modular units in the visual cortex seemed pretty stable.

While this was going on, my scientific and personal life changed in another important way. In 1990, I had accepted an offer from Duke University to start up a Department of Neurobiology there, and Shannon and I and our younger daughter had moved to North Carolina (which is where LaMantia and I carried out the work on monkey blobs). Most universities wanted to follow the example Kuffler had set at Harvard in 1966 by forming departments of neurobiology; many had done so as neuroscience, and the funding for it, grew rapidly in the 1970s and 1980s. But starting up a new department meant committing a great deal of money and overcoming the political opposition of existing departments that did not want to give up any of their own resources or clout. As a result, most places had made the sort of compromise Max Cowan had engineered at Washington University, marrying neurobiology to an existing department of anatomy or physiology. However, Duke had raised the large sum required to hire and set up about a dozen new faculty and had built a new building to house the department. This largesse coupled with the quality of the university and its ambitions presented an opportunity that was hard to turn down, even though it promised some administrative work that I had always shunned.

The move turned out to be significant in many ways, almost all of them positive. My less-than-robust mental state when I accepted the job at Duke benefited greatly from the change of scene and the challenge of starting up the new department. I had been at Washington University nearly 17 years, and my crankiness at the end of that time, the problematic relationship with Lichtman, and my desire for a new scientific start were all resolved in one fell swoop. The move was also a big plus for Shannon. Shannon (known as Shannon Ravenel in the publishing world) had been a successful young editor with Houghton Mifflin in Boston when we married in 1968, but she had given up her job when we moved to London in 1971. During our first few years in St. Louis, she had taken on a series of minor editorial jobs to make ends meet that were as demeaning to her as it would have been for me to teach junior high health science at that point in my career. Her professional situation had improved in 1977 when Houghton Mifflin asked her to become the series editor of their annual anthology Best American Short Stories, a job she could do in St. Louis that entailed reading and selecting stories from all the short fiction published in American and Canadian periodicals each year. Shannon's situation changed for the better again in 1982 when her friend and mentor Louis Rubin asked her if she would be interested in starting a literary publishing company in Chapel Hill, where he was then professor of English at the University of North Carolina. She agreed and had been exercising her role in this new venture from St. Louis. But as Algonquin Books of Chapel Hill grew more successful, this arrangement had become increasingly awkward. My move to Duke (only 10 miles from Chapel Hill) solved problems for both of us. (Workman Publishing Co. in New York purchased Algonquin Books in 1989, and it continues to flourish in Chapel Hill.) 
The administration at Washington University made no particular effort to keep me there (whether because they realized I was going to go anyway or because they didn't really care), and we arrived in North Carolina in summer 1990.

The first of several new postdoctoral fellows to join my new lab at Duke, in addition to LaMantia, was David Riddle, a recent Ph.D. from the University of Michigan. The new direction that seemed most attractive involved yet another region of the brain that had long been of interest: the somatic sensory system that Sherrington, Mountcastle, and many others had studied. This region of the sensory brain is responsible for processing the mechanical information arising from the variety of receptor organs in skin and subcutaneous tissues that initiate the sensations we experience as touch, pressure, and vibration (pain and temperature sensations entail a related but largely separate system). Although not as thoroughly plowed as the visual system (and intrinsically less interesting to most people), the somatic sensory system had some advantages for us. The major attraction was the visibility of a "map" of the animal's body surface in the relevant part of the cortex (see Figures 6.2C and 6.3). Although this map of the body surface is apparent in the brains of many species, including primates, it is especially easy to see in rats and mice. Tom Woolsey, one of my colleagues in the Department of Anatomy and Neurobiology at Washington University, discovered the visible body map in rodents in 1970 when he was working with Hendrik van der Loos at Johns Hopkins, shortly after graduating from medical school there. As a result of their shape, these units were called barrels (only the top of the barrel is apparent in Figures 6.2C and 6.3A) and had been much studied ever since. Each barrel in the cortex corresponds to a richly innervated peripheral sensory structure, such as a particular whisker on the animal's face or a digital pad on one of the paws. Barrels are thus processing centers for sensory information arising from the corresponding peripheral structure, and can be made visible because of their higher rate of neural activity relative to the surrounding cortex.
The higher level of neural activity in barrels means a higher rate of cellular metabolism, which causes barrels to preferentially take up reagents, such as stains for mitochondrial enzymes, that reflect nerve cell metabolism.

Figure 6.3 The somatic sensory cortex in the rat brain, made visible by activity-dependent enzyme staining. A) The map of the rat's body that is apparent when looking down on the flattened surface of this region of the animal's cerebral cortex (a similar map is present in each cerebral hemisphere). Each of the darkly stained elements (the barrels) corresponds to a densely innervated peripheral structure such as a facial whisker or one of the pads on the forepaws or hind paws. B) Tracing of the map in (A), indicating the corresponding body parts; the color-coding shows the relative level of neural (metabolic) activity in each module, with yellow signifying higher activity. The red bar indicates 2 millimeters. (From Riddle, et al., 1993)

Riddle and I (and, eventually, Gabriel Gutierrez) wanted to see if the regions of sensory cortex that experienced more neural activity during maturation captured more cortical area than less active brain regions. We could explore this question by measuring the area occupied by different components of the somatic sensory map at different ages, asking whether the more metabolically active areas grew faster (see Figure 6.3B). If we could establish this correlation, it would imply that the neural activity generated by an animal's experience in life was being translated into the allocation of more cortical area for processing the relevant information (showing that neural activity influences cortical connectivity and the amount of cortex devoted to specific tasks). This turned out to be the case: The more active cortical regions expanded relatively more during maturation than less active ones. But having established this point, it was not clear how to go forward in a direction that would warrant a further effort along these lines.
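The logic of that comparison can be sketched in a few lines of code. The numbers below are invented for illustration (they are not the published measurements); the point is the analysis itself: compute each module's relative expansion between a young and a mature age, then ask whether the more metabolically active modules expanded more.

```python
# A hedged sketch of the comparison described above. The module names,
# activity levels, and areas are hypothetical placeholders.

# (module, relative_activity, area_young_mm2, area_mature_mm2)
modules = [
    ("whisker barrel", 0.9, 0.10, 0.19),
    ("forepaw pad",    0.6, 0.08, 0.13),
    ("trunk region",   0.3, 0.12, 0.15),
]

# Relative expansion of each module between the two ages.
expansion = {name: mature / young for name, _, young, mature in modules}

# Rank modules by metabolic activity and by growth; matching orders are
# the qualitative signature of activity-dependent allocation of cortex.
by_activity = sorted(modules, key=lambda m: m[1], reverse=True)
by_growth = sorted(modules, key=lambda m: expansion[m[0]], reverse=True)
same_order = [m[0] for m in by_activity] == [m[0] for m in by_growth]
print(same_order)  # True for these illustrative numbers
```

With real data one would of course use many animals per age and a proper statistical test rather than a rank comparison, but the question asked is the same.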

By the early 1990s, I had spent about five years working on these various projects in the brain, felt more knowledgeable about the issues, and had finally acquired a reasonable working knowledge of brain anatomy. To show my willingness to participate in the grunt work of the new department, I was teaching the Duke med students neuroanatomy each spring. And although I was certainly no expert, I no longer embarrassed myself when answering questions from the students and could easily have passed the first-year exam that I had imagined failing unceremoniously a few years before. Work on the growth and organization of the cortex as a function of activity continued, and as much from scientific boredom as from any clear goal, I began thinking about visual perception, fiddling around with some small projects that seemed interesting but minor asides to the mainstream neuroscience that was plodding along in the lab.

Perception—what we actually see, hear, feel, smell, or taste—is generally thought of as the end product of sensory processing, eventually leading to appropriate motor or other behavior. But perception is far more complicated than this garden-variety interpretation. For one thing, we are quite unaware of the consequences of most of the sensory processing that goes on in our brains. For example, think of the sensory information the brain must process to keep your car on the road when you're driving along and focusing on other concerns. We obviously see perfectly well when our minds are otherwise engaged, but we are not aware of seeing in the usual sense. Why, then, do we ever need to be aware of sensory information? Furthermore, what we see or otherwise experience through the senses rarely tallies with corresponding physical measurements. Why are these discrepancies so rampant? And what do these discrepancies have to do with the longstanding philosophical inquiry into the question of how we can know the world through our senses in the first place? After I had begun to work on sensory systems, these and other questions about perception kept intruding and were growing harder to ignore. They are, after all, pretty interesting.

Similar to most mainstream neuroscientists, I was leery of devoting much time to questions that are generally looked down on as belonging to psychology or, worse yet, philosophy. I had first gotten a sense of this bias as a postdoc in Nicholls's lab circa 1970, when we had lunch every day with the Hubel and Wiesel lab. Because they were working on vision, Hubel and Wiesel were familiar with many of the controversies and issues in visual perception. The only psychologists I remember them taking seriously, however, were people such as Leo Hurvich and Dorothea Jameson, who devoted their careers to painstaking psychophysical documentation of people's perceptions of brightness and color, and models of how these phenomena might be explained. When less rigorous psychologists came up in conversation, Hubel would refer to them as "chuckleheads," a term he used often (and did not limit to psychologists). Likewise, the rest of my mentors and colleagues at Harvard, University College, and later Washington University didn't waste much thought on psychology or its practitioners; this sort of work was deemed irrelevant to the rapid progress of the reductionist neuroscience that nearly all of us were doing. Psychology as science was considered not up to par, and philosophical questions were simply nonstarters.

Some basis exists for this general lack of enthusiasm. Even though I have directed a center for cognitive neuroscience for the last six years ("cognitive neuroscience" being a more fashionable name for much of the territory covered by modern psychology), many psychologists do seem to be a bit chuckleheaded. This failing is not through any deficiency of native intelligence, but arises from the difficulty we all have in transcending the tradition in which we were trained. In science, as with anything else, we tend to run with the pack. For decades, psychologists had been mired in gestalt or behaviorist theories and their residua, and had not run a very good race. When physicists and chemists referred to biologists before sufficient cellular and molecular information hardened the field during the twentieth century, they no doubt lodged the same complaint about their relative "soft-headedness." These biases notwithstanding, perception and the psychological and philosophical issues involved were intellectual quicksand: Fascination with the relationship between the real world and what we end up perceiving was getting me more and more deeply stuck.

I undertook the first of several miniprojects on perceptual issues in 1994 with another postdoc, Len White, and I think we both regarded it simply as an amusing diversion from the especially tedious project we were conducting. Len was a superbly trained neuroanatomist who had recently earned his doctorate with Joel Price, one of Cowan's original hires at Washington University. We had been looking at the neural basis of right- and left-handedness by laboriously measuring the cortical hand region in the two hemispheres of human brains. Based on the effect of activity on the allocation of brain space that Riddle, Gutierrez, and I had seen in rodents, we thought that human right-handers would very likely have more cortex devoted to that hand in the left hemisphere, where the right hand is represented, and conversely for left-handers. Thus, White and I were in the process of measuring this region in hundreds of microscopical sections taken from brains that had been removed during autopsy.

People are not just right- or left-handed—they are also right- or left-footed and, interestingly, right- or left-eyed. To leaven the load of measuring the right- and left-hand regions in what ended up being more than 60 human brains, we started thinking about right- and left-eyedness. The minimal question we asked about perception was whether people who were either right-eyed or left-eyed when sighting with one eye (such as aiming a rifle) routinely expressed this preference in viewing the world with both eyes. We covered a large panoramic window in the Duke Neurobiology building with black paper into which we had cut about a hundred holes the diameter of a tennis ball. We then asked subjects to simply wander around the room and look at the scene outside through the holes, which they would necessarily have to do using one eye or the other. This setup mimicked the everyday situation in which we look at the objects in a scene that lies beyond occluding objects in the foreground (if you look around the room in which you are reading this, there will be many examples). As subjects looked at the outside world through the holes from a meter or two away, we monitored whether they used the right or left eye to do so, and whether the eye they used matched the eyedness they showed in a standard monocular sighting task. It did match, although as far as I know, no one paid attention to the short paper that we published on the topic. However, doing this work and thinking about the issues involved was more fun than measuring the hand region in human brains (which, as it turned out, showed no significant difference between the cortical space devoted to the right and left hand in humans).

One thing leads to another in science, and our little eye project got everyone in the lab interested in perception, at least to some degree. It also raised eyebrows among my colleagues in the Department of Neurobiology. When they walked by and saw the papered-over window with people wandering around looking out through little holes, it was apparent that some weird things were beginning to occur in my lab. The faculty I had recruited to the department seemed either mildly bewildered or amused at the apparent flakiness of what we were doing, which was very far from neurobiology as they understood it. Instead of leading the troops into battle, the general was apparently playing tiddlywinks.

Another project in perception that we undertook at about the same time was just as peculiar but less trivial, and accelerated my transition (colleagues might have thought "downward slide") toward a focus on perception. Another postdoc, Tim Andrews, had received his degree in the United Kingdom working on trophic interactions and neural development, and had come expecting to work with me on some related issue in the central nervous system. But when he arrived, the quicksand phenomenon affected him as well, and he became the first postdoc to work primarily on perception. (Andrews continues to work on visual perception as a member of the psychology faculty of York University in the United Kingdom.) The eyedness project uncovered a lot of interesting literature that I had never come across, including several papers by Charles Sherrington describing some little-known experiments on vision that he carried out in the early 1900s. As already mentioned, Sherrington was one of the pioneers of modern neurophysiology and the mentor of John Eccles, who was, in turn, the mentor of Kuffler and Katz. The main body of Sherrington's highly influential work was on motor reflexes, and one of his principal findings was that actions were always routed through a "final common pathway." This meant that the output of all the neural processing that goes on in the motor regions of the brain ultimately converges onto the spinal motor neurons that innervate skeletal muscles, which generate motor behavior (see Figure 1.3A). It was therefore natural for him to ask whether a similar principle might apply to sensory processing. Might all sensory processing in the brain likewise be funneled into a final common pathway, which would lead to perception in a given modality such as vision or audition?

Sherrington recognized that the sensory nervous system provided at least one good venue for addressing this question, namely the processing carried out by the neurons in the visual system that are related to the right and left eyes. He also knew, as we all do from subjective experience, that the scenes we end up seeing are seamless combinations of the information the left and right eyes process individually, a combination that is experienced as if we were viewing the world with a single eye in the middle of the face (this subjective sense is referred to as cyclopean vision). Because the line of sight of each eye is directed at nearby objects from a different angle, the left and right eyes generate quite different images of the world. To convince yourself of this, simply hold up any three-dimensional object at reading distance and compare what you see looking at it with one eye and then the other; the left-eye view enables you to see parts of the left side of the object that the right-eye view does not include, and the right-eye view enables you to see parts on the right side that are hidden from the left eye. How these two views are integrated was an especially challenging question for Sherrington and remains just as challenging today.

Sherrington used the initially separate processing of visual information by each eye to test his idea of a final common sensory pathway, taking advantage of the perceptual phenomenon called "flicker-fusion." The flicker-fusion frequency is the rate at which a light needs to be turned on and off before the off periods are no longer noticed and the light appears to be always on. The cycling rate at which this fusion happens in humans is about 60 times a second, a fact that matters in lighting, movies, and video. The room lights that we see as being "on" are actually going on and off 120 times a second as a result of the alternating current used in the U.S. power grid, and movies in the early decades of the twentieth century flickered because the stills were presented at less than the flicker-fusion frequency. Sherrington surmised that if there were a final common pathway for vision, then the flicker-fusion frequency when the two eyes are synchronously stimulated ought to be different from the frequency observed when the two eyes are stimulated alternately (that is, one eye experiencing light while the other experiences dark; Figure 6.4). If the information from the two eyes is brought together in a final common pathway in the visual brain, the combined asynchronous left and right eye stimulation should be perceived as continuous light at roughly half the normal flicker-fusion frequency (see Figure 6.4B). However, Sherrington found that the on-off rate at which a flashing light becomes steady is virtually identical in the two circumstances. Based on this observation (which Andrews and I confirmed), Sherrington concluded rather despairingly that the two retinal images must be processed independently in the brain and that their union in perception must be "psychical" instead of physiological, thus lying beyond his ability (or interest) to pursue. Given this outcome, Sherrington returned to studies of the motor system and never worked on vision again.

Figure 6.4 A modern version of Sherrington's experiment, in which the perception elicited by synchronous and asynchronous stimulation of the two eyes with flashes of light is compared. A) A computer triggers independent light sources whose relationship can be precisely controlled; one series of flashes is seen by only the left eye, and the other series by only the right. B) Diagram illustrating synchronous versus asynchronous stimulation of the left and right eyes. The fact that observers report the same flicker-fusion frequency whether the two eyes are stimulated synchronously or asynchronously presents a deep perceptual puzzle. (After Andrews et al., 1996)
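The logic of Sherrington's prediction can be sketched in a short simulation. This is a toy illustration of the reasoning, not part of the original experiment; the square-wave signal model, the summing "final common pathway," and the function names are my own assumptions:

```python
def eye_signal(t, freq_hz, phase=0.0):
    """Square-wave flash train: the light is on for the first half of each cycle."""
    cycle = (t * freq_hz + phase) % 1.0
    return 1 if cycle < 0.5 else 0

def combined_input(t, freq_hz, asynchronous):
    """Hypothetical final common pathway: simply sum the two eyes' inputs.
    Asynchronous stimulation shifts the right eye's train by half a cycle."""
    left = eye_signal(t, freq_hz, 0.0)
    right = eye_signal(t, freq_hz, 0.5 if asynchronous else 0.0)
    return left + right

# Sample one full cycle of a slow (well below fusion) 10 Hz flash train.
times = [i / 1000.0 for i in range(100)]  # 0 to 100 ms in 1 ms steps
sync_levels = {combined_input(t, 10, asynchronous=False) for t in times}
async_levels = {combined_input(t, 10, asynchronous=True) for t in times}

print(sync_levels)   # {0, 2}: the summed input still flickers fully on and off
print(async_levels)  # {1}: the summed input never goes dark
```

If the eyes' signals really converged before perception, the asynchronous case would deliver an unbroken input, so flicker should fuse at roughly half the usual frequency. Sherrington's finding that the fusion frequency is essentially the same in both conditions is exactly what this simple convergence model fails to predict.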

For better or for worse, Andrews and I and other students and fellows in the lab continued down this path during the next couple of years, carrying out a series of projects on visual perception that examined other odd phenomena, such as the wagon wheel illusion in continuous light, the rate of disappearance of the images generated by retinal blood vessels, the strange way we perceive a rotating wire-frame cube, and the rivalry between percepts that occurs when one stares long enough at a pattern of vertical and horizontal stripes. All this time, the lab was conducting conventional projects on handedness, on the way cortical organization was affected by the prevalence of differently oriented contours in natural scenes, and on other unfinished business in mainstream neurobiology. The reality, however, was an ever-greater interest in perception and less devotion to issues of brain structure and function using the sorts of electrophysiological and anatomical tools I was familiar with.

The tipping point came in 1996. By then, I was in my late 50s, and after four or five years of puttering around with how activity affects brain organization, I hadn't stumbled across anything that was deeply exciting. With perhaps 10 or 15 good working years left, I began to think that I should spend all my remaining time working on perception. I had learned enough about the brain and the visual system to have a strong sense that the attempt to explain perception in terms of a logical hierarchy of neuronal processing, in which properties of visual neurons at one stage determine those at the next, was stuck in some fundamental way. Hubel and Wiesel had set out toward this goal shortly before I met them as a med student in 1961, and we budding neuroscientists in the early 1970s thought it would soon be reached. But despite an effort spanning 40 years in which the properties of many types of nerve cells in the visual system had been carefully documented, the goal did not seem much closer. No percept had been convincingly explained in these terms, a wealth of visual perceptual phenomena remained mysterious, and how the orderly structure of the visual system is related to function remained unclear. When that much time goes by in science without much success, it usually means that the way people are thinking about a problem is on the wrong track.

I was pretty sure that, given my age, the raised eyebrows of my colleagues, and my lack of any serious credentials in vision science, it would be an uphill fight to support a lab focused explicitly on perception. (Up to that point, I had been using money from grants for the conventional work in the lab to support our forays into perception.) I also sensed that I was starting to be seen as something of an oddball, and I was less often being invited to speak at high-profile meetings or asked to lecture at other institutions. On the other hand, work on perception doesn't cost much to do, and I knew that if I didn't take the plunge at that point, there would be no second chance. And so I plunged.
