
A further twist in trying to make this argument compelling was to create scenes designed to appear as they would if seen under differently "colored" lights (Figure 9.9). Lotto thought correctly that this further trick would generate even more dramatic effects on color appearances. As indicated by the keys, the "blue" tiles on the top of the cube in Figure 9.9A are physically identical to the "yellow" tiles on the cube in Figure 9.9B (both sets of tiles appear gray when the varied spectral information in the surrounding scene is masked). Therefore, spectrally identical patches are being made to look either blue or yellow. Because blue and yellow are color opposites, the gray patches are being made to appear about as far apart in color perception as possible. This is a powerful demonstration of empirically determined color contrast. Conversely, the "red" tiles on the cube in Figure 9.9A appear similar to the "red" tiles on the cube in Figure 9.9B, even though the spectra coming from them are very different, as indicated in the keys. This latter comparison is a powerful demonstration of color constancy. Our explanation was that the pertinent empirical information had been built into the visual system circuitry as a consequence of species and individual experience in the world, and that the resulting contrast and constancy effects were behaviorally useful responses to the relevant stimuli.

The empirical framework we had cooked up also had the potential to explain another longstanding puzzle in color vision: why mixing different amounts of short-, medium-, and long-wavelength light at a given level of intensity generates most but not all color perceptions. Because such mixing can, in principle, produce any distribution of power across the light spectrum, it should produce any color perception. But seeing colors such as browns, navies, and olives requires adjusting the context as well. These color perceptions occur only when the intensity of the light coming from the relevant region is low compared to the light coming from the other surfaces in a scene. For example, brown is the color seen when the luminance of a surface that otherwise looks orange is reduced compared to the luminance of the surfaces around it. The inability to produce brown, navy, and olive color perceptions by light mixing without changing the overall pattern of light intensities in a scene explains why the spectrum coming from the tile on the top of the cube in Figure 9.8 looks brown, whereas the same light spectrum coming from the face in shadow looks orange.

Taken together, all this evidence seemed to indicate convincingly that the colors we see are not the result of the spectra in retinal images per se, but result from linking retinal spectra to real-world objects and conditions discovered empirically through the consequences of behavior. Regarding color contrast and constancy effects as "illusions" was not only wrong, but also missed the biological significance of these phenomena and the explanatory power of understanding color vision in empirical terms.

Figure 9.9 The effects on color perception when the same or physically different targets are presented in scenes consistent with illumination by differently "colored" light sources (the quotes are a reminder that because colors are a result of visual processing, neither light nor the tiles can properly be called colored). Except for the targets, all the spectral information has been made consistent with "yellowish" illumination in the scene on the left, and with "bluish" illumination in the scene on the right. See text for further explanation. (After Purves and Lotto, 2003)

[Figure 9.9 panels (not reproduced): Contrast and Constancy comparisons under the "Yellow" and "Blue" illumination conditions, with spectral keys for the targets.]

Although Lotto and I were sold on the idea, the response from the vision scientists who we thought might be converted was a collective yawn, or worse. Anonymous reviewers invariably gave the papers we wrote a hard time, granting agencies were unenthused, and even local colleagues showed only polite curiosity about what we were doing. This generally negative reaction came in two flavors. Visual physiologists were unimpressed because we said nothing about how any of this could be related to the properties of visual neurons and implied that what they were doing was off the mark. Indeed, if visual circuitry had been selected empirically because it linked retinal images to behaviors that worked in response to a world that was directly unknowable by means of light, it seemed doubtful that the logical framework that people had long relied on would ever explain vision.

An example of the opposition was the reaction to a talk I gave on this work on lightness/brightness and color to a small group in southern California—the Helmholtz Club—in 2001. Francis Crick, who had turned to neuroscience in the late 1970s after revolutionizing the understanding of genetic inheritance, had started the club after becoming especially interested in vision as a means of understanding consciousness. Although he was not a physiologist, Crick was certainly a consummate reductionist. When I argued to the 20 or so club members in attendance that vision would need to be understood in this way, Crick became apoplectic, exclaiming heatedly that he wanted to know about the nuts and bolts of visual mechanics, not some vague theory based on the inverse problem and the role of experience. My protest that there was a lot of evidence supporting the interpretation based on what we actually see—and that the discovery of genes and ultimately DNA had depended on vague ideas about inheritance in the late nineteenth century—didn't change his summary dismissal of the whole idea.

The second sort of opposition came from psychologists who were equally unenthusiastic, but for different reasons. Psychologists and psychophysicists had worked on brightness and color for decades, advancing a variety of theories, often mathematically based, that purported to explain at least some of these phenomena. Many of the critiques we suffered pointed out that we were not sufficiently familiar with this abundant literature and that we had failed to rebut (or sometimes even mention) these other explanations. Although this complaint was partly true, the main objection seemed to be that in coming at these issues from a different tradition, we lacked the necessary credentials to intrude in this arena and failed to appreciate its history, complexities, and conventional wisdom. The most this group seemed willing to grant was that we (meaning Lotto) had made some very pretty pictures.

Whatever the reasons for antipathy, it was apparent that explaining perceptual phenomena empirically without data about accumulated human experience that could be used to predict specific perceptions would not carry the day. But it wasn't clear how to get or analyze such information. Moreover, in going this route we would have to confront the way the brain organizes the lessons of experience in the perceptual space of a visual quality, meaning the way values of lightness, brightness, color, or some other perceptual quality are subjectively related. Having already irritated the physiologists and psychologists, a further effort in this more abstract direction—which could be considered toying with the structure of "mind"—would invite the philosophers to pile on as well.

10. The organization of perceptual qualities

Despite the subjective nature of perception, there are several ways to assess how the brain organizes a given quality. In general, understanding the organization of a perceptual category depends on determining how the quality behaves when a relevant stimulus parameter is systematically varied. For example, perceptual qualities such as lightness, brightness, or color can be evaluated in terms of the least amount of light energy that can be seen as the wavelength of a stimulus is varied, as the background intensity is altered, as a pattern is changed, or as some other aspect of the stimulus is permuted. Another approach is to determine the smallest difference between stimuli that can just be perceived (a "just noticeable difference"), for example between two stimuli that differ slightly in luminance. Yet another way of determining the organization of perception is simply to measure the time it takes people to react to a systematically varied stimulus. Assessing the organization of perceptual qualities in vision or some other sensory modality in these ways has established many "psychophysical functions."

These efforts to make the organization of what we perceive scientifically meaningful date from about 1860, when German physicist and philosopher Gustav Fechner decided to pursue the connection between what he referred to as the "physical and psychological worlds" (thus the modern term "psychophysics"). In vision, the goal of psychophysics is to understand how the brain organizes the perception of qualities such as lightness, brightness, color, form, and motion. For lack of a better phrase, this organization is often referred to as the perceptual space pertinent to the quality in question. The significance of such studies for us was clear enough: if an empirical strategy had indeed evolved to contend with the unknowability of the world through the senses, we would have to rationalize, in empirical terms, the configuration of the perceptual spaces indicated by the psychophysical functions that had been painstakingly determined over the years.

Although Lotto and I recognized the need to move beyond the ad hoc explanations of the particular perceptual phenomena discussed in Chapters 8 and 9, we found plenty of reasons to hesitate in pursuing this goal. Dozens of attempts to describe the organization of the perceptual space of lightness, brightness, or color had already been made, without much success but with plenty of debate. Part of the difficulty in reaching a consensus about the organization of any perceptual space is the complexity of the problem. As the diagram in Figure 10.1 implies, although it was convenient to separate the discussion of color and lightness/brightness in the preceding chapters, the lightness/brightness values we see are intertwined with color perceptions, and vice versa. The spectral distribution of light affects the perceptions elicited by luminance, and luminance affects the colors elicited by light spectra. The entanglement of these and other visual qualities had been amply documented in the classic colorimetry experiments described later in the chapter, and it presented a major obstacle to rationalizing the perceptual space in Figure 10.1 in empirical or any other terms. As a result, many vision scientists took a dim view of plumbing these perceptual details as a useful way forward.

Nonetheless, it seemed worth a try. We began by looking more specifically at the organization of the full range of lightness/brightness perceptions in the absence of any influence from hue or color saturation (the central axis of perceptual space shown in Figure 10.1). Many studies of lightness and brightness had shown that the organization of these perceptual qualities in relation to the physical intensity of light (luminance) is far more complex than the vertical straight line in the diagram implies, as should already be apparent from the anomalies described in Chapters 8 and 9. The most notable student of this issue was Harvard psychophysicist Stanley Stevens, who worked on these topics from about 1950 to 1975. Stevens wondered how a simple white light stimulus made progressively more intense over a range of physical values was related to the lightness/brightness values that subjects reported. To quantify a person's sense of lightness or brightness, he asked subjects to rate the intensity they experienced in response to a given stimulus on an imaginary scale on which 0 represented the least intense perception elicited by the stimuli in the test series and 100 represented the most intense (Figure 10.2). Stevens and, subsequently, other investigators using different methods found that when the test stimuli were sources of light (such as a light bulb or the equivalent), doubling the luminance of the stimulus did not simply double the perceived brightness; the change in brightness was greater than expected at lower values of stimulus intensity, but less than expected at higher values of the test light (see the red curve in Figure 10.2). However, Stevens obtained a different result when the same sort of test was carried out using a series of papers that ranged from light to dark instead of an adjustable light source. In this case, the reported perceptions of lightness (remember that lightness is the term used to describe the appearance of surfaces, and brightness the appearance of light sources) varied more or less linearly with the measured luminance values generated by the reflective properties of the papers (see the green line in Figure 10.2). If vision operates according to an empirical strategy, we should be able to explain these different psychophysical functions (often called "scaling functions") for light sources and surfaces.
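The shapes of these scaling functions are conventionally summarized by Stevens's power law, in which the reported sensation grows as a power of the luminance (S = kL^b). The sketch below is only an illustration of that relationship, not Stevens's procedure: the 0-100 normalization and the sample luminances are assumptions, and the exponents (about 0.5 for light sources, about 1 for reflective surfaces) are those given in Figure 10.2.

```python
# Illustrative sketch of Stevens-type scaling functions (see Figure 10.2).
# Assumptions: ratings normalized to a 0-100 scale over the tested range;
# exponents of ~0.5 (light sources) and ~1 (reflective surfaces).
import numpy as np

def rating(luminance, exponent, max_luminance=100.0):
    """Map luminance to a 0-100 subjective rating via a power function."""
    return 100.0 * (luminance / max_luminance) ** exponent

luminances = np.array([10.0, 20.0, 40.0, 80.0])
for exponent, label in [(0.5, "source (brightness)"), (1.0, "surface (lightness)")]:
    print(label, np.round(rating(luminances, exponent), 1))

# With an exponent of 0.5, doubling the luminance multiplies the rating by
# 2**0.5 (about 1.4) rather than 2, which gives the compressive shape of the
# red curve; an exponent of 1 reproduces the linear green line.
```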

Figure 10.1 Diagram of the human perceptual space for lightness/brightness and color. The vertical axis in the upper panel indicates that any particular level of light intensity evokes a particular sense of lightness or brightness; movements around the perimeter of the relevant plane correspond to changes in hue (changes in the apparent contribution of red, green, blue, or yellow to perception), and movements along the radial axis correspond to changes in saturation (changes in the approximation of the color to the perception of a color-neutral gray). The lower panel is looking down on one of the planes in the central region of this space. The four primary color categories (red, green, blue, and yellow) are each characterized by a unique hue (indicated by dots) that has no apparent admixture of the other three categories of color perception (a color experience that cannot be seen or imagined as a mixture of other colors). (After Lotto and Purves, 2003)


Lotto and I, together with Shuro Nundy, a graduate student who eventually wrote his thesis on these issues, started thinking about how these observations could be explained as more general manifestations of the empirical arguments that we had advanced to explain particular effects such as Mach bands and the Cornsweet edge (see Chapter 8). Because we had very little idea how to do this, our attempts were bumbling and seemed—no doubt correctly—naive to the psychophysicists in this field. Nevertheless, we were reasonably sure that we could explain Stevens's observations in terms of human experience linking luminance values with their underlying sources, discovered through the consequences of behavior. Because the intensities of light sources are generally greater than the intensities of reflected light (surfaces typically absorb some of the light that falls on them), the discovery of these facts through behavioral interactions would have shaped the organization of the visual circuitry underlying the perceptual spaces of lightness and brightness differently. The incorporation of this empirical information in visual system circuitry would explain the difference in the psychophysical functions for sources and surfaces illustrated in Figure 10.2.

Figure 10.2 A summary of the psychophysical (or scaling) functions observed in studies carried out by Stevens and others in which test targets are presented as either light sources or surfaces over a range of intensities (luminance values). When the target stimuli are sources of light (the sort of stimulus that elicits a sense of brightness), the subjective rankings that subjects report tend to track along the red curve with an exponent (b) of about 0.5. However, if the stimuli are a series of surfaces (such as pieces of paper) that reflect different amounts of light, then the subjective rankings of lightness track closer to the green line with an exponent that approaches 1. (After Nundy and Purves, 2002. Copyright 2002 National Academy of Sciences, U.S.A.)

Although this framework seemed simplistic, it provided a way of exploring the relative lightness and brightness values people would expect to see if the organization of perceptual space for these qualities is indeed determined empirically. The initial efforts to test this interpretation seemed promising. Shuro presented subjects with a series of test patches ranging from black to white on a computer screen set against a featureless gray background, and compared the responses in this circumstance to those elicited by the same patches presented in a scene as objects lying on an illuminated tabletop. The observers' responses shifted from nonlinear in response to patches presented on the featureless background (consistent with the patches being light sources) to more nearly linear when the information in the tabletop scene implied that the patches were surfaces reflecting light from an illuminant.

These and other observations Shuro made supported the conclusion that the psychophysical functions for lightness and brightness that Stevens and others had established (see Figure 10.2) could be explained in the same empirical terms that we had used to rationalize the phenomena discussed in Chapter 8. The argument in this case was that the same amount of light coming from a test patch would, through behavioral interactions, have turned out to be sometimes a source of light and sometimes light reflected by a surface. For simple physical reasons (the absorption of some light by reflecting surfaces), light sources typically return more light to the eye than is reflected from surfaces in the rest of any scene. The consequences of these facts for successful behavior would have shaped the perceptual spaces of lightness and brightness differently, leading to the psychophysical functions illustrated in Figure 10.2.

Another perplexing set of psychophysical observations that needed to be explained came from classical colorimetry experiments. Colorimetry involves careful measurements of the way the physical characteristics of spectral stimuli are related to the colors that observers see, carried out in the simplest possible testing conditions (Figure 10.3). Because color perception includes the subsidiary qualities of hue, saturation, and color brightness (see Figure 10.1), colorimetry testing shows how these qualities interact. For example, colorimetry indicates how changes in hue affect perceived brightness and how changes in brightness affect perceived hue. Such studies can be thought of as a more complete way of examining the organization of color space diagrammed in Figure 10.1. As indicated in Figure 10.3, the stimuli in such experiments are typically generated by three independent light sources producing long, middle, and short wavelengths projected onto half of a frosted-glass screen set in a black matte surround. A test light, usually having a single wavelength, is projected onto the other half of the diffuser. The subject is then asked to adjust the three light sources until the color of the two halves of the disk appears the same. Alternatively, the experimenter might gradually vary the wavelength or intensity of the test stimulus and ask subjects to report when they first notice a difference between the appearances of the two halves (a test of "just noticeable differences").
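The reason three adjustable primaries suffice to match an arbitrary test light is that a match only requires the mixture and the test to produce the same responses in the three human cone types. The sketch below illustrates that logic under stated assumptions: the Gaussian "cone sensitivities" and the chosen primary and test wavelengths are hypothetical stand-ins, not measured human values, and real matching experiments measure behavior rather than cone signals.

```python
# Sketch of the logic behind a colorimetric match: the three primaries match the
# test light when they produce the same excitation in each of three cone types.
# The Gaussian sensitivity curves and the chosen wavelengths are hypothetical.
import numpy as np

wavelengths = np.arange(400, 701)  # nm

def sensitivity(center, width=40.0):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

cones = np.stack([sensitivity(440), sensitivity(540), sensitivity(570)])  # "S", "M", "L"

def cone_excitation(spectrum):
    return cones @ spectrum  # three numbers summarize any spectral stimulus

# Monochromatic test light at 500 nm; adjustable primaries at 460, 530, and 650 nm.
test = (wavelengths == 500).astype(float)
primaries = np.stack([(wavelengths == w).astype(float) for w in (460, 530, 650)])

# Solve for the primary intensities that reproduce the test light's cone excitations.
A = cone_excitation(primaries.T)  # 3x3 matrix: cone type x primary
weights = np.linalg.solve(A, cone_excitation(test))
print(np.round(weights, 3))
# A negative weight means that primary would have to be added to the test side of
# the screen, a real feature of color-matching experiments. The match is between
# physically different spectra that produce identical cone responses.
```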

Figure 10.3 Colorimetry testing. See text for explanation. (After Purves and Lotto, 2003)


The psychophysical functions determined by colorimetry come from either color matching tests or color discrimination tests. Color matching studies showed that saturation varies in a particular way as a function of luminance (the so-called Hunt effect); that hue varies as a function of stimulus changes that affect saturation (the Abney effect); that hue varies as a function of luminance (the Bezold-Brücke effect); and that brightness varies as a function of stimulus changes that affect both hue and saturation (the Helmholtz-Kohlrausch effect). In color discrimination tests, the ability to distinguish "just noticeable" differences in hue, saturation, or color brightness also varies in a complex way as a function of the wavelength of the test stimulus.

Explaining these interdependent functions is fiendishly complicated. The few brave souls who tried—psychophysicists such as Leo Hurvich, Dorothea Jameson, Walter Stiles, and Gunter Wyszecki—had developed models based on well-documented knowledge of how the three human cone types respond to light of different wavelengths, coupled with assumptions about the neuronal interactions that do (or might) occur early in the visual pathway. Although such models were extraordinarily clever, they were ad hoc explanations and accounted for one or another colorimetric phenomenon without providing any biological rationale. The supposition was that these perceptual phenomena were simply incidental consequences of light absorption by the different receptor types and subsequent visual processing.

The complexity of these colorimetric phenomena (and the resulting literature) presented a daunting challenge to anyone trying to understand color vision. Reluctant as we were to take up this arcane subject, if we wanted to show that an empirical interpretation of color space had merit, we would eventually need to contend with these colorimetric observations. The framework for this departure was the same one we had used to explain the lightness/brightness functions that Stevens and others had determined (see Figure 10.2). Even under the simple conditions of colorimetry testing, the perception of hue, saturation, and brightness should be explained by the typical relationships among the physical characteristics of light that humans would have incorporated in visual circuitry through feedback from trial-and-error behavior. As in lightness and brightness scaling, the colorimetry functions that had been described would be signatures of the empirical strategy that evolved to contend with the inverse problem. Therefore, if we knew the approximate nature of human experience with the physical attributes underlying hue, saturation, and color brightness, we should be able to predict these functions and greatly strengthen the empirical case.

Although we could infer the nature of human experience with object surfaces and light sources that would have caused some of the anomalies apparent in color contrast and constancy (see Chapters 8 and 9), or even the lightness and brightness scaling function in Figure 10.2, intuition was useless when it came to colorimetry functions. The only way to determine the experience that we thought might underlie these functions was to examine spectral relationships in a large database of natural scenes. The way forward in this project was the result of much hard work by two postdoctoral fellows who had recently come to the lab from mainland China, Fuhui Long and Zhiyong Yang. Like many other American scientists working during the last decade or two, I owe an enormous debt of gratitude to the Chinese educational system, which was, and is, turning out wonderful students as the quality of education at all levels in the United States continues to decline. The skills in mathematics, computer programming, and image analysis that Long and Yang brought with them were simply not part of the intellectual equipment of most students trained in this country.

Long was a quiet young woman whose English left much to be desired when she started out in 2001. But her retiring demeanor concealed a vivid intelligence and a determination that I had rarely encountered in homegrown students, and she was never shy in the daily arguments that play a big part in any new project. Long had received a Ph.D. in computer science and electrical engineering, and had been a postdoc in the Department of Electronic and Information Engineering at Hong Kong Polytechnic University, where she had worked on image processing and computer vision. As a result, she was facile with a wide range of technical approaches to problems in image analysis, and her purpose in coming to my lab was to gain some knowledge about the biological side of things. After some false starts, we settled on analyzing color images as a means of understanding the human color experience underlying colorimetry functions. Long eventually collected more than a thousand high-quality digital photographs of representative natural scenes for this purpose (Figure 10.4A) and wrote the computer programs needed to extract the physical characteristics associated with hue, saturation, and color brightness at each point in millions of smaller image patches taken from these pictures (Figure 10.4B). Our assumption was that this database would fairly represent the spectral relationships that humans had always experienced, and we hoped this information would enable us to predict the classical colorimetry functions.
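The extraction step can be sketched as follows. This is an illustrative reconstruction rather than Long's actual programs: the patch size is arbitrary, and the standard RGB-to-HSV conversion is used here as a stand-in for the physical characteristics associated with hue, saturation, and color brightness.

```python
# Illustrative reconstruction (not Long's actual code) of gathering hue,
# saturation, and brightness statistics from small patches of scene photographs.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def patch_statistics(image_rgb, patch=16):
    """Yield (hue, saturation, value) arrays for each patch of an RGB image scaled to [0, 1]."""
    hsv = rgb_to_hsv(image_rgb)
    rows, cols = hsv.shape[0] // patch, hsv.shape[1] // patch
    for i in range(rows):
        for j in range(cols):
            block = hsv[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            yield block[..., 0], block[..., 1], block[..., 2]

# Accumulating these values over a large set of scene photographs approximates the
# frequency-of-occurrence distributions used to predict the colorimetry functions.
```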

An example of this approach is explaining the results of color discrimination tests in empirical terms. Psychophysicists Gunter Wyszecki and Walter Stiles generated the function shown in Figure 10.5A in the late 1950s; it indicates human sensitivity to a perceived color change as the wavelength of a stimulus is gradually varied. The resulting function is anything but simple, going up and down repeatedly over the range of wavelengths that humans perceive. Where the function dips, people make finer color discriminations: relatively little change in the wavelength of the test stimulus is needed before the subject reports an apparent difference in color. Where the function peaks, however, a relatively large change in wavelength is needed before observers see a color difference. The reason for these complex variations in sensitivity to wavelength change was anybody's guess.

Figure 10.4 Examples of the scenes used to assess human color experience (A). By analyzing millions of small patches from hundreds of these natural scenes (B), we could approximate the accumulated experience with the spectral variations and relationships that we thought must underlie the psychophysical functions determined in colorimetry testing. (After Long, et al., 2006. Copyright 2006 National Academy of Sciences, U.S.A.)


Figure 10.5 Prediction of a colorimetric function from empirical data. A) Graph showing the amount of change in the wavelength of a stimulus needed to elicit a just noticeable difference in color when tested across the full range of light wavelengths. B) The function predicted by an empirical analysis of the spectral characteristics in a database of natural scenes (see Figure 10.4). The black dots are the predictions from the empirical data, and the red line is a best fit to the dots. (The data in [A] were drawn from Wyszecki and Stiles, 1958; [B] is from Long, et al., 2006, copyright 2006 National Academy of Sciences, U.S.A.)



Predicting this function on empirical grounds depends on the fact that the light experienced at each point in stimuli arising from natural scenes will always have varied in particular ways determined by the qualities of object surfaces, illumination from the sun, and other characteristics of the world. For instance, middle-wavelength light is more likely than short- or long-wavelength light at any point because of the spectrum of sunlight and the typical properties of object surfaces. The wavelengths arising from a scene will also have varied as the illumination changes because of the time of day, the weather, or shadowing from other objects. These and myriad other influences cause the spectral qualities of each point in visual stimuli to have a typical frequency of occurrence (see Figure 10.4). Humans will always have experienced these relationships and, based on this empirical information, will have evolved and refined visual circuitry to promote successful behavior in response to the inevitably uncertain sources of light spectra. If the organization of the perceptual space for color has been determined this way, then the effects of this routinely experienced spectral information should be evident in the ability of subjects to discriminate color differences as the wavelength of a stimulus is varied in colorimetry testing. Figure 10.5 compares this psychophysical function with the function predicted by analyzing the spectral characteristics of millions of patches in hundreds of natural scenes. Although certainly not perfect, the psychophysically determined function in Figure 10.5A is in reasonable agreement with the function predicted by the empirical data in Figure 10.5B. The other colorimetry effects mentioned earlier could also be predicted in this same general way.
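One way to make this logic concrete is a toy calculation, not the analysis Long actually carried out: if perceived color tracks the rank of a spectral value in accumulated experience, then a fixed perceptual step corresponds to a fixed change in rank, so the wavelength change needed for a just noticeable difference should be small where that part of the spectrum has been common and large where it has been rare. The frequency distribution below is invented for illustration.

```python
# Toy illustration of the empirical-ranking logic behind Figure 10.5B (not the
# actual analysis in Long et al., 2006). The "experienced" frequency distribution
# over wavelength is invented; only the inverse relation between frequency and
# discriminability is the point.
import numpy as np

wavelengths = np.linspace(400, 700, 301)
frequency = 1.0 + np.exp(-0.5 * ((wavelengths - 550) / 60.0) ** 2)  # hypothetical experience
density = frequency / np.trapz(frequency, wavelengths)

rank_step = 0.01                         # assumed fixed perceptual step (one percentile)
predicted_jnd = rank_step / density      # common wavelengths lead to fine discrimination
print(np.round(predicted_jnd[::60], 2))  # coarse sample of the predicted curve
```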

Just as important as rationalizing scaling and colorimetry functions was to show how the empirical molding of the perceptual spaces for lightness and brightness could explain the contrast and constancy phenomena considered in Chapters 8 and 9, and Zhiyong Yang focused on this task. Yang had received a doctorate in computer vision from the Chinese Academy of Sciences. Before coming to the lab, he had done postdoctoral work with David Mumford at Brown working on pattern theory and with Richard Zemel at the University of Arizona on probabilistic models of vision. Like Long, his remarkable skills were well suited to extracting empirical information from scenes and figuring out how we could use this information in a more quantitative way to explain otherwise puzzling perceptions. Although Lotto and I had suggested empirical explanations for lightness/brightness contrast stimuli, they had not been taken very seriously, and the perceptual effects generated by many other patterns of light and dark remained to be accounted for. A good example is the stimulus in Figure 8.4 (White's effect), a pattern that produces a perception of the lightness or brightness of targets that is opposite the effect elicited by the standard brightness contrast stimulus (see Figure 8.1). The empirical basis of White's effect, if there was one, was not obvious despite the considerable effort that we had spent trying to come up with an empirical rationale. This failure underscored the implication of the colorimetric functions: For all but a few simple stimuli, intuition about the empirical experience underlying perception quickly runs out of steam. The only way to understand the perceptions generated by most visual stimuli would be to analyze databases that could serve as proxies for the relevant human experience.

To explain lightness/brightness effects in these terms, Yang began to explore a database of natural scenes in black and white (Figure 10.6). As with the explanation of colorimetry functions, we thought that the images produced by such scenes would approximate human experience with the patterns of light intensities we would have depended on to overcome the inevitably uncertain ability of images to indicate their underlying sources. As with Long's color scene database, we assumed that a collection of natural scenes gathered today would fairly represent the relationships between images and natural sources that humans would always have experienced. If the organization of brightness and lightness perceptions in the brain is indeed a result of the success or failure of behavior in response to retinal luminance patterns over the course of evolution and individual experience, then analyzing a database of this sort would be the only way to understand what we actually see in response to the full range of such stimuli.

Determining the frequency of occurrence of different luminance patterns in scenes is not as difficult as it might seem. For example, consider the standard brightness contrast stimulus in the upper panels of Figure 10.7 or the versions of this pattern shown in Chapter 8. The frequency of occurrence of particular values of luminance corresponding to the central patch in either a dark or light surround will have varied greatly depending on the patterns of retinal luminance generated by typical scenes. Nature being what it is, the luminance value of the central patch in a dark surround will, more often than not, have been less than the luminance value of the central patch in a light surround. The reason is that the patch and surround tend to comprise the same material and be in the same illumination, as is apparent in the scene in the middle panel in Figure 10.7 or the several scenes in Figure 10.6. This bias of the luminance of the central patch toward the luminance of the surround is what humans will always have experienced, and the magnitude of the bias indicated by the frequency of occurrence of the luminance relationships in natural scenes will have guided successful behavior. It follows that this information should be instantiated in visual system circuitry during the course of evolution and individual development, and expressed in the perceptual organization of lightness/brightness.

Figure 10.6 Examples of natural scenes from a database of black-and-white images used as a proxy for human experience with the frequency of occurrence of luminance patterns on the retina arising from the world in which we act. (Photos are from the database provided by Van Hateren and Van der Schaaf, 1998.)

This universal experience should generate lightness or brightness perceptions in response to a given target luminance according to the relative rank of that luminance among all the target luminance values experienced in the context in which it occurred (see the lower panels in Figure 10.7). Despite the directly unknowable real-world sources of the luminance values in retinal images, the relative rank of luminance or some other physical characteristic of light resolves the inverse problem to the degree that trial-and-error experience has been incorporated into visual system circuitry. In the case of the standard stimulus in the upper panel, this cumulative human experience would cause the same target luminance value to appear brighter or lighter in a dark surround than in a light surround. Using this general approach, Yang showed that the frequency of occurrence of luminance relationships extracted from natural scenes with templates configured in the form of other stimuli predicted a variety of lightness/brightness effects, including White's effect.
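A minimal sketch of this kind of analysis follows. The patch size, the dark/light criterion for the surround, and the use of random sampling are illustrative assumptions, not the parameters used by Yang; the point is only the two steps of accumulating center-surround luminance statistics from scenes and reading off the percentile rank of a target.

```python
# Sketch of the template-sampling approach (assumed parameters, not those of
# Yang and Purves, 2004): accumulate the luminance of scene points together with
# the mean luminance of their surrounds, then predict the apparent lightness or
# brightness of a target as its percentile rank among centers seen in similar surrounds.
import numpy as np

def sample_center_surround(images, n_samples=100_000, half=8, seed=None):
    """Collect (center luminance, mean surround luminance) pairs from grayscale images in [0, 1]."""
    rng = np.random.default_rng(seed)
    centers, surrounds = [], []
    for _ in range(n_samples):
        img = images[rng.integers(len(images))]
        r = rng.integers(half, img.shape[0] - half)
        c = rng.integers(half, img.shape[1] - half)
        patch = img[r - half:r + half + 1, c - half:c + half + 1]
        centers.append(img[r, c])
        surrounds.append((patch.sum() - img[r, c]) / (patch.size - 1))
    return np.array(centers), np.array(surrounds)

def predicted_lightness(target, centers, surrounds, dark_surround):
    """Percentile rank of the target luminance among centers experienced in dark or light surrounds."""
    cutoff = np.median(surrounds)
    mask = surrounds < cutoff if dark_surround else surrounds >= cutoff
    return 100.0 * np.mean(centers[mask] <= target)

# With a database of grayscale scene images loaded as 2-D arrays scaled to [0, 1]
# (for example, the van Hateren set), the same target luminance ranks higher among
# experiences of dark surrounds than of light ones, i.e., the standard contrast effect:
# centers, surrounds = sample_center_surround(images)
# print(predicted_lightness(0.4, centers, surrounds, dark_surround=True))
# print(predicted_lightness(0.4, centers, surrounds, dark_surround=False))
```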

Figure 10.7 Predicting the apparent lightness/brightness of targets in different contexts based on human experience with the retinal luminance patterns generated by natural scenes. The middle panel shows how the stimulus configurations in the upper panels can be used as templates to repeatedly sample scenes such as those in Figure 10.6. In this way, the frequency of occurrence of luminance values for the target (T) in different contexts—such as a light surround versus a dark surround—can be determined. The graphs in the lower panels show that the different perceptions of lightness or brightness elicited by the identical luminance values of the targets in the upper panels (the standard brightness contrast effect) are predicted by the different frequencies of occurrence of the same target luminance value in the accumulated experience of humans with dark and light surrounds (expressed as percentiles). See text for further explanation. (After Yang and Purves, 2004. Copyright 2004 National Academy of Sciences, U.S.A.)


A difficulty many people have with this approach is understanding how an analysis of scenes can serve as a proxy for accumulated human experience. It is relatively easy to grasp the idea that feedback from the success or failure of behavior drives the evolution and development of visual circuitry, and that the connectivity of this circuitry ultimately shapes the organization of perceptual space to reflect the evidence accumulated by trial and error. But where in the analyses of scenes is a representation of the interactions with the world that underlie this scheme? Although not immediately obvious, interactions with the world are represented in the frequency of occurrence of retinal image features generated by the scenes in the database. For example, applying the template in Figure 10.7 millions of times to hundreds of scenes shows how often we would have experienced a particular pattern of light and dark in nature. To be successful, the behavior executed in response to the pattern on the retina would need to have tracked these frequencies of occurrence, which can therefore stand in for the lessons that would have been learned by actually operating in the world.

A related point worth emphasizing is that determining the perceptual space of lightness, brightness, or color in this way maintains the relative similarities and differences that objects in the world actually possess. To be biologically useful—for behavior to be successful—surfaces or light sources that are physically the same must look the same, and surfaces or light sources that are different must look different. The construction of visual circuitry according to the frequency of occurrence of light patterns inevitably shapes perception to accomplish this. To appreciate this point, consider the way observers perceive the geometrical attributes of objects, the topic of the next chapter. In relating perceived geometry to physical space, the need to generate perceptions that accord with the physical arrangement of objects is obvious. If geometrical relationships were not preserved, perceptions could not generate motor behaviors that actually worked. To be useful, the perceptions of lightness/brightness elicited by luminance values and the colors elicited by spectral distributions must also be ordered in perceptual space according to the physical similarities and differences among objects and conditions in the world, however discrepant perceptions may be when compared to physical measurements. This pervasive parallelism of perception and physical reality based on operational success in a world that we can't know directly makes it difficult to appreciate that what we see is not the world as it is, and that all perceptions are equally illusory. Indeed, our visual system does this job so well that most people are convinced that what we see is "reality."

Finally, despite the successful prediction of a number of specific perceptions and psychophysical functions pertinent to lightness, brightness, and color in wholly empirical terms, extending this approach to predict more complicated perceptions of these qualities will not be easy. As already noted, the perceptual qualities of lightness/brightness, hue, and saturation are entangled: The visual circuitry that generates lightness/brightness is affected by the circuitry that generates hue, the circuitry that generates hue is affected by the circuitry that generates saturation, and so on. This entanglement occurs because interactions among the various sensory circuits, whether in vision, within some other modality, or among modalities, generate behavior that has a better chance of success. A simple example is the interaction between what we see and what we hear: When we hear a sound, we tend to look in that direction, with improved behavior as a result. It is possible to partially disentangle some of these interactions, as we did by using databases that focused on only one type of information (such as black-and-white scenes or scenes that include color spectra). But the interplay among sensory qualities within a modality (such as how hue affects brightness), among sensory modalities (such as how what we hear influences what we see), and even among sensory and nonsensory functions (such as how what we think, feel, or remember influences what we see and hear) means that the definition of the perceptual space of any quality can be complete only when all the brain systems that influence that space are taken into account. The highly interactive organization of brains is enormously useful in generating behavior that has the best chance of success. Nonetheless, the biologically necessary entanglements among perceptual spaces will make understanding more complex perceptions in empirical terms increasingly difficult.
