$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

H is a hypothesis to be tested, E is the evidence pertinent to its validity, and P is the probability. The first term on the right side of Bayes's equation, P(H), is called the prior probability distribution, or simply "the prior," and is a statistical measure of confidence in the hypothesis in the absence of any present evidence about its truth or falsity. With respect to vision, the prior describes the probabilities of the different physical states of the world that might have produced a retinal image. The needed priors are therefore the frequencies of occurrence in the world of surface reflectance values, illuminants, distances, object sizes, and so on. The second term, P(E|H), is called the likelihood function: if hypothesis H were true, this term indicates the probability that the evidence E would have been available to support it. Given a particular state of the physical world (a particular combination of illumination, surface properties, object sizes, etc.), the likelihood function describes the probability of that state having generated the retinal projection in question. The product of the prior and the likelihood function, divided by the normalization constant P(E), gives the posterior probability distribution, P(H|E). The posterior distribution defines the probability of hypothesis H being true, given the evidence E, and therefore indicates the relative probability of a given retinal image having been generated by some set of possible physical realities. The most likely physical reality in the posterior probability distribution is taken to be what an observer sees.
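As a concrete illustration of how these terms combine, the toy calculation below assigns invented priors and likelihoods to three hypothetical physical states that might have produced the same retinal luminance. The hypothesis labels and all numerical values are made up for illustration; they are not drawn from any real database of natural scenes.

```python
# A minimal numeric sketch of the Bayesian scheme described above.
# All hypotheses and probabilities here are toy values, not measurements.

# P(H): prior probability of each physical state of the world
priors = {
    "dim light on white surface":    0.30,
    "bright light on gray surface":  0.50,
    "bright light on white surface": 0.20,
}

# P(E|H): probability that each state would have produced the
# observed retinal image E
likelihoods = {
    "dim light on white surface":    0.60,
    "bright light on gray surface":  0.70,
    "bright light on white surface": 0.10,
}

# P(E): the normalization constant, summed over all hypotheses
evidence = sum(priors[h] * likelihoods[h] for h in priors)

# P(H|E): posterior probability of each state, given the image
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

# On the standard account, the maximum of the posterior distribution
# is taken to be what the observer sees
percept = max(posteriors, key=posteriors.get)
```

Because the posterior is the prior-weighted likelihood renormalized by P(E), the posteriors sum to one, and the "percept" is simply the hypothesis with the largest posterior value.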

Despite the difficulty of these more abstract ideas, Bayes's theorem correctly (and usefully) spells out the logical relationship among the factors that underlie any empirical approach to vision. Nonetheless, the way it has been used in vision science has tended to obscure rather than clarify the way brains seem to work. For most people using Bayesian decision theory, the visual system is taken to compute statistical inferences about possible physical states of the world based on prior knowledge acquired through experience with image features. The evidence in the preceding chapters, however, argues that the accumulated knowledge stored in brain circuitry is based on feedback from behavioral success, not image features. As a result, what we see is not the state of the world that most likely produced the image. On the contrary, the qualities we see define a subjective universe that comprises the various perceptual spaces described in preceding chapters. These perceptions successfully guide behavior not because they represent likely states of the world but because of the trial-and-error manner in which the relevant perceptual spaces and the underlying neural circuitry have been generated.

Although the human brain is enormously complex, the evidence based on what we see and why implies that the brain and the rest of our nervous systems are doing just one basic thing: linking sensory information (perceptions or the unconscious equivalent) to successful behavior by means of synaptic connectivity that has been entirely determined by trial and error. The reason for this blunt assertion is that the inverse problem in vision and other sensory modalities doesn't allow much choice.

The biological mechanisms that create neural circuitry on this basis are straightforward, at least in principle. Whenever the neural connections inherited by an individual link perception and action a little more successfully, the slightly greater reproductive fitness of the individual tends to increase the prevalence of that brain circuitry in the population. Over eons, the neuronal associations underlying successful behavior in response to stimuli will therefore increase and unsuccessful associations will decrease, leading to the brain circuitry humans and other animals have today. All that is needed to implement this strategy is plenty of time and a way of providing feedback about the relative success of behavior. The history of life on Earth has provided ample time (approximately 3.5 billion years and counting), and natural selection is a wonderfully powerful mechanism for assigning credit to behavior. Indeed, our perceptions and actions have become so effective in dealing with the world that it is difficult to imagine that what we see, hear, or otherwise perceive is only an operationally useful surrogate for the world, not physical reality.

The critical importance of this inherited neural infrastructure for survival is self-evident. Nonetheless, the neural connectivity we are born with is continually refined by individual experience (see Chapters 2–7). The reasons for this are also clear. From a developmental perspective, neural circuits must adjust to the changing body, and they reap the obvious benefits that come from encoding additional information about the specific objects, conditions, and contingencies encountered in any particular life. Although the mechanisms of synaptic plasticity that enable lifetime learning are quite different from the mechanisms of inheritance, both serve to encode experience in neural circuitry. The common denominator is the association of input from the external and internal environment with empirically successful behavioral output. From this perspective, the complexity of brain function and structure boils down to using, making, and modifying neuronal connections. The best evidence so far that brains really do work in this wholly empirical way is that so many peculiarities of subjective experience can be explained on this basis. For each of the basic visual qualities—lightness, brightness, color, geometrical form, and motion—a variety of otherwise puzzling phenomena can be predicted from the frequency of occurrence of stimulus characteristics derived from databases that serve as proxies for human experience. The universal discrepancies between perception and physical measurement (the more flagrant examples of which are wrongly categorized as "illusions") are neither anomalies nor the result of limitations in neural processing circuitry. They are the signatures of this strategy of brain operation.
Whereas the end result gives us a compelling sense that our brains must be detecting features by analyzing stimuli and producing perceptions that represent the world, terms such as feature detection, feature analysis, and feature representation are not appropriate descriptors of what the machinery of the brain is doing, the problem this mode of operation is solving, or what we actually perceive.

Accounting in these terms for everything we see or otherwise perceive, however, is going to be much harder than suggested by the ability to predict the relatively simple visual responses used so far to support the empirical case. Even in the most basic circumstances, what we see with respect to any particular visual quality is, for good reason, affected by the sensory information pertinent to other qualities (see Chapter 9). To complicate matters further, such interactions extend across different sensory modalities for the same reasons. One of many examples of how what we hear affects what we see is illustrated in Figure 13.3. This further evidence implies that percepts and behaviors ultimately depend on neural processing that is going on at the same time in many brain systems, taking into account the influence of all the neural activity elicited by the stimuli acting on us at any given time. The advantages accruing from such interactions help explain why the connectivity of the brain is so complex. Any brain that did not take into account the full range of information available to it to determine behavior would not be doing its job. If one adds the influences of memory, motivation, and emotional state to sensory input, what we perceive and do at any given moment is determined by the processing going on in much, if not all, of the brain.

This understanding of perception and sensory systems generally implies yet another difficult conclusion about how brains work: perceptions, and the behaviors they lead to, are entirely reflexive. Although the concept of a reflex is not precisely defined in neuroscience, the term is typically used to refer to involuntary sensory-motor behaviors that are assumed to occur with a minimum of the higher-order cortical processing thought to underlie voluntary actions. The usual example presented in textbooks is one of the spinal reflexes that Sherrington studied in the early twentieth century (Figure 13.4), or an autonomic reflex such as the salivation in response to food that Ivan Pavlov used at about the same time to study conditioned learning. For Sherrington and Pavlov, a reflex meant an automatic response that depends on a relatively simple neural pathway with little or no cognitive input. However, Sherrington was well aware that the idea of a simple reflex isolated from the rest of the nervous system is, in his words, a "convenient...fiction." Sherrington went on to emphasize that "all parts of the nervous system are connected together and no part of it is ever capable of reaction without affecting and being affected by other parts." Clinicians know very well that spinal reflexes depend on cortical processing and that they no longer operate normally when the descending cortical input to spinal neurons is compromised by disease or injury.

Figure 13.3 Example of how what we hear influences what we see. The perceived trajectories (colored arrows) of two objects (1, 2) can be altered by the sound of a collision coincident with the moment the objects would have bumped into each other. (After Purves, Brannon, et al., 2008)

Despite Sherrington's caveat, reflexes are usually regarded as behaviors that are immune from the influence of cortical processing, particularly from "top-down" cognitive influences. But the reality is that brain systems and circuits are invariably interconnected to take full advantage of the range of information the nervous system provides at any given moment. From this vantage, the conventional distinction between involuntary (reflexive) and voluntary neural processing makes little sense. Any perception or behavior is equally a product of all the neural processing occurring in the nervous system at that time. Depending on the circumstances, some parts of the nervous system will be more—or less—active than others. But it would be a mistake to think that processing activity in any part of the brain is irrelevant to processing in another region, or to perceptual and behavioral outcome. A subjective sense of what our brains are doing has given pride of place to what we assume to be "voluntary" behaviors or thought processes, but no neurobiological evidence supports this bias.

Figure 13.4 Diagram of a "simple" spinal reflex. The pathways that carry information from the spinal cord to the brainstem and cerebrum, and the descending pathways from the brain that modulate the execution of this or any other behavior, are not shown in this diagram of the knee-jerk reflex. (After Purves, Brannon et al., 2008)