Overview: Priors in visual perception

Vision is a fabrication of our minds. Sensory information from our eyes is often ambiguous or limited, yet vision is remarkably robust and surprisingly able to correctly interpret impoverished sensory signals. What cortical computations make this possible? In the framework of Bayesian statistical decision theory, how does the cortex combine sensory evidence from the eyes with priors, or expectations, to form percepts? Priors may be short-term and signaled by the task at hand - a particular spatial location may be more likely to contain needed information. Or they may be long-term, developed over extended exposure to the natural statistics of the visual world - objects tend to move slowly rather than quickly. While much is known about the encoding of sensory evidence, comparatively little is known about priors. Where do priors interact with sensory signals, and how do they modify and augment perception? We use psychophysics to make precise behavioral measurements of how priors bias sensory decisions while concurrently measuring cortical activity with functional magnetic resonance imaging (fMRI). Using knowledge of the visual system and decision-theoretic models that link behavior to cortical activity, we seek to understand the cortical computations that construct human vision.
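For Gaussian priors and likelihoods, this combination has a simple closed form: the posterior mean is a reliability- (inverse-variance-) weighted average of the prior mean and the sensory measurement. Below is a minimal sketch in Python; the slow-speed prior and the specific variances are illustrative assumptions, not measured values.

```python
import numpy as np

def posterior_gaussian(prior_mean, prior_sd, meas, meas_sd):
    """Combine a Gaussian prior with a Gaussian likelihood.

    The posterior is Gaussian, with a mean that is the
    precision-weighted average of prior mean and measurement.
    """
    w_prior = 1 / prior_sd**2        # precision of the prior
    w_meas = 1 / meas_sd**2          # precision of the sensory evidence
    post_mean = (w_prior * prior_mean + w_meas * meas) / (w_prior + w_meas)
    post_sd = np.sqrt(1 / (w_prior + w_meas))
    return post_mean, post_sd

# A slow-speed prior (mean 0 deg/s) pulls the percept of a 10 deg/s
# stimulus toward slower speeds, and more so when the measurement is
# noisy (e.g., at low contrast).
print(posterior_gaussian(0, 4, 10, 2))   # reliable evidence: small bias
print(posterior_gaussian(0, 4, 10, 8))   # noisy evidence: strong bias
```

Note that the bias toward the prior grows as the sensory evidence becomes less reliable, which is the signature behavior this framework predicts.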

Selected publications

Below is a selected list of our key publications, with illustrations, demos, and a brief introduction to each finding. For a full list of our research publications, see here.

Attentional enhancement via selection and pooling of early sensory responses in human visual cortex

Pestilli, F., Carrasco, M., Heeger, D. J. and Gardner, J. L. (2011) Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. Neuron 72:832-46 Link SI <Preview by John T. Serences> Abstract pdf

Our world is filled with distractions - flashing images on a television screen, blinking lights, blaring horns. How is the brain able to focus attention only on relevant stimuli? The brain might turn up the gain of sensory responses (B above) or turn down noise in the sensory cortical circuits responding to the relevant stimulus (C above), thus enhancing sensitivity. Alternatively (or in addition), the brain might efficiently select just the most relevant sensory responses for routing to higher perceptual and action-related areas (D above), thus improving behavioral sensitivity by blocking out irrelevant signals. We studied contrast-discrimination performance when subjects were cued to a single location (focal attention) or to multiple locations (distributed attention), while concurrently measuring cortical responses with fMRI. Using computational models, we found that the improved behavioral performance could be quantitatively accounted for by a model that included efficient selection of sensory signals via a max-pooling rule - which simply selects responses according to their magnitude - but not by models that allowed behavior to improve only through sensitivity enhancement. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.
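The intuition behind selection can be illustrated with a toy simulation (Python; the response, increment, and noise levels are made-up numbers, and this is not the fitted model from the paper): reading out only the cued location shields the decision from noise at irrelevant locations, whereas pooling across all locations lets that noise intrude.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_locs = 100_000, 4
base, delta, noise = 1.0, 0.3, 0.2   # illustrative, not fitted values

def percent_correct(distributed):
    # Two intervals per trial: pedestal alone vs pedestal + increment,
    # each with independent response noise at every location.
    r1 = base + noise * rng.standard_normal((n_trials, n_locs))
    r2 = base + noise * rng.standard_normal((n_trials, n_locs))
    r2[:, 0] += delta                 # contrast increment at the target
    if distributed:
        # Max-pooling over all locations: noise at irrelevant
        # locations can win the max and corrupt the decision.
        d1, d2 = r1.max(axis=1), r2.max(axis=1)
    else:
        # Focal attention: efficient selection reads out only the
        # cued location's response.
        d1, d2 = r1[:, 0], r2[:, 0]
    return (d2 > d1).mean()

print("focal attention:      ", percent_correct(distributed=False))
print("distributed attention:", percent_correct(distributed=True))
```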

Feature-specific attentional priority signals in human cortex

Liu, T., Hospadaruk, L., Zhu, D., and Gardner, J. L. (2011) Feature-specific attentional priority signals in human cortex. Journal of Neuroscience 31:4484-95 DOI Abstract pdf

The priority of visual stimuli has been hypothesized to be represented in spatial maps in cortex. Indeed, responses in many topographically mapped visual and parietal areas show spatially specific increases for stimuli at the focus of attention. But stimuli can be prioritized not only by spatial location but also by features such as color and direction of motion. When these non-spatial features are prioritized, how and where are they encoded? Using classification analyses of human fMRI responses while subjects performed a feature-based attention task with spatially overlapping stimuli, we found that priority for color and motion is represented in frontal (e.g., FEF) and parietal (e.g., IPS1-4) areas commonly associated with spatial priority. This suggests that these areas multiplex the priority of non-spatial features into their spatial representations.
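The logic of such a classification analysis can be sketched as follows (a minimal example on synthetic data using scikit-learn; the voxel count, pattern strength, and classifier choice are illustrative assumptions): if a cross-validated classifier predicts the attended feature from a region's voxel pattern at above-chance accuracy, that region carries feature-specific priority information.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 50

# Synthetic voxel patterns: the same physical stimulus on every trial,
# plus a weak pattern that depends on which feature is attended.
labels = np.repeat([0, 1], n_trials // 2)        # 0 = attend color, 1 = attend motion
signature = 0.5 * rng.standard_normal(n_voxels)  # hypothetical feature-specific pattern
X = rng.standard_normal((n_trials, n_voxels))
X[labels == 1] += signature

# Cross-validated accuracy above chance (0.5) means the region's
# response pattern carries information about the attended feature.
acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```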

Maps of visual space in human occipital cortex are retinotopic, not spatiotopic

Gardner, J. L., Merriam, E. P., Movshon, J. A., and Heeger, D. J. (2008) Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. Journal of Neuroscience 28:3988-3999 DOI Abstract pdf

Every time we move our eyes or head, the image of a stationary visual object shifts to a different location on the retina. Thus, after an eye movement, a completely different set of sensory neurons encodes an object than the ones that encoded it before the eye movement. Nonetheless, we perceive the world as stable across eye movements. These facts led many to hypothesize the existence of spatially mapped responses in the brain that do not change with eye movements, i.e., responses in a spatiotopic, rather than retinotopic, reference frame. It had been reported that human cortical area MT, unlike its counterpart in the monkey, encodes space in a spatiotopic map. We used BOLD imaging to determine the reference frame of 12 visual areas and found that all of them, including MT, represent stimuli in a retinotopic reference frame. Our data support the idea that human early visual areas encode stimuli retinotopically, just like monkey visual areas, and that explicit representations of spatiotopic space are not necessarily required for stable perception.
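The logic of the measurement can be illustrated with a toy tuning model (Python; the Gaussian tuning shape and the specific positions are assumptions for illustration): if an area is retinotopic, its response to a stimulus at a fixed screen position changes when fixation moves; if it is spatiotopic, the response is unchanged.

```python
import math

def response(pref, stim_screen, fixation, frame):
    # Toy Gaussian tuning (width 2 deg) anchored either to position
    # relative to fixation (retinotopic) or to absolute screen
    # position (spatiotopic).
    pos = stim_screen - fixation if frame == "retinotopic" else stim_screen
    return math.exp(-0.5 * ((pos - pref) / 2.0) ** 2)

stim = 5.0                          # stimulus fixed at 5 deg on the screen
for fixation in (0.0, 5.0):         # two fixation positions
    r = response(5.0, stim, fixation, "retinotopic")
    s = response(5.0, stim, fixation, "spatiotopic")
    print(f"fixation at {fixation:+.0f} deg: retinotopic {r:.2f}, spatiotopic {s:.2f}")
```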

Contrast adaptation and representation in human early visual cortex

Gardner, J. L., Sun, P., Waggoner, R. A., Ueno, K., Tanaka, K., and Cheng, K. (2005) Contrast adaptation and representation in human early visual cortex. Neuron 47:607-620 DOI <Preview by Geoffrey M. Boynton> Abstract pdf

Changes in the contrast of visual stimuli could signal an informative event, like the sudden appearance of a predator or prey, or a mundane one, like a change in lighting conditions as the sun sets. The visual system should optimally adjust its sensitivity to discount slow changes yet remain sensitive to rapid ones. Using event-related fMRI and a data-driven analysis approach, we uncovered two mechanisms in human early visual cortex that do just this. We found a horizontal shift of the relationship between contrast and response (see figure at left), akin to that reported in anesthetized animals, which slowly adapts responses to current viewing conditions. In human V4 (hV4), we found a counterpart to this adaptation mechanism: hV4 represents all changes in image contrast, be they increments or decrements, with a positive response. This suggests that hV4 responses do not faithfully follow contrast; rather, they signal salient changes.
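A horizontal shift of the contrast-response function can be sketched with the Naka-Rushton equation, a standard descriptive form for contrast responses (the parameter values here are illustrative, not the fitted values from the paper): adaptation raises the semi-saturation contrast, re-centering the steep part of the curve on the prevailing contrast.

```python
import numpy as np

def contrast_response(c, r_max=1.0, c50=0.2, n=2.0):
    """Naka-Rushton contrast-response function:
    R(c) = r_max * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
# Adaptation to a high contrast shifts the curve rightward on the
# log-contrast axis by raising the semi-saturation contrast c50.
before = contrast_response(contrasts, c50=0.1)
after = contrast_response(contrasts, c50=0.4)
for c, b, a in zip(contrasts, before, after):
    print(f"c={c:.2f}  before={b:.2f}  after={a:.2f}")
```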

A population decoding framework for motion aftereffects on smooth pursuit eye movements

Gardner, J. L., Tokiyama, S., and Lisberger, S. G. (2004) A population decoding framework for motion aftereffects on smooth pursuit eye movements. Journal of Neuroscience 24:9035-9048 DOI Abstract pdf

Watch a waterfall for a while and then shift your gaze to the person standing next to you, and you will get the strange sensation that their face is moving upwards (click the spiral above for a demo). This “motion aftereffect” is likely the result of adaptation of responses in visual cortex - but which adaptive changes give rise to the illusion, and what might that tell us about how populations of neurons encode stimulus properties for perception and action? After adaptation, the gain of cortical neurons has been reported to be reduced, their tuning to narrow, and their tuning preferences to be either attracted toward or repelled from the adapting stimulus (see figure at left). First, we found that this perceptual illusion is also manifest in visually guided movement, namely in the motion-tracking eye movements called smooth pursuit. We then used computational modeling to test which neuronal adaptation effect, considered by itself, could quantitatively account for the pattern of adaptation observed in the eye movements. Using vector-average decoding of populations of simulated MT neurons, we found that gain changes and narrowing of tuning, but not shifts in tuning preference, could account for the changes in the direction of pursuit eye movements after adaptation.
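A minimal sketch of vector-average decoding (Python; the tuning widths, gain reduction, and directions are illustrative assumptions, not the simulations from the paper): reducing the gain of neurons tuned near the adapting direction biases the decoded direction away from the adaptor.

```python
import numpy as np

def tuning(pref, stim, gain=1.0, width=40.0):
    """Gaussian direction tuning (degrees) for a model MT neuron."""
    d = (stim - pref + 180) % 360 - 180     # wrapped direction difference
    return gain * np.exp(-0.5 * (d / width) ** 2)

prefs = np.arange(0, 360, 5.0)              # population of preferred directions

def vector_average(rates):
    # Decode direction as the response-weighted average of each
    # neuron's preferred-direction unit vector.
    x = (rates * np.cos(np.radians(prefs))).sum()
    y = (rates * np.sin(np.radians(prefs))).sum()
    return np.degrees(np.arctan2(y, x)) % 360

stim, adapt = 30.0, 0.0                     # test and adapting directions
baseline = tuning(prefs, stim)
# Adaptation as a gain reduction for neurons tuned near the adapting
# direction; the decoded direction is repelled away from the adaptor.
gain = 1 - 0.5 * np.exp(-0.5 * (((prefs - adapt + 180) % 360 - 180) / 40) ** 2)
print("decoded, unadapted:", vector_average(baseline))
print("decoded, adapted:  ", vector_average(baseline * gain))
```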

Serial linkage of target selection for orienting and tracking eye movements

Gardner, J. L., and Lisberger, S. G. (2002) Serial linkage of target selection for orienting and tracking eye movements. Nature Neuroscience 5:892-899 DOI <News and Views by Michael N. Shadlen> Abstract pdf

Gardner, J. L., and Lisberger, S. G. (2001) Linked target selection for saccadic and smooth pursuit eye movements. Journal of Neuroscience 21(6):2075-2084 link Abstract pdf

How does the brain coordinate the choice of target between two different motor systems, such as saccadic and smooth pursuit eye movements? In principle, this could be done in parallel - sending a command to choose a target to both systems at once. Or it could be done in serial - first choosing a target for the saccadic movement and then relaying that choice to the pursuit system. In a series of behavioral and physiological studies, we found that the choice of target is sent in serial from the saccadic to the pursuit motor system. See this demo, which steps through the microstimulation studies we used to show this.
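The two schemes make different predictions for trial-by-trial agreement between the systems, which can be sketched in a toy simulation (Python; the trial counts and choice probabilities are made up, and this is not the actual microstimulation analysis): a parallel scheme allows the saccade and pursuit choices to disagree on ambiguous trials, while a serial scheme forwards a single choice and so forces agreement.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 10_000
p_choose_a = 0.5                  # ambiguous two-target trials

# Parallel scheme: each motor system selects a target independently,
# so the two choices can disagree on any given trial.
sacc_par = rng.random(n_trials) < p_choose_a
purs_par = rng.random(n_trials) < p_choose_a

# Serial scheme: the saccadic system chooses, and that single choice
# is forwarded to the pursuit system.
sacc_ser = rng.random(n_trials) < p_choose_a
purs_ser = sacc_ser

print("parallel agreement:", (sacc_par == purs_par).mean())  # ~0.5
print("serial agreement:  ", (sacc_ser == purs_ser).mean())  # 1.0
```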