Overview: Priors in visual perception
Vision is a fabrication of our minds. Sensory information from our eyes is often ambiguous or limited, yet vision is remarkably robust and surprisingly able to correctly interpret impoverished sensory signals. What cortical computations make this possible? In the framework of Bayesian statistical decision theory, how does the cortex combine sensory evidence from the eyes with priors, or expectations, to form percepts? Priors may be short-term and signaled by the task at hand: a particular spatial location may be more likely to contain needed information. Or priors may be long-term, developed over extended exposure to the natural statistics of the visual world: objects tend to move slowly rather than quickly. While much is known about the encoding of sensory evidence, comparatively little is known about priors. Where do priors interact with sensory signals, and how do they modify and augment perception? We use psychophysics to make precise behavioral measurements of how priors bias sensory decisions while concurrently measuring cortical activity with functional magnetic resonance imaging (fMRI). Using knowledge of the visual system and decision-theoretic models of how behavior is linked to cortical activity, we seek to understand the cortical computations that construct human vision.
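As a toy illustration of this framework (a minimal sketch, not the model from any particular study; all parameters are invented), the code below combines a Gaussian sensory likelihood with a Gaussian prior. The posterior mean is a reliability-weighted average, so noisier evidence is pulled more strongly toward the prior:

```python
import numpy as np

# Minimal sketch of Bayesian cue-prior combination (illustrative only;
# parameters are invented, not fit to data).
def posterior_gaussian(measurement, sigma_sensory, prior_mean, sigma_prior):
    """Posterior mean and sd for a Gaussian likelihood times a Gaussian prior."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)  # weight on evidence
    post_mean = w * measurement + (1 - w) * prior_mean
    post_sd = np.sqrt((sigma_sensory**2 * sigma_prior**2) /
                      (sigma_sensory**2 + sigma_prior**2))
    return post_mean, post_sd

# A prior for slow speeds (mean 0 deg/s) pulls a noisy speed measurement
# toward zero; the noisier the evidence, the stronger the pull.
print(posterior_gaussian(10.0, sigma_sensory=2.0, prior_mean=0.0, sigma_prior=4.0))
print(posterior_gaussian(10.0, sigma_sensory=6.0, prior_mean=0.0, sigma_prior=4.0))
```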

Selected publications

Below is a selected list of our key publications, with illustrations, some demos, and a brief introduction to each finding. For a full list of our research publications, see here.

Cortical correlates of human motion perception biases
Human sensory perception is not a faithful reproduction of the sensory environment. For example, at low contrast, objects appear to move slower and flicker faster than veridical. While these biases have been robustly observed, their neural underpinnings are unknown, suggesting a possible disconnect in the well-established link between motion perception and cortical responses. We used functional imaging to examine the encoding of speed in the human cortex at the scale of neuronal populations and asked where and how these biases are encoded. Decoding, voxel population, and forward-encoding analyses revealed biases towards slow speeds and high temporal frequencies at low contrast in the earliest visual cortical regions, matching perception. These findings thus offer a resolution to the disconnect between cortical responses and motion perception in humans. Moreover, biases in speed perception are considered a leading example of Bayesian inference, as they can be interpreted as a prior for slow speeds. Our data therefore suggest that perceptual priors of this sort can be encoded by neural populations in the same early cortical areas that provide sensory evidence.
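For readers curious about the forward-encoding approach mentioned above, here is a minimal, self-contained sketch in the same spirit (simulated data, hypothetical Gaussian speed channels, and no cross-validation for brevity; the analyses in the paper are more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical speed channels with Gaussian tuning on log speed
n_channels, n_voxels, n_trials = 6, 50, 120
speeds = np.logspace(-1, 1.5, n_channels)                # channel preferred speeds
trial_speed = rng.choice(speeds, n_trials)               # presented speed per trial

def channel_responses(s):
    return np.exp(-0.5 * ((np.log(s)[:, None] - np.log(speeds)[None]) / 0.5) ** 2)

C = channel_responses(trial_speed)                       # trials x channels
W = rng.normal(size=(n_channels, n_voxels))              # true channel-to-voxel weights
B = C @ W + 0.5 * rng.normal(size=(n_trials, n_voxels))  # simulated voxel responses

# Fit the weights by least squares, then invert the model to estimate
# channel responses from the voxel data (a real analysis would fit and
# test on separate trials).
W_hat = np.linalg.lstsq(C, B, rcond=None)[0]
C_hat = B @ np.linalg.pinv(W_hat)
decoded = speeds[np.argmax(C_hat, axis=1)]
print("decoding accuracy:", np.mean(decoded == trial_speed))
```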
Vintch, B. and Gardner, J. L. (2014) Cortical correlates of human motion perception biases. The Journal of Neuroscience 34: 2592–2604. DOI pdf
Attentional enhancement via selection and pooling of early sensory responses in human visual cortex
Our world is filled with distractions: flashing images on a television screen, blinking lights, blaring horns. How is our brain able to focus attention only on relevant stimuli? The brain might turn up the sensory gain of responses (B above) or turn down noise in the sensory cortical circuits responding to the relevant stimulus (C above), thus enhancing our sensitivity. Alternatively (or in addition), the brain might efficiently select just the most relevant sensory responses for routing to higher perceptual and action-related areas (D above), thus improving behavioral sensitivity by blocking out irrelevant signals. We studied contrast discrimination performance when subjects were cued to a single location (focal attention) or multiple locations (distributed attention), while concurrently measuring cortical responses with fMRI. Using computational models, we found that the improvement in behavioral performance could be quantitatively accounted for by a model that included efficient selection of sensory signals via a max-pooling selection rule, but not by models that only allowed behavior to improve through sensitivity enhancement. The max-pooling rule simply selected responses based on their magnitude. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.
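A minimal signal-detection sketch (invented parameters; not the fitted model from the paper) shows why selection alone can change performance: an observer who must max-pool over all locations performs worse than one who can select only the cued location, even though the sensitivity of each sensory response is identical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-interval discrimination with a max-pooling decision rule.
# Sensory sensitivity (d') is identical in both conditions; only the
# number of locations pooled into the decision differs.
def percent_correct(n_locs_pooled, d_prime=1.0, n_sim=100_000):
    target = rng.normal(0.0, 1.0, (n_sim, n_locs_pooled))
    target[:, 0] += d_prime                          # signal at the relevant location
    no_target = rng.normal(0.0, 1.0, (n_sim, n_locs_pooled))
    # Decision variable: the largest response across pooled locations
    return np.mean(target.max(axis=1) > no_target.max(axis=1))

print("focal (pool 1 location):       ", percent_correct(1))
print("distributed (pool 4 locations):", percent_correct(4))
```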
Pestilli, F., Carrasco, M., Heeger, D. J. and Gardner, J. L. (2011) Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. Neuron 72:832-46 DOI <Preview by John T. Serences> pdf SI
Hara, Y. and Gardner, J. L. (2014) Encoding of graded changes in spatial specificity of prior cues in human visual cortex. Journal of Neurophysiology 112:2834-49 DOI pdf
Hara, Y., Pestilli, F. and Gardner, J. L. (2014) Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention. Frontiers in Computational Neuroscience 8:12 DOI pdf
Feature-specific attentional priority signals in human cortex
The priority of visual stimuli has been hypothesized to be represented in spatial maps in cortex. Indeed, responses in many topographically mapped visual and parietal areas show spatially specific increases for stimuli at the focus of attention. But stimuli can be prioritized not only by spatial location but also by features such as color and direction of motion. When these non-spatial features are prioritized, how and where are they encoded? We used classification analyses of human fMRI responses as subjects performed a feature-based attention task with spatially overlapping stimuli and found that priority for color and motion is represented in frontal (e.g., FEF) and parietal (e.g., IPS1-4) areas commonly associated with spatial priority. This suggests that these areas encode the priority of different non-spatial features multiplexed into their spatial representations.
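As an illustration of the general logic of such classification analyses (fabricated data; assumes scikit-learn is available and is not the exact pipeline from the paper), one can ask whether a linear classifier can read out which feature was attended from voxel response patterns:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

# Fabricated voxel patterns: each attended feature (e.g., a color vs. a
# motion direction) evokes a weak, distributed pattern plus noise.
n_trials, n_voxels = 80, 100
labels = np.repeat([0, 1], n_trials // 2)          # attended feature on each trial
pattern = rng.normal(size=(2, n_voxels))           # feature-specific patterns
X = 0.3 * pattern[labels] + rng.normal(size=(n_trials, n_voxels))

# Cross-validated accuracy above chance (0.5) indicates the region
# carries information about the attended feature.
scores = cross_val_score(LinearSVC(dual=False), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```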
Liu, T., Hospadaruk, L., Zhu, D., and Gardner, J. L. (2011) Feature-specific attentional priority signals in human cortex. Journal of Neuroscience 31:4484-95 DOI pdf
Maps of visual space in human occipital cortex are retinotopic, not spatiotopic
Every time we move our eyes or head, the image of a stationary visual object shifts to a different location on the retina. Thus, after an eye movement, a completely different set of sensory neurons encodes an object than the ones that encoded it before the eye movement. Nonetheless, we perceive the world as stable across eye movements. These facts led many to hypothesize the existence of spatially mapped responses in the brain that do not change with eye movements; i.e., responses in a spatiotopic, rather than retinotopic, reference frame. Recently, it was reported that human cortical area MT, unlike its counterpart in the monkey, encodes space in a spatiotopic map. We used BOLD imaging to determine the reference frame of 12 visual areas and found that all areas, including MT, represent stimuli in a retinotopic reference frame. Our data lend support to the idea that human early visual areas encode stimuli retinotopically, just like monkey visual areas, and that explicit representations of spatiotopic space are not necessarily required for stable perception.
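The logic of the reference-frame test can be sketched in a few lines (simulated, deliberately retinotopic data; the real experiment measured full topographic maps at different fixation positions): measure a response map at two fixations, then ask whether the maps align best in retinal or in screen coordinates.

```python
import numpy as np

rng = np.random.default_rng(3)

n_positions = 20                                   # stimulus positions on the screen
retinal_tuning = rng.normal(size=n_positions)      # response profile in retinal coords

def response_map(fixation_shift):
    # In a retinotopic area, the screen-coordinate map shifts with the eyes
    return np.roll(retinal_tuning, fixation_shift)

map_fix_a = response_map(0)
map_fix_b = response_map(5)                        # fixation moved by 5 positions

# Spatiotopic prediction: maps align in screen coordinates as measured.
# Retinotopic prediction: maps align after undoing the eye movement.
r_spatiotopic = np.corrcoef(map_fix_a, map_fix_b)[0, 1]
r_retinotopic = np.corrcoef(map_fix_a, np.roll(map_fix_b, -5))[0, 1]
print("spatiotopic alignment r:", round(r_spatiotopic, 2))
print("retinotopic alignment r:", round(r_retinotopic, 2))
```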
Gardner, J. L., Merriam, E. P., Movshon, J. A., and Heeger, D. J. (2008) Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. Journal of Neuroscience 28:3988-3999 DOI This Week in the Journal pdf
Contrast adaptation and representation in human early visual cortex
Changes in the contrast of visual stimuli could signal an informative event, like the sudden appearance of a predator or prey, or a mundane one, like a change in lighting conditions as the sun sets. The visual system should optimally adjust its sensitivity to discount slow changes yet remain sensitive to rapid ones. Using event-related fMRI and a data-driven analysis approach, we uncovered two mechanisms in human early visual cortex that do just this. We found a horizontal shift of the relationship between contrast and response (see figure at left), akin to that reported in anesthetized animals, which slowly adapts responses to current viewing conditions. In human V4 (hV4), we found a counterpart to this adaptation mechanism: hV4 represents all changes in image contrast, be they increments or decrements, with a positive response. This suggests that hV4 responses do not faithfully follow contrast; rather, they signal salient changes.
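The two mechanisms can be caricatured with a standard Naka-Rushton contrast-response function (invented parameters; a sketch of the ideas rather than the fitted model): adaptation shifts the semi-saturation contrast horizontally, while an hV4-like signal responds positively to any contrast change.

```python
import numpy as np

def naka_rushton(c, c50, rmax=1.0, n=2.0):
    """Standard contrast-response function."""
    return rmax * c**n / (c**n + c50**n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])

# (1) Adaptation as a horizontal shift: adapting to a higher contrast
# moves the semi-saturation contrast (c50) upward.
print(naka_rushton(contrasts, c50=0.1))            # adapted to low contrast
print(naka_rushton(contrasts, c50=0.3))            # adapted to high contrast

# (2) An hV4-like change signal: a positive response to any deviation,
# increment or decrement, from the adapted contrast level.
adapted_level = 0.2
print(np.abs(contrasts - adapted_level))           # rectified change magnitude
```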
Gardner, J. L., Sun, P., Waggoner, R. A., Ueno, K., Tanaka, K., and Cheng, K. (2005) Contrast adaptation and representation in human early visual cortex. Neuron 47:607-620 DOI <Preview by Geoffrey M. Boynton> pdf
A population decoding framework for motion aftereffects on smooth pursuit eye movements
Watch a waterfall for a while and then shift your gaze to the person standing next to you, and you will get the sensation that their face is moving upwards (click the spiral for a demo). This “motion aftereffect” is likely the result of adaptation of responses in the visual cortex. But what adaptive changes give rise to the illusion, and what might that tell us about how populations of neurons encode properties of stimuli for perception and action? After adaptation, it has been reported that the gain of cortical neurons is reduced, tuning narrows, and tuning preferences are either attracted towards or repelled from the adaptation stimulus (see figure at left). First, we found that this perceptual illusion is also manifest in visually guided movement, namely in the motion-tracking eye movements called smooth pursuit. We then used computational modeling to ask which neuronal adaptation effect, considered by itself, could quantitatively account for the observed pattern of adaptation in the eye movements. Using vector-average decoding of populations of simulated MT neurons, we found that gain changes and narrowing of tuning, but not shifts in tuning preference, could account for the changes in the direction of pursuit eye movements after adaptation.
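A toy version of the vector-average decoding logic (von Mises tuning and invented gain parameters; not the fitted model from the paper) shows how a loss of gain in units tuned near the adapted direction biases the decoded direction away from it:

```python
import numpy as np

prefs = np.deg2rad(np.arange(0, 360, 10))          # preferred directions of the units

def population_response(stim_deg, adapt_deg=None, kappa=4.0):
    stim = np.deg2rad(stim_deg)
    r = np.exp(kappa * (np.cos(prefs - stim) - 1.0))     # von Mises direction tuning
    if adapt_deg is not None:
        # Gain reduction concentrated on units tuned near the adapted direction
        adapt = np.deg2rad(adapt_deg)
        r *= 1.0 - 0.5 * np.exp(4.0 * (np.cos(prefs - adapt) - 1.0))
    return r

def vector_average(r):
    # Decoded direction = angle of the response-weighted sum of unit vectors
    return np.rad2deg(np.angle(np.sum(r * np.exp(1j * prefs)))) % 360

print(vector_average(population_response(30)))               # unadapted: ~30 deg
print(vector_average(population_response(30, adapt_deg=0)))  # repelled away from 0 deg
```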
Gardner, J. L., Tokiyama, S., and Lisberger, S. G. (2004) A population decoding framework for motion aftereffects on smooth pursuit eye movements. Journal of Neuroscience 24:9035-9048 DOI pdf
Serial linkage of target selection for orienting and tracking eye movements
How does the brain coordinate the choice of target between two different motor systems, such as saccadic and smooth pursuit eye movements? In principle this could be done in parallel, sending a command to choose a target to both systems at once. Or it could be done in serial, first choosing a target with the saccadic system and then passing that command along to the pursuit system. In a series of behavioral and physiological studies, we found that the choice of target is sent in serial from the saccadic to the pursuit motor system. See this demo, which steps through a series of microstimulation studies that we used to show this.
Gardner, J. L., and Lisberger, S. G. (2002) Serial linkage of target selection for orienting and tracking eye movements. Nature Neuroscience 5:892-899 DOI <News and Views by Michael N. Shadlen> pdf
Gardner, J. L., and Lisberger, S. G. (2001) Linked target selection for saccadic and smooth pursuit eye movements. Journal of Neuroscience 21(6):2075-2084 link pdf