|{{ :shared:efficientselection.png?725x195.75 |}}|||
|Our world is filled with multiple distractions - flashing images on a television screen, blinking lights, blaring horns. How is our brain able to focus attention only on relevant stimuli? The brain might turn up the sensory gain of responses (B above) or turn down noise in sensory cortical circuits responding to the relevant stimulus (C above) - thus enhancing our sensitivity. Alternatively (or in addition), the brain might efficiently select just the most relevant sensory responses for routing to higher perceptual and action-related areas (D above) - thus improving behavioral sensitivity by blocking out irrelevant signals. We studied contrast discrimination performance when subjects were cued to a single location (focal attention) or multiple locations (distributed attention), while concurrently measuring cortical responses using fMRI. Using computational models, we found that improved behavioral performance could be quantitatively accounted for by a model that included efficient selection of sensory signals using a max-pooling selection rule, but not by models that only allowed behavior to be improved by sensitivity enhancement. The max-pooling rule simply selected responses based on the magnitude of response (a toy sketch of this readout follows below the table). We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.|||
| Pestilli, F., Carrasco, M., Heeger, D. J. and Gardner, J. L. (2011) Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. //Neuron// 72:832-46 [[http://dx.doi.org/10.1016/j.neuron.2011.09.025|DOI]] <[[http://dx.doi.org/10.1016/j.neuron.2011.11.005|Preview by John T. Serences]]> |{{reprints:attentionselection.pdf|pdf}} |{{reprints:attentionselectionsi.pdf|SI}}|
|<html><a class="folder" href="#folded_2"> Abstract</a><span class="folded hidden" id="folded_2"><br/>
To characterize the computational processes by which attention improves behavioral performance, we measured activity in visual cortex with functional magnetic resonance imaging as humans performed a contrast-discrimination task with focal and distributed attention. Focal attention yielded robust improvements in behavioral performance that were accompanied by increases in cortical responses. Using a quantitative analysis, we determined that if performance were limited only by the sensitivity of the measured sensory signals, the improvements in behavioral performance would have corresponded to an unrealistically large (approximately 400%) reduction in response variability. Instead, behavioral performance was well characterized by a pooling and selection process for which the largest sensory responses, those most strongly modulated by attention, dominated the perceptual decision. This characterization predicts that high contrast distracters that evoke large sensory responses should have a negative impact on behavioral performance. We tested and confirmed this prediction. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.
</span></html>|||
|Hara Y., Pestilli F. and Gardner J. L. (2014). Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention. //Frontiers in Computational Neuroscience// 8:12. [[http://dx.doi.org/10.3389/fncom.2014.00012|DOI]]||{{reprints:hara_pestilli_gardner_2014.pdf|pdf}}|
|<html><a class="folder" href="#folded_17"> Abstract</a><span class="folded hidden" id="folded_17"><br/>
Single-unit measurements have reported many different effects of attention on contrast-response (e.g. contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.</span></html>|||
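
The max-pooling readout named above can be illustrated with a toy simulation. The sketch below is not the paper's model-fitting code; the task structure, noise levels, and gain value are invented for illustration. It compares a max-pooling decision rule against simple response averaging in a two-interval contrast-increment task, and shows how a modest attentional gain on the cued location helps a max-pooling observer by letting the relevant response dominate the pool.
<code python>
# Toy sketch of a max-pooling readout (illustrative parameters, not the paper's fits).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_locations, target = 100000, 4, 0
base, increment, noise_sd = 1.0, 0.2, 0.5

def trial_responses(with_increment):
    """One noisy sensory response per location; the cued location may carry an increment."""
    r = base + noise_sd * rng.standard_normal((n_trials, n_locations))
    if with_increment:
        r[:, target] += increment
    return r

def percent_correct(pool, gain=1.0):
    """Two-interval task: choose the interval whose pooled response is larger."""
    r_inc, r_no = trial_responses(True), trial_responses(False)
    r_inc[:, target] *= gain            # attentional gain on the cued location's response
    r_no[:, target] *= gain
    return (pool(r_inc, axis=1) > pool(r_no, axis=1)).mean()

print("averaging readout:             ", round(percent_correct(np.mean), 3))
print("max-pooling readout:           ", round(percent_correct(np.max), 3))
print("max-pooling + attentional gain:", round(percent_correct(np.max, gain=1.3), 3))
</code>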
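
The Hara et al. study builds on the normalization model of attention (Reynolds and Heeger, 2009). Below is a minimal one-dimensional sketch of that model, assuming illustrative tuning widths, attention-field size, gain, and normalization constant rather than the paper's fitted parameters; it prints attended-to-unattended response ratios across contrast for a well-tuned and an off-tuned model neuron, and the population-average difference that an fMRI-scale measurement would reflect.
<code python>
# Minimal sketch of the normalization model of attention on a 1-D feature axis
# (all parameter values are illustrative, not fits from the paper).
import numpy as np

pref = np.linspace(-90.0, 90.0, 181)   # preferred features of the model population (deg)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

def population_response(contrast, attended, tune_sd=20.0, attn_sd=30.0,
                        attn_gain=2.0, sigma=0.05):
    """Response of every neuron to a stimulus at 0 deg: attention-weighted stimulus
    drive divided by a broadly pooled suppressive drive plus a constant."""
    stim_drive = contrast * gauss(pref, 0.0, tune_sd)
    attn_field = (1.0 + (attn_gain - 1.0) * gauss(pref, 0.0, attn_sd)) if attended else 1.0
    excitatory = stim_drive * attn_field
    return excitatory / (excitatory.mean() + sigma)

contrasts = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])
for label, i in [("well-tuned neuron (pref 0 deg)", 90),
                 ("off-tuned neuron (pref 40 deg)", 130)]:
    ratio = [population_response(c, True)[i] / population_response(c, False)[i]
             for c in contrasts]
    print(label, "attended/unattended ratio:", np.round(ratio, 2))

# Population average across the heterogeneous pool (the fMRI-scale quantity):
diff = [population_response(c, True).mean() - population_response(c, False).mean()
        for c in contrasts]
print("population-average attended minus unattended:", np.round(diff, 3))
</code>
Whether a given model neuron shows response-gain-like or contrast-gain-like modulation in this scheme depends on the attention-field-to-stimulus-drive size ratio and on how the neuron is tuned relative to the stimulus, which is the heterogeneity the paper accounts for.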
  
^ Feature-specific attentional priority signals in human cortex ^^
| {{:shared:motioncolor.png?300x172 |}} Priority of visual stimuli has been hypothesized to be represented in spatial maps in cortex. Indeed, responses in many topographically mapped visual and parietal areas show spatially specific increased responses for stimuli located at the focus of attention. But stimuli can be prioritized not only by space but also by features such as color and direction of motion. When these non-spatial features are prioritized, how and where are they encoded? We used classification analyses of human fMRI responses as subjects performed a feature-based attention task with spatially overlapped stimuli (a toy decoding sketch follows below the table) and found that priority for color and motion are represented in frontal (e.g. FEF) and parietal (e.g. IPS1-4) areas commonly associated with spatial priority. This suggests that these areas encode the priority of different non-spatial features, multiplexed into their spatial representations. ||
| Liu, T., Hospadaruk, L., Zhu, D., and Gardner, J. L. (2011) Feature-specific attentional priority signals in human cortex. //Journal of Neuroscience// 31:4484-95 [[http://dx.doi.org/10.1523/JNEUROSCI.5745-10.2011|DOI]]| {{:reprints:liu_hospadaruk_zhu_gardner_jn_2011.pdf|pdf}}|
|<html><a class="folder" href="#folded_3"> Abstract</a><span class="folded hidden" id="folded_3"><br/>
Humans can flexibly attend to a variety of stimulus dimensions, including spatial location and various features such as color and direction of motion. While the locus of spatial attention has been hypothesized to be represented by priority maps encoded in several dorsal frontal and parietal areas, it is unknown how the brain represents attended features. Here we examined the distribution and organization of neural signals related to deployment of feature-based attention. Subjects viewed a compound stimulus containing two superimposed motion directions (or colors), and were instructed to perform an attention-demanding task on one of the directions (or colors). We found elevated and sustained fMRI response for the attention task compared to a neutral condition, without reliable differences in overall response amplitude between attending to different features. However, using multi-voxel pattern analysis, we were able to decode the attended feature in both early visual areas (V1 to hMT+) and frontal and parietal areas (e.g., IPS1-4 and FEF) that are commonly associated with spatial attention. Furthermore, analysis of the classifier weight maps showed that attending to motion and color evoked different patterns of activity, suggesting different neuronal subpopulations in these regions are recruited for attending to different feature dimensions. Thus, our finding suggests that rather than a purely spatial representation of priority, frontal and parietal cortical areas also contain multiplexed signals related to the priority of different non-spatial features.</span></html>||
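
The pattern-classification logic can be illustrated with a toy sketch (assuming scikit-learn; the voxel count, noise level, and voxel-wise feature preferences are invented for illustration, not taken from the paper). It makes the abstract's point that two conditions with matched mean response amplitude can still be told apart from the spatial pattern of responses.
<code python>
# Toy MVPA sketch (not the paper's pipeline): decode the attended feature from
# response patterns whose mean amplitude is matched across conditions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)          # 0 = attend motion, 1 = attend color

# Each voxel gets a small, fixed preference for one feature; condition means are equal.
voxel_bias = 0.15 * rng.standard_normal(n_voxels)
signs = np.where(labels == 0, 1.0, -1.0)
patterns = 1.0 + np.outer(signs, voxel_bias) + 0.5 * rng.standard_normal((n_trials, n_voxels))

print("mean amplitude (attend motion):", patterns[labels == 0].mean().round(3))
print("mean amplitude (attend color): ", patterns[labels == 1].mean().round(3))
print("cross-validated decoding accuracy:",
      cross_val_score(LinearSVC(), patterns, labels, cv=5).mean().round(3))
</code>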
^ Maps of visual space in human occipital cortex are retinotopic, not spatiotopic ^^
| {{ :shared:v1_and_mt_retinotopic.png?400x266.67|}} Every time we move our eyes or head, the image of a stationary visual object shifts to a different location on the retina. Thus, after an eye movement, a completely different set of sensory neurons encodes an object than the ones that encoded it before the eye movement. Nonetheless, we are able to perceive the world as stable across eye movements. These facts led many to hypothesize the existence of spatially mapped responses in the brain that do not change with eye movements; i.e. responses in a spatiotopic, rather than retinotopic, reference frame. Recently, it was reported that human cortical area MT, unlike its counterpart in the monkey, encodes space in a spatiotopic map. We used BOLD imaging to determine the reference frame of 12 visual areas (the alignment logic is sketched in code below) and found that all areas, including MT, represent stimuli in a retinotopic reference frame. Our data lend support to the idea that human early visual areas encode stimuli in a retinotopic reference frame just like monkey visual areas, and that explicit representations of spatiotopic space are not necessarily required for stable perception. ||
| Gardner, J. L., Merriam, E. P., Movshon, J. A., and Heeger, D. J. (2008) Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. //Journal of Neuroscience// 28:3988-3999 [[http://dx.doi.org/10.1523/JNEUROSCI.5476-07.2008|DOI]] [[http://www.jneurosci.org/content/28/15/i.full|This Week in the Journal]]| {{reprints:retinotopic.pdf|pdf}} |
|<html><a class="folder" href="#folded_4"> Abstract</a><span class="folded hidden" id="folded_4"><br/>
We experience the visual world as phenomenally invariant to eye position, but almost all cortical maps of visual space in monkeys use a retinotopic reference frame, that is, the cortical representation of a point in the visual world is different across eye positions. It was recently reported that human cortical area MT (unlike monkey MT) represents stimuli in a reference frame linked to the position of stimuli in space, a "spatiotopic" reference frame. We used visuotopic mapping with blood oxygen level-dependent functional magnetic resonance imaging signals to define 12 human visual cortical areas, and then determined whether the reference frame in each area was spatiotopic or retinotopic. We found that all 12 areas, including MT, represented stimuli in a retinotopic reference frame. Although there were patches of cortex in and around these visual areas that were ostensibly spatiotopic, none of these patches exhibited reliable stimulus-evoked responses. We conclude that the early, visuotopically organized visual cortical areas in the human brain (like their counterparts in the monkey brain) represent stimuli in a retinotopic reference frame.</span></html>||
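
The shift-and-correlate logic behind such a reference-frame test can be sketched as follows (an invented model voxel and parameters, not the paper's analysis code). A stimulus at screen position s viewed with gaze position g lands at retinal position s - g, so a retinotopic map should align across gaze positions in retinal coordinates, while a spatiotopic map should align in screen coordinates.
<code python>
# Sketch of a reference-frame test: measure a response profile at two gaze positions
# and ask in which coordinate frame the profiles align (illustrative, not the paper's code).
import numpy as np

screen = np.linspace(-20.0, 20.0, 401)       # stimulus positions (deg), 0.1 deg steps

def gauss(x, mu, sd=2.0):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

def retinotopic_voxel(stim_pos, gaze):
    """Model voxel tuned to retinal position 0 deg: retinal position = screen - gaze."""
    return gauss(stim_pos - gaze, 0.0)

g1, g2 = -5.0, 5.0
p1, p2 = retinotopic_voxel(screen, g1), retinotopic_voxel(screen, g2)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

shift = int(round((g2 - g1) / (screen[1] - screen[0])))   # gaze change in samples
print("aligned in screen coordinates (spatiotopic test): ", round(corr(p1, p2), 2))
print("aligned in retinal coordinates (retinotopic test):",
      round(corr(p1[:-shift], p2[shift:]), 2))
</code>
For this retinotopic model voxel, the retinal-coordinate alignment recovers a correlation of 1, while the screen-coordinate alignment does not; a truly spatiotopic voxel would show the reverse.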
^ Contrast adaptation and representation in human early visual cortex ^^
|{{ :shared:contrast_adpatation.png?250x316.67|Noimage}} | {{ :shared:contrast_series.png?400x63 |}}Changes in the contrast of visual stimuli could signal an informative event, like the sudden appearance of a predator or prey, or a mundane one, like a change in lighting conditions as the sun sets. The visual system should optimally adjust its sensitivity to discount slow changes yet remain sensitive to rapid ones. Using event-related fMRI and a data-driven analysis approach, we uncovered two mechanisms in human early visual cortex that do just this. We found a horizontal shift of the relationship between contrast and response (see figure at left, and the sketch below the table), akin to that reported in anesthetized animals, which slowly adapts responses to current viewing conditions. In human V4 (hV4), we found a counterpart to this adaptation mechanism: hV4 represents all changes in image contrast, be they increments or decrements, with a positive response. This suggests that hV4 responses do not faithfully follow contrast; rather, they signal salient changes. |
| Gardner, J. L., Sun, P., Waggoner, R. A., Ueno K., Tanaka, K., and Cheng K. (2005) Contrast adaptation and representation in human early visual cortex. //Neuron// 47:607-620 [[http://dx.doi.org/10.1016/j.neuron.2005.07.016|DOI]] <[[http://dx.doi.org/10.1016/j.neuron.2005.08.003|Preview by Geoffrey M. Boynton]]> ||
| {{reprints:cadapt.pdf|pdf}} ||
|<html><a class="folder" href="#folded_5"> Abstract</a><span class="folded hidden" id="folded_5"><br/>
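
The horizontal (contrast-gain) shift described above can be illustrated with the standard Naka-Rushton contrast-response function; the sketch below uses invented exponents and half-saturation contrasts, not the paper's fitted values.
<code python>
# Sketch of a horizontal (contrast-gain) shift of the contrast-response function,
# using the standard Naka-Rushton form with illustrative parameters.
import numpy as np

def naka_rushton(c, c50, rmax=1.0, n=2.0):
    """Response to contrast c with half-saturation contrast c50."""
    return rmax * c ** n / (c ** n + c50 ** n)

contrasts = np.array([0.0125, 0.025, 0.05, 0.1, 0.2, 0.4, 0.8])
# Adapting to a higher contrast raises c50: the curve slides rightward on a
# log-contrast axis, re-centering the steep (most sensitive) region near the
# prevailing contrast while the saturating response level stays the same.
for label, c50 in [("adapted to low contrast ", 0.05), ("adapted to high contrast", 0.2)]:
    print(label, np.round(naka_rushton(contrasts, c50), 2))
</code>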
^ A population decoding framework for motion aftereffects on smooth pursuit eye movements ^^
|[[:demos:mae|{{:shared:motionadaptmodels.png?400x485 |}}]] | [[:demos:mae|{{ :demos:spiral.png?100x100 |}}]] Watch a waterfall for some period of time and then shift your gaze to the person standing next to you, and you will get a sensation that their face is moving upwards (click the spiral for a [[:demos:mae|demo]]). This "motion aftereffect" is likely the result of adaptation of responses in the visual cortex - but what adaptive changes give rise to the illusion, and what might that tell us about how populations of neurons encode properties of stimuli for perception and action? After adaptation, it has been reported that the gain of cortical neurons is reduced, tuning narrows, and tuning preferences are either attracted towards or repelled from the adaptation stimulus (see figure at left). First, we found that this perceptual illusion is also manifest in visually guided movement, namely in the motion-tracking movements of the eye called smooth pursuit. We then used computational modeling to see which neuronal adaptation effect (when considered by itself) could quantitatively account for the pattern of observed adaptation in the eye movements. Using vector-average decoding of populations of simulated MT neurons (sketched in code below the table), we found that gain changes and narrowing of tuning, but not shifts in tuning preference, were able to account for changes in the direction of pursuit eye movements after adaptation.|
| Gardner, J. L., Tokiyama, S., and Lisberger, S. G. (2004) A population decoding framework for motion aftereffects on smooth pursuit eye movements. //Journal of Neuroscience// 24:9035-9048 [[http://dx.doi.org/10.1523/JNEUROSCI.0337-04.2004|DOI]]||
| {{reprints:aftereffects.pdf|pdf}} ||
|<html><a class="folder" href="#folded_6"> Abstract</a><span class="folded hidden" id="folded_6"><br/>
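
The vector-average decoding framework can be sketched in a few lines (the von Mises-style tuning, tuning width, and adaptation strength below are illustrative assumptions, not the paper's fitted model). Reducing the gain of model neurons tuned near the adapted direction biases the decoded direction of nearby test stimuli away from the adapter, which is the kind of directional error measured in pursuit after adaptation.
<code python>
# Sketch of vector-average decoding of a direction-tuned population, with an
# adaptation-induced gain reduction (illustrative parameters, not the paper's fits).
import numpy as np

prefs = np.deg2rad(np.arange(0.0, 360.0, 5.0))       # preferred directions (rad)

def tuning(stim_deg):
    """Von Mises-like direction tuning for every neuron in the population."""
    return np.exp(2.0 * (np.cos(prefs - np.deg2rad(stim_deg)) - 1.0))

def vector_average(resp):
    """Decode direction as the response-weighted average of preferred directions."""
    return np.rad2deg(np.arctan2(resp @ np.sin(prefs), resp @ np.cos(prefs))) % 360

# Adaptation at 0 deg: reduce gain most for neurons preferring the adapted direction.
gain = 1.0 - 0.5 * tuning(0.0)

for test_deg in (0.0, 30.0, 90.0):
    decoded = vector_average(gain * tuning(test_deg))
    print(f"stimulus {test_deg:5.1f} deg -> decoded {decoded:6.1f} deg")
</code>
Narrowed tuning or shifted preferences can be explored in the same way by editing the tuning function, which is how the paper's candidate adaptation effects were compared under a single readout rule.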
^ Serial linkage of target selection for orienting and tracking eye movements ^^
|[[:demos:stimsac|{{ :shared:stimsac.png?200x128|}}]] How does the brain coordinate the choice of target between two different motor systems, like saccadic and smooth pursuit eye movements? In principle this could be done in parallel – sending a command to choose a target to both systems at once. Or it could be done in serial – first choosing a target with the saccadic movement and then sending that command on to the pursuit system. In a series of behavioral and physiological studies we have found that the choice of target is sent in serial from the saccadic to the pursuit motor system. See [[:demos:stimsac|this demo]], which steps you through the series of microstimulation studies that we used to show this.||
|Gardner, J. L., and Lisberger, S. G. (2002) Serial linkage of target selection for orienting and tracking eye movements. //Nature Neuroscience// 5:892-899 [[http://dx.doi.org/10.1038/nn897|DOI]] <[[http://dx.doi.org/10.1038/nn0902-819|News and Views by Michael N. Shadlen]]> | {{reprints:stimsac.pdf|pdf}} |
|<html><a class="folder" href="#folded_7"> Abstract</a><span class="folded hidden" id="folded_7"><br/>
Many natural actions require the coordination of two different kinds of movements. How are targets chosen under these circumstances: do central commands instruct different movement systems in parallel, or does the execution of one movement activate a serial chain that automatically chooses targets for the other movement? We examined a natural eye tracking action that consists of orienting saccades and tracking smooth pursuit eye movements, and found strong physiological evidence for a serial strategy. Monkeys chose freely between two identical spots that appeared at different sites in the visual field and moved in orthogonal directions. If a saccade was evoked to one of the moving targets by microstimulation in either the frontal eye field (FEF) or the superior colliculus (SC), then the same target was automatically chosen for pursuit. Our results imply that the neural signals responsible for saccade execution can also act as an internal command of target choice for other movement systems.
</span></html>||
| Gardner, J. L., and Lisberger, S. G. (2001) Linked target selection for saccadic and smooth pursuit eye movements. //Journal of Neuroscience// 21(6):2075-2084 [[http://www.jneurosci.org/content/21/6/2075.short|link]] |{{reprints:pursac.pdf|pdf}} |
|<html><a class="folder" href="#folded_8"> Abstract</a><span class="folded hidden" id="folded_8"><br/>
 In natural situations, motor activity must often choose a single target when multiple distractors are present. The present paper asks how primate smooth pursuit eye movements choose targets, by analysis of a natural target-selection task. Monkeys tracked two targets that started 1.5 degrees eccentric and moved in different directions (up, right, down, and left) toward the position of fixation. As expected from previous results, the smooth pursuit before the first saccade reflected a vector average of the responses to the two target motions individually. However, post-saccadic smooth eye velocity showed enhancement that was spatially selective for the motion at the endpoint of the saccade. If the saccade endpoint was close to one of the two targets, creating a targeting saccade, then pursuit was selectively enhanced for the visual motion of that target and suppressed for the other target. If the endpoint landed between the two targets, creating an averaging saccade, then post-saccadic smooth eye velocity also reflected a vector average of the two target motions. Saccades with latencies >200 msec were almost always targeting saccades. However, pursuit did not transition from vector-averaging to target-selecting until the occurrence of a saccade, even when saccade latencies were >300 msec. Thus, our data demonstrate that post-saccadic enhancement of pursuit is spatially selective and that noncued target selection for pursuit is time-locked to the occurrence of a saccade. This raises the possibility that the motor commands for saccades play a causal role, not only in enhancing visuomotor transmission for pursuit but also in choosing a target for pursuit. In natural situations, motor activity must often choose a single target when multiple distractors are present. The present paper asks how primate smooth pursuit eye movements choose targets, by analysis of a natural target-selection task. Monkeys tracked two targets that started 1.5 degrees eccentric and moved in different directions (up, right, down, and left) toward the position of fixation. As expected from previous results, the smooth pursuit before the first saccade reflected a vector average of the responses to the two target motions individually. However, post-saccadic smooth eye velocity showed enhancement that was spatially selective for the motion at the endpoint of the saccade. If the saccade endpoint was close to one of the two targets, creating a targeting saccade, then pursuit was selectively enhanced for the visual motion of that target and suppressed for the other target. If the endpoint landed between the two targets, creating an averaging saccade, then post-saccadic smooth eye velocity also reflected a vector average of the two target motions. Saccades with latencies >200 msec were almost always targeting saccades. However, pursuit did not transition from vector-averaging to target-selecting until the occurrence of a saccade, even when saccade latencies were >300 msec. Thus, our data demonstrate that post-saccadic enhancement of pursuit is spatially selective and that noncued target selection for pursuit is time-locked to the occurrence of a saccade. This raises the possibility that the motor commands for saccades play a causal role, not only in enhancing visuomotor transmission for pursuit but also in choosing a target for pursuit.