====== Overview ======
^ Overview: Priors in visual perception ^
|{{:shared:brainmt.jpg |}} Vision is a fabrication of our minds. Sensory information from our eyes is often ambiguous or limited, yet vision is remarkably robust and surprisingly able to correctly interpret impoverished sensory signals. What cortical computations make this possible? In the framework of Bayesian statistical decision theory, how does the cortex combine sensory evidence from the eyes with priors, or expectations, to form percepts (a toy sketch of this combination rule follows below)? Priors may be short-term and signaled by the task at hand - a particular spatial location may be more likely to contain the information that is needed. Or priors may be long-term, developed over extended exposure to the natural statistics of the visual world - objects tend to move slowly rather than quickly. While much is known about the encoding of sensory evidence, comparatively little is known about priors. Where do priors interact with sensory signals, and how do they modify and augment perception? We use psychophysics to make precise behavioral measurements of how priors bias sensory decisions while concurrently measuring cortical activity with functional magnetic resonance imaging. Using knowledge of the visual system and decision-theoretic models of how behavior is linked to cortical activity, we seek to understand the cortical computations that construct human vision.|
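A minimal sketch of the Bayesian combination rule described above, under the standard simplifying assumption that both the sensory likelihood and the prior are Gaussian; all numbers are illustrative, not fit to data.

<code python>
# Percept as the posterior mean: a reliability-weighted average of the
# sensory measurement and the prior expectation (Gaussian assumptions).
def posterior_mean(measurement, sigma_sensory, prior_mean, sigma_prior):
    w_sensory = 1.0 / sigma_sensory**2  # reliability = inverse variance
    w_prior = 1.0 / sigma_prior**2
    return (w_sensory * measurement + w_prior * prior_mean) / (w_sensory + w_prior)

# Example with a slow-speed prior (mean 0 deg/s): when the sensory measurement
# is noisier (e.g., at low contrast), the percept is pulled harder toward the
# prior, so the same physical speed appears slower.
print(posterior_mean(8.0, sigma_sensory=1.0, prior_mean=0.0, sigma_prior=4.0))  # ~7.5
print(posterior_mean(8.0, sigma_sensory=4.0, prior_mean=0.0, sigma_prior=4.0))  # 4.0
</code>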
  
====== Selected publications ======

Below is a selection of our key publications, with illustrations, some demos, and a brief introduction to each finding. For a full list of our research publications, [[:shared:publications|see here]].

^ Cortical correlates of human motion perception biases ^^^
|{{ :shared:vintchpopanal.png |}}|||
| Human sensory perception is not a faithful reproduction of the sensory environment. For example, at low contrast, objects appear to move slower and flicker faster than veridical. While these biases have been robustly observed, their neural underpinnings are unknown, suggesting a possible disconnect in the well-established link between motion perception and cortical responses. We used functional imaging to examine the encoding of speed in the human cortex at the scale of neuronal populations and asked where and how these biases are encoded. Decoding, voxel-population, and forward-encoding analyses (a toy forward-encoding analysis is sketched below the table) revealed biases towards slow speeds and high temporal frequencies at low contrast in the earliest visual cortical regions, matching perception. These findings thus offer a resolution to the disconnect between cortical responses and motion perception in humans. Moreover, biases in speed perception are considered a leading example of Bayesian inference, as they can be interpreted as a prior for slow speeds. Our data therefore suggest that perceptual priors of this sort can be encoded by neural populations in the same early cortical areas that provide sensory evidence.|||
| Vintch, B. and Gardner, J. L. (2014) Cortical correlates of human motion perception biases. //The Journal of Neuroscience// 34:2592-2604. [[http://dx.doi.org/10.1523/JNEUROSCI.2809-13.2014|DOI]] |{{:reprints:2014vintch.pdf|pdf}}||
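As referenced above, a minimal sketch of a forward-encoding analysis of the general kind used to read out speed from voxel responses. The channel shapes, dimensions, and two-step fit-then-invert procedure follow the generic recipe for such models; they are illustrative assumptions, not the paper's actual pipeline.

<code python>
import numpy as np

n_channels, n_voxels, n_trials = 6, 50, 200
prefs = np.linspace(0, 1, n_channels)  # channel preferred speeds (arbitrary units)

def channel_responses(stim_speed):
    """Gaussian speed channels evaluated at a stimulus speed."""
    return np.exp(-(stim_speed - prefs)**2 / (2 * 0.15**2))

rng = np.random.default_rng(0)
stim = rng.uniform(0, 1, n_trials)
C = np.array([channel_responses(s) for s in stim])       # trials x channels
W = rng.normal(size=(n_channels, n_voxels))              # true channel-to-voxel weights
B = C @ W + 0.5 * rng.normal(size=(n_trials, n_voxels))  # simulated voxel responses

# Step 1: estimate the weights from training trials.
W_hat = np.linalg.lstsq(C[:100], B[:100], rcond=None)[0]
# Step 2: invert the model on held-out trials to recover channel profiles,
# then read out the stimulus as the preferred speed of the peak channel.
C_hat = np.linalg.lstsq(W_hat.T, B[100:].T, rcond=None)[0].T
decoded = prefs[np.argmax(C_hat, axis=1)]
print(np.corrcoef(decoded, stim[100:])[0, 1])  # decoded speed tracks the stimulus
</code>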

^ Attentional enhancement via selection and pooling of early sensory responses in human visual cortex ^^^
|{{ :shared:efficientselection.png?725x195.75 |}}|||
|Our world is filled with distractions - flashing images on a television screen, blinking lights, blaring horns. How is our brain able to focus attention only on relevant stimuli? The brain might turn up the sensory gain of responses (B above) or turn down noise in the sensory cortical circuits responding to the relevant stimulus (C above), thus enhancing our sensitivity. Alternatively (or in addition), the brain might efficiently select just the most relevant sensory responses for routing to higher perceptual and action-related areas (D above), thus improving behavioral sensitivity by blocking out irrelevant signals. We studied contrast-discrimination performance when subjects were cued to a single location (focal attention) or multiple locations (distributed attention), while concurrently measuring cortical responses using fMRI. Using computational models, we found that improved behavioral performance could be quantitatively accounted for by a model that included efficient selection of sensory signals using a max-pooling selection rule, but not by models that only allowed behavior to be improved by sensitivity enhancement. The max-pooling rule simply selected responses based on their magnitude (a toy version of this readout is sketched below the table). We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.|||
| Pestilli, F., Carrasco, M., Heeger, D. J. and Gardner, J. L. (2011) Attentional enhancement via selection and pooling of early sensory responses in human visual cortex. //Neuron// 72:832-46. [[http://dx.doi.org/10.1016/j.neuron.2011.09.025|DOI]] <[[http://dx.doi.org/10.1016/j.neuron.2011.11.005|Preview by John T. Serences]]> |{{reprints:attentionselection.pdf|pdf}} |{{reprints:attentionselectionsi.pdf|SI}}|
| Abstract: To characterize the computational processes by which attention improves behavioral performance, we measured activity in visual cortex with functional magnetic resonance imaging as humans performed a contrast-discrimination task with focal and distributed attention. Focal attention yielded robust improvements in behavioral performance that were accompanied by increases in cortical responses. Using a quantitative analysis, we determined that if performance were limited only by the sensitivity of the measured sensory signals, the improvements in behavioral performance would have corresponded to an unrealistically large (approximately 400%) reduction in response variability. Instead, behavioral performance was well characterized by a pooling and selection process for which the largest sensory responses, those most strongly modulated by attention, dominated the perceptual decision. This characterization predicts that high contrast distracters that evoke large sensory responses should have a negative impact on behavioral performance. We tested and confirmed this prediction. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.|||
| Word cloud: {{:shared:efficientselection800.png|}} |||
|Hara, Y. and Gardner, J. L. (2014) Encoding of graded changes in spatial specificity of prior cues in human visual cortex. //Journal of Neurophysiology// 112:2834-49. [[http://dx.doi.org/10.1152/jn.00729.2013|DOI]]||{{:reprints:haragardnerjnp2014.pdf|pdf}}|
| Abstract: Prior information about the relevance of spatial locations can vary in specificity; a single location, a subset of locations or all locations may be of potential importance. Using a contrast-discrimination task with 4 possible targets, we asked whether performance benefits are graded with the spatial specificity of a prior cue and whether we could quantitatively account for behavioral performance with cortical activity changes measured by blood oxygenation level dependent (BOLD) imaging. Thus we changed the prior probability that each location contained the target from 100 to 50 to 25% by cueing in advance 1, 2 or 4 of the possible locations. We found that behavioral performance (discrimination thresholds) improved in a graded fashion with spatial specificity. However, concurrently measured cortical responses from retinotopically-defined visual areas were not strictly graded; response magnitude decreased when all four locations were cued (25% prior probability) relative to the 100 and 50% prior probability conditions, but no significant difference in response magnitude was found between the 100 and 50% prior probability conditions for either cued or uncued locations. Also, while cueing locations increased responses relative to non-cueing, this cue-sensitivity was not graded with prior probability. Further, contrast-sensitivity of cortical responses, which could improve contrast discrimination performance, was not graded. Instead, an efficient-selection model showed that even if sensory responses do not strictly scale with prior probability, selection of sensory responses by weighting larger responses more can result in graded behavioral performance benefits with increasing spatial specificity of prior information.|||
|Hara, Y., Pestilli, F. and Gardner, J. L. (2014) Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention. //Frontiers in Computational Neuroscience// 8:12. [[http://dx.doi.org/10.3389/fncom.2014.00012|DOI]]||{{reprints:hara_pestilli_gardner_2014.pdf|pdf}}|
| Abstract: Single-unit measurements have reported many different effects of attention on contrast-response (e.g. contrast-gain, response-gain, additive-offset dependent on visibility), while functional imaging measurements have more uniformly reported increases in response across all contrasts (additive-offset). The normalization model of attention elegantly predicts the diversity of effects of attention reported in single-units well-tuned to the stimulus, but what predictions does it make for more realistic populations of neurons with heterogeneous tuning? Are predictions in accordance with population-scale measurements? We used functional imaging data from humans to determine a realistic ratio of attention-field to stimulus-drive size (a key parameter for the model) and predicted effects of attention in a population of model neurons with heterogeneous tuning. We found that within the population, neurons well-tuned to the stimulus showed a response-gain effect, while less-well-tuned neurons showed a contrast-gain effect. Averaged across the population, these disparate effects of attention gave rise to additive-offsets in contrast-response, similar to reports in human functional imaging as well as population averages of single-units. Differences in predictions for single-units and populations were observed across a wide range of model parameters (ratios of attention-field to stimulus-drive size and the amount of baseline response modifiable by attention), offering an explanation for disparity in physiological reports. Thus, by accounting for heterogeneity in tuning of realistic neuronal populations, the normalization model of attention can not only predict responses of well-tuned neurons, but also the activity of large populations of neurons. More generally, computational models can unify physiological findings across different scales of measurement, and make links to behavior, but only if factors such as heterogeneous tuning within a population are properly accounted for.|||
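As referenced above, a minimal simulation of the max-pooling readout: a two-interval contrast discrimination is decided by whichever interval contains the largest pooled sensory response, so cueing fewer locations keeps distractor responses out of the pool. Noise levels and the contrast increment are illustrative assumptions.

<code python>
import numpy as np

rng = np.random.default_rng(1)

def percent_correct(n_pooled, delta=0.5, sigma=1.0, n_trials=20000):
    """2-interval discrimination decided by max-pooling over locations."""
    target = rng.normal(0.0, sigma, (n_trials, n_pooled))
    target[:, 0] += delta  # contrast increment at the target location
    blank = rng.normal(0.0, sigma, (n_trials, n_pooled))
    # Choose the interval whose largest pooled response is bigger
    return np.mean(target.max(axis=1) > blank.max(axis=1))

print(percent_correct(n_pooled=1))  # focal cue: only the target location is pooled
print(percent_correct(n_pooled=4))  # distributed cue: distractors dilute the max
</code>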

^ Feature-specific attentional priority signals in human cortex ^^
| {{:shared:motioncolor.png?300x172 |}} The priority of visual stimuli has been hypothesized to be represented in spatial maps in cortex. Indeed, responses in many topographically mapped visual and parietal areas show spatially specific increases for stimuli located at the focus of attention. But stimuli can be prioritized not only by space but also by features such as color and direction of motion. When these non-spatial features are prioritized, how and where are they encoded? We used classification analyses of human fMRI responses (a toy pattern-classification analysis is sketched below the table) as subjects performed a feature-based attention task with spatially overlapping stimuli, and found that priority for color and motion is represented in frontal (e.g. FEF) and parietal (e.g. IPS1-4) areas commonly associated with spatial priority. This suggests that these areas encode the priority of different non-spatial features multiplexed into their spatial representations. ||
| Liu, T., Hospadaruk, L., Zhu, D. and Gardner, J. L. (2011) Feature-specific attentional priority signals in human cortex. //Journal of Neuroscience// 31:4484-95. [[http://dx.doi.org/10.1523/JNEUROSCI.5745-10.2011|DOI]]| {{:reprints:liu_hospadaruk_zhu_gardner_jn_2011.pdf|pdf}}|
| Abstract: Humans can flexibly attend to a variety of stimulus dimensions, including spatial location and various features such as color and direction of motion. While the locus of spatial attention has been hypothesized to be represented by priority maps encoded in several dorsal frontal and parietal areas, it is unknown how the brain represents attended features. Here we examined the distribution and organization of neural signals related to deployment of feature-based attention. Subjects viewed a compound stimulus containing two superimposed motion directions (or colors), and were instructed to perform an attention-demanding task on one of the directions (or colors). We found elevated and sustained fMRI response for the attention task compared to a neutral condition, without reliable differences in overall response amplitude between attending to different features. However, using multi-voxel pattern analysis, we were able to decode the attended feature in both early visual areas (V1 to hMT+) and frontal and parietal areas (e.g., IPS1-4 and FEF) that are commonly associated with spatial attention. Furthermore, analysis of the classifier weight maps showed that attending to motion and color evoked different patterns of activity, suggesting different neuronal subpopulations in these regions are recruited for attending to different feature dimensions. Thus, our finding suggests that rather than a purely spatial representation of priority, frontal and parietal cortical areas also contain multiplexed signals related to the priority of different non-spatial features.||
| Word cloud: {{:shared:featclass800.png|}} ||
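As referenced above, a minimal sketch of multi-voxel pattern classification: a linear classifier is trained to tell the attended feature apart from trial-wise voxel patterns. The data here are simulated, and the classifier choice and dimensions are illustrative assumptions, not the study's exact pipeline.

<code python>
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 80
labels = np.repeat([0, 1], n_trials // 2)        # 0 = attend motion, 1 = attend color
patterns = 0.4 * rng.normal(size=(2, n_voxels))  # feature-specific voxel patterns
X = rng.normal(size=(n_trials, n_voxels)) + patterns[labels]  # noisy trial patterns

clf = LogisticRegression(max_iter=1000)
# Cross-validated decoding accuracy: well above the 0.5 chance level when a
# feature-specific pattern is present, even with no overall amplitude difference.
print(cross_val_score(clf, X, labels, cv=5).mean())
</code>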

^ Maps of visual space in human occipital cortex are retinotopic, not spatiotopic ^^
| {{ :shared:v1_and_mt_retinotopic.png?400x266.67|}} Every time we move our eyes or head, the images of stationary visual objects shift to different locations on the retina. Thus, after an eye movement, a completely different set of sensory neurons encodes an object than the ones that encoded it before the eye movement. Nonetheless, we are able to perceive the world as stable across eye movements. These facts led many to hypothesize the existence of spatially mapped responses in the brain that do not change with eye movements, i.e. responses in a spatiotopic, rather than retinotopic, reference frame (the two reference frames are illustrated in the toy example below the table). Recently, it was reported that human cortical area MT, unlike its counterpart in the monkey, encodes space in a spatiotopic map. We used BOLD imaging to determine the reference frame of 12 visual areas and found that all areas, including MT, represent stimuli in a retinotopic reference frame. Our data lend support to the idea that human early visual areas encode stimuli in a retinotopic reference frame, just like monkey visual areas, and that explicit representations of spatiotopic space are not necessarily required for stable perception. ||
| Gardner, J. L., Merriam, E. P., Movshon, J. A. and Heeger, D. J. (2008) Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. //Journal of Neuroscience// 28:3988-3999. [[http://dx.doi.org/10.1523/JNEUROSCI.5476-07.2008|DOI]] [[http://www.jneurosci.org/content/28/15/i.full|This Week in the Journal]]| {{reprints:retinotopic.pdf|pdf}} |
| Abstract: We experience the visual world as phenomenally invariant to eye position, but almost all cortical maps of visual space in monkeys use a retinotopic reference frame, that is, the cortical representation of a point in the visual world is different across eye positions. It was recently reported that human cortical area MT (unlike monkey MT) represents stimuli in a reference frame linked to the position of stimuli in space, a "spatiotopic" reference frame. We used visuotopic mapping with blood oxygen level-dependent functional magnetic resonance imaging signals to define 12 human visual cortical areas, and then determined whether the reference frame in each area was spatiotopic or retinotopic. We found that all 12 areas, including MT, represented stimuli in a retinotopic reference frame. Although there were patches of cortex in and around these visual areas that were ostensibly spatiotopic, none of these patches exhibited reliable stimulus-evoked responses. We conclude that the early, visuotopically organized visual cortical areas in the human brain (like their counterparts in the monkey brain) represent stimuli in a retinotopic reference frame.||
| Word cloud: {{:shared:retinotopic800.png|}} ||
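As referenced above, a toy illustration of the two candidate reference frames: a stimulus that stays fixed on the screen lands at a different retinal position every time the eyes move, so a retinotopic map's coordinate for it changes with gaze, while a spatiotopic map's coordinate would not. The positions are arbitrary example values.

<code python>
stimulus_screen_x = 5.0         # stimulus position in degrees, fixed in the world
for eye_x in (-5.0, 0.0, 5.0):  # three different fixation positions
    retinal_x = stimulus_screen_x - eye_x  # where the stimulus lands on the retina
    print(f"eye at {eye_x:+.0f} deg: retinotopic coordinate {retinal_x:+.0f} deg, "
          f"spatiotopic coordinate {stimulus_screen_x:+.0f} deg")
</code>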

^ Contrast adaptation and representation in human early visual cortex ^^
|{{ :shared:contrast_adpatation.png?250x316.67|}} | {{ :shared:contrast_series.png?400x63 |}}Changes in the contrast of visual stimuli could signal an informative event, like the sudden appearance of a predator or prey, or a mundane one, like a change in lighting conditions as the sun sets. The visual system should optimally adjust sensitivity to discount slow changes yet remain sensitive to rapid ones. Using event-related fMRI and a data-driven analysis approach, we uncovered two mechanisms in human early visual cortex that do just this. We found a horizontal shift of the relationship between contrast and response (see figure at left, and the toy contrast-response function sketched below the table) akin to that reported in anesthetized animals, which slowly adapts responses to current viewing conditions. In human V4 (hV4), we found a counterpart to this adaptation mechanism: hV4 represents all changes in image contrast, be they increments or decrements, with a positive response. This suggests that hV4 responses do not faithfully follow contrast; rather, they signal salient changes. |
| Gardner, J. L., Sun, P., Waggoner, R. A., Ueno, K., Tanaka, K. and Cheng, K. (2005) Contrast adaptation and representation in human early visual cortex. //Neuron// 47:607-620. [[http://dx.doi.org/10.1016/j.neuron.2005.07.016|DOI]] <[[http://dx.doi.org/10.1016/j.neuron.2005.08.003|Preview by Geoffrey M. Boynton]]> ||
| {{reprints:cadapt.pdf|pdf}} ||
| Abstract: The human visual system can distinguish variations in image contrast over a much larger range than measurements of the static relationship between contrast and response in visual cortex would suggest. This discrepancy may be explained if adaptation serves to re-center contrast response functions around the ambient contrast, yet experiments on humans have yet to report such an effect. By using event-related fMRI and a data-driven analysis approach, we found that contrast response functions in V1, V2, and V3 shift to approximately center on the adapting contrast. Furthermore, we discovered that, unlike earlier areas, human V4 (hV4) responds positively to contrast changes, whether increments or decrements, suggesting that hV4 does not faithfully represent contrast, but instead responds to salient changes. These findings suggest that the visual system discounts slow uninformative changes in contrast with adaptation, yet remains exquisitely sensitive to changes that may signal important events in the environment.||
| Word cloud: {{:shared:cadapt800.png|}} ||
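As referenced above, a minimal sketch of the adaptation effect using a standard Naka-Rushton contrast-response function whose semisaturation contrast re-centers on the ambient (adapting) contrast, producing the horizontal shift on a log-contrast axis. The exponent, gain, and adapting contrasts are illustrative assumptions.

<code python>
import numpy as np

def naka_rushton(c, c50, n=2.0, r_max=1.0):
    """Response as a function of stimulus contrast c, with semisaturation contrast c50."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.logspace(-2, 0, 5)  # 1% to 100% contrast
# Re-centering the function on the adapting contrast shifts where the curve is
# steepest, keeping discrimination best around the ambient contrast.
for adapting_contrast in (0.05, 0.4):
    print(adapting_contrast, np.round(naka_rushton(contrasts, c50=adapting_contrast), 2))
</code>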

^ A population decoding framework for motion aftereffects on smooth pursuit eye movements ^^
|[[:demos:mae|{{:shared:motionadaptmodels.png?400x485 |}}]] | [[:demos:mae|{{ :demos:spiral.png?100x100 |}}]] Watch a waterfall for a while and then shift your gaze to the person standing next to you, and you will get the sensation that their face is moving upwards (click the spiral for a [[:demos:mae|demo]]). This "motion aftereffect" is likely the result of adaptation of responses in the visual cortex - but what adaptive changes give rise to the illusion, and what might that tell us about how populations of neurons encode properties of stimuli for perception and action? After adaptation, it has been reported that the gain of cortical neurons is reduced, tuning narrows, and tuning preferences are either attracted towards or repelled from the adaptation stimulus (see figure at left). First, we found that this perceptual illusion is also manifest in visually guided movement, namely in the motion-tracking movements of the eye called smooth pursuit. We then used computational modeling to see which neuronal adaptation effect (when considered by itself) could quantitatively account for the pattern of adaptation observed in the eye movements. Using vector-average decoding of populations of simulated MT neurons (a toy version of this readout is sketched below the table), we found that gain changes and narrowing of tuning, but not shifts in tuning preference, could account for changes in the direction of pursuit eye movements after adaptation.|
| Gardner, J. L., Tokiyama, S. and Lisberger, S. G. (2004) A population decoding framework for motion aftereffects on smooth pursuit eye movements. //Journal of Neuroscience// 24:9035-9048. [[http://dx.doi.org/10.1523/JNEUROSCI.0337-04.2004|DOI]]||
| {{reprints:aftereffects.pdf|pdf}} ||
| Abstract: Both perceptual and motor systems must decode visual information from the distributed activity of large populations of cortical neurons. We have sought a common framework for understanding decoding strategies for visually guided movement and perception by asking whether the strong motion aftereffects seen in the perceptual domain lead to similar expressions in motor output. We found that motion adaptation indeed has strong sequelae in the direction and speed of smooth pursuit eye movements. After adaptation with a stimulus that moves in a given direction for 7 sec, the direction of pursuit is repelled from the direction of pursuit targets that move within 90 degrees of the adapting direction. The speed of pursuit decreases for targets that move at the direction and speed of the adapting stimulus and is repelled from the adapting speed in the sense that the decrease either becomes greater or smaller (eventually turning to an increase) when tracking targets move slower or faster than the adapting speed. The effects of adaptation are spatially specific and fixed to the retinal location of the adapting stimulus. The magnitude of adaptation of pursuit speed and direction is uncorrelated, suggesting that the two parameters are decoded independently. Computer simulation of motion adaptation in the middle temporal visual area (MT) shows that vector-averaging decoding of the population response in MT can account for the effects of adaptation on the direction of pursuit. Our results suggest a unified framework for thinking, in terms of population decoding, about motion adaptation for both perception and action.||
| Word cloud: {{:shared:padapt800.png|}} ||
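As referenced above, a minimal sketch of vector-average decoding of a direction-tuned population. Adaptation is modeled here only as a gain reduction for neurons tuned near the adapted direction; the tuning widths, gain loss, and directions are illustrative assumptions.

<code python>
import numpy as np

prefs = np.deg2rad(np.arange(0, 360, 10))  # preferred directions of model MT neurons

def population_response(stim_deg, adapt_deg=None, kappa=4.0):
    """Von Mises direction tuning, with optional gain loss near the adapted direction."""
    r = np.exp(kappa * (np.cos(np.deg2rad(stim_deg) - prefs) - 1))
    if adapt_deg is not None:
        r *= 1 - 0.5 * np.exp(kappa * (np.cos(np.deg2rad(adapt_deg) - prefs) - 1))
    return r

def vector_average_deg(r):
    """Decode direction as the angle of the response-weighted vector sum."""
    return np.rad2deg(np.arctan2(np.sum(r * np.sin(prefs)), np.sum(r * np.cos(prefs))))

print(vector_average_deg(population_response(30)))               # ~30: veridical
print(vector_average_deg(population_response(30, adapt_deg=0)))  # repelled away from 0
</code>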

^ Serial linkage of target selection for orienting and tracking eye movements ^^
|[[:demos:stimsac|{{ :shared:stimsac.png?200x128|}}]] How does the brain coordinate the choice of target between two different motor systems, like saccadic and smooth pursuit eye movements? In principle, this could be done in parallel - sending a command to choose a target to both systems at once. Or it could be done in serial - first choosing a target with the saccadic system and then passing that choice on to the pursuit system. In a series of behavioral and physiological studies, we found that the choice of target is sent in serial from the saccadic to the pursuit motor system. See [[:demos:stimsac|this demo]], which steps through the series of microstimulation studies we used to show this.||
|Gardner, J. L. and Lisberger, S. G. (2002) Serial linkage of target selection for orienting and tracking eye movements. //Nature Neuroscience// 5:892-899. [[http://dx.doi.org/10.1038/nn897|DOI]] <[[http://dx.doi.org/10.1038/nn0902-819|News and Views by Michael N. Shadlen]]> | {{reprints:stimsac.pdf|pdf}} |
| Abstract: Many natural actions require the coordination of two different kinds of movements. How are targets chosen under these circumstances: do central commands instruct different movement systems in parallel, or does the execution of one movement activate a serial chain that automatically chooses targets for the other movement? We examined a natural eye tracking action that consists of orienting saccades and tracking smooth pursuit eye movements, and found strong physiological evidence for a serial strategy. Monkeys chose freely between two identical spots that appeared at different sites in the visual field and moved in orthogonal directions. If a saccade was evoked to one of the moving targets by microstimulation in either the frontal eye field (FEF) or the superior colliculus (SC), then the same target was automatically chosen for pursuit. Our results imply that the neural signals responsible for saccade execution can also act as an internal command of target choice for other movement systems.||
| Gardner, J. L. and Lisberger, S. G. (2001) Linked target selection for saccadic and smooth pursuit eye movements. //Journal of Neuroscience// 21(6):2075-2084. [[http://www.jneurosci.org/content/21/6/2075.short|link]] |{{reprints:pursac.pdf|pdf}} |
| Abstract: In natural situations, motor activity must often choose a single target when multiple distractors are present. The present paper asks how primate smooth pursuit eye movements choose targets, by analysis of a natural target-selection task. Monkeys tracked two targets that started 1.5 degrees eccentric and moved in different directions (up, right, down, and left) toward the position of fixation. As expected from previous results, the smooth pursuit before the first saccade reflected a vector average of the responses to the two target motions individually. However, post-saccadic smooth eye velocity showed enhancement that was spatially selective for the motion at the endpoint of the saccade. If the saccade endpoint was close to one of the two targets, creating a targeting saccade, then pursuit was selectively enhanced for the visual motion of that target and suppressed for the other target. If the endpoint landed between the two targets, creating an averaging saccade, then post-saccadic smooth eye velocity also reflected a vector average of the two target motions. Saccades with latencies >200 msec were almost always targeting saccades. However, pursuit did not transition from vector-averaging to target-selecting until the occurrence of a saccade, even when saccade latencies were >300 msec. Thus, our data demonstrate that post-saccadic enhancement of pursuit is spatially selective and that noncued target selection for pursuit is time-locked to the occurrence of a saccade. This raises the possibility that the motor commands for saccades play a causal role, not only in enhancing visuomotor transmission for pursuit but also in choosing a target for pursuit.||
| Word cloud: {{:shared:stimsac800.png|}} ||