====== Publication List ======
Kuo, A., Gardner, J. L., Merriam, E. P. (2025) Orientation maps in mouse superior colliculus explained by population model of non-orientation selective neurons //Journal of Neuroscience// In press [[https://doi.org/10.1523/JNEUROSCI.1133-25.2025|DOI]] ++Abstract|
\\
\\
Mouse superficial superior colliculus (sSC) has been found to have orientation selective maps, suggesting a fundamentally different selectivity than in primate SC. Moreover, orientation selectivity in mouse sSC appears to change with stimulus properties such as size, shape and spatial frequency, in contradistinction to the computational principle of invariance in primates. To reconcile mouse and primate mechanisms for orientation selectivity, we constructed a computational model of mouse sSC populations with circular-symmetric, center-surround (i.e., not intrinsically orientation selective), stimulus-invariant receptive fields (RFs), classically used to describe monkey lateral geniculate nucleus (LGN) neurons. This model produced population maps similar to those found in mouse sSC, which show strong radial orientation preferences at retinotopic locations along stimulus edges. We show how this selectivity depended critically on the spatial frequency tuning of the model units. The model predicted a shift from radial to anti-radial orientation preferences from the same simulated units at high stimulus spatial frequencies, also consistent with measurements from mouse sSC. We found intrinsically oriented RFs were largely unnecessary to explain the imaging data, but could explain a possible small subpopulation of intrinsically orientation selective neurons. We conclude that to study orientation selectivity in mouse sSC and other systems, the problem is not the choice of stimulus. Rather than endless tweaks to find the perfect, unbiased stimulus, image-computable population modeling is the solution. Regardless of the stimulus presented, comparing how well models of intrinsically or non-intrinsically orientation selective units account for empirical data provides definitive evidence for underlying neural selectivity.++

Ryu, J. J. H., Gardner, J. L. (2024) Plaudits for logits in sensory neuroscience //Neuron// 112:2825-2827 [[https://doi.org/10.1016/j.neuron.2024.08.008|DOI]] ++Abstract|
\\
\\
A workhorse tool of economic decision-making has long sought to get inside people's heads through careful examination of their choices. In this issue of Neuron, Carandini flips the script, showing how it can model how the brain makes sensory choices.++
{{:reprints:plaudits.pdf|pdf}}

Wilson, J. M., Wu, H., Kerr, A. B., Wandell, B. A., Gardner, J. L. (2024) Limitations of 2-dimensional line-scan MRI for directly measuring neural activity //Imaging Neuroscience// 2:1–18 [[https://doi.org/10.1162/imag_a_00275|DOI]] ++Abstract|
\\
\\
A 2D-line-scan MRI sequence has been reported to directly measure neural responses to stimuli (the “DIANA response”). Subsequent attempts have been unable to replicate the DIANA response, even with higher field strength and more repetitions. Part of this discrepancy is likely due to a limited understanding of how physiological noise manifests in 2D-line-scan acquisition sequences. Specifically, it is unclear what the consequences are of breaking the assumption that the imaging substrate remains constant between each line acquisition. Here, we study how physiological noise manifests in a 2D-line-scan acquisition. Data were collected at 3T from human subjects viewing a blank screen. We found temporal fluctuations in the reconstructed time series that could easily be confused with neural responses to stimuli. These fluctuations were present both in the head and in the surrounding empty volume along the span of the phase-encoding direction from the head. The timing of these fluctuations varied systematically and smoothly along the phase-encoding direction. These artifacts are similar to well-known phase-encode artifacts in EPI and GRE images, but are exacerbated due to longer acquisition times (seconds vs. milliseconds). We explain their unique features with a model that accounts for the acquisition sequence and incorporates time-varying contrast fluctuations and movement in the imaging substrate that mimic normal physiological fluctuations. Using the model, we quantify the amount of cortical- and scan-averaging one might need to reliably distinguish a DIANA response from noise, and show that navigator echoes might help in reducing phase-encode noise in the 2D-line-scan sequence.++
{{:reprints:imag_a_00275.pdf|pdf}}

Bolaños, F., Orlandi, J. G., Aoki, R., Jagadeesh, A. V., Gardner, J. L., Benucci, A. (2024) Efficient coding of natural images in the mouse visual cortex //Nature Communications// 15(1):2466 [[https://doi.org/10.1038/s41467-024-45919-3|DOI]] ++Abstract|
\\
\\
How the activity of neurons gives rise to natural vision remains a matter of intense investigation. The mid-level visual areas along the ventral stream are selective to a common class of natural images, textures, but a circuit-level understanding of this selectivity and its link to perception remains unclear. We addressed these questions in mice, first showing that they can perceptually discriminate between textures and statistically simpler spectrally matched stimuli, and between texture types. Then, at the neural level, we found that the secondary visual area (LM) exhibited a higher degree of selectivity for textures compared to the primary visual area (V1). Furthermore, textures were represented in distinct neural activity subspaces whose relative distances were found to correlate with the statistical similarity of the images and the mice's ability to discriminate between them. Notably, these dependencies were more pronounced in LM, where the texture-related subspaces were smaller than in V1, resulting in superior stimulus decoding capabilities. Together, our results demonstrate texture vision in mice, finding a linking framework between stimulus statistics, neural representations, and perceptual sensitivity, a distinct hallmark of efficient coding computations.++{{:reprints:bolanos_et_al_2024.pdf|pdf}}

Fox, K. J., Birman, D. and Gardner, J. L. (2023) Gain, not concomitant changes in spatial receptive field properties, improves task performance in a neural network attention model //eLife// 12:e78392 [[https://doi.org/10.7554/eLife.78392|DOI]] ++Abstract|
\\
\\
Attention allows us to focus sensory processing on behaviorally relevant aspects of the visual world. One potential mechanism of attention is a change in the gain of sensory responses. However, changing gain at early stages could have multiple downstream consequences for visual processing. Which, if any, of these effects can account for the benefits of attention for detection and discrimination? Using a model of primate visual cortex, we document how a Gaussian-shaped gain modulation results in changes to spatial tuning properties. Forcing the model to use only these changes failed to produce any benefit in task performance. Instead, we found that gain alone was both necessary and sufficient to explain category detection and discrimination during attention. Our results show how gain can give rise to changes in receptive fields which are not necessary for enhancing task performance.++{{:reprints:elife-78392-v2.pdf|pdf}}

Himmelberg, M. M., Gardner, J. L. and Winawer, J. (2022) What has vision science taught us about functional MRI? //Neuroimage// 262:119536 [[https://doi.org/10.1016/j.neuroimage.2022.119536|DOI]] ++Abstract|
\\
\\
In the domain of human neuroimaging, much attention has been paid to the question of whether and how the development of functional magnetic resonance imaging (fMRI) has advanced our scientific knowledge of the human brain. However, the opposite question is also important: how has our knowledge of the visual system advanced our understanding of fMRI? Here, we discuss how and why scientific knowledge about the human and animal visual system has been used to answer fundamental questions about fMRI as a brain measurement tool and how these answers have contributed to scientific discoveries beyond vision science.++{{:reprints:1-s2.0-s1053811922006516-main.pdf|pdf}}

Jagadeesh, A. V., and Gardner, J. L. (2022) Texture-like representation of objects in human visual cortex //Proceedings of the National Academy of Sciences// 119(17):e2115302119 [[https://doi.org/10.1073/pnas.2115302119|DOI]] {{:reprints:pnas.2115302119.pdf|pdf}}

Kong, N. C. L., Margalit, E., Gardner, J. L., and Norcia, A. M. (2022) Increasing neural network robustness improves match to macaque V1 eigenspectrum, spatial frequency preference and predictivity //PLoS Computational Biology// 18:e1009739 [[https://doi.org/10.1371/journal.pcbi.1009739|DOI]] ++Abstract|
\\
\\
Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that the robustness of a system to these perturbations could be related to the power law exponent of the eigenspectrum of its set of neural responses, where power law exponents closer to and larger than one would indicate a system that is less susceptible to input perturbations. We show that neural responses in mouse and macaque primary visual cortex (V1) obey the predictions of this theory, where their eigenspectra have power law exponents of at least one. We also find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology and that robust models have eigenspectra that decay slightly faster and have higher power law exponents than those of non-robust models. The slow decay of the eigenspectra suggests that substantial variance in the model responses is related to the encoding of fine stimulus features. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies and that robust models had preferred spatial frequency distributions more aligned with the measured spatial frequency distribution of macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Our results are consistent with other findings that there is a misalignment between human and machine perception. They also suggest that it may be useful to penalize slow-decaying eigenspectra or to bias models to extract features of lower spatial frequencies during task-optimization in order to improve robustness and V1 neural response predictivity.++{{:reprints:journal.pcbi.1009739.pdf|pdf}}

Gardner, J. L., and Merriam, E. M. (2021) Population models, not analyses, of human neuroscience measurements //Annual Review of Vision Science// 7:1-31 [[https://doi.org/10.1146/annurev-vision-093019-111124|DOI]] ++Abstract|
\\
\\
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques. This potentially opens up the possibility to study a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity can be confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building upon the modeling tradition in vision science, using considerations of whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements.++{{:reprints:annurev-vision-093019-111124.pdf|pdf}}

Lin, Y., Zhou, X., Naya, Y., Gardner, J. L., and Sun, P. (2021) Voxel-wise linearity analysis of increments and decrements in BOLD responses in human visual cortex using a contrast adaptation paradigm //Frontiers in Human Neuroscience// 15:541314 [[https://doi.org/10.3389/fnhum.2021.541314|DOI]] ++Abstract|
\\
\\
The linearity of BOLD responses is a fundamental presumption in most analysis procedures for BOLD fMRI studies. Previous studies have examined the linearity of BOLD signal increments, but less is known about the linearity of BOLD signal decrements. The present study assessed the linearity of both BOLD signal increments and decrements in the human primary visual cortex using a contrast adaptation paradigm. Results showed that both BOLD signal increments and decrements maintained linearity for long stimuli (e.g., 3 s, 6 s), yet deviated from linearity for transient stimuli (e.g., 1 s). Furthermore, a voxel-wise analysis showed that the deviation patterns were different for BOLD signal increments and decrements: while the BOLD signal increments demonstrated a consistent overestimation pattern, the patterns for BOLD signal decrements varied from overestimation to underestimation. Our results suggested that corrections to deviations from linearity of transient responses should consider the different effects of BOLD signal increments and decrements.++{{:reprints:fnhum-15-541314.pdf|pdf}}

Lin, W-H., Gardner, J. L., Wu, S-W. (2020) Context effects on probability estimation //PLoS Biology// 18:e3000634 [[https://doi.org/10.1371/journal.pbio.3000634|DOI]] ++Abstract|
\\
\\
Many decisions rely on how we evaluate potential outcomes and estimate their corresponding probabilities of occurrence. Outcome evaluation is subjective because it requires consulting internal preferences and is sensitive to context. In contrast, probability estimation requires extracting statistics from the environment and therefore imposes unique challenges to the decision maker. Here, we show that probability estimation, like outcome evaluation, is subject to context effects that bias probability estimates away from other events present in the same context. However, unlike valuation, these context effects appeared to be scaled by estimated uncertainty, which is largest at intermediate probabilities. Blood-oxygen-level-dependent (BOLD) imaging showed that patterns of multivoxel activity in the dorsal anterior cingulate cortex (dACC), ventromedial prefrontal cortex (VMPFC), and intraparietal sulcus (IPS) predicted individual differences in context effects on probability estimates. These results establish VMPFC as the neurocomputational substrate shared between valuation and probability estimation and highlight the additional involvement of dACC and IPS that can be uniquely attributed to probability estimation. Because probability estimation is a required component of computational accounts from sensory inference to higher cognition, the context effects found here may affect a wide array of cognitive computations.++{{:reprints:lin_gardner_wu_2020.pdf|pdf}}

Riesen, G., Norcia, A. M. and Gardner, J. L. (2019) Humans perceive binocular rivalry and fusion in a tristable dynamic state //The Journal of Neuroscience// 39(43):8527-8537 [[https://doi.org/10.1523/JNEUROSCI.0713-19.2019|DOI]] ++Abstract|
\\
\\
Human vision combines inputs from the two eyes into one percept. Small differences ‘fuse’ together, while larger differences are seen ‘rivalrously’ from one eye at a time. These outcomes are typically treated as mutually exclusive processes, with paradigms targeting one or the other and fusion being unreported in most rivalry studies. Is fusion truly a default, stable state that only breaks into rivalry for non-fusible stimuli? Or are monocular and fused percepts three sub-states of one dynamical system? To determine whether fusion and rivalry are separate processes, we measured human perception of Gabor patches with a range of inter-ocular orientation disparities. Observers (10 female, 5 male) reported rivalrous, fused and uncertain percepts over time. We found a dynamic “tristable” zone spanning from ~25-35 degrees of orientation disparity where fused, left- or right-eye dominant percepts could all occur. The temporal characteristics of fusion and non-fusion periods during tristability matched other bistable processes. We tested statistical models with fusion as a higher-level bistable process alternating with rivalry against our findings. None of these fit our data, but a simple bistable model extended to have three states reproduced many of our observations. We conclude that rivalry and fusion are multistable sub-states capable of direct competition, rather than separate bistable processes.++{{:reprints:riesen_et_al_jn_2019.pdf|pdf}}

Birman, D. and Gardner, J. L. (2019) A flexible readout mechanism of human sensory representations //Nature Communications// 10:3500 [[https://doi.org/10.1038/s41467-019-11448-7|DOI]] ++Abstract|
\\
\\
Attention can both enhance and suppress cortical sensory representations. However, changing sensory representations can also be detrimental to behavior. Behavioral consequences can be avoided by flexibly changing sensory readout, while leaving the representations unchanged. Here, we asked human observers to attend to and report about either one of two features which control the visibility of motion while making concurrent measurements of cortical activity with BOLD imaging (fMRI). Extending a well-established linking model to account for the relationship between these measurements, we found that changes in sensory representation during directed attention were insufficient to explain perceptual reports. A flexible downstream readout was also necessary to best explain our data. Such a model implies that observers should be able to recover information about ignored features, a prediction which we confirmed behaviorally. Thus, flexible readout is a critical component of the cortical implementation of human adaptive behavior.++{{:reprints:nat_commun_2019_birman.pdf|pdf}}

Fukuda, H., Ma, N., Suzuki, S., Harasawa, N., Ueno, K., Gardner, J. L., Ichinohe, N., Haruno, M., Cheng, K., Nakahara, H. (2019) Computing social value conversion in the human brain //The Journal of Neuroscience// 39(26):5153-72 [[https://doi.org/10.1523/JNEUROSCI.3117-18.2019|DOI]] ++Abstract|
\\
\\
Social signals play powerful roles in shaping self-oriented reward valuation and decision making. These signals activate social and valuation/decision areas, but the core computation for their integration into the self-oriented decision machinery remains unclear. Here, we study how a fundamental social signal, social value (others' reward value), is converted into self-oriented decision making in the human brain. Using behavioral analysis, modeling, and neuroimaging, we show three-stage processing of social value conversion from the offer to the effective value and then to the final decision value. First, a value of others' bonus on offer, called offered value, was encoded uniquely in the right temporoparietal junction (rTPJ) and also in the left dorsolateral prefrontal cortex (ldlPFC), which is commonly activated by offered self-bonus value. The effective value, an intermediate value representing the effective influence of the offer on the decision, was represented in the right anterior insula (rAI), and the final decision value was encoded in the medial prefrontal cortex (mPFC). Second, using psychophysiological interaction and dynamic causal modeling analyses, we demonstrated three-stage feedforward processing from the rTPJ and ldlPFC to the rAI and then from the rAI to the mPFC. Further, we showed that these characteristics of social conversion underlie distinct sociobehavioral phenotypes. We demonstrate that the variability in the conversion underlies the difference between prosocial and selfish subjects, as seen from the differential strength of the rAI and ldlPFC coupling to the mPFC responses, respectively. Together, these findings identified fundamental neural computation processes for social value conversion underlying complex social decision making behaviors.++{{:reprints:fukuda_et_al_2019.pdf|pdf}}

Gardner, J. L. and Liu, T. (2019) Inverted encoding models reconstruct an arbitrary model response, not the stimulus //eNeuro// 6(2) e0363-18.2019 1–11 [[https://doi.org/10.1523/ENEURO.0363-18.2019|DOI]] ++Abstract|
\\
\\
Probing how large populations of neurons represent stimuli is key to understanding sensory representations as many stimulus characteristics can only be discerned from population activity and not from individual single-units. Recently, inverted encoding models have been used to produce channel response functions from large spatial-scale measurements of human brain activity that are reminiscent of single-unit tuning functions and have been proposed to assay “population-level stimulus representations” (Sprague et al., 2018a). However, these channel response functions do not assay population tuning. We show by derivation that the channel response function is only determined up to an invertible linear transform. Thus, these channel response functions are arbitrary, one of an infinite family and therefore not a unique description of population representation. Indeed, simulations demonstrate that bimodal, even random, channel basis functions can account perfectly well for population responses without any underlying neural response units that are so tuned. However, the approach can be salvaged by extending it to reconstruct the stimulus, not the assumed model. We show that when this is done, even using bimodal and random channel basis functions, a unimodal function peaking at the appropriate value of the stimulus is recovered which can be interpreted as a measure of population selectivity. More precisely, the recovered function signifies how likely any value of the stimulus is, given the observed population response. Whether an analysis is recovering the hypothetical responses of an arbitrary model rather than assessing the selectivity of population representations is not an issue unique to the inverted encoding model and human neuroscience, but a general problem that must be confronted as more complex analyses intervene between measurement of population activity and presentation of data.++{{:reprints:enu002192895p.pdf|pdf}}

Gardner, J. L. (2019) Optimality and heuristics in perceptual neuroscience //Nature Neuroscience// 22:514-523 [[https://doi.org/10.1038/s41593-019-0340-4|DOI]] ++Abstract|
\\
\\
The foundation for modern understanding of how we make perceptual decisions about what it is that we see or where to look comes from considering the optimal way to perform these behaviors. While statistical computation is useful for deriving the optimal solution to a perceptual problem, optimality requires perfect knowledge of priors and often complex computation. Accumulating evidence, however, suggests that optimal perceptual goals can be achieved or approximated more simply by human observers using heuristic approaches. Perceptual neuroscientists captivated by optimal explanations of sensory behaviors will fail in their search for the neural circuits and cortical processes that implement an optimal computation whenever that behavior is actually achieved through heuristics. This article provides a cross-disciplinary review of decision-making with the aim of building perceptual theory that uses optimality to set the computational goals for perceptual behavior, but through consideration of ecological, computational and energetic constraints incorporates how these optimal goals can be achieved through heuristic approximation.++{{:reprints:gardner-2019-nature-neuroscience.pdf|pdf}}

Birman, D., and Gardner, J. L. (2018) A quantitative framework for motion visibility in human cortex //Journal of Neurophysiology// 120:1824-1839 [[https://www.physiology.org/doi/abs/10.1152/jn.00433.2018|DOI]] [[https://osf.io/s7j9p/|DATA]] ++Abstract|
\\
\\
Despite the central use of motion visibility to reveal the neural basis of perception, perceptual decision making, and sensory inference, there exists no comprehensive quantitative framework establishing how motion visibility parameters modulate human cortical response. Random-dot motion stimuli can be made less visible by reducing image contrast or motion coherence, or by shortening the stimulus duration. Because each of these manipulations modulates the strength of sensory neural responses, they have all been extensively used to reveal cognitive and other non-sensory phenomena such as the influence of priors, attention, and choice-history biases. However, each of these manipulations is thought to influence response in different ways across different cortical regions, and a comprehensive study is required to interpret this literature. Here, human participants observed random-dot stimuli varying across a large range of contrast, coherence, and stimulus durations as we measured blood-oxygen-level dependent responses. We developed a framework for modeling these responses which quantifies their functional form and sensitivity across areas. Our framework demonstrates the sensitivity of all visual areas to each parameter, with early visual areas V1-V4 showing more parametric sensitivity to changes in contrast and V3A and MT to coherence. Our results suggest that while motion contrast, coherence, and duration share cortical representation, they are encoded with distinct functional forms and sensitivity. Thus, our quantitative framework serves as a reference for interpretation of the vast perceptual literature manipulating these parameters and shows that different manipulations of visibility will have different effects across human visual cortex and need to be interpreted accordingly.++{{:reprints:birman_jnphys_2018.pdf|pdf}}
Dobs, K., Schultz, J., Bulthoff, I., and Gardner, J. L. (2018) Task-dependent enhancement of facial expression and identity representations in human cortex. //Neuroimage// 10:689-702. [[https://doi.org/10.1016/j.neuroimage.2018.02.013|DOI]]++Abstract|
\\
\\
What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.++{{:reprints:dobs_et_al.pdf|pdf}}
Laquitaine, S. and Gardner, J. L. (2018) A switching observer for human perceptual estimation. //Neuron// 97(2): 462-474. [[https://dx.doi.org/10.1016/j.neuron.2017.12.011|DOI]] ++Abstract|
\\
\\
Human perceptual inference has been fruitfully characterized as a normative Bayesian process in which sensory evidence and priors are multiplicatively combined to form posteriors from which sensory estimates can be optimally read out. We tested whether this basic Bayesian framework could explain human subjects’ behavior in two estimation tasks in which we varied the strength of sensory evidence (motion coherence or contrast) and priors (set of directions or orientations). We found that despite excellent agreement of the estimates’ mean and variability with a Basic Bayesian observer model, the estimate distributions were bimodal, with unpredicted modes near the prior and the likelihood. We developed a model that switched between the prior and sensory evidence rather than integrating the two, which better explained the data than the Basic and several other Bayesian observers. Our data suggest that humans can approximate Bayesian optimality with a switching heuristic that forgoes multiplicative combination of priors and likelihoods.++{{:reprints:switching.pdf|pdf}}
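The contrast between integration and switching described in this abstract can be sketched in a few lines. This is a minimal illustration only: it assumes Gaussian (rather than circular) likelihoods and priors, and a reliability-based switching probability, neither of which is the paper's fitted model.

```python
import numpy as np

def bayes_estimate(lik_mean, lik_sd, prior_mean, prior_sd):
    # Multiplicative (Basic Bayesian) combination of two Gaussians:
    # the posterior mean is a reliability-weighted average.
    wl, wp = 1.0 / lik_sd**2, 1.0 / prior_sd**2
    return (wl * lik_mean + wp * prior_mean) / (wl + wp)

def switching_estimate(lik_mean, lik_sd, prior_mean, prior_sd, rng):
    # Switching observer: on each trial report either the sensory
    # evidence or the prior; the switching probability here is an
    # illustrative assumption (reliability-based), not the fitted rule.
    p_evidence = (1.0 / lik_sd**2) / (1.0 / lik_sd**2 + 1.0 / prior_sd**2)
    return lik_mean if rng.random() < p_evidence else prior_mean

rng = np.random.default_rng(0)
trials = [switching_estimate(10.0, 2.0, 0.0, 4.0, rng) for _ in range(10000)]
# Integration yields a single intermediate estimate; switching yields a
# bimodal mixture with modes at the likelihood (10) and the prior (0).
print(bayes_estimate(10.0, 2.0, 0.0, 4.0))  # 8.0
```

Whereas the integrated estimate always lies between the prior and the likelihood, the switching distribution has no mass there at all, which is the bimodality the paper reports.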
Liu, T., Cable, D., and Gardner, J. L. (2018) Inverted encoding models of human population response conflate noise and neural tuning width. //The Journal of Neuroscience// 38(2): 398-408. [[https://doi.org/10.1523/JNEUROSCI.2453-17.2017|DOI]] [[https://doi.org/10.17605/OSF.IO/9D3EX|DATA]]++Abstract|
\\
\\
Channel encoding models offer the ability to bridge different scales of neuronal measurement by interpreting population responses, typically measured with BOLD imaging in humans, as linear sums of groups of neurons (channels) tuned for visual stimulus properties. Inverting these models to form predicted channel responses from population measurements in humans seemingly offers the potential to infer neuronal tuning properties. Here, we test the ability to make inferences about neural tuning width from inverted encoding models. We examined contrast invariance of orientation selectivity in human V1 (both sexes) and found that inverting the encoding model resulted in channel response functions that became broader with lower contrast, thus, apparently, violating contrast invariance. Simulations showed that this broadening could be explained by contrast-invariant single-unit tuning with the measured decrease in response amplitude at lower contrast. The decrease in response lowers the signal-to-noise ratio of population responses, which results in poorer population representation of orientation. Simulations further showed that increasing signal-to-noise makes channel response functions less sensitive to underlying neural tuning width, and in the limit of zero noise will reconstruct the channel function assumed by the model regardless of the bandwidth of single units. We conclude that our data are consistent with contrast-invariant orientation tuning in human V1. More generally, our results demonstrate that population selectivity measures obtained by encoding models can deviate substantially from the behavior of single units because they conflate neural tuning width and noise and are therefore better used to estimate the uncertainty of decoded stimulus properties.++{{:reprints:cinvor.pdf|pdf}}
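The forward-then-inverted encoding pipeline this abstract tests can be sketched as follows. The tuning basis (rectified sinusoids raised to a power), channel count, and noise level are illustrative assumptions for a toy simulation, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_responses(orientations_deg, n_channels=8):
    # Idealized tuning basis: half-wave-rectified sinusoids raised to a
    # power, spaced evenly over 180 deg of orientation (trials x channels).
    centers = np.arange(n_channels) * 180.0 / n_channels
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0.0) ** 5

# Simulate training data: voxels as random mixtures of channels plus noise.
oris = rng.uniform(0, 180, size=200)
C = channel_responses(oris)                          # trials x channels
W = rng.random((50, C.shape[1]))                     # voxels x channels
B = C @ W.T + 0.1 * rng.standard_normal((200, 50))   # trials x voxels

# Fit the forward model by least squares, then invert it on a test
# pattern to recover predicted channel responses.
W_hat = np.linalg.lstsq(C, B, rcond=None)[0].T       # voxels x channels
B_test = channel_responses(np.array([90.0])) @ W.T   # noiseless 90-deg trial
C_hat = np.linalg.lstsq(W_hat, B_test.T, rcond=None)[0].T
print(C_hat.round(2))  # peaks at the channel centered on 90 deg
```

The paper's point is visible in this setup: adding noise to `B_test` before inversion broadens the recovered channel response function even though the channel basis itself never changes.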
Abrahamyan, A., Silva, L. L., Dakin, S. C., Carandini, M. and Gardner, J. L. (2016) Adaptable history biases in human perceptual decisions. //Proceedings of the National Academy of Sciences// 113.25: E3548-E3557 [[http://dx.doi.org/10.1073/pnas.1518786113|DOI]]
++Abstract|
\\
\\
When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice-history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject’s default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.++{{:reprints:pnas-2016-abrahamyan.pdf|pdf}}
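A history-augmented logistic regression of the kind this abstract describes can be sketched in a toy simulation. The regressor coding (separate win and lose history terms) follows the general approach; the coefficient values and the simulated "switch after failure" observer are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulate a two-alternative task with a "lose-switch" observer: choices
# follow the stimulus plus a push away from the previous choice after
# an error (true weights 1.5 and -1.0 are illustrative).
n = 5000
stim = rng.choice([-1.0, 1.0], size=n)
choices = np.zeros(n)
prev_win = np.zeros(n)   # previous choice (+/-1) if it was correct, else 0
prev_lose = np.zeros(n)  # previous choice (+/-1) if it was an error, else 0
for t in range(n):
    if t > 0:
        correct = choices[t - 1] == stim[t - 1]
        prev_win[t] = choices[t - 1] if correct else 0.0
        prev_lose[t] = choices[t - 1] if not correct else 0.0
    p = sigmoid(1.5 * stim[t] - 1.0 * prev_lose[t])
    choices[t] = 1.0 if rng.random() < p else -1.0

# Fit the logistic regression by gradient ascent on the log-likelihood;
# X columns: stimulus, win-stay, and lose-stay regressors.
X = np.column_stack([stim, prev_win, prev_lose])
y = (choices + 1) / 2
beta = np.zeros(3)
for _ in range(2000):
    beta += 0.05 * X.T @ (y - sigmoid(X @ beta)) / n
print(beta.round(2))  # stimulus weight ~1.5; negative lose-stay weight
                      # recovers the switch-after-failure bias
```

A negative weight on the lose-stay regressor is exactly the "switch after a failure" bias: the fitted model pushes the predicted choice away from the previous choice when that choice was wrong.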
Birman, D. & Gardner, J. L. (2016) Parietal and prefrontal: categorical differences? //Nature Neuroscience// 19: 5-7 [[http://dx.doi.org/10.1038/nn.4204|DOI]]
++Abstract|
\\
\\
A working memory representation goes missing in monkey parietal cortex during categorization learning, but is still found in the prefrontal cortex.
++{{:reprints:nn4204.pdf|pdf}}
Gardner, J. L. (2015) A case for human systems neuroscience. //Neuroscience// 296: 130-137 [[http://dx.doi.org/10.1016/j.neuroscience.2014.06.052|DOI]]
++Abstract|
\\
\\
Can the human brain itself serve as a model for a systems neuroscience approach to understanding the human brain? After all, how the brain is able to create the richness and complexity of human behavior is still largely mysterious. What better choice to study that complexity than to study it in humans? However, measurements of brain activity typically need to be made non-invasively, which puts severe constraints on what can be learned about the internal workings of the brain. Our approach has been to use a combination of psychophysics, in which we can use human behavioral flexibility to make quantitative measurements of behavior, and link those through computational models to measurements of cortical activity through magnetic resonance imaging. In particular, we have tested various computational hypotheses about what neural mechanisms could account for behavioral enhancement with spatial attention (Pestilli et al., 2011). Resting both on quantitative measurements and considerations of what is known through animal models, we concluded that weighting of sensory signals by the magnitude of their response is a neural mechanism for efficient selection of sensory signals and consequent improvements in behavioral performance with attention. While animal models have many technical advantages over studying the brain in humans, we believe that human systems neuroscience should endeavor to validate, replicate and extend basic knowledge learned from animal model systems and thus form a bridge to understanding how the brain creates the complex and rich cognitive capacities of humans.
++{{:reprints:gardner_neuroscience_2014.pdf|pdf}}
Hara, Y. and Gardner, J. L. (2014) Encoding of graded changes in spatial specificity of prior cues in human visual cortex. //Journal of Neurophysiology// 112:2834-49. [[http://dx.doi.org/10.1152/jn.00729.2013|DOI]]. ++Abstract|
\\
\\
Prior information about the relevance of spatial locations can vary in specificity; a single location, a subset of locations or all locations may be of potential importance. Using a contrast-discrimination task with 4 possible targets, we asked whether performance benefits are graded with the spatial specificity of a prior cue and whether we could quantitatively account for behavioral performance with cortical activity changes measured by blood oxygenation level dependent (BOLD) imaging. Thus we changed the prior probability that each location contained the target from 100 to 50 to 25% by cueing in advance 1, 2 or 4 of the possible locations. We found that behavioral performance (discrimination thresholds) improved in a graded fashion with spatial specificity. However, concurrently measured cortical responses from retinotopically-defined visual areas were not strictly graded; response magnitude decreased when all four locations were cued (25% prior probability) relative to the 100 and 50% prior probability conditions, but no significant difference in response magnitude was found between the 100 and 50% prior probability conditions for either cued or uncued locations. Also, while cueing locations increased responses relative to non-cueing, this cue-sensitivity was not graded with prior probability. Further, contrast-sensitivity of cortical responses, which could improve contrast discrimination performance, was not graded. Instead, an efficient-selection model showed that even if sensory responses do not strictly scale with prior probability, selection of sensory responses by weighting larger responses more can result in graded behavioral performance benefits with increasing spatial specificity of prior information.++{{:reprints:haragardnerjnp2014.pdf|pdf}}
Vintch, B. and Gardner, J. L. (2014) Cortical correlates of human motion perception biases. //The Journal of Neuroscience// 34: 2592–2604. [[http://dx.doi.org/10.1523/JNEUROSCI.2809-13.2014|DOI]] ++Abstract|
\\
\\
Human sensory perception is not a faithful reproduction of the sensory environment. For example, at low contrast, objects appear to move slower and flicker faster than veridical. While these biases have been robustly observed, their neural underpinnings are unknown, thus suggesting a possible disconnect of the well-established link between motion perception and cortical responses. We used functional imaging to examine the encoding of speed in the human cortex at the scale of neuronal populations and asked where and how these biases are encoded. Decoding, voxel population and forward-encoding analyses revealed biases towards slow speeds and high temporal frequencies at low contrast in the earliest visual cortical regions, matching perception. These findings thus offer a resolution to the disconnect between cortical responses and motion perception in humans. Moreover, biases in speed perception are considered a leading example of Bayesian inference, as they can be interpreted as a prior for slow speeds. Our data therefore suggest that perceptual priors of this sort can be encoded by neural populations in the same early cortical areas that provide sensory evidence.++{{:reprints:2014vintch.pdf|pdf}}
Hara Y., Pestilli F. and Gardner J. L. (2014). Differing effects of attention in single-units and populations are well predicted by heterogeneous tuning and the normalization model of attention. //Frontiers in Computational Neuroscience// 8:12. [[http://dx.doi.org/10.3389/fncom.2014.00012|DOI]] ++Abstract|
\\
\\
How does our brain detect changes in a natural scene? While changes by increments of specific visual attributes, such as contrast or motion coherence, can be signaled by an increase in neuronal activity in early visual areas, like the primary visual cortex (V1) or the human middle temporal complex (hMT+), respectively, the mechanisms for signaling changes resulting from decrements in a stimulus attribute are largely unknown. We have discovered opposing patterns of cortical responses to changes in motion coherence: unlike areas hMT+, V3A and parieto-occipital complex (V6+) that respond to changes in the level of motion coherence monotonically, human areas V4 (hV4), V3B, and ventral occipital always respond positively to both transient increments and decrements. This pattern of responding always positively to stimulus changes can emerge in the presence of either coherence-selective neuron populations, or neurons that are not tuned to particular coherences but adapt to a particular coherence level in a stimulus-selective manner. Our findings provide evidence that these areas possess physiological properties suited for signaling increments and decrements in a stimulus and may form a part of a cortical vigilance system for detecting salient changes in the environment.++
\\
{{reprints:2012costagli.pdf|pdf}}
  
Merriam, E. P., Gardner, J. L., Movshon, J. A., and Heeger, D. J. (2013) Modulation of visual responses by gaze direction in human visual cortex. //The Journal of Neuroscience// 33: 9879-9889 [[http://dx.doi.org/10.1523/JNEUROSCI.0500-12.2013|DOI]] ++Abstract|
\\
To characterize the computational processes by which attention improves behavioral performance, we measured activity in visual cortex with functional magnetic resonance imaging as humans performed a contrast-discrimination task with focal and distributed attention. Focal attention yielded robust improvements in behavioral performance that were accompanied by increases in cortical responses. Using a quantitative analysis, we determined that if performance were limited only by the sensitivity of the measured sensory signals, the improvements in behavioral performance would have corresponded to an unrealistically large (approximately 400%) reduction in response variability. Instead, behavioral performance was well characterized by a pooling and selection process for which the largest sensory responses, those most strongly modulated by attention, dominated the perceptual decision. This characterization predicts that high contrast distracters that evoke large sensory responses should have a negative impact on behavioral performance. We tested and confirmed this prediction. We conclude that attention enhanced behavioral performance predominantly by enabling efficient selection of the behaviorally relevant sensory signals.++{{reprints:attentionselection.pdf|pdf}}
 Liu, T., Hospadaruk, L., Zhu, D., and Gardner, J. L. (2011) Feature-specific attentional priority signals in human cortex. //The Journal of Neuroscience// 31:4484-95 [[http://dx.doi.org/10.1523/JNEUROSCI.5745-10.2011|DOI]]++Abstract|
 \\
 Humans can flexibly attend to a variety of stimulus dimensions, including spatial location and various features such as color and direction of motion. While the locus of spatial attention has been hypothesized to be represented by priority maps encoded in several dorsal frontal and parietal areas, it is unknown how the brain represents attended features. Here we examined the distribution and organization of neural signals related to deployment of feature-based attention. Subjects viewed a compound stimulus containing two superimposed motion directions (or colors), and were instructed to perform an attention-demanding task on one of the directions (or colors). We found elevated and sustained fMRI response for the attention task compared to a neutral condition, without reliable differences in overall response amplitude between attending to different features. However, using multi-voxel pattern analysis, we were able to decode the attended feature in both early visual areas (V1 to hMT+) and frontal and parietal areas (e.g., IPS1-4 and FEF) that are commonly associated with spatial attention. Furthermore, analysis of the classifier weight maps showed that attending to motion and color evoked different patterns of activity, suggesting different neuronal subpopulations in these regions are recruited for attending to different feature dimensions. Thus, our findings suggest that rather than a purely spatial representation of priority, frontal and parietal cortical areas also contain multiplexed signals related to the priority of different non-spatial features.
 ++{{:reprints:liu_hospadaruk_zhu_gardner_jn_2011.pdf|pdf}}
 Gardner, J. L. (2010) Is cortical vasculature functionally organized? //Neuroimage// 49:1953-6. [[http://dx.doi.org/10.1016/j.neuroimage.2009.07.004|DOI]]
 ++Abstract|
 The cortical vasculature is a well-structured and organized system, but the extent to which it is organized with respect to the neuronal functional architecture is unknown. In particular, does vasculature follow the same functional organization as cortical columns? In principle, cortical columns that share tuning for stimulus features like orientation may often be active together and thus require oxygen and metabolic nutrients together. If the cortical vasculature is built to serve these needs, it may also tend to aggregate and amplify orientation specific signals and explain why they are available in fMRI data at very low resolution.
 ++{{:reprints:jg_commentary_neuroimage.pdf|pdf}}
 Offen S, Gardner, J. L., Schluppeck D and Heeger, D.J. (2010) Differential roles for frontal eye fields (FEFs) and intraparietal sulcus (IPS) in visual working memory and visual attention. //Journal of Vision// 10:1-14 [[http://dx.doi.org/10.1167/10.11.28|DOI]] ++Abstract|
 \\
 Cortical activity was measured with functional magnetic resonance imaging to probe the involvement of the superior precentral sulcus (including putative human frontal eye fields, FEFs) and the intraparietal sulcus (IPS) in visual short-term memory and visual attention. In two experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. An earlier study (S. Offen, D. Schluppeck, & D. J. Heeger, 2009) had found a dissociation in early visual cortex that suggested different computational mechanisms underlying the two processes. In contrast, the results reported here show that the patterns of activation in prefrontal and parietal cortex were different from one another but were similar for the two tasks. In particular, the FEF showed evidence for sustained delay period activity for both the working memory and the attention task, while the IPS did not show evidence for sustained delay period activity for either task. The results imply differential roles for the FEF and IPS in these tasks; the results also suggest that feedback of sustained activity from frontal cortex to visual cortex might be gated by task demands.
 ++{{reprints:offen.pdf|pdf}}
 Dinstein I, Gardner, J. L., Jazayeri, M and Heeger, D.J. (2008) Executed and observed movements have different distributed representations in human aIPS. //The Journal of Neuroscience// 28:11231-11239 [[http://dx.doi.org/10.1523/JNEUROSCI.3585-08.2008|DOI]] ++Abstract|
 \\