Vision Breakfast

A weekly meeting for people at and around Stanford who are interested in vision research.

We meet Wednesdays at 9:30 AM in Room 419 of Jordan Hall, Stanford University.

Please visit vis-lunch-announce to sign up!

Organizers: Mareike Grotheer and Daniel Birman

Next meetings: 6/12/18 (Shaul Hochstein) and 6/13/18 (Hsin-Hung Li)

6/12/18: Set Summary Perception, Outlier Pop Out, and Categorization: A Common Underlying Computation?

Abstract

Recent research has focused on perception of set statistics. Presented briefly with a group of elements, either simultaneously or successively, observers report precisely the mean of a variety of set features, but are unaware of individual element values. This has been shown for both low- and high-level features, from circle size to facial expression. A remaining puzzle is how the perceptual system can compute the mean of element values without first knowing the individual values. We performed a series of studies to extend these findings and shed light on this conundrum. We found that set mean computation is performed automatically and implicitly, affecting performance on an unrelated task, and that it is performed on the fly, independently, for each psychophysical trial. We find that observers also rapidly identify outliers within a set, indicating that they perceive the range of stimulus sets. We find that range perception, too, is automatic, implicit, and on the fly. In purposely-designed parallel studies, we find similar characteristics for set and category perception. In particular, category prototype and boundary correspond to set mean and range. Our matching findings suggest that categorization and set summary perception might share computational elements. We suggest and analyze a fundamental computational procedure, based on population encoding, that encompasses all the features of set summary perception, and might also underlie categorization. Finally, we note that this computational procedure might preclude the classic debate concerning category representation by prototype or boundary.

6/13/18: Attention model of binocular rivalry

Abstract

When the two eyes are presented with incompatible images, perception alternates between the two images, creating a phenomenon known as binocular rivalry. During rivalry, perceptual experience evolves dynamically while the external inputs are held constant. Binocular rivalry thereby offers a gateway for studying intrinsic cortical computations. In conventional theories of binocular rivalry, the competition between the two percepts has been characterized as mutual inhibition between two populations of neurons selective for each of the two stimuli. However, converging experimental evidence has shown that rivalry also depends on attention: rivalry is largely eliminated when attention is diverted away from the stimuli. In addition, the competing image in one eye suppresses the target image in the other eye through a gain change similar to that induced by attentional modulation. These results require a revision of current theories of binocular rivalry, in which the role of attention is ignored.

We investigated the role of attention in binocular rivalry in a psychophysical and a preliminary MEG experiment. We found that binocular competition is driven by both attention and mutual inhibition, which have distinct selectivity. We developed a new computational model of rivalry, and with a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. The model provides a parsimonious account of various perceptual dynamics of rivalry for which there was no previous explanation.

Schedule

Presenting at Vision Breakfast

We would like to encourage anyone in the Stanford community (or outside it) who is working on vision-related research to come and talk about their work. In particular, Vision Breakfasts are intended to let people in vision labs at Stanford hear about each other's work early on, when feedback can be most valuable and the results are new and exciting.

Please email Mareike and Dan if you're interested in presenting.

In general, it should be no problem to bump journal clubs to a later week, so if you see a date you're interested in, chances are we can accommodate you.

Coming soon in 2018

Date | Speaker/JC | Title
06/12/18, 11:30 AM | Talk: Shaul Hochstein | Set Summary Perception, Outlier Pop Out, and Categorization: A Common Underlying Computation?
06/13/18 | Talk: Hsin-Hung Li | Attention model of binocular rivalry

– We will reconvene in Fall 2018! Please send Dan or Mareike an email if you are interested in presenting –

Previous meetings

Date | Speaker/JC | Title
06/06/18 | Journal Club: Elias Wang | Image reconstruction by domain-transform manifold learning (https://www.nature.com/articles/nature25988)
05/29/18 | Talk: Guillaume Riesen (PhD Student, Gardner Lab, Stanford) | Rivalry and fusion can coexist in a tristable dynamic state
05/16/18 | Talk: Xiaomo Chen (Postdoc, Moore Lab, Stanford) | Dissonant Representations of Visual Space in Prefrontal Cortex during Eye Movements
05/09/18 | Talk: Kathryn Bonnen (PhD Student, Huk Lab, UT Austin) | Encoding and decoding 3D motion
05/02/18 | Talk: Dan Birman | Flexible readout of stable cortical representations supports motion visibility perception
04/25/18 | Talk: Marc Zirnsak (Postdoc, Moore Lab, Stanford) | A potential source of saliency in the primate brain
04/11/18 | Journal Club: Dan Birman | Feedback determines the structure of correlated variability in primary visual cortex (https://www.nature.com/articles/s41593-018-0089-1)

Archive

An archive of Vision Lunch meetings from previous years exists here.

An archive of Vision Lunch meetings prior to 9/13/17 exists on the Vista Lab wiki.