James Herman
Humans and other primates display unparalleled flexibility in their use of visual information: the meaning of images depends, malleably, on our internal goals. Selection of visual stimuli ("attention") is a canonical way that we flexibly interpret visual input. Attended stimuli are used to guide thoughts and actions, while ignored stimuli are often excluded from perception entirely. But the nature of stimulus encoding in the primate visual system, a distributed representation across feature-selective visual cortical subregions, prevents the assignment of meaning to images from being "hardwired". Instead, internal-state-dependent meanings must be learned through experience. What are the neuronal mechanisms that support this learning of meaning? What are the limits on how visual information can be decoded? I am curious about all the ways that the primate brain makes use of visual information.