Research in the Center for Cognitive Computation focuses on how structured visual information arriving from the environment is interpreted and converted into sophisticated internal representations that control cognition and behavior. During development, humans and animals learn to make sense of their visual environment, make decisions, and act based on their momentary sensory input and their internal representation of earlier short- and long-term experiences. Despite decades of behavioral and neurophysiological research, it is still unclear how this perceptual and cognitive process occurs, what representations the brain uses for it, and how these internal representations are acquired through visual learning. We address these questions with an integrated approach that has three main components:
- human psychophysical and learning experiments
- computational modeling of perception and learning
- multi-electrode recording from behaving animals and neurophysiological measures in humans
We are developing a statistically grounded and biologically sound framework that links low-level visual processes and mechanisms (e.g., orientation coding and adaptation) with the development and learning of higher-level complex features and constancies. These support efficient representations of the objects and scenes of the visual environment, which in turn underpin rational decisions and intelligent action.
- Go to the site of the Center for Cognitive Computation
- Go to the Lab Site of the Computational Learning and Memory Group
- Go to the Lab Site of the Computational Systems Neuroscience Lab
- Go to the Lab Site of the Vision Lab