Algorithmic model of invariant representations in brain

People can recognize objects despite changes in their visual appearance that stem from changes in viewpoint. Looking at a television set, we can follow the action displayed on it even if we don’t look straight at it, sit closer than usual, or lie sideways on a couch. The object’s identity is thus invariant to simple transformations of its visual appearance in the 2-D plane, such as translation, scaling, and rotation. There is experimental evidence for such invariant representations in the brain, and many competing theories of varying biological plausibility try to explain how those representations arise. A recent paper detailing a biologically plausible algorithmic model of this phenomenon is the result of a collaboration between Brandeis Neuroscience graduate student Pavel Sountsov, postdoctoral fellow David Santucci, and Professor of Biology John Lisman. For more information, see http://blogs.brandeis.edu/science/page/8/
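As a simple illustration of what an invariant representation means (this is a generic example, not the model from the paper): the magnitude of the 2-D Fourier transform of an image is unchanged by circular translation, because shifting an image only changes the phase of each frequency component. A minimal sketch in Python, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))  # a toy 32x32 "image"

# Circularly translate the image by (5, 7) pixels.
shifted = np.roll(image, shift=(5, 7), axis=(0, 1))

# The Fourier magnitude spectrum is a translation-invariant representation:
# the shift alters only the phases, not the magnitudes, of the frequencies.
mag_original = np.abs(np.fft.fft2(image))
mag_shifted = np.abs(np.fft.fft2(shifted))

print(np.allclose(mag_original, mag_shifted))  # True
```

The two magnitude spectra agree even though the pixel arrays differ, which is the essence of an invariant code: different appearances of the same object map to the same representation.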