Dr. Tomaso Poggio, Eugene McDermott Professor in the Dept. of Brain & Cognitive Sciences
The talk explores the theoretical consequences of a simple assumption: the computational goal of the feedforward path in the ventral stream – from V1 through V2 and V4 to IT – is to discount image transformations, after learning them during development. The starting assumption is that the basic neural operation consists of dot products between input vectors and synaptic weights, which can be modified by learning. The theory proves that a multi-layer hierarchical architecture of dot-product modules can learn geometric transformations of images in an unsupervised way and thereby achieve the dual goals of invariance to global affine transformations and robustness to deformations. Such architectures learn, without supervision, to be automatically invariant to transformations of a new object, achieving recognition from one or very few labeled examples. The theory should apply to varying degrees to a range of hierarchical architectures – such as HMAX, convolutional networks, and related feedforward models of the visual system – and formally characterizes some of their properties.
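The core mechanism behind the abstract – dot-product modules whose pooled responses are invariant to a learned transformation group – can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the talk's actual model: it uses cyclic shifts of a 1-D signal as a stand-in for the affine transformations discussed, and computes a "signature" for an input by pooling the dot products of the input with all shifted copies of a few stored templates. For a cyclic group this signature is exactly shift-invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # signal dimension
K = 3   # number of stored templates

# Templates t_k; the transformation group here is cyclic shifts (np.roll),
# a hypothetical stand-in for the transformations learned during development.
templates = rng.standard_normal((K, D))

def signature(x, templates):
    """Pooled dot-product signature: for each template, take dot products
    of x with every shifted copy of the template (the dot-product modules),
    then pool them with a nonlinearity (here, mean of absolute values)."""
    sig = []
    for t in templates:
        dots = [x @ np.roll(t, s) for s in range(D)]  # dot-product modules
        sig.append(np.mean(np.abs(dots)))             # pooling nonlinearity
    return np.array(sig)

x = rng.standard_normal(D)
shifted = np.roll(x, 5)  # a transformed version of the same "object"
# The two signatures coincide: pooling over the orbit discounts the shift.
print(np.allclose(signature(x, templates), signature(shifted, templates)))
```

Because the signature of a shifted input equals that of the original, a classifier operating on signatures can, in principle, recognize a new object from one or very few labeled examples – the point made in the abstract.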