Learning viewpoint-invariant face representations from visual experience in an attractor network
Authors: MS Bartlett, TJ Sejnowski
Affiliation: University of California San Diego, Department of Cognitive Science, Salk Institute, La Jolla 92037, USA. marni@salk.edu
Abstract: In natural visual experience, different views of an object or face tend to appear in close temporal proximity as an animal manipulates the object or navigates around it, or as a face changes expression or pose. A set of simulations is presented that demonstrates how viewpoint-invariant representations of faces can be developed from visual experience by capturing the temporal relationships among the input patterns. The simulations explored the interaction of temporal smoothing of activity signals with Hebbian learning in both a feedforward layer and a second, recurrent layer of a network. The feedforward connections were trained by competitive Hebbian learning with temporal smoothing of the postsynaptic unit activities. The recurrent layer was a generalization of a Hopfield network with a low-pass temporal filter on all unit activities. The combination of basic Hebbian learning with temporal smoothing of unit activities produced an attractor network learning rule that associated temporally proximal input patterns into basins of attraction. These two mechanisms were demonstrated in a model that took grey-level images of faces as input. Following training on image sequences of faces as they changed pose, multiple views of a given face fell into the same basin of attraction, and the system acquired representations of faces that were approximately viewpoint-invariant.
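The abstract describes two interacting mechanisms: a competitive Hebbian feedforward layer whose postsynaptic activities are temporally smoothed (a "trace" rule in the spirit of Földiák 1991), and a Hopfield-style recurrent layer trained on low-pass-filtered activities so that temporally adjacent patterns are drawn into a shared basin of attraction. The following is a minimal NumPy sketch of these two ideas, not the paper's implementation; the layer sizes, learning rate, smoothing constant, and synthetic "pose sequence" are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 64, 8          # hypothetical layer sizes
eta, lam = 0.05, 0.5         # hypothetical learning rate and trace constant

# --- 1. Feedforward layer: competitive Hebbian learning with a trace ---
W_ff = rng.normal(scale=0.1, size=(n_hid, n_in))
trace = np.zeros(n_hid)      # low-pass filtered postsynaptic activity

def feedforward_step(x):
    y = W_ff @ x
    trace[:] = lam * trace + (1 - lam) * y   # temporal smoothing of activity
    k = int(np.argmax(trace))                # competition: smoothed winner,
                                             # so nearby frames pick the same unit
    W_ff[k] += eta * (x - W_ff[k])           # Hebbian move toward the input
    W_ff[k] /= np.linalg.norm(W_ff[k])       # keep weights bounded

# --- 2. Recurrent layer: Hebbian learning on smoothed activities, so
#        temporally proximal patterns are associated into one basin ---
W_rec = np.zeros((n_hid, n_hid))

def recurrent_step(y, ybar):
    global W_rec
    # Outer product between the trace of past activity and the current
    # activity binds successive patterns to each other.
    W_rec += eta * np.outer(ybar, y)
    np.fill_diagonal(W_rec, 0.0)

# --- Training on a slowly varying sequence: a stand-in for a face
#     rotating through poses, where consecutive frames are similar ---
x = rng.normal(size=n_in)
x /= np.linalg.norm(x)
ybar = np.zeros(n_hid)
for t in range(200):
    x = x + 0.1 * rng.normal(size=n_in)      # small change per frame
    x /= np.linalg.norm(x)
    feedforward_step(x)
    y = np.tanh(W_ff @ x)
    recurrent_step(y, ybar)
    ybar = lam * ybar + (1 - lam) * y        # update the activity trace

# --- Recall: settle the attractor dynamics from one view's code ---
s = np.where(W_ff @ x >= 0, 1.0, -1.0)
for _ in range(20):
    s = np.where(W_rec @ s >= 0, 1.0, -1.0)
```

Under these assumptions, running the recurrent dynamics from any single view's code should settle into the same fixed point as the other views in the sequence, which is the sense in which the learned representation becomes approximately viewpoint-invariant.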
Indexed in: PubMed