Learning Generative Models of Scene Features
Authors: Robert Sim, Gregory Dudek
Affiliation: (1) Centre for Intelligent Machines, McGill University, 3480 University St., Montreal, Canada, H3A 2A7
Abstract: We present a method for learning a set of generative models suitable for representing selected image-domain features of a scene as a function of changes in the camera viewpoint. Such models are important for robotic tasks, such as probabilistic position estimation (i.e., localization), as well as for visualization. Our approach entails the automatic selection of the features, as well as the synthesis of models of their visual behavior. The model we propose can generate maximum-likelihood views, as well as a measure of the likelihood of a particular view from a particular camera position. Training the models involves regularizing observations of the features from known camera locations. The uncertainty of the model is evaluated using cross-validation, which allows for a priori evaluation of features and their attributes. The features themselves are initially selected as salient points by a measure of visual attention, and are tracked across multiple views. While the motivation for this work is robot localization, the results have implications for image interpolation, image-based scene reconstruction, and object recognition. This paper presents a formulation of the problem and illustrative experimental results.
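To make the abstract's pipeline concrete, the following is a minimal, hypothetical sketch of the core idea: fit a regularized generative model mapping camera pose to a feature's appearance, use it to predict a maximum-likelihood view at a novel pose, and estimate model uncertainty with leave-one-out cross-validation. The choice of Gaussian radial basis function regression here is an illustrative stand-in, not the paper's exact formulation, and all function names and parameters (`sigma`, `lam`) are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian radial-basis similarity between pose sets A (n, 2) and B (m, 2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(poses, obs, lam=1e-3, sigma=1.0):
    # Regularized least-squares fit: weights W such that K @ W ~= obs.
    # lam plays the role of the regularizer mentioned in the abstract.
    K = rbf_kernel(poses, poses, sigma)
    return np.linalg.solve(K + lam * np.eye(len(poses)), obs)

def generate(poses, W, query, sigma=1.0):
    # Maximum-likelihood (mean) predicted feature appearance at a new pose.
    return rbf_kernel(np.atleast_2d(query), poses, sigma) @ W

def loo_error(poses, obs, lam=1e-3, sigma=1.0):
    # Leave-one-out cross-validation: an a priori uncertainty estimate
    # for the model, as the abstract describes.
    errs = []
    for i in range(len(poses)):
        keep = np.arange(len(poses)) != i
        W = fit(poses[keep], obs[keep], lam, sigma)
        pred = generate(poses[keep], W, poses[i], sigma)
        errs.append(np.linalg.norm(pred - obs[i]))
    return float(np.mean(errs))

# Toy training set: a 2-D "appearance" vector varying smoothly with pose.
rng = np.random.default_rng(0)
poses = rng.uniform(0, 1, size=(30, 2))
obs = np.stack([np.sin(3 * poses[:, 0]), np.cos(2 * poses[:, 1])], axis=1)

W = fit(poses, obs)
view = generate(poses, W, np.array([0.5, 0.5]))  # predicted view at a novel pose
print(view.shape, loo_error(poses, obs))
```

A localization system could invert this model: given an observed feature appearance, the likelihood of candidate camera poses follows from the prediction error under the learned model.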
Keywords: robot pose estimation, appearance-based modeling, probabilistic localization, generative modeling, feature extraction, feature representation, visual attention
This article is indexed in SpringerLink and other databases.