Describing Visual Scenes Using Transformed Objects and Parts
Authors: Erik B. Sudderth, Antonio Torralba, William T. Freeman, Alan S. Willsky
Affiliation: (1) Computer Science Division, University of California, Berkeley, USA; (2) Electrical Engineering & Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
Abstract: We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.
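The nonparametric models described in the abstract rely on the Dirichlet process, whose key property is that the number of mixture components (here, parts per object or objects per scene) is not fixed in advance but grows with the data. The sketch below is not the authors' transformed Dirichlet process; it is a minimal, hypothetical illustration of the underlying Chinese restaurant process, with names such as `crp_assignments` and the concentration parameter `alpha` chosen only for this example.

```python
import numpy as np

def crp_assignments(n_items, alpha, rng=None):
    """Sample cluster assignments from a Chinese restaurant process.

    Illustrates how a Dirichlet process prior lets the number of
    clusters (standing in for parts or objects) grow with the data
    rather than being specified ahead of time.
    """
    rng = np.random.default_rng() if rng is None else rng
    assignments = []          # cluster index chosen for each item
    counts = []               # current size of each existing cluster
    for i in range(n_items):
        # Join existing cluster k with probability counts[k] / (i + alpha);
        # open a brand-new cluster with probability alpha / (i + alpha).
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)  # a new cluster is created
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

if __name__ == "__main__":
    labels, sizes = crp_assignments(n_items=200, alpha=2.0)
    print(f"{len(sizes)} clusters inferred from 200 items:", sizes)
```

Running the sketch with a larger `alpha` yields more, smaller clusters; the TDP in the paper extends this idea hierarchically and couples it with spatial transformations.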
Keywords: Object recognition; Dirichlet process; Hierarchical Dirichlet process; Transformation; Context; Graphical models; Scene analysis
Indexed in SpringerLink and other databases.