Searching for Complex Human Activities with No Visual Examples
Authors: Nazlı İkizler, David A. Forsyth
Affiliations: (1) Bilkent University, 06800 Ankara, Turkey; (2) University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Abstract: We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body that can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D, and comparing them to models trained using motion capture data. Our models of short-timescale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.
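The abstract outlines the pipeline at a high level: short-timescale limb behaviours are modelled with HMMs trained on motion capture data, and a query is the composition of per-limb activity units across the body. The sketch below is only a hedged illustration of that composition idea, not the authors' implementation; the HMM parameters, activity names, observation encoding, and likelihood threshold are all invented placeholders.

```python
# Minimal sketch, assuming limb tracks are quantized into discrete symbol
# streams and each (limb, activity-unit) pair has a small trained HMM.
# None of these models or names come from the paper; they are illustrative.
import numpy as np

def hmm_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (standard forward algorithm, computed in log space)."""
    log_alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        log_alpha = (
            np.logaddexp.reduce(log_alpha[:, None] + np.log(trans), axis=0)
            + np.log(emit[:, o])
        )
    return np.logaddexp.reduce(log_alpha)

# Hypothetical per-limb activity-unit models (placeholder parameters).
limb_models = {
    ("arms", "wave"): dict(
        start=np.array([0.6, 0.4]),
        trans=np.array([[0.7, 0.3], [0.3, 0.7]]),
        emit=np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]),
    ),
    ("legs", "walk"): dict(
        start=np.array([0.5, 0.5]),
        trans=np.array([[0.9, 0.1], [0.1, 0.9]]),
        emit=np.array([[0.7, 0.2, 0.1], [0.2, 0.7, 0.1]]),
    ),
}

def unit_present(limb, unit, obs, threshold=-10.0):
    """Decide whether an activity unit fires on a short observation window
    by thresholding its HMM log-likelihood (threshold is arbitrary here)."""
    m = limb_models[(limb, unit)]
    return hmm_log_likelihood(obs, m["start"], m["trans"], m["emit"]) > threshold

def query_and(clip, *units):
    """Compose units across the body: every named limb must exhibit its unit,
    giving a conjunctive query such as 'arms wave AND legs walk'."""
    return all(unit_present(limb, unit, clip[limb]) for limb, unit in units)

# Toy quantized observations for one short clip, one symbol stream per limb.
clip = {"arms": np.array([0, 0, 2, 2, 0]), "legs": np.array([0, 1, 0, 1, 0])}
print(query_and(clip, ("arms", "wave"), ("legs", "walk")))
```

In this toy form, composing units across time (e.g. "walk, then run") would amount to requiring different units to fire on successive windows of the same clip; the paper's query language composes units in this spirit, but its actual machinery is richer than this sketch.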
Keywords: Human action recognition; Video retrieval; Activity; HMM; Motion capture
This article is indexed in SpringerLink and other databases.