View-Invariant Action Recognition Based on Action Graphs
Cite this article: YANG Yue-Dong, HAO Ai-Min, CHU Qing-Jun, ZHAO Qin-Ping, WANG Li-Li. View-Invariant Action Recognition Based on Action Graphs[J]. Journal of Software, 2009, 20(10): 2679-2691.
Authors: YANG Yue-Dong  HAO Ai-Min  CHU Qing-Jun  ZHAO Qin-Ping  WANG Li-Li
Affiliations: 1. State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
2. National Education Examinations Authority, Ministry of Education, Beijing 100084, China
Funding: Supported by the National High-Tech Research and Development Plan of China (863 Program) under Grant Nos. 2006AA01Z333 and 2007AA01Z337, and the China High-Tech Olympics Project under Grant No. Z0005191041211.
Abstract: For view-invariant action recognition, this paper proposes a weighted codebook vector representation and an action graph recognition model. The weighted codebook vector combines the local interest-point features and the global shape descriptor of a video; it retains the strong noise robustness of interest points while overcoming their inability to recognize static actions. Energy curves are constructed from 3D motion data such as motion capture and point clouds, key poses are extracted, and primitive motion segments are generated; these segments are linked by three kinds of connections, namely self-links, forward links, and backward links, to form a directed graph called the essential graph. Projecting the essential graph in all directions and connecting the projected nodes by a nearest-neighbor rule yields another directed graph, called the action graph. The action graph model is trained with Naïve Bayes, the Viterbi algorithm computes the match score between a video and the action graph, and the video sequence is labeled by the maximum match score. Because the action graph contains projections from many viewpoints with smooth transitions between them, it can recognize video sequences captured from arbitrary viewpoints and with arbitrary motion directions. Experimental results show that the algorithm achieves good recognition performance on monocular videos, multi-view videos, and multi-action videos.
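The weighted codebook vector described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights, histogram sizes, and function names are all hypothetical stand-ins for the interest-point histogram and shape descriptor the abstract mentions.

```python
def normalize(hist):
    """L1-normalize a histogram; return zeros if the histogram is empty."""
    total = sum(hist)
    return [h / total for h in hist] if total > 0 else [0.0] * len(hist)

def weighted_codebook_vector(interest_hist, shape_hist,
                             w_interest=0.5, w_shape=0.5):
    """Combine an interest-point codebook histogram with a global shape
    histogram into one weighted vector (weights are illustrative)."""
    return ([w_interest * h for h in normalize(interest_hist)] +
            [w_shape * h for h in normalize(shape_hist)])
```

For a static action the interest-point histogram may be nearly empty; the shape part of the vector then still carries discriminative information, which is the motivation the abstract gives for the combination.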

Keywords: action recognition  view-invariance  action graph  interest point  Naïve Bayes
Received: 2008-07-31
Revised: 2009-06-09

View-Invariant Action Recognition Based on Action Graphs
YANG Yue-Dong, HAO Ai-Min, CHU Qing-Jun, ZHAO Qin-Ping, WANG Li-Li. View-Invariant Action Recognition Based on Action Graphs[J]. Journal of Software, 2009, 20(10): 2679-2691.
Authors:YANG Yue-Dong  HAO Ai-Min  CHU Qing-Jun  ZHAO Qin-Ping and WANG Li-Li
Abstract: This paper proposes a weighted codebook vector representation and an action graph model for view-invariant human action recognition. A video is represented as a weighted codebook vector combining dynamic interest points and static shapes. This combined representation has strong noise robustness and high classification performance on static actions. Several 3D key poses are extracted from motion capture data or point cloud data, and a set of primitive motion segments is generated. A directed graph called the essential graph is built from these segments using self-links, forward links, and backward links. The action graph is generated by projecting the essential graph from a wide range of viewpoints. Naïve Bayes is used to train a statistical model for each node. Given an unlabeled video, the Viterbi algorithm computes the match score between the video and the action graph, and the video is labeled by the maximum score. Finally, the algorithm is tested on the IXMAS dataset and the CMU motion capture library. The experimental results demonstrate that the algorithm recognizes actions view-invariantly and achieves high recognition rates.
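The matching step can be sketched as a log-domain Viterbi pass over a directed action graph. This is an illustrative sketch only: the graph, the `emit` function standing in for the per-node Naïve Bayes observation model, and all names are assumptions, not the paper's code.

```python
import math

def viterbi_score(frames, num_nodes, links, emit):
    """Best-path log score of a frame sequence through a directed graph.

    links[i]        -- list of successor nodes of node i
    emit(node, f)   -- P(frame f | node), e.g. from a per-node Naive
                       Bayes model (illustrative placeholder here)
    """
    log = lambda p: math.log(p) if p > 0 else float("-inf")
    # Initialize with the first frame's emission score at every node.
    score = [log(emit(n, frames[0])) for n in range(num_nodes)]
    for f in frames[1:]:
        new = [float("-inf")] * num_nodes
        for i in range(num_nodes):
            if score[i] == float("-inf"):
                continue
            # Relax each outgoing edge (transitions taken as uniform).
            for j in links[i]:
                s = score[i] + log(emit(j, f))
                if s > new[j]:
                    new[j] = s
        score = new
    return max(score)
```

Running one such pass per candidate action graph and keeping the label with the maximum score mirrors the labeling rule stated in the abstract.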
Keywords: action recognition  view-invariant  action graph  interest point  Naïve Bayes