     


Gestural cue analysis in automated semantic miscommunication annotation
Authors: Masashi Inoue, Mitsunori Ogihara, Ryoko Hanada, Nobuhiro Furuyama
Affiliation:1. Graduate School of Science and Engineering, Yamagata University, Yonezawa, Japan
2. Collaborative Research Unit, National Institute of Informatics, Tokyo, Japan
3. Department of Computer Science/Center for Computational Science, The University of Miami, Miami, FL, USA
4. Graduate School of Clinical Psychology/Center for Clinical Psychology and Education, Kyoto University of Education, Kyoto, Japan
5. Information and Society Research Division, National Institute of Informatics, Tokyo, Japan
6. Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Tokyo, Japan
Abstract: The automated annotation of conversational video with semantic miscommunication labels is a challenging problem. Although miscommunications are often obvious to the speakers as well as to observers, they are difficult for machines to detect from low-level features. In this paper, we investigate the utility of gestural cues among various non-verbal features. Compared with gesture recognition tasks in human-computer interaction, this process is difficult because it is not well understood which cues contribute to miscommunication, and because gestures convey such cues only implicitly. Nine simple gestural features are extracted from the gesture data, and both simple and complex classifiers are constructed from them using machine learning. The experimental results suggest that no single gestural feature can predict or explain the occurrence of semantic miscommunication in our setting.
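
To illustrate the kind of setup the abstract describes, the following minimal sketch trains a simple classifier on each gestural feature individually and a more complex classifier on all nine features combined, with miscommunication labels as the prediction target. It is not the authors' actual pipeline: the feature names, the data file, and the use of scikit-learn are assumptions made purely for illustration.

# Illustrative sketch only: trains a simple (logistic regression) classifier on
# each hand-crafted gestural feature alone, then a more complex (random forest)
# classifier on all nine features together, to predict per-segment
# miscommunication labels. Feature names, the input file, and the library
# choice are hypothetical, not taken from the paper.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per conversational segment, nine gestural
# features plus a binary miscommunication annotation.
FEATURES = [
    "gesture_rate", "gesture_duration", "hold_time", "stroke_count",
    "hand_distance", "gesture_speed", "self_touch", "head_motion",
    "gesture_overlap",
]
df = pd.read_csv("gesture_segments.csv")      # hypothetical file name
X = df[FEATURES].to_numpy()
y = df["miscommunication"].to_numpy()         # 1 = miscommunication observed

# Simple classifiers: each feature on its own, to probe single-feature predictivity.
for i, name in enumerate(FEATURES):
    score = cross_val_score(LogisticRegression(), X[:, [i]], y, cv=5).mean()
    print(f"{name:>16s}: accuracy = {score:.3f}")

# Complex classifier: all nine features combined.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"all features combined: accuracy = {cross_val_score(clf, X, y, cv=5).mean():.3f}")

Comparing the per-feature scores against the combined model is one way to check the abstract's conclusion that no single gestural feature suffices on its own.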
Keywords:
This article is indexed in SpringerLink and other databases.