Robust visual speakingness detection using bi-level HMM
Authors: P. Tiawongsombat, Mun-Ho Jeong, Joo-Seop Yun, Bum-Jae You, Sang-Rok Oh
Affiliations: 1. HCI & Robotics, University of Science and Technology (UST), South Korea; 2. School of Robotics, Kwangwoon University, South Korea; 3. Gyeongbuk Research Institute of Vehicle Embedded Technology (GIVET), South Korea; 4. Center for Cognitive Robotics Research, Korea Institute of Science and Technology (KIST), 39-1 Hawolgok 2 Dong, Sungbuk Gu, 136-791 Seoul, South Korea
Abstract: Visual voice activity detection (V-VAD) plays an important role in both human-computer interaction (HCI) and human-robot interaction (HRI), affecting both the conversation strategy and the synchronization between humans and robots or computers. The typical speakingness decision in V-VAD consists of post-processing for signal smoothing and classification by thresholding. Several parameters, intended to ensure a good trade-off between hit rate and false alarms, are usually defined heuristically. This makes such V-VAD approaches vulnerable to noisy observations and changing environmental conditions, resulting in poor performance and a lack of robustness against undesired frequent changes of the speaking state. To overcome these difficulties, this paper proposes a new probabilistic approach, named the bi-level HMM, which analyzes lip activity energy for V-VAD in HRI. The design is based on lip-movement and speaking assumptions, embracing the two essential procedures in a single model. A bi-level HMM is an HMM with two state variables at different levels, where the state occurrence at the lower level conditionally depends on the state at the upper level. The approach works online with low-resolution images and under various lighting conditions, and has been successfully tested on 21 image sequences (22,927 frames). It achieved a probability of detection above 90%, an improvement of almost 20% over four other V-VAD approaches.
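To make the two-level structure concrete, the sketch below implements a forward (filtering) pass over such a factored model. It follows only the structural idea stated in the abstract: a lower-level state (lip configuration) whose transitions are conditioned on an upper-level state (speaking or not), with the observed lip activity energy emitted by the lower level. All state names, transition matrices, and Gaussian emission parameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a bi-level HMM forward pass for V-VAD.
# Assumptions (not from the paper):
#   - upper-level states S in {NOT_SPEAKING, SPEAKING}
#   - lower-level states Q in {LIP_CLOSED, LIP_OPEN}, whose transitions
#     P(Q_t | Q_{t-1}, S_t) are conditioned on the current upper state
#   - observations are scalar lip activity energies with Gaussian
#     emissions per lower-level state
import numpy as np

N_UPPER, N_LOWER = 2, 2

# Upper-level transitions P(S_t | S_{t-1}): sticky states, which
# suppresses undesired frequent speaking-state changes.
A_upper = np.array([[0.95, 0.05],
                    [0.05, 0.95]])

# Lower-level transitions P(Q_t | Q_{t-1}, S_t), one matrix per upper state.
A_lower = np.array([
    [[0.90, 0.10],   # S = NOT_SPEAKING: lips tend to stay closed
     [0.60, 0.40]],
    [[0.40, 0.60],   # S = SPEAKING: lips alternate open/closed
     [0.30, 0.70]],
])

# Illustrative Gaussian emission parameters (mean, std) of lip activity
# energy for LIP_CLOSED and LIP_OPEN.
emis = [(0.1, 0.1), (0.8, 0.2)]


def gauss(x, mu, sigma):
    """Gaussian likelihood of observation x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))


def forward(energies):
    """Return per-frame posteriors P(S_t = SPEAKING | e_1..e_t)."""
    # alpha[s, q] is the joint filtering distribution over (S_t, Q_t).
    alpha = np.full((N_UPPER, N_LOWER), 1.0 / (N_UPPER * N_LOWER))
    posteriors = []
    for e in energies:
        b = np.array([gauss(e, mu, sd) for mu, sd in emis])
        new_alpha = np.zeros_like(alpha)
        for s in range(N_UPPER):
            for q in range(N_LOWER):
                # Sum over previous states; the lower-level transition
                # depends on the *current* upper-level state s.
                new_alpha[s, q] = b[q] * sum(
                    alpha[s0, q0] * A_upper[s0, s] * A_lower[s, q0, q]
                    for s0 in range(N_UPPER) for q0 in range(N_LOWER))
        alpha = new_alpha / new_alpha.sum()   # normalize to avoid underflow
        posteriors.append(alpha[1].sum())     # marginal P(SPEAKING)
    return posteriors


if __name__ == "__main__":
    # Toy energy sequence: quiet, then active lip movement, then quiet.
    for t, p in enumerate(forward([0.1, 0.15, 0.1, 0.7, 0.9,
                                   0.8, 0.75, 0.1, 0.05])):
        print(f"frame {t}: P(speaking) = {p:.3f}")
```

Because the speaking/not-speaking decision is read off as a posterior marginal of the upper-level state, smoothing and classification happen inside one probabilistic model rather than as separate heuristic post-processing steps, which is the trade-off the abstract highlights.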
This article is indexed in ScienceDirect and other databases.