

An automatic algorithm for semantic object generation and temporal tracking
Affiliation:
1. Center for Future Media, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, PR China;
2. School of Astronautics and Aeronautics, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, PR China;
3. School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, PR China;
1. Graduate School of System Informatics, Kobe University, 1-1 Rokkodai, Nada, Kobe 657-8501, Japan;
2. Faculty of Engineering, University of Tokushima, 2-1 Minamijosanjima, Tokushima 770-8506, Japan;
1. Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO, USA;
2. Department of Computer Science, Colorado School of Mines, Golden, CO, USA
Abstract: Automatic semantic video object extraction is an important step toward content-based video coding, indexing, and retrieval. However, it is very difficult to design a generic extraction technique that can produce different kinds of semantic video objects with a single procedure. Since the presence or absence of persons in an image sequence provides important clues about video content, automatic face detection and human object generation are very attractive for content-based video database applications. For this reason, we propose a novel face detection and semantic human object generation algorithm. Homogeneous image regions with accurate boundaries are first obtained by integrating the results of color edge detection and region-growing procedures. Human faces are then detected within these homogeneous regions using skin-color segmentation and facial filters. The detected faces serve as object seeds for semantic human object generation. Finally, the correspondences between the detected faces and semantic human objects along the time axis are established by a contour-based temporal tracking procedure.
Keywords:
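To make the face-detection stage of the pipeline more concrete, the sketch below illustrates how skin-color segmentation followed by candidate-region extraction might look in Python with OpenCV. This is only a minimal illustration, not the authors' implementation: the YCrCb chrominance thresholds, the morphological clean-up, and the minimum-area parameter are illustrative assumptions rather than values from the paper, and the paper's color-edge/region-growing integration and facial filters are not reproduced here.

```python
import cv2
import numpy as np


def skin_color_mask(frame_bgr):
    """Rough skin-color segmentation in YCrCb space.

    The Cr/Cb bounds below are commonly used illustrative values,
    not the thresholds used in the paper.
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before region analysis.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


def candidate_face_regions(mask, min_area=500):
    """Return bounding boxes of skin-colored regions large enough to be face candidates."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:
            boxes.append(cv2.boundingRect(contour))  # (x, y, w, h)
    return boxes


if __name__ == "__main__":
    # Example usage on a single frame (path is a placeholder).
    frame = cv2.imread("frame.png")
    mask = skin_color_mask(frame)
    print(candidate_face_regions(mask))
```

In the paper's approach, such candidate regions would additionally be verified by facial filters and constrained by the homogeneous regions from edge-guided region growing before being used as object seeds for tracking.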
This article is indexed in ScienceDirect and other databases.