Multi-modal tag localization for mobile video search
Authors: Rui Zhang, Sheng Tang, Wu Liu, Yongdong Zhang, Jintao Li
Affiliation: 1. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; 2. Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing, China; 3. University of Chinese Academy of Sciences, Beijing, China
Abstract: Given the tremendous growth of mobile videos, video tag localization, which localizes the video clips relevant to an associated semantic tag, is becoming increasingly important to users' browsing and search experience. However, most existing approaches depend heavily on carefully selected visual features that are manually designed by experts, and they do not take multi-modality into consideration. To exploit the complementarity of different modalities, in this paper we propose a multi-modal tag localization framework that uses deep learning to learn visual, auditory, and semantic features of videos for tag localization. Furthermore, we show that the framework can be applied to two novel mobile video search applications: (1) automatic generation of time-code-level tags and (2) query-dependent video thumbnail selection. Extensive experiments on a public dataset show that the proposed approach achieves promising results, with a 7.6% improvement over the state of the art. Finally, a subjective usability evaluation demonstrates that the proposed applications significantly improve the user's mobile video search experience.
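As a rough illustration of the fusion-and-ranking idea described in the abstract, the Python sketch below scores candidate video clips against a tag embedding by fusing per-clip visual, auditory, and semantic features, then returns the most relevant clips. The feature dimensions, the concatenation-based fusion, and the cosine scoring are illustrative assumptions for this sketch, not the networks or fusion scheme actually used in the paper.

import numpy as np

def fuse_features(visual, audio, semantic):
    # Late fusion by simple concatenation (one choice among many; the
    # paper's actual fusion scheme is not reproduced here).
    return np.concatenate([visual, audio, semantic])

def relevance(clip_feat, tag_feat):
    # Cosine similarity between a fused clip feature and a tag embedding,
    # assuming both are embedded in the same vector space.
    denom = np.linalg.norm(clip_feat) * np.linalg.norm(tag_feat) + 1e-8
    return float(clip_feat @ tag_feat) / denom

def localize_tag(clips, tag_feat, top_k=1):
    # clips: list of (visual, audio, semantic) feature triples, one per
    # candidate clip; tag_feat: embedding of the query tag.
    # Returns indices of the top_k clips most relevant to the tag,
    # i.e. the localized segments.
    scores = [relevance(fuse_features(v, a, s), tag_feat) for v, a, s in clips]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]

if __name__ == "__main__":
    # Toy demo with random features (dimensions are arbitrary assumptions).
    rng = np.random.default_rng(0)
    clips = [(rng.normal(size=128), rng.normal(size=64), rng.normal(size=64))
             for _ in range(10)]
    tag = rng.normal(size=256)  # 128 + 64 + 64
    print(localize_tag(clips, tag, top_k=3))

Under the same assumptions, the identical ranking step could drive the query-dependent thumbnail application mentioned above: return the keyframe of the top-scoring clip for the user's query.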
This article has been indexed by SpringerLink and other databases.