Title: Multimedia event detection with multimodal feature fusion and temporal concept localization
Authors: Sangmin Oh, Scott McCloskey, Ilseo Kim, Arash Vahdat, Kevin J. Cannons, Hossein Hajimirsadeghi, Greg Mori, A. G. Amitha Perera, Megha Pandey, Jason J. Corso
Affiliations:
1. Kitware Inc., Clifton Park, New York, USA
2. Honeywell Labs, Minneapolis, USA
3. School of Computing Science, Simon Fraser University, Burnaby, Canada
4. Department of Computer Science and Engineering, SUNY at Buffalo, Buffalo, USA
Abstract: We present a system for multimedia event detection. The system characterizes complex multimedia events using a large array of multimodal features and classifies unseen videos by effectively fusing their diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, building mid-level and high-level features upon low-level features, often in an unsupervised manner, to enable semantic understanding. Second, we introduce a novel latent SVM model that learns and localizes discriminative high-level concepts in cluttered video sequences. Beyond improving detection accuracy over existing approaches, its use of high-level concepts and temporal evidence localization yields a unique summary for each retrieval, providing some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and a methodology for improving fusion learning under limited training data conditions. Thorough evaluation on the large TRECVID MED 2011 dataset showcases the benefits of the presented system.
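The two core ideas in the abstract, latent temporal localization (scoring a video by its best-scoring segment and reporting which segment provided the evidence) and score-level fusion across modalities, can be sketched as follows. This is a minimal illustration with made-up feature dimensions, function names, and fixed fusion weights; it is not the paper's actual latent SVM or fusion learning implementation.

```python
import numpy as np

def latent_svm_score(segment_feats, w, b=0.0):
    """Score a video as the maximum over its temporal segments
    (the latent variable), returning the score and the index of the
    localized segment. Segments are rows of concept features."""
    scores = segment_feats @ w + b
    h = int(np.argmax(scores))        # localized temporal evidence
    return float(scores[h]), h

def late_fuse(modality_scores, weights):
    """Weighted score-level fusion of per-modality detector outputs.
    The weights here are fixed for illustration; the paper learns them."""
    return float(np.dot(modality_scores, weights))

# Toy example: 4 temporal segments, 3-dimensional concept features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
w = np.array([1.0, -0.5, 0.2])        # hypothetical learned weight vector
score, h = latent_svm_score(feats, w)
fused = late_fuse(np.array([score, 0.25]), np.array([0.6, 0.4]))
```

Returning the arg-max segment index `h` alongside the score is what makes the per-retrieval summary possible: the system can point at the segment whose concepts triggered the detection.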