

Learning Bimodal Structure in Audio–Visual Data
Abstract: A novel model is presented to learn bimodally informative structures from audio–visual signals. The signal is represented as a sparse sum of audio–visual kernels. Each kernel is a bimodal function consisting of synchronous snippets of an audio waveform and a spatio–temporal visual basis function. To represent an audio–visual signal, the kernels can be positioned independently and arbitrarily in space and time. The proposed algorithm uses unsupervised learning to form dictionaries of bimodal kernels from audio–visual material. The basis functions that emerge during learning capture salient audio–visual data structures. In addition, it is demonstrated that the learned dictionary can be used to locate sources of sound in the movie frame. Specifically, in sequences containing two speakers, the algorithm can robustly localize a speaker even in the presence of severe acoustic and visual distracters.
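The core representation described above is a sparse, shift-invariant sum of bimodal kernels, each pairing an audio snippet with a synchronous video patch. The following toy sketch illustrates the idea with a greedy matching-pursuit step over temporal shifts; it is not the paper's algorithm. All dimensions, the random dictionary, and the audio-only matching criterion are simplifying assumptions (the paper's learned kernels are bimodal and are also positioned in space).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
A_LEN, V_FRAMES, V_H, V_W = 64, 4, 8, 8
SIG_LEN = 256  # temporal length of the audio track (video assumed synchronous)

def random_kernel():
    """A bimodal kernel: a unit-norm audio snippet plus a spatio-temporal video patch."""
    a = rng.standard_normal(A_LEN)
    a /= np.linalg.norm(a)
    v = rng.standard_normal((V_FRAMES, V_H, V_W))
    v /= np.linalg.norm(v)
    return a, v

dictionary = [random_kernel() for _ in range(8)]

# Build a toy audio signal as a sparse sum of temporally shifted kernels.
audio = np.zeros(SIG_LEN)
planted = [(0, 30, 2.0), (3, 150, -1.5)]  # (kernel index, time shift, coefficient)
for k, t, c in planted:
    audio[t:t + A_LEN] += c * dictionary[k][0]

def best_atom(residual, dictionary):
    """Greedy step: find the (kernel, shift) pair with the largest correlation."""
    best = (None, None, 0.0)
    for k, (a, _) in enumerate(dictionary):
        corr = np.correlate(residual, a, mode="valid")  # scores for all shifts
        t = int(np.argmax(np.abs(corr)))
        if abs(corr[t]) > abs(best[2]):
            best = (k, t, corr[t])
    return best

# Two matching-pursuit iterations: subtract the best atom each time.
residual = audio.copy()
for _ in range(len(planted)):
    k, t, c = best_atom(residual, dictionary)
    residual[t:t + A_LEN] -= c * dictionary[k][0]
    print(f"selected kernel {k} at shift {t}, coefficient {c:.2f}")
```

Because each kernel has unit norm, subtracting the best-correlated atom strictly shrinks the residual, which is why greedy pursuit with a well-adapted (learned) dictionary yields a sparse code; localizing a sound source then amounts to reading off where the selected kernels' visual components fall in the frame.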
