Similar Documents
20 similar documents found
3.
We present a system for multimedia event detection. The system characterizes complex multimedia events with a large array of multimodal features and classifies unseen videos by effectively fusing the diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel Latent SVM model that learns and localizes discriminative high-level concepts in cluttered video sequences. Beyond improving detection accuracy over existing approaches, it produces a unique summary for every retrieval through its use of high-level concepts and temporal evidence localization; the summary offers some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and a methodology for improving fusion learning under limited-training-data conditions. Thorough evaluation on a large TRECVID MED 2011 dataset showcases the benefits of the presented system.
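A minimal sketch of the temporal-evidence-localization idea in the Latent SVM step, assuming per-segment concept detector responses; the window size, aggregation, and all names are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: for each high-level concept, treat the window
# position whose evidence best supports the event as a latent variable.
import numpy as np

def localize_evidence(concept_scores, window=5):
    """concept_scores: (num_segments, num_concepts) array of per-segment
    concept detector responses. Returns, per concept, the best window
    start and its aggregate score (the latent placement)."""
    num_segments, num_concepts = concept_scores.shape
    best = []
    for c in range(num_concepts):
        # aggregate each sliding window by its mean response
        means = np.convolve(concept_scores[:, c],
                            np.ones(window) / window, mode="valid")
        start = int(np.argmax(means))
        best.append((start, float(means[start])))
    return best  # max-scoring evidence would feed the SVM event score

scores = np.random.rand(40, 3)       # 40 segments, 3 concepts
print(localize_evidence(scores))     # [(start, score), ...] per concept
```

The chosen window is what makes the retrieval summary possible: the event score is backed by an explicit temporal placement of each concept.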

4.
Multimedia event detection (MED) is a challenging problem because of the heterogeneous content and variable quality found in large collections of Internet videos. To study the value of multimedia features and fusion for representing and learning events from a set of example video clips, we created SESAME, a system for video SEarch with Speed and Accuracy for Multimedia Events. SESAME includes multiple bag-of-words event classifiers based on single data types: low-level visual, motion, and audio features; high-level semantic visual concepts; and automatic speech recognition. Event detection performance was evaluated for each event classifier. The performance of low-level visual and motion features was improved by the use of difference coding. The accuracy of the visual concepts was nearly as strong as that of the low-level visual features. Experiments with a number of fusion methods for combining the event detection scores from these classifiers revealed that simple fusion methods, such as the arithmetic mean, perform as well as or better than other, more complex fusion methods. SESAME's performance in the 2012 TRECVID MED evaluation was one of the best reported.
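A minimal sketch of the arithmetic-mean late fusion that the experiments favored, assuming per-classifier scores already normalized to a common range; the classifier labels are illustrative:

```python
# Minimal late-fusion sketch: average the normalized detection scores of
# the single-modality event classifiers. Column order is an assumption.
import numpy as np

def fuse_scores(score_matrix):
    """score_matrix: (num_videos, num_classifiers), each column the
    normalized output of one single-modality event classifier."""
    return score_matrix.mean(axis=1)

scores = np.array([[0.9, 0.7, 0.4],   # e.g. visual, motion, audio
                   [0.2, 0.1, 0.3]])
print(fuse_scores(scores))            # fused event-detection scores
```

Because each column comes from a different modality, even this unweighted mean exploits the classifiers' complementary errors, consistent with the finding that simple fusion matches more complex schemes.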

6.
In recent years, video face-swapping techniques have advanced rapidly. They can be used to forge videos that influence political activity or secure improper gains, causing serious harm to society, and they have drawn wide attention from governments and the public. By analyzing the current mainstream face-swap generation and detection techniques, this paper shows that mainstream generation methods leave forgery traces and generation losses in both the temporal and spatial domains, whereas most existing neural-network detectors of synthesized face video consider only the spatial features of single images and overfit noticeably in practical detection. To address these shortcomings, we propose an efficient detection algorithm that combines the spatial and temporal domains and captures the forgery traces of face-swapped video in both at once. For the per-frame spatial features, a fully convolutional module built on 3D convolutions precisely extracts the forgery traces of each frame in the frame array; for the temporal features of the frame array, a convolutional long short-term memory (ConvLSTM) module detects the temporal forgery traces between forged frames; finally, a feature pyramid structure fuses spatio-temporal features of different scales, improving classification through multi-scale fusion and reducing overfitting. Compared with existing methods, our method converges better during training and classifies more accurately; it also uses fewer parameters while maintaining detection accuracy, so training is more efficient than with existing architectures.
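A hedged PyTorch sketch of the spatio-temporal pairing described above: 3D convolutions for per-frame spatial traces and a recurrent module for temporal traces. A plain LSTM over pooled features stands in for the paper's ConvLSTM, the pyramid fusion is omitted, and all layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class SpatioTemporalDetector(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # spatial module: 3D convs over (channels, frames, H, W)
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),   # keep the time axis
        )
        # stand-in for ConvLSTM: LSTM over spatially pooled features
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # real vs. forged

    def forward(self, clip):                      # clip: (B, 3, T, H, W)
        f = self.conv(clip)                       # (B, 32, T, 4, 4)
        f = f.permute(0, 2, 1, 3, 4).flatten(2)   # (B, T, 32*16)
        out, _ = self.lstm(f)                     # temporal traces
        return self.head(out[:, -1])              # score from last step

logits = SpatioTemporalDetector()(torch.randn(2, 3, 8, 64, 64))
print(logits.shape)                               # torch.Size([2, 2])
```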

8.
Objective: Stereoscopic video is increasingly popular for the immersive realism it provides, and visual saliency detection can automatically predict, locate, and mine important visual information, helping machines filter massive amounts of multimedia data. To improve salient-region detection in stereoscopic video, this paper proposes a saliency detection model that fuses binocular multi-dimensional perceptual cues. Method: Saliency is computed along three dimensions of the stereoscopic video: space, depth, and time. First, a 2D image saliency map is computed from spatial image features using a Bayesian model; next, a depth saliency map is obtained from binocular perceptual features; then, motion features of local inter-frame regions are computed with the Lucas-Kanade optical flow method to obtain a temporal saliency map; finally, the three maps are merged by a fusion method based on global-region difference magnitudes to yield the final distribution of salient regions in the stereoscopic video. Result: Experiments on stereoscopic video sequences of different types show that the model attains 80% precision and 72% recall at relatively low computational complexity, outperforming existing saliency detection models. Conclusion: The proposed model effectively extracts the salient regions of stereoscopic video and can be applied to stereoscopic video/image coding, stereoscopic video/image quality assessment, and related fields.
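A hedged numpy sketch of the final fusion stage only, assuming three normalized saliency maps; the percentile-based salient-region proxy is an assumption standing in for the paper's exact global-region difference measure:

```python
import numpy as np

def fuse_saliency(maps):
    """maps: list of 2D saliency maps in [0, 1] (spatial, depth, temporal).
    Weight each map by the contrast between its strongest region and its
    global mean (a proxy for the global-region difference), then blend."""
    weights = []
    for m in maps:
        top = m[m >= np.percentile(m, 90)].mean()  # "salient region" proxy
        weights.append(top - m.mean())             # global-region difference
    w = np.clip(np.array(weights), 1e-6, None)
    w = w / w.sum()
    return sum(wi * mi for wi, mi in zip(w, maps))

s2d, sdepth, stime = (np.random.rand(120, 160) for _ in range(3))
final = fuse_saliency([s2d, sdepth, stime])
print(final.shape, final.min() >= 0)
```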

9.
Motion pattern segmentation and dynamic texture classification based on chaotic features
Wang Yong, Hu Shiqiang. Acta Automatica Sinica (自动化学报), 2014, 40(4): 604-614
Chaos theory is used to model the pixel-value series of a dynamic texture: characteristic quantities are extracted from each pixel series so that the video is represented as a matrix of feature vectors. Clustering the feature vectors of this matrix with the mean shift algorithm segments the motion patterns in the video. The earth mover's distance (EMD) then measures the difference between videos to classify dynamic-texture videos. Tests on multiple databases show that: 1) the segmentation algorithm separates the distinct motion patterns in a video; 2) the proposed feature vectors describe dynamic-texture systems well; and 3) the classification algorithm classifies dynamic-texture videos and is fairly robust to noise in the video.
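A hedged sketch of the two stages with scikit-learn and scipy: MeanShift clustering of per-pixel feature vectors for motion-pattern segmentation, and a one-dimensional Wasserstein (earth mover's) distance between feature summaries; the random features stand in for the paper's chaotic invariants, and the bandwidth is an assumption:

```python
import numpy as np
from sklearn.cluster import MeanShift
from scipy.stats import wasserstein_distance

h, w, d = 24, 32, 3
features = np.random.rand(h * w, d)   # one chaotic feature vector per pixel

# stage 1: mean shift clustering -> motion-pattern segmentation map
labels = MeanShift(bandwidth=0.4).fit_predict(features)
segmentation = labels.reshape(h, w)
print(np.unique(segmentation))

# stage 2: compare two videos by EMD on a 1D feature summary
video_a, video_b = features[:, 0], np.random.rand(h * w)
print(wasserstein_distance(video_a, video_b))
```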

11.
This paper presents a unified approach to analyzing and structuring the content of videotaped lectures for distance-learning applications. By structuring lecture videos, we can support topic indexing and semantic querying of multimedia documents captured in traditional classrooms. Our goal is to automatically construct cross-references between lecture videos and textual documents so as to facilitate synchronized browsing and presentation of multimedia information. The major issues involved are topical event detection, video text analysis, and the matching of slide shots to external documents. For topical event detection, a novel transition detector rapidly locates slide shot boundaries by computing the changes of text and background regions in videos. For each detected topical event, multiple keyframes are extracted for video text detection, super-resolution reconstruction, binarization, and recognition. A new approach to reconstructing high-resolution textboxes, based on linear interpolation and multi-frame integration, is also proposed for effective binarization and recognition. The recognized characters are used to match video slide shots to external documents based on our proposed title and content similarity measures.
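A hedged OpenCV sketch of the slide-transition idea: approximate each frame's text regions by adaptive binarization and flag a boundary when the text mask changes sharply; the masking rule and threshold are illustrative assumptions, not the paper's detector:

```python
import cv2
import numpy as np

def text_mask(gray):
    # crude stand-in for the paper's text-region analysis
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 15, 10)

def is_slide_transition(frame_a, frame_b, thresh=0.01):
    a = text_mask(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY))
    b = text_mask(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY))
    changed = np.count_nonzero(a != b) / a.size  # fraction of flipped pixels
    return changed > thresh                      # large text change => new slide

f1 = np.full((240, 320, 3), 255, np.uint8)       # blank slide
f2 = f1.copy()
cv2.putText(f2, "Slide 2", (40, 120),
            cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 0), 3)
print(is_slide_transition(f1, f2))               # expect True
```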

12.
With the proliferation of digital cameras and mobile devices, people are taking far more photos than ever before. However, these photos can be redundant in content and varied in quality, so there is a growing need for tools to manage photo collections. One efficient approach is photo-collection summarization, which segments the collection into events and then selects a set of representative, high-quality photos (key photos) from those events. Existing summarization methods, however, mainly consider low-level features such as color and texture for photo representation, ignoring other useful features such as high-level semantics and location; moreover, they often return fixed summarization results that provide little flexibility. In this paper, we propose a multi-modal and multi-scale photo-collection summarization method that leverages multi-modal features, including time, location, and high-level semantic features. We first use a Gaussian mixture model to segment the photo collection into events. With images represented by these multi-modal features, our event segmentation algorithm performs better because the features capture the inhomogeneous structure of events more faithfully. We then propose a novel key-photo ranking and selection algorithm that selects representative, high-quality photos from the events for summarization, taking the importance of both events and photos into consideration. Furthermore, our method allows users to control the scale of event segmentation and the number of key photos selected. We evaluate the method through extensive experiments on four photo collections; the results demonstrate that it outperforms previous photo-collection summarization methods.
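A hedged scikit-learn sketch of the event-segmentation stage, assuming each photo is reduced to a capture time and a 2D location (a stand-in for the paper's full multi-modal features); the clusters are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# three synthetic "events": clusters in (hours-since-start, lat, lon)
photos = np.vstack([
    rng.normal([2, 40.0, 116.0], [0.5, 0.01, 0.01], (30, 3)),
    rng.normal([30, 31.2, 121.5], [0.5, 0.01, 0.01], (30, 3)),
    rng.normal([60, 39.9, 116.4], [0.5, 0.01, 0.01], (30, 3)),
])

gmm = GaussianMixture(n_components=3, random_state=0).fit(photos)
events = gmm.predict(photos)          # event label per photo
print(np.bincount(events))            # photos per event
```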

13.
In this paper, a subspace-based multimedia data mining framework is proposed for video semantic analysis, specifically video event/concept detection, addressing two basic issues: the semantic gap and rare event/concept detection. The proposed framework achieves full automation via multimodal content analysis and intelligent integration of distance-based and rule-based data mining techniques. The content analysis process facilitates comprehensive video analysis by extracting low-level and middle-level features from the audio/visual channels. The integrated data mining techniques address the two issues by alleviating class imbalance along the process and by reconstructing and refining the feature dimensions automatically. Promising experimental performance on goal/corner event detection and on sports/commercials/building concept extraction from soccer videos and TRECVID news collections demonstrates the effectiveness of the framework. Furthermore, its domain-free characteristic indicates great potential for extending the framework to a wide range of application domains.

14.
Artificial Intelligence, 2007, 171(8-9): 586-605
In this paper, we model multi-agent events as a temporally varying sequence of sub-events and propose a novel approach for learning, detecting, and representing events in videos. The approach has three main steps. First, to learn the event structure from training videos, we automatically encode the sub-event dependency graph, the learnt event model that depicts the conditional dependency between sub-events. Second, we pose event detection in novel videos as clustering the maximally correlated sub-events using normalized cuts, under the principal assumption that events are composed of highly correlated chains of sub-events with high weights (association) within a cluster and relatively low weights (disassociation) between clusters. Event detection requires no prior knowledge of the number of agents involved and makes no assumptions about event length. Third, recognizing that any abstract event model should extend to representations aligned with human understanding of events, we propose an extension of the CASE representation of natural languages that provides a plausible interface between users and the computer. We show results of learning, detection, and representation of events for videos in the meeting, surveillance, and railroad-monitoring domains.
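A hedged scikit-learn sketch of the detection step: spectral clustering (the normalized-cut relaxation) over a precomputed sub-event correlation matrix; the synthetic affinities below encode two highly correlated chains:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
n = 12
affinity = rng.random((n, n)) * 0.2
affinity[:6, :6] += 0.8               # one highly correlated chain
affinity[6:, 6:] += 0.8               # a second chain
affinity = (affinity + affinity.T) / 2  # symmetric sub-event correlations
np.fill_diagonal(affinity, 1.0)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)                          # sub-event -> event assignment
```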

15.
In recent years, with the constant upgrading of computer hardware and the development of deep learning, newly emerged multimedia manipulation tools have made it easier to tamper with the faces in a video. Face-forgery videos produced with these tools are almost imperceptible to the naked eye, so effective means of detecting them are urgently needed. Popular video face-manipulation techniques mainly include the autoencoder-based Deepfake technique and the computer-graphics-based Face2face technique. We observe that inter-frame differences in the face region of forged videos are markedly larger than those of untouched videos, so the difference between face images in adjacent frames is an important cue for forgery detection. In this paper, we propose a new inter-frame-difference-based framework for detecting face-forgery videos. We first validate the framework with a detection method based on traditional handcrafted features, namely local binary pattern (LBP) and histogram of oriented gradients (HOG) features. We then combine it with a deep-learning method based on a Siamese network, which further strengthens the face-image feature representation and improves detection. On the FaceForensics++ dataset, the LBP/HOG-based method achieves high detection accuracy, and the Siamese-network-based method achieves higher accuracy still with strong robustness; here, robustness means that a detector maintains high accuracy under three conditions: representing the inter-frame face differences in two different ways, computing the differences from frame pairs at three different intervals, and training and testing at different compression rates.
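A hedged scikit-image sketch of the handcrafted branch: describe the absolute difference of aligned face crops from adjacent frames with LBP and HOG statistics; crop alignment and classifier training are omitted, and all sizes are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def frame_diff_descriptor(face_a, face_b):
    """face_a, face_b: aligned grayscale face crops from adjacent frames."""
    diff = np.abs(face_a.astype(int) - face_b.astype(int)).astype(np.uint8)
    # LBP histogram of the difference image ("uniform" gives values 0..9)
    lbp = local_binary_pattern(diff, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # HOG of the difference image
    hog_vec = hog(diff, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])  # would feed a classifier

a, b = (np.random.randint(0, 256, (64, 64)) for _ in range(2))
print(frame_diff_descriptor(a, b).shape)
```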

16.
Hashing is a common solution for content-based multimedia retrieval, encoding high-dimensional feature vectors into short binary codes. Previous works mainly focus on the image hashing problem, but those methods cannot be used directly for video hashing, as videos contain not only spatial structure within each frame but also temporal correlation between successive frames. Several researchers proposed to handle this by encoding extracted key frames, but such frame-based methods are time-consuming in real applications. Others proposed to characterize a video by averaging the spatial features of its frames so that existing hashing methods can be adopted; unfortunately, this sort of "video" feature ignores the correlation between frames and may lose the temporal information. In this paper, we therefore propose a novel unsupervised video hashing framework based on a deep neural network, which performs video hashing by incorporating the temporal structure as well as the conventional spatial structure. Specifically, the spatial features of videos are obtained with a convolutional neural network, and the temporal features are established via long short-term memory. A time-series pooling strategy then yields a single feature vector for each video. The obtained spatio-temporal feature can be applied to many existing unsupervised hashing methods. Experimental results on two real datasets indicate that by employing spatio-temporal features, our hashing method significantly improves the performance of existing methods that deploy only spatial features, while obtaining higher mean average precision than state-of-the-art video hashing methods.
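A hedged PyTorch sketch of the pipeline's shape: a small CNN stands in for the paper's convolutional features, an LSTM models frame order, mean pooling over time gives one vector per video, and sign() produces the binary code; training objectives are omitted and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class VideoHasher(nn.Module):
    def __init__(self, code_bits=48):
        super().__init__()
        # per-frame spatial features (toy CNN stand-in)
        self.cnn = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(16, 64, batch_first=True)
        self.proj = nn.Linear(64, code_bits)

    def forward(self, frames):                        # (B, T, 3, H, W)
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).flatten(1)  # (B*T, 16)
        out, _ = self.lstm(f.view(b, t, -1))          # temporal structure
        pooled = out.mean(dim=1)                      # time-series pooling
        return torch.sign(self.proj(pooled))          # binary hash code

codes = VideoHasher()(torch.randn(2, 10, 3, 32, 32))
print(codes.shape, codes.unique())                    # codes in {-1, +1}
```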

17.
Recently, newly invented features (e.g., the Fisher vector and VLAD) have achieved state-of-the-art performance in large-scale video analysis systems that aim to understand the contents of videos, such as concept recognition and event detection. However, these features are high-dimensional representations, which markedly increases computation costs and degrades the performance of subsequent learning tasks. Notably, the situation becomes even worse with large-scale video data where class labels are scarce. To address this problem, we propose a novel algorithm to compactly represent huge amounts of unconstrained video data: redundant feature dimensions are removed by our proposed feature selection algorithm. Considering that unlabeled videos are easy to obtain on the web, we apply this feature selection algorithm in a semi-supervised framework that copes with the shortage of class information. Unlike most existing semi-supervised feature selection algorithms, ours does not rely on manifold approximation, i.e., the graph Laplacian, which is quite expensive for large numbers of data points; it is therefore practical for a real large-scale video analysis system. Besides, owing to the difficulty of solving the non-smooth objective function, we develop an efficient iterative approach for seeking the global optimum. Extensive experiments on several real-world video datasets, including KTH, CCV, and HMDB, demonstrate the effectiveness of the proposed algorithm.
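A hedged numpy sketch of the row-sparse (l2,1-regularized) selection idea behind many such algorithms, solved by iterative reweighting; the paper's semi-supervised handling of unlabeled data is omitted, and the data are synthetic:

```python
import numpy as np

def l21_feature_ranking(X, Y, lam=0.1, iters=50):
    """X: (n, d) data, Y: (n, c) one-hot labels. Returns feature scores:
    the row norms of W from min ||XW - Y||^2 + lam * ||W||_{2,1}."""
    n, d = X.shape
    D = np.eye(d)
    for _ in range(iters):
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        row_norms = np.linalg.norm(W, axis=1) + 1e-8
        D = np.diag(1.0 / (2.0 * row_norms))  # reweight toward row sparsity
    return np.linalg.norm(W, axis=1)          # high norm = useful feature

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(int)       # only features 3 and 7 matter
Y = np.eye(2)[y]
print(np.argsort(l21_feature_ranking(X, Y))[-2:])  # expect {3, 7}
```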

18.
This paper presents an approach for event detection and annotation in broadcast soccer video. It exploits the fact that the occurrence of certain audiovisual features exhibits distinctive patterns for semantic events. The goal, however, is a flexible system that can be used with minimal reliance on predefined feature sequences and structures derived from domain knowledge. To achieve this, we design a fuzzy rule-based reasoning system as a classifier, which takes statistical information from a set of audiovisual features as its crisp input values and produces the semantic concepts corresponding to the occurring events. A set of tuples is created by discretizing and fuzzifying the continuous feature vectors derived from the training data. We extract the knowledge hidden in these tuples, and the correlation between features and events, by constructing a decision tree (DT). A set of fuzzy rules is generated by traversing each path from the root to the leaf nodes of the constructed DT. These rules populate the fuzzy rule base and are employed by the fuzzy inference engine to perform decision-making and predict the events occurring in an input video. Experimental results on a large set of broadcast soccer videos demonstrate the effectiveness of the proposed approach.
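A hedged scikit-learn sketch of the rule-extraction idea: fit a decision tree on discretized feature statistics and read each root-to-leaf path as a crisp candidate rule (the paper then fuzzifies such rules); the features and event below are synthetic stand-ins:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# columns: crowd-cheer level, goal-area activity, replay-logo presence
X = rng.integers(0, 3, size=(300, 3)).astype(float)
y = ((X[:, 0] == 2) & (X[:, 1] == 2)).astype(int)  # "goal" iff both high

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["cheer", "goal_area", "replay"]))
# each printed root-to-leaf path is one candidate rule for the rule base
```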

19.
Adverse weather conditions such as snow, fog, or heavy rain greatly reduce the visual quality of outdoor surveillance video. Video quality enhancement can provide clearer images with more detail, better meeting human perception needs and improving video analytics performance. Existing work in this area mainly targets quality enhancement for high-resolution videos or still images; few algorithms are developed for surveillance videos, which normally have low resolution, high noise, and compression artifacts. In addition, under snow or rain, the image quality of the near-field view is degraded by the obscuration of distinct snowflakes or raindrops, while the far-field view is degraded by fog-like snow or rain, and very few enhancement algorithms handle both problems. In this paper, we propose a novel video quality enhancement algorithm for seeing through snow, fog, or heavy rain. It not only improves the human visual experience of video surveillance but also reveals more video content for better analytics. The algorithm handles both near-field and far-field snow/rain effects via a two-step approach. (1) The near-field enhancement step identifies pixels obscured by snow or rain in the near-field view and removes them as snowflakes or raindrops; unlike state-of-the-art methods, it can detect snowflakes on foreground objects as well as on the background, and applies different methods to fill in the removed regions. (2) The far-field enhancement step restores the image's contrast information, both revealing more detail in the far-field view and enhancing overall image quality; inspired by the human visual system, it adaptively enhances global and local contrast while accounting for perceptual sensitivity to noise, compression artifacts, and the texture of the image content. Extensive testing shows that the proposed approach significantly improves the visual quality of surveillance videos by removing snow/fog/rain effects.
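A hedged OpenCV sketch of the two-step shape of the algorithm: mask bright near-field snowflake-like pixels and inpaint them, then lift far-field contrast with CLAHE on the lightness channel; the thresholds and detectors are illustrative assumptions, not the paper's:

```python
import cv2
import numpy as np

def enhance(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # (1) near-field: treat very bright pixels as snow/rain and fill them in
    _, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    cleaned = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    # (2) far-field: adaptive local contrast restoration on lightness
    lab = cv2.cvtColor(cleaned, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

frame = np.random.randint(0, 255, (240, 320, 3), np.uint8)
print(enhance(frame).shape)
```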

20.
Abnormality detection in crowded scenes plays a very important role in the automatic monitoring of surveillance feeds. Here we present a novel framework for abnormality detection in crowd videos. The key idea is that rarely or sparsely occurring events correspond to abnormal activities, while regularly or commonly occurring events correspond to normal activities. Each input video is represented by feature matrices that capture the nature of the activity taking place while maintaining the spatial and temporal structure of the video. The feature matrices are decomposed into low-rank and sparse components, where the sparse component corresponds to abnormal activities. The approach requires no explicit modeling of crowd behavior and no training, but information from training data can be seamlessly incorporated if available. The estimation is further improved by enforcing temporal and spatial coherence of the sparse component across videos using a Kalman-filter-like framework, which not only reduces outliers and noise but also fills missing regions of the sparse component. Localization of the anomalies is obtained as a by-product of the approach. Evaluation on the UMN and UCSD datasets, and comparisons with several state-of-the-art crowd abnormality detection approaches, show the effectiveness of the proposed approach. We also show results on a challenging crowd dataset created as part of this effort, with videos downloaded from the web.
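A hedged numpy sketch of the low-rank plus sparse decomposition: a simple alternating scheme (fixed-rank SVD projection plus soft-thresholding, in the spirit of robust PCA / GoDec), not the paper's exact solver; M stands in for a video's feature matrix:

```python
import numpy as np

def low_rank_sparse(M, rank=1, lam=0.5, iters=20):
    """Alternate a best rank-r fit (low-rank, 'normal' activity) with
    soft-thresholding of the residual (sparse, rare activity)."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]   # low-rank part
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0)   # sparse outliers
    return L, S

rng = np.random.default_rng(0)
M = np.outer(rng.random(40), rng.random(60))   # rank-1 "normal" pattern
M[10, 20] += 5.0                                # one abnormal spike
L, S = low_rank_sparse(M)
print(np.unravel_index(np.abs(S).argmax(), S.shape))  # locates the spike
```

The sparse component directly localizes the anomaly, which mirrors how localization falls out of the decomposition as a by-product.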

