Similar Documents
20 similar documents found.
1.
Multimedia event detection (MED) is a challenging problem because of the heterogeneous content and variable quality found in large collections of Internet videos. To study the value of multimedia features and fusion for representing and learning events from a set of example video clips, we created SESAME, a system for video SEarch with Speed and Accuracy for Multimedia Events. SESAME includes multiple bag-of-words event classifiers based on single data types: low-level visual, motion, and audio features; high-level semantic visual concepts; and automatic speech recognition. Event detection performance was evaluated for each event classifier. The performance of low-level visual and motion features was improved by the use of difference coding. The accuracy of the visual concepts was nearly as strong as that of the low-level visual features. Experiments with a number of fusion methods for combining the event detection scores from these classifiers revealed that simple fusion methods, such as arithmetic mean, perform as well as or better than other, more complex fusion methods. SESAME’s performance in the 2012 TRECVID MED evaluation was one of the best reported.
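The arithmetic-mean fusion that the abstract reports performing so well can be sketched as follows — a minimal illustration, assuming each classifier's scores are first min-max normalized so they are comparable (the normalization choice is an assumption for the sketch, not something the abstract specifies):

```python
import numpy as np

def arithmetic_mean_fusion(score_matrix):
    """Fuse per-classifier detection scores by simple averaging.

    score_matrix: (n_classifiers, n_videos) array of raw scores; each
    row is min-max normalized to [0, 1] so classifiers are comparable.
    """
    mins = score_matrix.min(axis=1, keepdims=True)
    maxs = score_matrix.max(axis=1, keepdims=True)
    normalized = (score_matrix - mins) / np.maximum(maxs - mins, 1e-12)
    return normalized.mean(axis=0)

# Three hypothetical classifiers (visual, motion, audio) scoring four videos.
scores = np.array([
    [0.9, 0.2, 0.4, 0.1],
    [0.8, 0.3, 0.5, 0.2],
    [0.7, 0.1, 0.6, 0.3],
])
fused = arithmetic_mean_fusion(scores)  # one fused score per video
```

Despite its simplicity, this kind of late fusion is hard to beat when the individual classifiers are already well calibrated, which matches the abstract's finding.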

2.
Video annotation refers to labeling video content with semantic index information so that it can be retrieved conveniently. The low-level visual features used in existing video annotation work are difficult to apply directly to annotating professional human actions in sports video. To address this problem, this work uses two-dimensional human-joint-point features from video image sequences and builds a knowledge base of professional actions to annotate the professional actions in sports video. A dynamic programming algorithm compares the differences between human actions across videos, and a co-training learning algorithm is incorporated for semi-automatic annotation of sports video. Experiments on tennis match videos show that the algorithm reaches an action-annotation accuracy of 81.4%, an improvement of 30.5% over existing professional-action annotation methods.
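The dynamic-programming comparison of joint-point sequences described above is commonly realized as dynamic time warping (DTW); the following is a minimal sketch under that assumption — the per-frame cost (mean Euclidean distance over joints) and the toy two-joint data are illustrative, not the paper's:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two pose sequences.

    Each sequence is a list of frames; each frame is a list of (x, y)
    joint coordinates. Frame cost is mean Euclidean distance over joints.
    """
    def frame_cost(fa, fb):
        return sum(math.dist(p, q) for p, q in zip(fa, fb)) / len(fa)

    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = frame_cost(seq_a[i - 1], seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

# Two toy 2-joint pose sequences; the second is a time-stretched copy,
# which DTW aligns at zero cost despite the differing lengths.
a = [[(0, 0), (1, 1)], [(1, 0), (2, 1)], [(2, 0), (3, 1)]]
b = [[(0, 0), (1, 1)], [(0, 0), (1, 1)], [(1, 0), (2, 1)], [(2, 0), (3, 1)]]
```

DTW's tolerance to differing execution speeds is what makes it a natural fit for comparing the same professional action performed at different tempos.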

3.
Recent advances in digital video compression and networks have made video more accessible than ever. However, existing content-based video retrieval systems still suffer from two problems: 1) the semantics-sensitive video classification problem, caused by the semantic gap between low-level visual features and high-level semantic visual concepts; and 2) the integrated video access problem, caused by the lack of efficient video database indexing, automatic video annotation, and concept-oriented summary organization techniques. In this paper, we propose a novel framework, called ClassView, to make some advances toward more efficient video database indexing and access. 1) A hierarchical semantics-sensitive video classifier is proposed to shorten the semantic gap. Its hierarchical tree structure is derived from the domain-dependent concept hierarchy of video contents in a database. Relevance analysis is used to select discriminating visual features with suitable importance weights, and the Expectation-Maximization (EM) algorithm is used to determine the classification rule for each visual concept node in the classifier. 2) A hierarchical video database indexing and summary presentation technique is proposed to support more effective video access over a large-scale database. The hierarchical tree structure of the indexing scheme is determined by the same domain-dependent concept hierarchy used for video classification, and the presentation of the visual summary is integrated with the inherent hierarchical indexing tree structure. Integrating video access with an efficient database indexing tree structure provides a great opportunity for supporting more powerful video search engines.
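The EM step used to determine a classification rule at a concept node can be illustrated with a minimal one-dimensional, two-component Gaussian mixture — a deliberate simplification of whatever feature space ClassView actually uses, shown only to make the E-step/M-step alternation concrete:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with EM.

    A stand-in for the EM estimation at a concept node: the E-step
    computes soft component assignments, the M-step re-estimates the
    mixture weights, means, and variances from them.
    """
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out init
    var = np.full(2, x.var()) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -((x[:, None] - mu) ** 2) / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

# Synthetic data: two well-separated clusters around 0 and 8.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(8.0, 1.0, 300)])
pi, mu, var = em_gmm_1d(x)
```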

4.
In this work we are concerned with detecting non-collaborative videos in video sharing social networks. Specifically, we investigate how much visual content-based analysis can aid in detecting ballot stuffing and spam videos in threads of video responses. That is a very challenging task, because of the high-level semantic concepts involved; because of the assorted nature of social networks, which prevents the use of constrained a priori information; and, most importantly, because of the context-dependent nature of non-collaborative videos. Content filtering for social networks is an increasingly demanded task: as their popularity grows, the number of abuses also tends to increase, annoying users and disrupting services. We propose two approaches, each one better adapted to a specific non-collaborative action: ballot stuffing, which tries to inflate the popularity of a given video by giving “fake” responses to it, and spamming, which tries to insert a non-related video as a response in popular videos. We endorse the use of low-level features combined into higher-level feature representations, such as bags of visual features and latent semantic analysis. Our experiments show the feasibility of the proposed approaches.
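The bag-of-visual-features plus latent semantic analysis combination the authors endorse can be sketched with a truncated SVD — a hedged toy example, assuming near-duplicate visual-word count vectors (a possible ballot-stuffing pair) should end up close in the latent space while unrelated spam-like content stays far away:

```python
import numpy as np

def lsa_embed(counts, k=2):
    """Project bag-of-visual-words count vectors into a k-dim latent
    space via truncated SVD (the core of latent semantic analysis)."""
    u, s, vt = np.linalg.svd(counts.astype(float), full_matrices=False)
    return u[:, :k] * s[:k]  # one latent vector per video

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy vocabulary of 5 visual words; videos 0 and 1 are near-duplicates
# (a possible ballot-stuffing pair), video 2 is unrelated content.
counts = np.array([
    [9, 7, 1, 0, 0],
    [8, 8, 0, 1, 0],
    [0, 1, 0, 6, 9],
])
emb = lsa_embed(counts, k=2)
```

In a real filter, high cosine similarity between a response and the video it answers would raise a ballot-stuffing flag, while near-zero similarity would raise a spam flag.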

5.
Automatic Image Semantic Annotation and Its Granular Analysis Methods
Narrowing the gap between low-level visual features and high-level semantics to improve the accuracy of automatic image semantic annotation, and thereby quickly satisfy users' image-retrieval needs, has long been the key issue in automatic image annotation research. Granular analysis is an important, hierarchical data-analysis method that offers a new way of approaching complex problems. Different granularities of image understanding and analysis yield different annotation accuracies, and hence different retrieval efficiency and precision. This paper surveys and analyzes current models for automatic image semantic annotation, describes the ideas and models of granular analysis and their application in the annotation process, explores granular-analysis-based approaches to automatic image semantic annotation, and outlines directions for further research.

6.
7.
Narrowing the gap between low-level visual features and high-level semantics to improve the accuracy of automatic image annotation is key to managing large-scale image data. This paper proposes a deep-learning image annotation method that fuses multiple features: the visual features of an image are combined with different weights into a bag of words, and a deep belief network is optimized according to the input and output variables to perform automatic semantic annotation of large-scale image data. Experiments on the widely used Corel image dataset show that, by accounting for the influence of different image features, the multi-feature deep-learning method improves the accuracy of automatic image annotation.

8.
An Image Retrieval System Based on Deep Learning
The key techniques in a content-based image retrieval (CBIR) system are the extraction of effective image features and the similarity-matching strategy. In the past, CBIR systems relied mainly on low-level visual features and could not deliver satisfactory retrieval results; despite great effort, CBIR therefore remains a challenge in computer vision. The biggest problem in CBIR is the "semantic gap": the difference between the similarity a machine derives from low-level visual features and the similarity a human perceives from high-level semantics. Traditional CBIR systems learn image representations only from low-level visual features and cannot effectively bridge this gap. In recent years, the rapid development of deep learning has offered new hope. Deep learning grew out of research on artificial neural networks; it combines low-level features to form more abstract high-level representations of attribute categories or features, discovering the distributional structure of data in a way other algorithms cannot. Inspired by the great success of deep learning in computer vision, speech recognition, natural language processing, image and video analysis, multimedia, and many other fields, this paper applies deep learning to content-based image retrieval to address the semantic-gap problem.
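Whatever the specific deep model, the retrieval side of such a system reduces to nearest-neighbor search over learned embeddings. The sketch below assumes feature vectors have already been extracted; the 4-dimensional "embeddings" are stand-ins invented for illustration, not real CNN features:

```python
import numpy as np

def retrieve(query_vec, gallery, top_k=3):
    """Rank gallery images by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    order = np.argsort(-sims)[:top_k]  # indices of best matches first
    return order, sims[order]

# Hypothetical embeddings (stand-ins for deep features) for 4 images;
# images 0 and 1 depict similar content, 2 and 3 do not.
gallery = np.array([
    [1.0, 0.1, 0.0, 0.0],
    [0.9, 0.2, 0.1, 0.0],
    [0.0, 0.0, 1.0, 0.9],
    [0.1, 1.0, 0.0, 0.2],
])
query = np.array([1.0, 0.15, 0.05, 0.0])
order, sims = retrieve(query, gallery, top_k=2)
```

The promise of deep features is precisely that cosine neighborhoods in the learned space reflect semantic similarity rather than raw pixel similarity, which is what shrinks the semantic gap.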

9.
10.
11.
12.
We present a system for multimedia event detection. The developed system characterizes complex multimedia events based on a large array of multimodal features, and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel latent SVM model that learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy beyond existing approaches, it enables a unique summary for every retrieval through its use of high-level concepts and temporal evidence localization; the resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology for improving fusion learning under limited-training-data conditions. Thorough evaluation on the large TRECVID MED 2011 dataset showcases the benefits of the presented system.

13.
Methods based on bags of visual words (BoW) derived from local keypoints have recently shown promise for video annotation, and the visual-word weighting scheme has a critical impact on their performance. In this paper, we propose a new visual-word weighting scheme referred to as emerging-patterns weighting (EP-weighting), which efficiently captures the co-occurrence relationships of visual words and improves the effectiveness of video annotation. The proposed scheme first finds emerging patterns (EPs) of visual keywords in the training dataset and then adaptively assigns a weight to each visual word according to the EPs. The adjusted BoW features are used to train classifiers for video annotation. A systematic performance study on a TRECVID corpus containing 20 semantic concepts shows that the proposed scheme is more effective than other popular existing weighting schemes.
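A simplified version of EP-weighting might look as follows. The pair-based emerging-pattern test, the growth-rate threshold, and the flat boost factor are assumptions made for illustration, not the paper's exact mining procedure:

```python
from itertools import combinations

def find_emerging_pairs(pos_docs, neg_docs, min_growth=2.0):
    """Find visual-word pairs whose support 'emerges' in the positive class.

    Docs are sets of visual-word ids; a pair is emerging when its support
    ratio pos/neg exceeds min_growth (a simplified emerging-pattern test).
    """
    def support(pair, docs):
        return sum(1 for d in docs if pair <= d) / len(docs)

    vocab = set().union(*pos_docs, *neg_docs)
    eps = []
    for pair in combinations(sorted(vocab), 2):
        p = support(set(pair), pos_docs)
        n = support(set(pair), neg_docs)
        if p > 0 and p / max(n, 1e-9) >= min_growth:
            eps.append(pair)
    return eps

def ep_weights(doc, eps, base=1.0, boost=2.0):
    """Boost the weight of words that participate in an emerging pattern."""
    boosted = {w for pair in eps if set(pair) <= doc for w in pair}
    return {w: (boost if w in boosted else base) for w in doc}

# Toy training shots for one concept: positives share words 1 and 2.
pos = [{1, 2, 3}, {1, 2}, {1, 2, 4}]
neg = [{3, 4}, {2, 4}, {3}]
eps = find_emerging_pairs(pos, neg)
```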

14.
The recent development of large-scale multimedia concept ontologies has provided a new momentum for research in the semantic analysis of multimedia repositories. Different methods for generic concept detection have been extensively studied, but the question of how to exploit the structure of a multimedia ontology and existing inter-concept relations has not received similar attention. In this paper, we present a clustering-based method for modeling semantic concepts on low-level feature spaces and study the evaluation of the quality of such models with entropy-based methods. We cover a variety of methods for assessing the similarity of different concepts in a multimedia ontology. We study three ontologies and apply the proposed techniques in experiments involving the visual and semantic similarities, manual annotation of video, and concept detection. The results show that modeling inter-concept relations can provide a promising resource for many different application areas in semantic multimedia processing.
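One plausible form of the entropy-based quality evaluation mentioned above is cluster-purity entropy: if a cluster in the low-level feature space mixes several concept labels, its entropy is high and it models a concept poorly. The exact measure is an assumption for this sketch:

```python
import math
from collections import Counter

def cluster_entropy(cluster_labels):
    """Shannon entropy (bits) of the concept-label distribution in one
    cluster; lower entropy means the cluster models one concept purely."""
    counts = Counter(cluster_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def weighted_entropy(clusters):
    """Size-weighted average entropy over all clusters of a model."""
    total = sum(len(c) for c in clusters)
    return sum(len(c) / total * cluster_entropy(c) for c in clusters)

# A pure clustering (each cluster holds one concept) vs a mixed one.
pure = [["sky"] * 4, ["road"] * 4]
mixed = [["sky", "road"] * 2, ["road", "sky"] * 2]
```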

15.
To address the gap between low-level features and high-level semantics in automatic image annotation, an annotation-refinement algorithm based on random dot product graphs is proposed. The algorithm first builds a semantic-relation graph over candidate annotation words using the images' low-level features, then randomly reconstructs the graph with a random dot product graph to recover semantic relations lost from the training image set, and finally applies a random walk with restart to refine the annotations. By combining low-level features with high-level semantics, the algorithm effectively reduces the impact of shrinking image-set size on annotation. Experiments on three standard image collections show that the algorithm effectively improves image annotation, with macro-F and micro-average F scores reaching up to 0.784 and 0.743, respectively.
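The random-walk-with-restart step used for the final refinement can be sketched as follows. The toy tag-affinity graph and the restart probability of 0.15 are illustrative assumptions; the abstract does not give the paper's parameters:

```python
import numpy as np

def random_walk_with_restart(adj, seed_idx, restart=0.15, n_iter=100):
    """Random walk with restart over a tag-affinity graph; the stationary
    vector scores each candidate tag's relevance to the seed tag."""
    # Column-normalize the adjacency matrix into a transition matrix.
    col_sums = adj.sum(axis=0, keepdims=True)
    P = adj / np.maximum(col_sums, 1e-12)
    e = np.zeros(adj.shape[0])
    e[seed_idx] = 1.0
    p = e.copy()
    for _ in range(n_iter):
        p = (1 - restart) * (P @ p) + restart * e
    return p

# Toy affinity graph over 4 candidate tags: tag 0 is the seed, tags
# 0-1-2 form a tight clique, tag 3 is weakly linked (a likely noise tag).
adj = np.array([
    [0.0, 1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.1],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.1, 0.0, 0.0],
])
scores = random_walk_with_restart(adj, seed_idx=0)
```

Tags that are well connected to the seed accumulate probability mass; weakly connected candidates score low and can be pruned from the final annotation vector.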

16.
In content-based image retrieval and computer vision research, connecting low-level visual features to high-level semantic information — that is, effectively extracting the semantic concepts an image expresses from its low-level features — is one of the most closely studied hard problems, and it becomes even harder when an image contains multiple semantic concepts. In this paper, we propose a semantic-concept annotation method based on frequent patterns of low-level image feature values, implement a set of effective pattern-mining algorithms tailored to the characteristics of image blocks, and design an algorithm for generating annotation rules. Experiments on an authoritative real-world dataset show that our method outperforms previous algorithms when annotating images that contain multiple semantic concepts.

17.
In semantics-based video retrieval systems, a temporal probabilistic hypergraph model is proposed to bridge the difference between low-level video features and high-level user needs; it incorporates temporal-sequence information into the construction of the model. On this basis, a video multi-semantic annotation framework based on the temporal probabilistic hypergraph (TPH-VMLAF) is proposed. Exploiting temporal correlation in video, the framework performs multi-semantic annotation of video shots with a multi-label semi-supervised classification learning algorithm for shots based on the temporal probabilistic hypergraph. The annotation process simultaneously addresses the shortage of labeled video data and the multi-label annotation problem. Experimental results show that the framework improves annotation precision and exhibits good performance.

18.
19.
陈祉宏, 冯志勇, 贾宇. 《计算机应用》 (Journal of Computer Applications), 2011, 31(9): 2518-2521
To bridge the semantic gap between low-level image features and high-level semantics, an image annotation method based on a visual-focus weight model and word relatedness is proposed. Because people pay more attention to the focus region when viewing an image, the focus region can be extracted by computing the visual-focus weight of each image region with the model. Since the annotation words of the focus region and those of the other regions are logically related, the image's final annotation vector is determined through WordNet according to word relatedness. Experimental results show that this method improves the accuracy of automatic semantic image annotation.
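One possible reading of the scoring combination — per-region tag scores weighted by visual focus, then boosted by relatedness to the focus region's top tag — is sketched below. The relatedness table stands in for WordNet similarities, and the whole combination rule is an assumption, since the abstract does not give the exact formula:

```python
def final_annotation_scores(region_tags, focus_weights, relatedness):
    """Combine per-region tag scores with visual-focus weights, then boost
    tags semantically related to the focus region's best tag.

    region_tags: list of {tag: score} dicts, one per image region.
    focus_weights: per-region visual-focus weights (summing to 1).
    relatedness: {(tag_a, tag_b): sim} stand-in for WordNet relatedness.
    """
    focus_region = max(range(len(focus_weights)), key=focus_weights.__getitem__)
    focus_tag = max(region_tags[focus_region], key=region_tags[focus_region].get)
    scores = {}
    for w, tags in zip(focus_weights, region_tags):
        for tag, s in tags.items():
            rel = 1.0 if tag == focus_tag else relatedness.get(
                tuple(sorted((tag, focus_tag))), 0.5)
            scores[tag] = scores.get(tag, 0.0) + w * s * rel
    return scores

# Two hypothetical regions; the first carries most of the visual focus.
regions = [{"beach": 0.9, "sand": 0.6}, {"sea": 0.8, "car": 0.7}]
weights = [0.7, 0.3]
rel = {("beach", "sea"): 0.9, ("beach", "car"): 0.1, ("beach", "sand"): 0.8}
scores = final_annotation_scores(regions, weights, rel)
```

Note how "car", although scored highly by its own region, is suppressed because it is unrelated to the focus tag "beach", while "sea" survives.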

20.
Semantic filtering and retrieval of multimedia content is crucial for efficient use of multimedia data repositories. Video query by semantic keywords is one of the most difficult problems in multimedia data retrieval; the difficulty lies in the mapping between low-level video representation and high-level semantics. We therefore formulate the multimedia content-access problem as a multimedia pattern-recognition problem. We propose a probabilistic framework for semantic video indexing, which can support filtering and retrieval and facilitate efficient content-based access. To map low-level features to high-level semantics we propose probabilistic multimedia objects (multijects). Examples of multijects in movies include explosion, mountain, beach, outdoor, and music. Semantic concepts in videos interact, and to model this interaction explicitly we propose a network of multijects (multinet). Using probabilistic models for six site multijects (rocks, sky, snow, water-body, forestry/greenery, and outdoor) and a Bayesian belief network as the multinet, we demonstrate the application of this framework to semantic indexing. We show that detection performance can be significantly improved by using the multinet to take interconceptual relationships into account, and that the multinet can fuse heterogeneous features to support detection based on inference and reasoning.
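How a multinet exploits interconceptual relations can be illustrated with a single "outdoor" node whose parents are two related site multijects. The conditional probability table below is invented for illustration; the point is only that marginalizing over uncertain parent detections yields a fused posterior:

```python
def multinet_posterior(p_sky, p_greenery, cpt):
    """Posterior P(outdoor) after fusing two related multiject detections.

    cpt[(sky, greenery)] = P(outdoor=1 | sky, greenery); the detector
    outputs p_sky and p_greenery are treated as marginal probabilities
    of the parent concepts, marginalized out below.
    """
    post = 0.0
    for sky in (0, 1):
        for green in (0, 1):
            w = ((p_sky if sky else 1 - p_sky)
                 * (p_greenery if green else 1 - p_greenery))
            post += w * cpt[(sky, green)]
    return post

# Hypothetical conditional probability table for the 'outdoor' node.
cpt = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.8, (1, 1): 0.95}
p = multinet_posterior(p_sky=0.9, p_greenery=0.6, cpt=cpt)
```

Confident detections of related concepts (sky, greenery) raise the posterior for "outdoor" even when no dedicated outdoor detector fires strongly, which is exactly the performance gain the abstract attributes to the multinet.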


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号