Similar Documents
20 similar documents found
2.
By introducing concept detection results into the retrieval process, concept-based video retrieval (CBVR) has been successfully applied to semantic content-based video retrieval. However, how to select and fuse the appropriate concepts for a specific query remains an important but difficult issue. In this paper, we propose a novel and effective concept selection method, named graph-based multi-space semantic correlation propagation (GMSSCP), to explore the relationship between the user query and concepts for video retrieval. Compared with traditional methods, GMSSCP uses a manifold-ranking algorithm to collectively explore the multi-layered relationships between the query and concepts, and its expansion result is more robust to noise. In parallel, GMSSCP has a query-adapting property, which strengthens concept correlation propagation and selection with pertinent query cues. Furthermore, it can dynamically update the unified propagation graph by flexibly introducing multi-modal query cues as additional nodes, and is effective not only for automatic retrieval but also for the interactive case. Encouraging experimental results on TRECVID datasets demonstrate the effectiveness of GMSSCP over state-of-the-art concept selection methods. Moreover, we also apply it to the interactive retrieval system VideoMap and obtain excellent performance and user experience.

3.
This paper presents the semantic pathfinder architecture for generic indexing of multimedia archives. The semantic pathfinder extracts semantic concepts from video by exploring different paths through three consecutive analysis steps, which we derive from the observation that produced video is the result of an authoring-driven process. We exploit this authoring metaphor for machine-driven understanding. The pathfinder starts with the content analysis step, in which we follow a data-driven approach to indexing semantics. The second step is style analysis, in which we tackle the indexing problem by viewing a video from the perspective of production. Finally, in the context analysis step, we view semantics in context. The virtue of the semantic pathfinder is its ability to learn the best path of analysis steps on a per-concept basis. To show the generality of this novel indexing approach, we develop detectors for a lexicon of 32 concepts and we evaluate the semantic pathfinder against the 2004 NIST TRECVID video retrieval benchmark, using a news archive of 64 hours. Top ranking performance in the semantic concept detection task indicates the merit of the semantic pathfinder for generic indexing of multimedia archives.

4.
This paper describes a novel method for browsing a large video collection. It links various forms of related video fragments together as threads. These threads are based on query results, the timeline as well as visual and semantic similarity. We design two interfaces which use threads as the basis for browsing. One interface shows a minimal set of threads, and the other as many as fit on the screen. To evaluate both interfaces we perform a regular user study, a study based on user simulation, and we participated in the interactive video retrieval task of the TRECVID benchmark. The results indicate that the use of threads in interactive video retrieval is beneficial. Furthermore, we found that in general the query result and the timeline are the most important threads, but having several additional threads improves the performance as it encourages people to explore new dimensions.

5.
A number of researchers have been building high-level semantic concept detectors such as outdoors, face, building, to help with semantic video retrieval. Our goal is to examine how many concepts would be needed, and how they should be selected and used. Simulating performance of video retrieval under different assumptions of concept detection accuracy, we find that good retrieval can be achieved even when detection accuracy is low, if sufficiently many concepts are combined. We also derive suggestions regarding the types of concepts that would be most helpful for a large concept lexicon. Since our user study finds that people cannot predict which concepts will help their query, we also suggest ways to find the best concepts to use. Ultimately, this paper concludes that "concept-based" video retrieval with fewer than 5000 concepts, detected with a minimal accuracy of 10% mean average precision is likely to provide high accuracy results in broadcast news retrieval.

6.
Multimedia content has been growing quickly, and video retrieval is regarded as one of the most prominent topics in multimedia research. In order to retrieve a desired video, users express their needs in terms of queries. Queries can address object, motion, texture, color, audio, and so on. Low-level representations of video differ from the higher-level concepts a user associates with video, so querying by semantics is more realistic and tangible for the end user. Understanding query semantics has opened new avenues for video retrieval and for bridging the semantic gap. The problem, however, is that video must be manually annotated to support queries expressed in terms of semantic concepts. Annotating the semantic concepts that appear in video shots is a challenging and time-consuming task, and it is not possible to provide annotation for every concept in the real world. In this study, an integrated semantic-based approach to similarity computation is proposed to enhance retrieval effectiveness in concept-based video retrieval. The proposed method integrates knowledge-based and corpus-based semantic word similarity measures in order to retrieve video shots for concepts whose annotations are not available to the system. The TRECVID 2005 dataset is used for evaluation, and the results of the proposed method are compared against the individual knowledge-based and corpus-based semantic word similarity measures used in previous studies in the same domain. The superiority of the integrated similarity method is shown and evaluated in terms of Mean Average Precision (MAP).

7.
Video Retrieval Based on Multimodal Information Mining and Fusion
Content-based multimedia retrieval, and video retrieval in particular, is made much harder by the complex semantics inherent in multimedia data. The proposed algorithm mines rich information from the video itself and fuses it to improve video retrieval performance. Based on this idea, a multimodal video retrieval model is proposed, together with corresponding manual and interactive search schemes. The search strategy achieved good results in the TRECVID video retrieval evaluation.

8.
Describing visual contents in videos by semantic concepts is an effective and realistic approach that can be used in video applications such as annotation, indexing, retrieval and ranking. In these applications, video data needs to be labelled with some known set of labels or concepts. Assigning semantic concepts manually is not feasible due to the large volume of ever-growing video data. Hence, automatic semantic concept detection of videos is a hot research area. Recently, deep Convolutional Neural Networks (CNNs) have shown remarkable performance in computer vision tasks. In this paper, we present a novel approach for automatic semantic video concept detection using a deep CNN and a foreground-driven concept co-occurrence matrix (FDCCM), which stores foreground-to-background concept co-occurrence values and is built by exploiting concept co-occurrence relationships in the pre-labelled TRECVID video dataset and in a collection of random images extracted from Google Images. To deal with the dataset imbalance problem, we extend this approach by fusing two asymmetrically trained deep CNNs and use the FDCCM to further improve concept detection. The performance of the proposed approach is compared with state-of-the-art approaches for video concept detection over the widely used TRECVID data set and is found to be superior to existing approaches.

9.
Automatic video annotation aims to bridge the semantic gap and facilitate concept-based video retrieval by detecting high-level concepts in video data. Recently, utilizing context information has emerged as an important direction in this domain. In this paper, we present a novel video annotation refinement approach that utilizes extrinsic semantic context extracted from video subtitles and intrinsic context among candidate annotation concepts. The extrinsic semantic context is formed by identifying a set of key terms from video subtitles. The semantic similarity between those key terms and the candidate annotation concepts is then exploited to refine initial annotation results, whereas most existing approaches utilize textual information heuristically. Similarity measurements including Google distance and WordNet distance have been investigated for this refinement purpose, which differs from approaches that derive semantic relationships among concepts from given training datasets. Visualness is also utilized to discriminate individual terms for further refinement. In addition, the Random Walk with Restarts (RWR) technique is employed to perform final refinement of the annotation results by exploring the inter-relationships among annotation concepts. Comprehensive experiments on the TRECVID 2005 dataset have been conducted to demonstrate the effectiveness of the proposed annotation approach and to investigate the impact of various factors.
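The RWR refinement step described in this abstract can be sketched as follows. This is a minimal illustration on a toy concept graph, not the paper's implementation: the concept names, edge weights, initial detector scores, and restart probability are all invented for the example.

```python
# Random Walk with Restarts (RWR) over a small concept-affinity graph.
# Scores propagate between related concepts, while the restart term keeps
# them anchored to the initial (detector-produced) annotation scores.

def rwr(affinity, restart, c=0.5, iters=100, tol=1e-9):
    """affinity: row-normalized adjacency {node: {neighbor: weight}};
    restart: initial score per node. Returns refined scores."""
    nodes = list(restart)
    r = dict(restart)
    for _ in range(iters):
        new_r = {}
        for v in nodes:
            # Mass walking into v from every neighbor u.
            walk = sum(r[u] * affinity.get(u, {}).get(v, 0.0) for u in nodes)
            new_r[v] = (1 - c) * walk + c * restart[v]
        converged = max(abs(new_r[v] - r[v]) for v in nodes) < tol
        r = new_r
        if converged:
            break
    return r

# Toy example: "car" and "road" reinforce each other through the graph.
affinity = {
    "car":  {"road": 1.0},
    "road": {"car": 0.5, "sky": 0.5},
    "sky":  {"road": 1.0},
}
initial = {"car": 0.9, "road": 0.1, "sky": 0.3}
refined = rwr(affinity, initial)
```

A strongly detected "car" raises the score of the related but weakly detected "road", which is the intended refinement effect.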

10.
Research on a Content-Based Video Query System
Since the efficiency of multimedia database management and retrieval directly determines how effectively people can use multimedia information, content-based image/video storage and retrieval has become a research hotspot following the introduction of the MPEG-7 standard. To enable fast video browsing and retrieval, this work builds on research into content-based video database management and retrieval. It first uses MPEG-7 visual content descriptors and semantic descriptors to construct the semantic structure of the video database, combining low-level visual features with high-level semantic features and employing a relevance feedback mechanism and a semi-automatic weight-updating scheme for database management and retrieval. It then uses a syntax parser to support natural-language queries, and finally implements a content-based video database management and query system on this basis. Experiments show that the system manages and retrieves video data effectively, and exhibits a degree of intelligence and adaptability.

11.
Automatic semantic concept detection in video is important for effective content-based video retrieval and mining and has gained great attention recently. In this paper, we propose a general post-filtering framework to enhance the robustness and accuracy of semantic concept detection using association and temporal analysis for concept knowledge discovery. Co-occurrence of several semantic concepts could imply the presence of other concepts. We use association mining techniques to discover such inter-concept association relationships from annotations. With the discovered concept association rules, we propose a strategy to combine associated concept classifiers to improve detection accuracy. In addition, because video is often visually smooth and semantically coherent, detection results from temporally adjacent shots can inform the detection of the current shot. We propose temporal filter designs for inter-shot temporal dependency mining to further improve detection accuracy. Experiments on the TRECVID 2005 dataset show that our post-filtering framework is both efficient and effective in improving the accuracy of semantic concept detection in video. Furthermore, it is easy to integrate our framework with existing classifiers to boost their performance.
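The association-mining step this abstract describes can be sketched with simple pairwise rules mined from shot annotations. The annotation sets, support, and confidence thresholds below are toy values, not taken from the paper:

```python
from itertools import combinations

# Mine pairwise association rules (A -> B with confidence) from shot-level
# concept annotations; such rules can then boost a classifier's score for B
# whenever A is confidently detected.

def mine_rules(annotations, min_support=2, min_conf=0.6):
    count, pair = {}, {}
    for shot in annotations:
        for a in shot:
            count[a] = count.get(a, 0) + 1
        for a, b in combinations(sorted(shot), 2):
            pair[(a, b)] = pair.get((a, b), 0) + 1
    rules = {}
    for (a, b), n in pair.items():
        if n < min_support:
            continue
        if n / count[a] >= min_conf:
            rules[(a, b)] = n / count[a]   # rule a -> b
        if n / count[b] >= min_conf:
            rules[(b, a)] = n / count[b]   # rule b -> a
    return rules

# Toy ground-truth annotations: each set is the concepts labeled in one shot.
annotations = [
    {"road", "car"}, {"road", "car"}, {"road", "car", "sky"},
    {"sky"}, {"road"},
]
rules = mine_rules(annotations)
```

In this toy data every shot labeled "car" is also labeled "road", so the rule ("car", "road") gets confidence 1.0, while the reverse rule is weaker.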

12.
A Multimodal and Multilevel Ranking Scheme for Large-Scale Video Retrieval
A critical issue in large-scale multimedia retrieval is how to develop an effective framework for ranking the search results. This problem is particularly challenging for content-based video retrieval due to issues such as short text queries, insufficient sample learning, fusion of multimodal contents, and large-scale learning with huge media data. In this paper, we propose a novel multimodal and multilevel (MMML) ranking framework to attack the challenging ranking problem of content-based video retrieval. We represent the video retrieval task by graphs and suggest a graph-based semi-supervised ranking (SSR) scheme, which can learn effectively from small samples and smoothly integrate multimodal resources for ranking. To make the semi-supervised ranking solution practical for large-scale retrieval tasks, we propose a multilevel ranking framework that unifies several different ranking approaches in a cascade fashion. We have conducted empirical evaluations of our proposed solution for automatic search tasks on the benchmark testbed of TRECVID 2005. The promising empirical results show that our ranking solutions are effective and very competitive with the state-of-the-art solutions in the TRECVID evaluations.

13.
The application of machine learning techniques to image and video search has been shown to boost the performance of multimedia retrieval systems, and promises to lead to more generalized semantic search approaches. In particular, the availability of large training collections allows model-driven search using a substantial number of semantic concepts. The training collections are obtained in a manual annotation process where human raters review images and assign predefined semantic concept labels. Besides being prone to human error, manual image annotation is biased by the view of the individual annotator because visual information almost always leaves room for ambiguity. Ideally, several independent judgments are obtained per image, and the inter-rater agreement is assessed. While disagreement between ratings bears valuable information on the annotation quality, it complicates the task of clearly classifying rated images based on multiple judgments. In the absence of a gold standard, evaluating multiple judgments and resolving disagreement between raters is not trivial. In this paper, we present an approach using latent structure analysis to solve this problem. We apply latent class modeling to the annotation data collected during the TRECVID 2005 Annotation Forum, and demonstrate how to use this statistic to clearly classify each image on the basis of varying numbers of ratings. We use latent class modeling to quantify the annotation quality and discuss the results in comparison with the well-known Kappa inter-rater agreement measure.

14.
Bridging the Gap: Query by Semantic Example
A combination of query-by-visual-example (QBVE) and semantic retrieval (SR), denoted as query-by-semantic-example (QBSE), is proposed. Images are labeled with respect to a vocabulary of visual concepts, as is usual in SR. Each image is then represented by a vector, referred to as a semantic multinomial, of posterior concept probabilities. Retrieval is based on the query-by-example paradigm: the user provides a query image, for which 1) a semantic multinomial is computed and 2) matched to those in the database. QBSE is shown to have two main properties of interest, one mostly practical and the other philosophical. From a practical standpoint, because it inherits the generalization ability of SR inside the space of known visual concepts (referred to as the semantic space) but performs much better outside of it, QBSE produces retrieval systems that are more accurate than what was previously possible. Philosophically, because it allows a direct comparison of visual and semantic representations under a common query paradigm, QBSE enables the design of experiments that explicitly test the value of semantic representations for image retrieval. An implementation of QBSE under the minimum probability of error (MPE) retrieval framework, previously applied with success to both QBVE and SR, is proposed, and used to demonstrate the two properties. In particular, an extensive objective comparison of QBSE with QBVE is presented, showing that the former significantly outperforms the latter both inside and outside the semantic space. By carefully controlling the structure of the semantic space, it is also shown that this improvement can only be attributed to the semantic nature of the representation on which QBSE is based.
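The core QBSE matching step, ranking database images by the distance between their semantic multinomials and the query's, can be sketched as below. The concept vocabulary, probability vectors, and use of KL divergence as the distance are illustrative assumptions, not details from the paper:

```python
import math

# Query-by-semantic-example sketch: each image is a "semantic multinomial",
# i.e. a probability distribution over visual concepts. Retrieval ranks
# database images by smoothed KL divergence to the query's distribution.

def kl(p, q, eps=1e-6):
    """Smoothed Kullback-Leibler divergence KL(p || q)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def rank(query, database):
    """database: {image_id: multinomial}; returns ids, most similar first."""
    return sorted(database, key=lambda img: kl(query, database[img]))

# Toy multinomials over three concepts (sky, road, face) -- invented values.
query = [0.7, 0.2, 0.1]
db = {
    "beach.jpg":  [0.8, 0.1, 0.1],
    "street.jpg": [0.2, 0.7, 0.1],
    "crowd.jpg":  [0.1, 0.1, 0.8],
}
ordered = rank(query, db)
```

The sky-dominated query ranks the sky-dominated database image first, even though no low-level visual features are compared; that is the point of matching in the semantic space.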

15.
Community structure, as an interesting property of networks, has attracted wide attention from many research fields. In this paper, we exploit the visual community structure in a visual-temporal correlation network and utilize it to improve interactive video retrieval. First, we propose a hierarchical community-based feedback algorithm. By re-ranking the video shots through diffusion processes on the inter-community and intra-community levels respectively, the feedback algorithm can make full use of limited user feedback. Furthermore, since it avoids computation over the entire graph, the feedback algorithm can respond quickly to user feedback, which is particularly important for large video collections. Second, we propose a community-based visualization interface called VideoMap. By organizing the video shots following the community structure, VideoMap presents a comprehensive and informative view of the whole dataset to facilitate users' access. Moreover, VideoMap can help users quickly locate potentially relevant regions and make active annotations according to the distribution of labeled samples on the map. Experiments on the TRECVID 2009 search dataset demonstrate the efficiency of the feedback algorithm and the effectiveness of the visualization interface.

16.
We present a system for multimedia event detection. The developed system characterizes complex multimedia events based on a large array of multimodal features, and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel Latent SVM model which learns and localizes discriminative high-level concepts in cluttered video sequences. In addition to improving detection accuracy beyond existing approaches, it enables a unique summary for every retrieval through its use of high-level concepts and temporal evidence localization. The resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology for improving fusion learning under limited training data conditions. Thorough evaluation on a large TRECVID MED 2011 dataset showcases the benefits of the presented system.

18.
Research on Video Retrieval Based on High-Level Semantics
Semantic video retrieval is one of the current research hotspots. Most existing video retrieval systems operate at the non-semantic level, relying on low-level features that are far removed from the high-level semantic concepts humans understand, which seriously limits retrieval effectiveness in practice. How to bridge the gap between low-level features and high-level semantics, and retrieve video using high-level semantic concepts, is the current research focus. Through a brief overview of semantic understanding, semantic analysis, and semantic extraction of video content, this paper attempts to construct a semantic video retrieval model.

19.
Addressing the development of Internet video retrieval technology, this paper surveys the current state of mainstream video search engines and analyzes the key technologies of Internet video retrieval, in particular video feature extraction. Its contribution is a general content-based method for static semantic video retrieval, which compensates for certain shortcomings of text-based video retrieval; the method has been validated on the static semantic concepts of the TRECVID video concept retrieval data and runs stably.

20.
Traditional video retrieval mostly relies on keyword-based methods, making it difficult to achieve precision and recall that satisfy users. This paper therefore proposes an ontology-based video retrieval technique. With the aid of a domain ontology, its basic concepts are used as keywords to fetch sample image groups online through an Internet image search engine; SIFT features are extracted to build an image feature vocabulary, image feature histograms are computed and compared for similarity, and the results assist automatic video annotation and initialize the video retrieval database. Meanwhile, the domain ontology is used to semantically expand the keywords extracted from the user's query input, and the results retrieved with the expanded concept set are returned to the user, thereby realizing ontology-based video retrieval. Finally, the algorithm is implemented and analyzed with examples, demonstrating the feasibility and effectiveness of the method.
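The histogram-similarity step in this abstract could look like the sketch below. The abstract does not name its exact similarity measure, so histogram intersection is an assumption here, and the 5-bin visual-word histograms are invented toy data:

```python
# Histogram intersection: a common similarity for bag-of-features histograms
# built from local descriptors such as SIFT. The score is the overlapping
# mass between the two histograms, normalized by the query's total count.

def intersect(query_hist, candidate_hist):
    """Normalized histogram intersection in [0, 1]."""
    overlap = sum(min(a, b) for a, b in zip(query_hist, candidate_hist))
    return overlap / max(sum(query_hist), 1e-12)

# Toy visual-word histograms (counts over a 5-word vocabulary).
query_hist = [4, 0, 3, 2, 1]
candidate  = [3, 1, 3, 0, 2]
score = intersect(query_hist, candidate)  # overlap: 3+0+3+0+1 = 7 of 10
```

A score near 1 would mark the candidate as a likely sample for the annotation concept; a threshold on this score could gate the automatic labeling step.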


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)