Similar Documents
20 similar documents found (search time: 109 ms)
1.
Multimodal retrieval is a well-established approach for image retrieval. Usually, images are accompanied by a text caption along with associated documents describing the image. Textual query expansion as a form of enhancing image retrieval is a relatively less explored area. In this paper, we first study the effect of expanding the textual query on both image and associated text retrieval. Our study reveals that judicious expansion of the textual query through keyphrase extraction can lead to better results, either in text retrieval alone or in both image and text retrieval. To establish this, we use two well-known keyphrase extraction techniques based on tf-idf and KEA. While query expansion results in increased retrieval efficiency, it is imperative that the expansion be semantically justified. So, we propose a graph-based keyphrase extraction model that captures the relatedness between words in terms of both mutual information and relevance feedback. Most existing works have stressed bridging the semantic gap by using textual and visual features, either in combination or individually. The way these text and image features are combined determines the efficacy of any retrieval. For this purpose, we adopt Fisher-LDA to adjudge the appropriate weights for each modality. This provides us with an intelligent decision-making process favoring the feature set to be infused into the final query. Our proposed algorithm is shown to significantly outperform the aforementioned keyphrase extraction algorithms for query expansion. A rigorous set of experiments performed on the ImageCLEF-2011 Wikipedia Retrieval task dataset validates our claim that capturing the semantic relation between words through mutual information, followed by expansion of the textual query using relevance feedback, can simultaneously enhance both text and image retrieval.
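Among the techniques mentioned above, the tf-idf keyphrase baseline is the simplest to make concrete. The sketch below (toy corpus and helper name are my own, not the paper's code) ranks a document's terms by term frequency times inverse document frequency:

```python
import math
from collections import Counter

def tfidf_keyphrases(docs, doc_index, top_k=3):
    """Rank the terms of docs[doc_index] by tf-idf over the collection."""
    n_docs = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    tf = Counter(docs[doc_index])       # term frequency in the target doc
    scores = {
        term: (count / len(docs[doc_index])) * math.log(n_docs / df[term])
        for term, count in tf.items()
    }
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

docs = [
    ["keyphrase", "extraction", "improves", "image", "retrieval"],
    ["image", "retrieval", "with", "text"],
    ["text", "retrieval", "systems"],
]
print(tfidf_keyphrases(docs, 0))
```

Terms unique to the target document get the highest idf, so the collection-wide common word "retrieval" is pushed to the bottom of the ranking.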

2.
Event extraction is a type of information extraction task that aims to identify and extract the trigger words and arguments of an event. Because it is easily affected by data sparsity, argument extraction is a difficult point in Chinese event extraction, and research has focused on feature engineering. Chinese grammar is much more complex than English, so methods that capture features of English text are not very effective on Chinese tasks, while the commonly used neural network models consider only contextual information and cannot take both lexical and syntactic…

3.
Textual Data Mining to Support Science and Technology Management   Cited by: 10 (self: 0, others: 10)
This paper surveys applications of data mining techniques to large text collections, and illustrates how those techniques can be used to support the management of science and technology research. Specific issues that arise repeatedly in the conduct of research management are described, and a textual data mining architecture that extends a classic paradigm for knowledge discovery in databases is introduced. That architecture integrates information retrieval from text collections, information extraction to obtain data from individual texts, data warehousing for the extracted data, data mining to discover useful patterns in the data, and visualization of the resulting patterns. At the core of this architecture is a broad view of data mining (the process of discovering patterns in large collections of data), and that step is described in some detail. The final section of the paper illustrates how these ideas can be applied in practice, drawing upon examples from the recently completed first phase of the textual data mining program at the Office of Naval Research. The paper concludes by identifying some research directions that offer significant potential for improving the utility of textual data mining for research management applications.

4.
5.
Automatic Information Extraction and Similarity Retrieval for Chinese Text   Cited by: 1 (self: 0, others: 1)
Information extraction has become an important means of providing high-quality information services. This paper proposes a mechanism for automatic information extraction and similarity retrieval over Chinese text. The basic idea is to represent user interests as semantic templates and to perform concept expansion on keywords. A preliminary candidate text set is obtained through a search engine; then, based on concept-triggering mechanisms and partial parsing techniques, a mapping from semantic relations to template slots is used to fill each text's semantic template, producing a structured text database. Given the fuzziness of textual data, a similarity relation between user queries and text semantic templates is defined, enabling similarity retrieval that more fully satisfies users' information needs.
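The slot-filling step described above, mapping extracted semantic relations into template slots, can be sketched as follows. The relation and slot names are invented for illustration and do not come from the paper:

```python
def fill_template(template_slots, extracted_relations):
    """Fill each template slot from the extracted relation it maps to;
    a slot stays None when no matching relation was extracted."""
    return {slot: extracted_relations.get(relation)
            for slot, relation in template_slots.items()}

# hypothetical template for a "product release" user-interest topic
template = {"product": "object-of-release", "company": "agent-of-release"}
relations = {"agent-of-release": "AcmeSoft", "object-of-release": "SearchKit 2.0"}
print(fill_template(template, relations))
```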

6.
Automatically extracting textual information from video frames is important for monitoring video content, adding video tags, and building video image retrieval systems. Text detection is the front end of a text-information extraction system and the most critical step of text extraction. In recent years, the field of text detection in video images has seen important new developments. This survey organizes, compares, and analyzes region-based and texture-based text detection methods, summarizing the main recent progress in text detection. To highlight the importance of hybrid approaches, they are summarized separately. Finally, the remaining difficulties of text detection in video images are summarized and development trends are discussed.

7.
Automatic indexing and content-based retrieval of captioned images   Cited by: 2 (self: 0, others: 2)
Srihari, R.K. Computer, 1995, 28(9): 49-56

8.
Objective: Clothing retrieval is a research hotspot in computer vision and natural language processing, with both content-based and text-based query modalities. Traditional retrieval methods, however, are often inefficient, and little work considers the similarity of clothing styles. To address these problems, this paper proposes a deep multimodal-fusion method for clothing style retrieval. Method: A hierarchical deep hashing retrieval model is proposed: transfer learning is applied to a pretrained residual network (ResNet) with its classification layer converted into a hash-coding layer; hash features are used for coarse retrieval and deep image features for fine retrieval. A text-classification semantic retrieval model is also designed: an LSTM-based (long short-term memory) text classification network first narrows the search range by category, and retrieval then uses text embedding features extracted with doc2vec. In addition, a similar-style context retrieval model is proposed, which measures clothing style similarity by analogy with word similarity. Finally, style similarity is quantified with a probability-driven method, and the fusion result that maximizes this similarity is returned as the final output of the proposed retrieval method. Results: On the Polyvore dataset, compared with the original ResNet model, the hierarchical deep hashing model improves top-5 mean retrieval precision by 11.6% and retrieval speed by 2.57 s per query. Compared with a traditional text-classification embedding model, the proposed classification semantic retrieval model improves top-5 precision by 29.96% and retrieval speed by 16.53 s per query. Conclusion: The proposed deep multimodal-fusion clothing style retrieval method improves both retrieval precision and retrieval speed, and retrieving similar-style clothing makes the results more diverse.
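The coarse-then-fine strategy described above (binary hash codes to prune the database, real-valued deep features to re-rank the survivors) can be sketched with toy data. The network that produces the codes and features is omitted, and the arrays below are invented:

```python
import numpy as np

def coarse_to_fine_search(query_code, query_feat, codes, feats, radius=1, top_k=2):
    """Two-stage retrieval: Hamming-distance pruning, then feature re-ranking."""
    # coarse stage: keep items within `radius` Hamming distance of the query code
    hamming = np.count_nonzero(codes != query_code, axis=1)
    candidates = np.flatnonzero(hamming <= radius)
    # fine stage: rank the survivors by Euclidean distance in feature space
    dists = np.linalg.norm(feats[candidates] - query_feat, axis=1)
    return candidates[np.argsort(dists)][:top_k]

codes = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [1, 0, 0, 1], [0, 1, 1, 1]])
feats = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.85, 0.2]])
query_code = np.array([0, 1, 1, 0])
query_feat = np.array([0.9, 0.15])
print(coarse_to_fine_search(query_code, query_feat, codes, feats))
```

Item 2 is eliminated in the coarse stage (Hamming distance 4), so the fine stage only has to rank three candidates instead of the whole database.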

9.
Sentence similarity based on semantic nets and corpus statistics   Cited by: 3 (self: 0, others: 3)
Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
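The word-order component of such a measure can be illustrated directly. The sketch below builds order vectors over the joint word set and computes 1 - ||r1 - r2|| / ||r1 + r2||; the lexical-database lookup the paper uses to match near-synonyms is replaced here by exact matching:

```python
import numpy as np

def word_order_similarity(s1, s2):
    """Word-order similarity of two tokenized sentences: map each sentence
    to a vector of positions over the joint word set, then compare."""
    joint = list(dict.fromkeys(s1 + s2))  # ordered joint word set
    def order_vector(sentence):
        # 1-based position of each joint word in the sentence, 0 if absent
        return np.array([sentence.index(w) + 1 if w in sentence else 0
                         for w in joint], dtype=float)
    r1, r2 = order_vector(s1), order_vector(s2)
    return 1 - np.linalg.norm(r1 - r2) / np.linalg.norm(r1 + r2)

s1 = "a quick brown fox".split()
s2 = "a quick brown fox".split()
s3 = "fox brown quick a".split()
print(word_order_similarity(s1, s2))  # identical order
print(word_order_similarity(s1, s3))  # reversed order scores lower
```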

10.
Query-Biased Summarization Based on Lexical Chaining   Cited by: 1 (self: 0, others: 1)
Recently, the prevalence of information retrieval engines has created an important application for automatic summarization: displaying retrieval results so that the user can quickly and accurately judge the relevance of the texts returned for a query. Here, rather than a generic summary, a summary that reflects the user's topic of interest (information need) expressed in the query is more suitable. This type of summary is often called a query-biased summary.
In this paper we present a method for producing query-biased summaries using lexical chains. Lexical chains are sequences of words that stand in a lexical cohesion relation with each other and tend to indicate fragments of a text that form a semantic unit. Using lexical chains enables the production of more coherent and readable summaries than previous approaches to query-biased summarization.
To evaluate the effectiveness of our method, a task-based evaluation scheme is adopted. The results from the experiments show that query-biased summaries by lexical chains outperform others in the accuracy of subjects' relevance judgments.
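Chain construction can be approximated with a hand-made relatedness map standing in for the thesaurus relations the paper relies on. The `related` table and example sentence below are invented:

```python
from collections import defaultdict

def lexical_chains(tokens, related):
    """Group word occurrences into lexical chains keyed by a canonical word.
    `related` maps a head word to words considered lexically cohesive with it."""
    canon = {}
    for head, syns in related.items():
        canon[head] = head
        for s in syns:
            canon[s] = head
    chains = defaultdict(list)
    for pos, w in enumerate(tokens):
        chains[canon.get(w, w)].append((pos, w))
    # chains with at least two members signal a semantic unit in the text
    return {k: v for k, v in chains.items() if len(v) > 1}

related = {"car": ["automobile", "vehicle"]}
tokens = "the car stalled so the automobile was towed as any vehicle would".split()
print(lexical_chains(tokens, related))
```

The chain for "car" links three positions in the text; a real system would also filter stopword chains such as "the" and score chains against the query.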

11.
The keyphrases of a text entity are a set of words or phrases that concisely describe the main content of that text. Automatic keyphrase extraction plays an important role in natural language processing and information retrieval tasks such as text summarization, text categorization, full-text indexing, and cross-lingual text reuse. However, automatic keyphrase extraction is still a complicated task and the performance of current keyphrase extraction methods is low. Automatic discovery of high-quality and meaningful keyphrases requires the application of useful information and suitable mining techniques. This paper proposes the Topical and Structural Keyphrase Extractor (TSAKE) for the task of automatic keyphrase extraction. TSAKE combines prior knowledge about the input language learned by an N-gram topical model (TNG) with the co-occurrence graph of the input text to form topical graphs. Different from most recent keyphrase extraction models, TSAKE uses the topic model to weight the edges instead of the nodes of the co-occurrence graph. Moreover, while TNG represents the general topics of the language, TSAKE applies network analysis techniques to each topical graph to detect finer-grained sub-topics and extract the more important words of each sub-topic. The use of these informative words in the ranking process of the candidate keyphrases improves the quality of the final keyphrases proposed by TSAKE. The results of our experimental studies conducted on three manually annotated datasets show the superiority of the proposed model over three baseline techniques and six state-of-the-art models.
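The co-occurrence graph at the heart of this family of methods can be sketched as follows. The topic-model edge weighting and sub-topic detection of TSAKE are omitted, and weighted degree centrality stands in for its network-analysis ranking:

```python
from collections import defaultdict

def cooccurrence_graph(tokens, window=2):
    """Edge weights count how often two words appear within `window` positions."""
    weights = defaultdict(int)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 1 + window]:
            if v != w:
                weights[frozenset((w, v))] += 1
    return weights

def rank_by_degree(weights, top_k=2):
    """Rank nodes by weighted degree, a simple graph-centrality ranking."""
    degree = defaultdict(int)
    for pair, w in weights.items():
        for node in pair:
            degree[node] += w
    return [n for n, _ in sorted(degree.items(), key=lambda kv: -kv[1])[:top_k]]

tokens = "keyphrase extraction ranks keyphrase candidates in the graph".split()
g = cooccurrence_graph(tokens)
print(rank_by_degree(g))
```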

12.
A New Text Extraction Algorithm for Color Images   Cited by: 3 (self: 0, others: 3)
Textual information plays an important role in describing image content, so text extraction and recognition are key technologies for content-based video retrieval. This paper presents a fast and effective algorithm for extracting text from the background of color images. Because text strings have high contrast, an improved Sobel operator first converts the color image into a binary edge image, which is then smeared; text is then extracted from color images of varying complexity based on features of the candidate text regions. Finally, the extracted text is fed to an optical character recognition (OCR) engine, whose recognition results demonstrate the effectiveness of the method.
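The first stage, binarizing by edge strength, can be sketched with a plain 3x3 Sobel operator (the paper's improved operator, the smearing step, and the OCR stage are not reproduced; the toy image is invented):

```python
import numpy as np

def sobel_edges(gray, threshold=1.0):
    """Binarize a grayscale array by Sobel gradient magnitude: high-contrast
    text strokes produce strong edges that survive the threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# toy image: a bright vertical "stroke" on a dark background
img = np.zeros((5, 5))
img[:, 2] = 1.0
print(sobel_edges(img))
```

The two columns flanking the stroke light up in the edge map, which is exactly the high-contrast boundary signal the smearing stage would then merge into candidate text regions.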

13.
刘彤, 倪维健. Computer Science (计算机科学), 2015, 42(10): 275-280, 286
Documents in specialized domains often have pronounced structure: a document typically consists of several relatively fixed text fields with different expressive functions, and these fields encode related domain knowledge. Exploiting these structural and domain characteristics of professional documents, this paper designs an information retrieval model for structured domain documents. The model first mines the domain document collection to build a structured model reflecting domain knowledge, and then uses it in a structured-document retrieval algorithm that returns relevant domain documents for user queries. A typical class of domain documents, agrotechnical prescriptions, is chosen for an application study: the proposed method is experimentally compared with traditional information retrieval methods on a real agrotechnical prescription dataset, and a prototype agrotechnical prescription retrieval system is developed.

14.
Cross-lingual sentence semantic similarity aims to measure the degree of semantic similarity between sentences in different languages. Recent neural models proposed for this task mostly use convolutional neural networks to capture local semantic information, and lack a way to capture semantic relations between distant words in a sentence. This paper proposes a neural architecture that combines gated convolutional neural networks with a self-attention mechanism to capture both local and global semantic relations in cross-lingual sentences, yielding a comprehensive semantic representation of the text. Experimental results on several SemEval-2017 datasets show that the proposed model captures inter-sentence semantic similarity from multiple aspects and outperforms the purely neural models among the baseline methods.
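The self-attention part, which links distant words that convolutions miss, reduces to scaled dot-product attention. The sketch below uses identity Q/K/V projections to stay dependency-free; a real model would learn those projections:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a (seq_len, dim) array."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise word relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ x                               # each word mixes in related words

x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])  # three toy "word" vectors
out = self_attention(x)
print(out.shape)
```

Because positions 0 and 2 carry identical vectors, attention gives them identical outputs regardless of how far apart they sit in the sequence, which is the long-range behavior the abstract argues convolutions alone cannot provide.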

15.
In knowledge discovery in a text database, extracting and returning a subset of information highly relevant to a user's query is a critical task. In a broader sense, this is essentially identification of certain personalized patterns that drives such applications as Web search engine construction, customized text summarization and automated question answering. A related problem of text snippet extraction has been previously studied in information retrieval. In these studies, common strategies for extracting and presenting text snippets to meet user needs either process document fragments that have been delimitated a priori or use a sliding window of a fixed size to highlight the results. In this work, we argue that text snippet extraction can be generalized if the user's intention is better utilized. It overcomes the rigidness of existing approaches by dynamically returning more flexible start-end positions of text snippets, which are also semantically more coherent. This is achieved by constructing and using statistical language models which effectively capture the commonalities between a document and the user intention. Experiments indicate that our proposed solutions provide effective personalized information extraction services.
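The flexible start-end selection can be sketched by scoring every candidate span against the query under an add-one-smoothed unigram model, a toy stand-in for the statistical language models the paper builds (the example text is invented):

```python
import math
from collections import Counter

def best_snippet(tokens, query, min_len=3, max_len=5):
    """Return the span whose smoothed unigram model gives the query
    the highest log-likelihood; boundaries are free, not a fixed window."""
    vocab = len(set(tokens) | set(query))
    best_span, best_score = None, float("-inf")
    for start in range(len(tokens)):
        for end in range(start + min_len, min(start + max_len, len(tokens)) + 1):
            span = tokens[start:end]
            counts = Counter(span)
            score = sum(math.log((counts[q] + 1) / (len(span) + vocab))
                        for q in query)
            if score > best_score:
                best_span, best_score = span, score
    return best_span

text = "the cat sat quietly while neural retrieval models ranked documents".split()
print(best_snippet(text, ["retrieval", "models"]))
```

The tightest span covering both query terms wins, because the smoothing denominator penalizes padding a matching span with unrelated words.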

16.
With the advancement of content-based retrieval technology, the importance of semantics for text information contained in images has attracted many researchers. An algorithm which automatically locates the textual regions in the input image will facilitate…

17.
张静, 俞辉. Journal of Computer Applications (计算机应用), 2008, 28(1): 199-201
To meet the needs of retrieving video containing complex semantic information, a multimodal-fusion video retrieval model based on relational algebra is proposed. The model makes full use of multimodal video features, including text, images, and high-level semantic concepts; it constructs query modules corresponding to multiple video features and, novelly, uses relational algebra expressions to fuse the multimodal information returned by the queries. Experiments show that the model fully exploits the advantages of multimodal video retrieval and of the relational-algebra-based fusion strategy in semantically complex video retrieval, and obtains good query results.
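In the simplest case, fusing per-modality result sets with relational-algebra-style operators reduces to set intersection and union over retrieved shot identifiers. The shot names and the composite query below are invented for illustration:

```python
def fuse_and(*result_sets):
    """Intersection fusion: shots retrieved by every modality."""
    out = set(result_sets[0])
    for s in result_sets[1:]:
        out &= set(s)
    return out

def fuse_or(*result_sets):
    """Union fusion: shots retrieved by any modality."""
    out = set()
    for s in result_sets:
        out |= set(s)
    return out

text_hits = {"shot1", "shot2", "shot5"}
visual_hits = {"shot2", "shot3", "shot5"}
concept_hits = {"shot2", "shot4", "shot5"}

# relational-algebra-style composite query: (text AND visual) OR concept
print(sorted(fuse_or(fuse_and(text_hits, visual_hits), concept_hits)))
```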

18.

Retrieving relevant information from Twitter is always a challenging task given its vocabulary mismatch, sheer volume, and noise. Representing the content of text tweets is a critical part of any microblog retrieval model, and deep neural networks can be used to learn good representations of text data that lead to better matching. In this paper, we are interested in improving both representation and retrieval effectiveness in microblogs. To that end, a hybrid deep-neural-network-based text representation model (HDNN) is proposed to extract effective feature representations for clustering-oriented microblog retrieval. HDNN combines recurrent and feedforward neural network architectures. Specifically, using a bidirectional LSTM, we first generate deep contextualized word representations that incorporate character n-grams from fastText. However, not all of these contextual embeddings, which live in a high-dimensional space, are important: some are redundant, correlated, and sometimes noisy, making the learned models overfitted, complex, and less interpretable. To deal with these problems, we propose a hybrid regularized-autoencoder method that combines an autoencoder with Elastic Net regularization for effective unsupervised feature selection and extraction. Our experimental results show that the performance of clustering, and especially of information retrieval in microblogs, depends heavily on the feature representation.
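The Elastic Net regularizer added to the autoencoder objective combines an L1 term, which drives redundant feature weights to zero (selection), with an L2 term, which keeps correlated features grouped. A minimal sketch of the combined loss with made-up weights and penalty coefficients (the paper's actual architecture and hyperparameters are not reproduced):

```python
import numpy as np

def elastic_net_penalty(weights, l1=0.01, l2=0.01):
    """L1 sparsity term plus L2 grouping term over the encoder weights."""
    return l1 * np.abs(weights).sum() + l2 * np.square(weights).sum()

def autoencoder_loss(x, x_hat, weights, l1=0.01, l2=0.01):
    """Reconstruction error plus the Elastic Net penalty."""
    return np.mean((x - x_hat) ** 2) + elastic_net_penalty(weights, l1, l2)

w = np.array([0.5, -0.5, 0.0])
x = np.array([1.0, 2.0])
x_hat = np.array([1.0, 2.0])  # perfect reconstruction: only the penalty remains
print(autoencoder_loss(x, x_hat, w))
```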

19.
20.
In image retrieval, global features related to color or texture are commonly used to describe the image content. The problem with this approach is that these global features cannot capture all parts of the image having different characteristics. Therefore, local computation of image information is necessary. By using salient points to represent local information, more discriminative features can be computed. In this paper, we compare a wavelet-based salient point extraction algorithm with two corner detectors using the criteria: repeatability rate and information content. We also show that extracting color and texture information in the locations given by our salient points provides significantly improved results in terms of retrieval accuracy, computational complexity, and storage space of feature vectors as compared to global feature approaches.
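The repeatability-rate criterion can be computed directly once detector outputs from the two images are mapped into a common frame; the point sets and pixel tolerance below are invented:

```python
import numpy as np

def repeatability_rate(points_a, points_b, eps=1.5):
    """Fraction of points in one image that reappear within `eps` pixels
    in the other image, assuming both sets are in a common frame."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # all pairwise distances between the two point sets
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    repeated = np.count_nonzero(dists.min(axis=1) <= eps)
    return repeated / min(len(a), len(b))

a = [(10, 10), (20, 20), (30, 30)]
b = [(10.5, 10.2), (20.1, 19.8), (80, 80)]
print(repeatability_rate(a, b))
```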


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号