Similar Articles
1.
The ranking of relevant documents has always been a crucial problem in information retrieval. This paper proposes a document re-ranking method based on topic term pairs, which improves precision while preserving recall. A topic term pair consists of two closely related words that jointly characterize the same topic, one drawn from the query and the other from the document. In this work, topic term pairs are selected with probabilistic latent semantic indexing, and documents are re-ranked according to how the pairs are distributed within them. Tests on the NTCIR-5 Chinese information retrieval document collection, evaluated with the standard TREC methodology, show that the method improves precision by 53.6% and 55.8% on the rigid and relax judgment sets, respectively.

2.
Query expansion is an effective way to improve retrieval performance. Traditional query expansion methods mostly expand the query according to the relevance of individual query terms, without fully considering the correlations among terms, among documents, and among queries, so the expansion results are unsatisfactory. To address this problem, this paper first constructs Markov networks over the term subspace and the document subspace to extract maximal term cliques and maximal document cliques. The term cliques are then divided into document-dependent and document-independent cliques according to the mapping between term cliques and document cliques, and a document-clique-dependent Markov network retrieval model is built for the initial retrieval. From the returned result set, a Markov network over the query subspace is constructed to extract maximal query cliques. Finally, the relevance probabilities between documents and the query are computed iteratively, yielding the final multi-layer Markov network information retrieval model based on iteration. Experimental results show that the proposed model improves retrieval effectiveness.

3.
Genetic Mining of HTML Structures for Effective Web-Document Retrieval
Web-documents have a number of tags indicating the structure of texts. Text segments marked by HTML tags have specific meaning which can be utilized to improve the performance of document retrieval systems. In this paper, we present a machine learning approach to mine the structure of HTML documents for effective Web-document retrieval. A genetic algorithm is described that learns the importance factors of HTML tags which are used to re-rank the documents retrieved by standard weighting schemes. The proposed method has been evaluated on artificial text sets and a large-scale TREC document collection. Experimental evidence supports that the tag weights are well trained by the proposed algorithm in accordance with the importance factors for retrieval, and indicates that the proposed approach significantly improves the performance in retrieval accuracy. In particular, the use of the document-structure mining approach tends to move relevant documents to upper ranks, which is especially important in interactive Web-information retrieval environments.

4.
闫蓉  高光来 《计算机应用》2016,36(8):2099-2102
To address the problem that traditional pseudo-relevance feedback (PRF) suffers from low-quality expansion sources and hence poor retrieval results, this paper proposes a ranking model based on the initial retrieval results (REM). The model first selects the top-ranked documents of the initial retrieval as the pseudo-relevant document set; it then re-ranks the documents in this set so that their relevance to the user's query intent is maximized while the similarity among the documents is minimized; finally, the top-ranked documents after re-ranking are used as the expansion source for a second round of feedback. Experimental results show that, compared with two traditional pseudo-feedback methods, the proposed ranking model obtains feedback documents relevant to the user's query intent and effectively improves retrieval performance.

5.
Applying EuroWordNet to Cross-Language Text Retrieval
We discuss ways in which EuroWordNet (EWN) can be used in multilingual information retrieval activities, focusing on two approaches to Cross-Language Text Retrieval that use the EWN database as a large-scale multilingual semantic resource. The first approach indexes documents and queries in terms of the EuroWordNet Inter-Lingual-Index, thus turning term weighting and query/document matching into language-independent tasks. The second describes how the information in the EWN database could be integrated with a corpus-based technique, thus allowing retrieval of domain-specific terms that may not be present in our multilingual database. Our objective is to show the potential of EuroWordNet as a promising alternative to existing approaches to Cross-Language Text Retrieval.

6.
One of the most important research topics in Information Retrieval is term weighting for document ranking and retrieval, such as TFIDF, BM25, etc. We propose a term weighting method that utilizes past retrieval results, consisting of the queries that contain a particular term, the retrieved documents, and their relevance judgments. A term's Discrimination Power (DP) is based on the difference between the term's average weights in the relevant and non-relevant retrieved document sets. This difference-based DP performs better than the ratio-based DP introduced in previous research. Our experimental results show that a term weighting scheme based on the discrimination power method outperforms a TF*IDF-based scheme.
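A minimal sketch of the difference-based DP described above, assuming the past retrieval results for a term are available as (weight-in-document, relevance-judgment) pairs; the particular weighting function applied to each document is not specified in the abstract and is left to the caller.

```python
def discrimination_power(past_results):
    """Difference-based Discrimination Power (DP) of a term: the gap between the
    term's average weight in relevant vs. non-relevant documents retrieved for
    past queries containing it. `past_results` is assumed to be a list of
    (term_weight_in_doc, is_relevant) pairs gathered from past retrieval results."""
    relevant = [w for w, rel in past_results if rel]
    non_relevant = [w for w, rel in past_results if not rel]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(relevant) - mean(non_relevant)
```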

7.
In this paper, we present a framework for a feedback process to implement a highly accurate document retrieval system. In the system, a document vector space is created dynamically to implement retrieval processing. The retrieval accuracy of the system depends on the vector space. When the vector space is created based on a specific purpose and interest of a user, highly accurate retrieval results can be obtained. In this paper, we present a method for analyzing and personalizing the vector space according to the purposes and interests of users. In order to optimize the document vector space, we defined and implemented functions for the operations of adding, deleting and weighting the terms that were used to create the vector space. By exploiting effectively and dynamically the classified-document information related to the queries, our methods allow users to retrieve relevant documents for their interests and purposes. Even if the search results of the initial retrieval space are not appropriate, by applying the proposed feedback operations, our proposed method effectively improves the search results. We also implemented an experimental search system for semantic document retrieval. Several experimental results, including comparisons of our method with the traditional relevance feedback method, are presented to clarify how retrieval accuracy was improved by the feedback process and how accurately documents that satisfied the purposes and interests of users were extracted.

8.
Current approaches to index weighting for information retrieval from texts are based on statistical analysis of the texts' contents. A key shortcoming of these indexing schemes, which consider only the terms in a document, is that they cannot extract semantically exact indexes that represent the semantic content of a document. To address this issue, we propose a new indexing formalism that considers not only the terms in a document, but also the concepts. In the proposed method, concepts are extracted by exploiting clusters of semantically related terms, referred to as concept clusters. Through experiments on the TREC-2 collection of Wall Street Journal documents, we show that the proposed method outperforms an indexing method based on term frequency (TF), especially with regard to the highest-ranked documents. Moreover, the index term dimension was 53.3% lower for the proposed method than for the TF-based method, which is expected to significantly reduce document search time in a real environment.

9.

The aim of probabilistic models is to define a retrieval strategy within which documents can be optimally ranked according to their relevance probability, with respect to a given request. In this scheme, the underlying probabilities are estimated according to a history of past queries along with their relevance judgments. Having evolved over the last twenty years, these estimations allow us to take both document frequency and within-document frequency into account.

In the current study, we suggest representing documents not only by index term vectors, as proposed by previous probabilistic models, but also by considering relevance hypertext links. These relationships, which provide additional evidence on document content, are established according to requests and relevance judgments, and may improve the ranking of the retrieved records into a sequence more likely to fulfill user intent. Thus, to enhance retrieval effectiveness, our learning retrieval scheme should modify: (1) the weight assigned to each indexing term, (2) the importance attached to each search term, and (3) the relationships between documents. Using a simple additive scheme applied after a ranked list of documents has been determined with the aid of a probabilistic retrieval strategy, our proposed solution is well suited to a hypertext system. Based on the CACM test collection, which includes 3,204 documents, and the CISI corpus (1,460 documents), we have built a hypertext and evaluated our proposed retrieval scheme. The retrieval effectiveness of this approach presents interesting results.

10.
DF or IDF? Using the Main-Feature Model in Web Information Retrieval
张敏  马少平  宋睿华 《软件学报》2005,16(5):1012-1020
One of the difficulties of Web information retrieval is the mismatch between short, ambiguous user queries and documents full of redundancy and noise. By analyzing the information features of Web documents, this paper introduces the concepts of main-feature terms, main-feature fields, and the main-feature space of a Web document, uses document frequency (DF) rather than the traditional inverse document frequency (IDF) to compute term weights over that space, and gives an improved similarity model. Three standard test runs were carried out with this model on two large-scale Web collections of 10 GB and 19 GB. Comparative experiments show that, relative to the traditional IDF approach, the DF-based main-feature weighting consistently improves system performance by a substantial margin on every evaluation metric, with gains of up to 18.6%.
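The abstract does not give the exact weighting formula, but the core idea, replacing IDF with a DF-based factor computed over a document's main-feature fields (for example title or anchor text), can be sketched roughly as follows; the log form of the DF factor, the field names, and the `df` table are assumptions, not the paper's formulation.

```python
import math
from collections import Counter

def df_main_feature_score(query_terms, main_feature_terms, df):
    """Score a Web document by the terms of its main-feature fields (title,
    anchor text, ...) using a DF-based factor: terms that appear in MANY
    documents contribute MORE, the opposite of IDF."""
    tf = Counter(main_feature_terms)          # term frequencies in the main-feature fields
    score = 0.0
    for term in query_terms:
        if term in tf:
            score += tf[term] * math.log(1.0 + df.get(term, 0))   # DF-based factor (assumption)
    return score
```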

11.
Term weighting is a strategy that assigns weights to terms to improve the performance of sentiment analysis and other text mining tasks. In this paper, we propose a supervised term weighting scheme based on two basic factors, the importance of a term in a document (ITD) and the importance of a term for expressing sentiment (ITS), to improve analysis performance. For ITD, we explore three definitions based on term frequency. Then, seven statistical functions are employed to learn the ITS of each term from training documents with category labels. Compared with previous unsupervised term weighting schemes originating from information retrieval, our scheme can make full use of the available labeling information to assign appropriate weights to terms. We have experimentally evaluated the proposed method against a state-of-the-art method. The experimental results show that our method outperforms it and produces the best accuracy on two of the three data sets.
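As a hedged illustration of the ITD x ITS combination, the sketch below pairs a log-scaled term frequency (one possible ITD) with an odds-ratio statistic learned from category-labeled documents (one of many possible ITS functions); neither is necessarily the pair the paper found best.

```python
import math

def supervised_term_weight(tf, df_pos, n_pos, df_neg, n_neg):
    """weight = ITD * ITS.
    ITD: log-scaled term frequency in the document.
    ITS: odds-ratio-style statistic from labeled training documents,
    with add-one smoothing to avoid division by zero."""
    itd = math.log(1.0 + tf)
    p_pos = (df_pos + 1.0) / (n_pos + 2.0)    # P(term | positive class)
    p_neg = (df_neg + 1.0) / (n_neg + 2.0)    # P(term | negative class)
    its = abs(math.log((p_pos * (1.0 - p_neg)) / (p_neg * (1.0 - p_pos))))
    return itd * its
```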

12.
This paper presents a multi-level matching method for document retrieval (DR) using a hybrid document similarity. Documents are represented by a multi-level structure comprising the document level and the paragraph level. This multi-level-structured representation is designed to model underlying semantics in a more flexible and accurate way than conventional flat term histograms can. The matching between documents is then transformed into an optimization problem using the Earth Mover's Distance (EMD). A hybrid similarity is used to synthesize the global and local semantics of documents to improve retrieval accuracy. In this paper, we have performed an extensive experimental study and verification. The results suggest that the proposed method works well for lengthy documents with evident spatial distributions of terms.
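EMD itself is a standard transportation problem; a small self-contained solver (via scipy's linear-programming routine) for comparing two normalized histograms under a ground-distance matrix might look like the sketch below. The multi-level representation and the hybrid similarity of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def emd(p, q, ground_dist):
    """Earth Mover's Distance between normalized histograms p (len m) and q (len n)
    with ground-distance matrix ground_dist (m x n), solved as a transportation LP:
    minimize sum_ij f_ij * d_ij  subject to  row sums = p, column sums = q, f >= 0."""
    m, n = len(p), len(q)
    c = np.asarray(ground_dist, dtype=float).reshape(-1)
    A_eq, b_eq = [], []
    for i in range(m):                        # all mass of p[i] must leave bin i
        row = np.zeros((m, n)); row[i, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(p[i])
    for j in range(n):                        # exactly q[j] mass must arrive at bin j
        col = np.zeros((m, n)); col[:, j] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(q[j])
    res = linprog(c, A_eq=np.vstack(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun
```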

13.
The vector space model (VSM) is an important model in information retrieval. The traditional VSM considers a feature term's frequency in the target document and its document frequency, but ignores an important piece of information: the position at which the term appears in the text. To address this problem, with documents represented via the Document Object Model, this paper attaches an additional coefficient to a feature term's weight according to the position where it appears, to reflect how terms at different positions differ in their ability to express the main idea of the document, with the aim of improving the ranking of returned documents and thus the user's retrieval experience. Simulation experiments verify that, compared with the traditional VSM, the method improves retrieval effectiveness.
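A minimal sketch of attaching a position-dependent coefficient to feature-term weights; the position names and coefficient values below are illustrative assumptions, not the ones used in the paper.

```python
def positional_term_weights(term_freqs_by_position, idf, position_coeff=None):
    """TF-IDF with an extra coefficient per structural position: occurrences in
    prominent positions (e.g., title, first paragraph) count more toward the
    document vector than occurrences in the plain body text."""
    if position_coeff is None:
        position_coeff = {"title": 3.0, "first_paragraph": 1.5, "body": 1.0}
    weights = {}
    for position, tfs in term_freqs_by_position.items():
        coeff = position_coeff.get(position, 1.0)
        for term, tf in tfs.items():
            weights[term] = weights.get(term, 0.0) + coeff * tf * idf.get(term, 0.0)
    return weights
```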

14.
Feature weighting in sentiment analysis generally considers two factors: the importance of a feature in the document (ITD) and the importance of the feature for expressing sentiment (ITS). Building on two supervised feature weighting algorithms with high classification accuracy in this field, this paper proposes a new ITS algorithm. The new algorithm considers both a feature's document frequency within one class of documents (the number of documents in that class that contain the feature) and the proportion this represents of the feature's total document frequency, so that features occurring mainly, and in large numbers, in documents of a single class receive higher ITS weights. Experiments show that the new algorithm improves the accuracy of sentiment classification.
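The abstract's idea, rewarding features that are both frequent within one class and concentrated in that class, can be sketched as below; multiplying the two factors and the log scaling are assumptions, not the paper's exact formula.

```python
import math

def its_concentration(df_in_class, df_total):
    """ITS for one class: (log-scaled) document frequency within the class,
    times the proportion of the feature's total document frequency that falls
    in this class, so features occurring mainly and often in one class score highest."""
    if df_total == 0:
        return 0.0
    return math.log(1.0 + df_in_class) * (df_in_class / df_total)
```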

15.
Wang  Tao  Cai  Yi  Leung  Ho-fung  Lau  Raymond Y. K.  Xie  Haoran  Li  Qing 《Knowledge and Information Systems》2021,63(9):2313-2346

In text categorization, the Vector Space Model (VSM) has been widely used for representing documents, in which a document is represented by a vector of terms. Since different terms contribute to a document's semantics to various degrees, a number of term weighting schemes have been proposed for the VSM to improve text categorization performance. Much evidence shows that the performance of a term weighting scheme often varies across different text categorization tasks, while the mechanism underlying this variability remains unclear. Moreover, existing schemes often weight a term with respect to a category locally, without considering the global distribution of a term's occurrences across all categories in a corpus. In this paper, we first systematically examine the pros and cons of existing term weighting schemes in text categorization and explore the reasons why some schemes with sound theoretical bases, such as the chi-square test and information gain, perform poorly in empirical evaluations. By measuring how concentrated a term's occurrences are across all categories in a corpus, we then propose a series of entropy-based term weighting schemes to measure the distinguishing power of a term in text categorization. Through extensive experiments on five different datasets, the proposed term weighting schemes consistently outperform the state-of-the-art schemes. Moreover, our findings shed new light on how to choose and develop an effective term weighting scheme for a specific text categorization task.
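As an illustration of entropy-based distinguishing power (the exact normalization used in the paper is not reproduced), a term whose occurrences concentrate in few categories has low entropy and therefore receives a high weight:

```python
import math

def entropy_term_weight(df_per_category):
    """df_per_category maps category -> number of documents in that category
    containing the term; returns a value near 1 for a term concentrated in one
    category and near 0 for a term spread uniformly over all categories."""
    total = sum(df_per_category.values())
    n_categories = len(df_per_category)
    if total == 0 or n_categories < 2:
        return 0.0
    probs = [c / total for c in df_per_category.values() if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(n_categories)   # normalized by the maximum entropy
```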


16.
Most of the common techniques in text retrieval are based on the statistical analysis of terms (words or phrases). Statistical analysis of term frequency captures the importance of the term within a document only. Thus, to achieve a more accurate analysis, the underlying model should indicate terms that capture the semantics of text. In this case, the model can capture terms that represent the concepts of the sentence, which leads to discovering the topic of the document. In this paper, a new concept-based retrieval model is introduced. The proposed concept-based retrieval model consists of a conceptual ontological graph (COG) representation and a concept-based weighting scheme. The COG representation captures the semantic structure of each term within a sentence. Then, all the terms are placed in the COG representation according to their contribution to the meaning of the sentence. The concept-based weighting analyzes terms at the sentence and document levels. This is different from the classical approach of analyzing terms at the document level only. The weighted terms are then ranked, and the top concepts are used to build a concept-based document index for text retrieval. The concept-based retrieval model can effectively discriminate between unimportant terms with respect to sentence semantics and terms which represent the concepts that capture the sentence meaning. Experiments using the proposed concept-based retrieval model on different data sets in text retrieval are conducted. The experiments provide a comparison between traditional approaches and the concept-based retrieval model obtained by the combined approach of the conceptual ontological graph and the concept-based weighting scheme. The evaluation of results is performed using three quality measures: the preference measure (bpref), precision at 10 documents retrieved (P(10)), and the mean uninterpolated average precision (MAP). All of these quality measures are improved when the newly developed concept-based retrieval model is used, confirming that such a model enhances the quality of text retrieval.

17.
Term frequency-inverse document frequency (TF-IDF), one of the most popular feature (also called term or word) weighting methods used to describe documents in the vector space model and in applications related to text mining and information retrieval, can effectively reflect the importance of a term in a collection of documents in which all documents play the same role. However, TF-IDF does not account for differences in a term's IDF weighting when documents play different roles in the collection, such as the positive and negative training sets in text classification. In view of this, this paper presents a novel TF-IDF-improved feature weighting approach, which reflects the importance of the term in the positive and the negative training examples, respectively. We also build a weighted voting classifier by iteratively applying the support vector machine algorithm, and implement one-class support vector machine and Positive Example Based Learning methods for comparison. During classification, an improved 1-DNF algorithm, called 1-DNFC, is also adopted, aiming at identifying more reliable negative documents from the set of unlabeled examples. The experimental results show that the term frequency inverse positive-negative document frequency-based classifier outperforms the TF-IDF-based one, and that the weighted voting classifier also exceeds the one-class support vector machine-based classifier and the Positive Example Based Learning-based classifier. Copyright © 2013 John Wiley & Sons, Ltd.
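A hedged sketch of the "inverse positive-negative document frequency" idea: the document-frequency part is computed separately over the positive and the negative training sets, so a term common in one class but rare in the other keeps a high weight. The way the two per-class factors are combined below is an assumption, not the paper's formula.

```python
import math

def tf_ipndf(tf, df_pos, n_pos, df_neg, n_neg):
    """Term frequency times the absolute difference between the per-class IDF
    values computed over the positive and negative training sets (add-one smoothed)."""
    idf_pos = math.log((n_pos + 1.0) / (df_pos + 1.0))
    idf_neg = math.log((n_neg + 1.0) / (df_neg + 1.0))
    return tf * abs(idf_pos - idf_neg)
```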

18.
An unsolved problem in logic-based information retrieval is how to automatically obtain logical representations for documents and queries. This problem limits the impact of logical models for information retrieval because their full expressive power cannot be harnessed. In this paper we propose a method for producing logical document representations which goes further than other simplistic "bag-of-words" approaches. The suggested procedure adopts popular information retrieval heuristics, such as document length corrections and global term distribution. This work includes a report of several experiments applying partial document representations in the context of a propositional model of information retrieval. The benefits of this expressive framework, powered by the new logical indexing approach, become apparent in the evaluation.

19.
Compared with traditional retrieval models based on whole documents, the sentence-based information retrieval model represents each document as a bag of sentences, derives from these sentences the probability of generating the query given each document, and ranks documents accordingly. The advantage of this model is that it can flexibly incorporate term-correlation information at the sentence level to improve the ranking. Comparative experiments against a traditional statistical language model show that the model effectively improves ranking precision.
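One simple way to realize a bag-of-sentences retrieval model is to score each sentence with a Dirichlet-smoothed query-likelihood language model and take the best sentence score as the document score; this is an illustrative sketch, not the estimator used in the paper, and the smoothing parameter `mu` is an assumption.

```python
import math
from collections import Counter

def sentence_level_score(query_terms, sentences, coll_tf, coll_len, mu=2000.0):
    """Score a document represented as a bag of sentences: for each sentence,
    compute the Dirichlet-smoothed log-probability of generating the query,
    and return the best score over the document's sentences."""
    best = float("-inf")
    for sentence in sentences:                # each sentence is a list of terms
        tf = Counter(sentence)
        slen = len(sentence)
        log_p = 0.0
        for term in query_terms:
            # collection probability with add-one smoothing over the vocabulary
            p_coll = (coll_tf.get(term, 0) + 1.0) / (coll_len + len(coll_tf))
            log_p += math.log((tf.get(term, 0) + mu * p_coll) / (slen + mu))
        best = max(best, log_p)
    return best
```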

20.
Since engineering design is heavily informational, engineers want to retrieve existing engineering documents accurately during the product development process. However, engineers have difficulties searching for documents because of low retrieval accuracy. One of the reasons for this is the limitation of existing document ranking approaches, in which relationships between terms in documents are not considered when assessing the relevance of retrieved documents. Therefore, we propose a new ranking approach that evaluates document relevance to a given query more correctly. Our approach exploits a domain ontology to consider relationships among terms in the relevance scoring process. Based on the domain ontology, the semantics of a document are represented as a graph (called a Document Semantic Network), and the proposed relation-based weighting schemes are then used to evaluate the graph and calculate the document relevance score. In our ranking approach, user interests and search intent are also considered in order to provide personalized services. The experimental results show that the proposed approach outperforms existing ranking approaches. Precisely representing the semantics of a document as a graph and using multiple relation-based weighting schemes are the key factors underlying this notable improvement.
