Similar Documents
20 similar documents found (search time: 265 ms)
1.
A New Feature Selection Algorithm Based on Multiple Heuristics   Cited: 25 (self-citations: 1, citations by others: 24)
朱颢东, 钟勇 《计算机应用》2009, 29(3): 849-851
In query expansion methods that weight candidate keywords by the contexts in which the query keywords appear in the retrieved results, the words with the largest weights are chosen as expansion terms. Because the candidates are drawn from keyword contexts across all retrieved documents, this approach suffers from topic drift. To address this problem, a method is proposed that filters the initial retrieval results, keeping only those whose context is similar to that of the source document, and selects expansion terms from the filtered set. Experimental results show that the method yields more appropriate query expansion terms.
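The filtering step described above can be sketched as a cosine-similarity screen over bag-of-words vectors. The threshold, whitespace tokenizer, and toy data below are illustrative assumptions, not details from the paper:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def filter_results(source_context, results, threshold=0.5):
    """Keep only retrieved texts whose wording resembles the source
    document's context; expansion terms would then be drawn from these."""
    src = Counter(source_context.split())
    return [r for r in results if cosine(src, Counter(r.split())) >= threshold]

docs = ["apple fruit orchard harvest", "apple iphone launch event"]
kept = filter_results("fruit orchard apple tree", docs)
```

Only the first result survives the screen: it shares three of four context words with the source, while the second shares just one.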

2.
A key step in semantic query expansion is the selection of expansion terms and the computation of their weights. An improved local context analysis method, OLCA (Optimized Local Context Analysis), is proposed. OLCA is applied to multi-keyword queries with differentiated weights: combined with the WordNet concept tree, it expands the initial query terms both semantically and from the actual query corpus, and it adjusts each expansion term's weight according to the positions of the keywords in the initial query and the relations between terms in the candidate expansion set. Experiments show that, compared with purely statistical or purely semantic query expansion, both precision and recall improve substantially.

3.
Query expansion is an effective way to optimize queries in information retrieval. To address the mismatch between user query keywords and document index terms, a query optimization algorithm based on local category analysis and a genetic algorithm is proposed. The algorithm runs in two stages. In the first stage, the user's query Qold is expanded: expansion terms selected by local category analysis form a new query Qnew. In the second stage, weights are assigned to Qnew: a genetic algorithm tunes the weights of the expanded query to obtain an optimal query vector, which is then used to re-retrieve the documents in the test collection. Experimental results show that the algorithm outperforms both local context analysis and local category analysis used alone.

4.
In information retrieval, the ambiguity caused by short queries is one of the main factors degrading retrieval effectiveness; query expansion and ontology-based expansion can mitigate this problem. A query expansion method combining ontologies and local context analysis is proposed. Ontology expansion applies ontology inference rules to the short query terms to obtain a set of logically related inference results, adding standardized relational information to the query. Local context analysis examines the document collection and extracts, from the top m documents most relevant to the user's query, the n expansion terms most relevant to it, adding statistical expansion information. The two sets of expansion terms are merged, and the search results are then ranked by the relevance of the expanded query terms. The method combines the strengths of both techniques and expands keywords from a semantic perspective. Experiments show that it effectively improves both recall and precision.

5.
To address the term-mismatch problem in information retrieval, a local-feedback query expansion algorithm based on frequent itemsets and relevance is proposed. A query expansion model and a weighting scheme for expansion terms are designed. From the top n initially retrieved documents, frequent itemsets containing both query terms and non-query terms are mined; the non-query terms in these itemsets serve as candidate expansion terms. The relevance of each candidate to the whole query is computed, and the final expansion terms are chosen accordingly. Experimental results show that the algorithm effectively improves retrieval performance.

6.
Entity linking comprises named entity recognition, query expansion, candidate entity selection, feature extraction, and ranking. For query expansion, a word-vector-based method is proposed. The method trains word vectors for the corpus with the continuous bag-of-words (CBOW) model and takes the words closest to the query word as expansion terms. The semantic relatedness between words that word vectors mine from the corpus complements rule-based query expansion and is used to recall candidate entities. During feature extraction, the latent Dirichlet allocation (LDA) topic similarity between documents is used as one feature. When computing document similarity, the vector dimensions are no longer high-frequency words but related words derived from word vectors, yielding a semantic document-similarity feature. Finally, a pointwise learning-to-rank model links the query word to the corresponding candidate entity. Experimental results show the method achieves an F1 of 0.71, a good result.
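Selecting expansion words as nearest neighbours in a word-vector space can be sketched as follows. The toy 2-dimensional vectors stand in for CBOW-trained embeddings and are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand_query(word, vectors, k=2):
    """Return the k vocabulary words whose vectors lie closest to the
    query word's vector."""
    q = vectors[word]
    ranked = sorted((w for w in vectors if w != word),
                    key=lambda w: -cosine(q, vectors[w]))
    return ranked[:k]

# Toy 2-d embeddings standing in for CBOW-trained word vectors.
vecs = {"bank": [1.0, 0.1], "finance": [0.9, 0.2], "river": [0.1, 1.0]}
nearest = expand_query("bank", vecs, k=1)
```

With these vectors, "finance" is returned as the nearest neighbour of "bank" and would become the expansion term.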

7.
The keywords used to retrieve spam SMS messages often fail to match the terms in the SMS text collection, hurting retrieval effectiveness. A retrieval method based on context-driven query expansion is therefore proposed: expansion terms are selected from the contexts in which the keywords appear, while also considering each expansion term's relation to the whole query and the positions of the query terms. Experiments on 3 000 SMS messages show that the method improves average precision.

8.
Because natural language is inherently ambiguous and diverse, a few keywords can hardly express the user's true information need. Query expansion mines the latent information in the original query terms and effectively strengthens a retrieval system's understanding. This paper adds a sentence-weight notion to the context analysis formula: the more important a sentence returned for the original query terms is assumed to be, the more relevant the words it contains are to the query. It further assumes that a word's positional and dependency relations to the query terms are also important cues for selecting expansion terms. Accordingly, two methods are proposed: Sentence Weight & Position-based Context Analysis (SWPCA) and Sentence Weight & Dependency-based Context Analysis (SWDCA). Both query expansion techniques are applied to TREC definitional question answering; the data show that both perform well, with SWDCA performing better.

9.
Term weighting is widely used in information retrieval models. To address the limits of the term-independence assumption of the traditional bag-of-words model, this paper applies a term-importance-based weighting method within a Markov network query expansion model. The method first builds a term graph for each document, derives from it the term co-occurrence matrix and the inter-term transition probability matrix, and then obtains term weights with a Markov chain computation. These weights are plugged into the Markov network expansion model. Experiments on five standard collections show that the term-importance-based Markov network query expansion model retrieves better results than the traditional bag-of-words approach.
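The Markov-chain weighting idea, turning a term co-occurrence matrix into a transition matrix and taking the stationary distribution as term importance, can be sketched like this. The toy co-occurrence counts are illustrative and the paper's exact graph construction may differ:

```python
def term_weights(cooc, iters=200):
    """Term importance as the stationary distribution of the Markov chain
    whose transitions are the row-normalized co-occurrence counts."""
    terms = sorted(cooc)
    P = {t: {u: c / sum(cooc[t].values()) for u, c in cooc[t].items()}
         for t in terms}
    w = {t: 1.0 / len(terms) for t in terms}
    for _ in range(iters):          # power iteration
        nxt = {t: 0.0 for t in terms}
        for t in terms:
            for u, p in P[t].items():
                nxt[u] += w[t] * p
        w = nxt
    return w

# Toy symmetric co-occurrence counts; for such a reversible chain the
# stationary weight of each term is proportional to its total count mass.
cooc = {"a": {"b": 2, "c": 1}, "b": {"a": 2, "c": 1}, "c": {"a": 1, "b": 1}}
w = term_weights(cooc)
```

Here "a" and "b" each carry total mass 3 and "c" carries 2, so the weights converge to 0.375, 0.375, and 0.25.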

10.
乔亚男, 齐勇, 史椸, 侯迪, 王晓 《计算机科学》2009, 36(7): 197-201
Traditional information retrieval models assume that the keywords in a query are coordinate, but a user's need should often be abstracted as a series of keyword groups whose members hold tighter semantic relations; this is the proximity-word retrieval problem defined here. A weight-matrix framework for proximity-word retrieval is proposed: documents and queries are abstracted into a document weight matrix and a query weight matrix, and retrieval is performed by computing the similarity between the two matrices. Experimental results show that, for proximity-word retrieval, traditional models are only a simplified solution; the weight-matrix framework fits the problem better in both theory and form, and precision improves significantly.

11.
Keyword search enables web users to easily access XML data without understanding complex data schemas. However, the inherent ambiguity of keyword search makes it hard to select qualified relevant results matching the keywords. To solve this problem, researchers have put much effort into ranking models that distinguish relevant from irrelevant passages, such as the highly cited TF*IDF and BM25. However, these statistics-based ranking methods mostly consider term frequency, inverse document frequency, and length as ranking factors, ignoring the distribution of and connections between different keywords. Hence, these widely used methods fail to recognize irrelevant results with high term frequency, indicating a performance limitation. In this paper, a new search system, XDist, is proposed to attack the aforementioned problems. In XDist, we first use the semantic query model maximal lowest common ancestor (MAXLCA) to recognize the returned results of a given query; these candidate results are then ranked by BM25. In particular, XDist re-ranks the top several results by a combined distribution measurement (CDM) which considers four criteria: term proximity, intersection of keyword classes, degree of integration among keywords, and quantity variance of keywords. The weights of the four measures in CDM are trained by a listwise learning-to-optimize method. Experimental results on the INEX evaluation platform show that the re-ranking method CDM effectively improves the baseline BM25 by 22% under iP[0.01] and 18% under MAiP. The semantic model MAXLCA and the search engine XDist also perform best in their respective related fields.
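For reference, the baseline BM25 ranking function used in XDist has a standard form. This is a minimal textbook sketch over tokenized documents, not the XDist implementation:

```python
import math

def bm25(query, docs, k1=1.2, b=0.75):
    """Score each document (a token list) against the query tokens."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    scores = []
    for d in docs:
        s = 0.0
        for q in query:
            tf = d.count(q)
            df = sum(1 for doc in docs if q in doc)
            # +1 inside the log keeps IDF non-negative (Lucene-style variant)
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["cat", "sat"], ["dog", "ran", "far"]]
scores = bm25(["cat"], docs)
```

A document with no query term scores zero, since the term-frequency factor in the numerator vanishes.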

12.
The radial basis function (RBF) centers play different roles in determining the classification capability of a Gaussian radial basis function neural network (GRBFNN) and should hold different width values. However, it is very hard and time-consuming to optimize the centers and widths at the same time. In this paper, we introduce a new insight into this problem. We explore the impact of the definition of widths on the selection of the centers, propose an optimization algorithm for the RBF widths in order to select proper centers from the center candidate pool, and improve the classification performance of the GRBFNN. The design of the objective function of the optimization algorithm is based on the local mapping capability of each Gaussian RBF. Further, in the design of the objective function, we also handle the imbalance problem that may occur even when different local regions have the same number of examples. Finally, the recursive orthogonal least squares (ROLS) and genetic algorithm (GA) methods, which are usually adopted to optimize the RBF centers, are separately used to select the centers from the center candidates with the initialized widths, in order to verify the validity of our proposed width initialization strategy for the selection of centers. Our experimental results show that, compared with the heuristic width-setting method, the width optimization strategy makes the selected centers more appropriate and improves the classification performance of the GRBFNN. Moreover, the GRBFNN constructed by our method can attain better classification performance than the RBF LS-SVM, a state-of-the-art classifier.
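The Gaussian RBF hidden layer with per-center widths that the paper optimizes can be written compactly. This sketch shows only the forward activation, not the width-optimization objective:

```python
import math

def gaussian_rbf(x, center, width):
    """Response of a single Gaussian radial basis function."""
    dist2 = sum((a - c) ** 2 for a, c in zip(x, center))
    return math.exp(-dist2 / (2.0 * width ** 2))

def hidden_layer(x, centers, widths):
    """GRBFNN hidden-layer activations: one Gaussian per center,
    each center paired with its own width."""
    return [gaussian_rbf(x, c, w) for c, w in zip(centers, widths)]

h = hidden_layer((0.0, 0.0), [(0.0, 0.0), (3.0, 4.0)], [1.0, 2.0])
```

A unit at its own center responds with 1.0, while a distant center (here at Euclidean distance 5 with width 2) responds near zero; widening a center's width flattens its local region of influence.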

13.
Research on Text Retrieval Based on Weighted Query Expansion Terms   Cited: 1 (self-citations: 1, citations by others: 0)
This paper analyzes keyword-based text retrieval, whose recall is low because the query terms are not expanded, and concept-based retrieval, which partially expands the retrieval terms but does not distinguish the weights of the expansion terms from those of the original query terms. Using inter-term semantic similarity, an expand-and-decrease weighting method for query expansion terms is proposed; the original query terms and expansion terms, weighted by this method, are used to build a vector space model for text retrieval. Experiments show that, when the constructed model retrieves text, its overall...

14.
Query expansion is an effective way to improve retrieval effectiveness. Traditional query expansion methods mostly expand using the relevance of individual query terms, without fully considering the correlations among terms, among documents, and among queries, so the expansion works poorly. To address this, the paper first builds Markov networks for the term subspace and the document subspace to extract maximal term cliques and maximal document cliques. Based on the mapping between term cliques and document cliques, term cliques are divided into document-dependent and document-independent cliques, and a document-clique-dependent Markov network retrieval model performs an initial retrieval. From the returned result set, a Markov network of the query subspace is built to extract maximal query cliques. Finally, the document-query relevance probabilities are computed iteratively, yielding the final multi-layer Markov network retrieval model based on iteration. Experimental results show that the model improves retrieval effectiveness.

15.
This paper considers the use of text signatures, fixed-length bit string representations of document content, in an experimental information retrieval system: such signatures may be generated from the list of keywords characterising a document or a query. A file of documents may be searched in a bit-serial parallel computer, such as the ICL Distributed Array Processor, using a two-level retrieval strategy in which a comparison of a query signature with the file of document signatures provides a simple and efficient means of identifying those few documents that need to undergo a computationally demanding, character matching search. Text retrieval experiments using three large collections of documents and queries demonstrate the efficiency of the suggested approach.
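The two-level signature strategy can be sketched with superimposed hashing: a cheap bitwise screen first, with only the surviving documents passed to exact character matching. The 64-bit size and MD5-based bit assignment are illustrative choices, not the ICL DAP design:

```python
import hashlib

def _bit(keyword, bits):
    # Deterministic hash so signatures are stable across runs.
    return int(hashlib.md5(keyword.encode()).hexdigest(), 16) % bits

def signature(keywords, bits=64):
    """Superimpose one bit per keyword into a fixed-length bit string."""
    sig = 0
    for kw in keywords:
        sig |= 1 << _bit(kw, bits)
    return sig

def first_stage(query_keywords, doc_signatures, bits=64):
    """Cheap bitwise screen: a document can match only if its signature
    covers every query bit (false positives possible, false negatives not).
    Survivors go on to the expensive character-matching stage."""
    q = signature(query_keywords, bits)
    return [i for i, s in enumerate(doc_signatures) if s & q == q]

sigs = [signature(["apple", "pie"]), signature(["car", "engine"])]
matches = first_stage(["apple", "pie"], sigs)
```

Because signatures are superimposed, unrelated keywords can collide on the same bit, which is why the second, exact stage is still needed to weed out false positives.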

16.
The Web is a source of valuable information, but the process of collecting, organizing, and effectively utilizing the resources it contains is difficult. We describe CorpusBuilder, an approach for automatically generating Web search queries for collecting documents matching a minority concept. The concept used for this paper is that of text documents belonging to a minority natural language on the Web. Individual documents are automatically labeled as relevant or nonrelevant using a language filter, and the feedback is used to learn what query lengths and inclusion/exclusion term-selection methods are helpful for finding previously unseen documents in the target language. Our system learns to select good query terms using a variety of term scoring methods. Using odds ratio scores calculated over the documents acquired was one of the most consistently accurate query-generation methods. To reduce the number of estimated parameters, we parameterize the query length using a Gamma distribution and present empirical results with learning methods that vary the time horizon used when learning from the results of past queries. We find that our system performs well whether we initialize it with a whole document or with a handful of words elicited from a user. Experiments applying the same approach to multiple languages are also presented showing that our approach generalizes well across several languages regardless of the initial conditions.
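The odds-ratio term scoring that CorpusBuilder found most consistently accurate can be sketched with standard 0.5-smoothed counts. The toy "language" data below is illustrative:

```python
import math

def odds_ratio(term, relevant, nonrelevant):
    """Odds-ratio score of a candidate query term: how much more likely it
    is to occur in relevant documents than in non-relevant ones."""
    a = sum(1 for d in relevant if term in d)      # relevant docs with term
    b = len(relevant) - a                          # relevant docs without it
    c = sum(1 for d in nonrelevant if term in d)   # non-relevant with term
    d = len(nonrelevant) - c                       # non-relevant without it
    # 0.5 smoothing avoids log(0) and division by zero
    return math.log(((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5)))

relevant = [{"suomi", "kieli"}, {"suomi", "sana"}]
nonrelevant = [{"english", "word"}]
```

Terms concentrated in relevant (target-language) documents score positive and make good inclusion terms; terms concentrated in non-relevant documents score negative and suit exclusion.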

17.
Because of users’ growing utilization of unclear and imprecise keywords when characterizing their information need, it has become necessary to expand their original search queries with additional words that best capture their actual intent. The selection of the terms that are suitable for use as additional words is in general dependent on the degree of relatedness between each candidate expansion term and the query keywords. In this paper, we propose two criteria for evaluating the degree of relatedness between a candidate expansion word and the query keywords: (1) co-occurrence frequency, where more importance is attributed to terms occurring in the largest possible number of documents where the query keywords appear; (2) proximity, where more importance is assigned to terms having a short distance from the query terms within documents. We also employ the strength Pareto fitness assignment in order to satisfy both criteria simultaneously. The results of our numerical experiments on MEDLINE, the online medical information database, show that the proposed approach significantly enhances the retrieval performance as compared to the baseline.
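The two proposed criteria can be sketched directly over tokenized documents; the exact normalizations in the paper may differ:

```python
def cooccurrence(term, query_terms, docs):
    """Criterion 1: number of documents where the candidate term appears
    together with at least one query keyword."""
    return sum(1 for d in docs if term in d and any(q in d for q in query_terms))

def proximity(term, query_terms, docs):
    """Criterion 2: favors candidates occurring close to a query keyword;
    returns 1/(1 + smallest token distance), or 0 if they never co-occur."""
    best = float("inf")
    for d in docs:
        t_pos = [i for i, w in enumerate(d) if w == term]
        q_pos = [i for i, w in enumerate(d) if w in query_terms]
        for i in t_pos:
            for j in q_pos:
                best = min(best, abs(i - j))
    return 0.0 if best == float("inf") else 1.0 / (1.0 + best)

docs = [["heart", "attack", "risk"], ["heart", "disease"]]
co = cooccurrence("attack", ["heart"], docs)
prox = proximity("attack", ["heart"], docs)
```

A Pareto-based selection, as in the paper, would then keep candidates that no other candidate dominates on both scores at once.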

18.
Computing Word-Concept Relatedness in Semantic Query Expansion   Cited: 16 (self-citations: 0, citations by others: 16)
田萱, 杜小勇, 李海华 《软件学报》2008, 19(8): 2043-2053
In semantics-based query expansion, in order to find the concepts that describe the semantics of the query need, computing word-concept relatedness is a key step. For this computation, a method called K2CM (keyword-to-concept method) is proposed. K2CM computes word-concept relatedness from two aspects: the word-document-concept membership degree and the word-concept co-occurrence degree. The membership degree derives from the word-to-concept membership relations in an annotated document collection, i.e., a word appears in certain documents and those documents are annotated with certain concepts. The co-occurrence degree builds on the co-occurrence of word-concept pairs, additionally taking into account their textual distance and document distribution characteristics. Semantic retrieval experiments on three different types of datasets show that, compared with traditional methods, K2CM-based semantic query expansion improves retrieval effectiveness.

19.
Keyword search is the most popular technique for retrieving information from XML (eXtensible Markup Language) documents. It enables users to easily access XML data without learning a structured query language or studying complex data schemas. Existing traditional keyword query methods are mainly based on LCA (lowest common ancestor) semantics, in which the returned results match all keywords at the granularity of elements. In many practical applications, information is often uncertain and vague. As a result, how to identify useful information in fuzzy data is becoming an important research topic. In this paper, we focus on keyword querying over fuzzy XML data at the granularity of objects. By introducing the concept of an “object tree”, we propose query semantics for keyword queries at the object level. We find the minimum whole-matching result object trees, which contain all keywords, and the partial-matching result object trees, which contain some of the keywords, and return the root nodes of these result object trees as query results. To effectively and accurately identify the top-K answers with the highest scores, we propose a scoring mechanism that considers tf*idf document relevance, users’ preferences, and the possibilities of results. We propose a stack-based algorithm named object-stack to obtain the top-K answers with the highest scores. Experimental results show that the object-stack algorithm outperforms traditional XML keyword query algorithms significantly and achieves high-quality query results with high search efficiency on fuzzy XML documents.

20.
Many techniques have been proposed to address the problem of mocap data retrieval by using a short motion as input, and they are commonly categorized as content-based retrieval. However, it is difficult for users who lack the equipment to create mocap data samples to take advantage of them. On the contrary, simple retrieval methods that only require text as input can be used by everyone. Nevertheless, not only is it unclear how to measure mocap data relevance with regard to textual search queries, but the search results are also limited to the mocap data samples whose annotations contain the words in the search query. In this paper, the authors propose a novel method that builds on the TF (term frequency) and IDF (inverse document frequency) weights, commonly used in text document retrieval, to measure mocap data relevance with regard to textual search queries. We extract segments from mocap data samples and regard these segments as words in text documents. However, instead of using IDF, which prioritizes infrequent segments, we opt to use DF (document frequency) to prioritize frequent segments. Since motions are not required as input, everybody will be able to take advantage of our approach, and we believe that our work also opens up possibilities for applying developed text retrieval methods to mocap data retrieval.
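The TF*DF weighting of mocap segments (document frequency instead of inverse document frequency, so that frequent segments are favored) can be sketched as follows, treating each sample as a list of segment labels. The toy corpus is illustrative:

```python
def tf_df_weights(sample, corpus):
    """Weight each segment of a mocap sample by TF * DF: its frequency in
    the sample times the fraction of corpus samples containing it (DF rather
    than IDF, so frequent segments are prioritized)."""
    N = len(corpus)
    weights = {}
    for seg in set(sample):
        tf = sample.count(seg) / len(sample)
        df = sum(1 for s in corpus if seg in s) / N
        weights[seg] = tf * df
    return weights

corpus = [["walk", "walk", "jump"], ["walk", "run"]]
w = tf_df_weights(corpus[0], corpus)
```

In this toy corpus the "walk" segment, frequent both inside the sample and across the corpus, outweighs the rarer "jump" segment, which is the opposite of what an IDF factor would produce.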
