Similar Documents
19 similar documents found.
1.
On improving document retrieval effectiveness: in scientific and technical literature retrieval, traditional keyword-matching methods lack any understanding or processing of knowledge. They can only retrieve documents that contain the query keywords and cannot retrieve documents that are semantically similar to them, so the results satisfy neither the recall nor the precision requirements of searchers. This work introduces fuzzy rough set theory into information retrieval to remedy these shortcomings of the retrieval model. First, a conventional mutual-information function computes semantic association weights between index terms and builds a fuzzy approximation space. Next, the fuzzy vector representation of each document is obtained with the TF-IDF method; when computing the importance weight of an index term, both its frequency and its position are taken into account, while the fuzzy vector representation of the query is determined entirely by the user's interests. Finally, the fuzzy approximation space expands keywords into concepts, mining classes of similar concepts, and the upper and lower approximation sets of the fuzzy representations of documents and queries are computed. Document-query matching is no longer keyword matching: conjunctive and disjunctive Boolean formulas perform fuzzy matching over the upper and lower approximation sets, and results are returned ranked by similarity value. Simulation tests show that the method improves retrieval performance on scientific and technical documents and supports concept-level retrieval of scientific literature.
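A minimal sketch of the general idea, under simplifying assumptions: a normalized pointwise-mutual-information matrix between index terms stands in for the fuzzy approximation space, TF-IDF vectors represent documents, and the query is expanded through that relation before matching. The paper's positional weighting, upper/lower approximation sets, and Boolean fuzzy matching are not reproduced; the toy corpus and function names are illustrative.

```python
# Simplified sketch (not the paper's exact formulation): a PMI-based term-term
# relatedness matrix serves as the fuzzy relation, TF-IDF vectors represent
# documents, and the query is expanded through the relation before scoring.
import math
from collections import Counter

docs = [
    "fuzzy rough set theory for information retrieval",
    "keyword matching retrieval lacks semantic understanding",
    "tf idf weighting of index terms in documents",
]
tokenized = [d.split() for d in docs]
vocab = sorted({t for d in tokenized for t in d})
N = len(tokenized)
df = Counter(t for d in tokenized for t in set(d))

def tfidf(doc):
    tf = Counter(doc)
    return {t: (1 + math.log(c)) * math.log(N / df[t]) for t, c in tf.items()}

def pmi(t1, t2):
    # co-occurrence of two index terms within a document, as mutual information
    both = sum(1 for d in tokenized if t1 in d and t2 in d)
    if both == 0:
        return 0.0
    return max(0.0, math.log((both / N) / ((df[t1] / N) * (df[t2] / N))))

max_pmi = max(pmi(a, b) for a in vocab for b in vocab if a != b) or 1.0
fuzzy_rel = {(a, b): (1.0 if a == b else pmi(a, b) / max_pmi)
             for a in vocab for b in vocab}

def expand_query(terms):
    """Concept expansion: each vocabulary term inherits the strongest fuzzy
    relation it has to any original query term."""
    return {t: max(fuzzy_rel[(q, t)] for q in terms if q in vocab)
            for t in vocab}

def score(doc_vec, query_vec):
    return sum(doc_vec.get(t, 0.0) * w for t, w in query_vec.items())

query = expand_query(["semantic", "retrieval"])
ranked = sorted(range(N), key=lambda i: -score(tfidf(tokenized[i]), query))
print(ranked)   # document indices ordered by similarity to the expanded query
```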

2.
To address the inability of the classical rough set model to partition the index-term space and to capture associations between classes, conditional probability relations are combined with rough set theory and introduced into information retrieval, yielding a retrieval model based on probabilistic rough sets. A conditional probability relation is defined over the index-term space, and classes of similar concepts are mined automatically to form a concept space. Measures of semantic closeness between a document and a query, and between documents, are defined, and matching results are ranked and output according to this closeness. Simulation examples demonstrate the feasibility and effectiveness of the method.
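A small illustrative sketch of a conditional-probability relation over index terms, under assumed definitions (the abstract does not give the exact formulas): P(ti | tj) is estimated from document co-occurrence, and term pairs whose mutual conditional probabilities exceed an assumed threshold form a concept class.

```python
# Assumed estimate of P(ti | tj) from co-occurrence; the threshold and the
# pairwise grouping into concept classes are illustrative choices.
from collections import Counter
from itertools import combinations

tokenized = [
    {"rough", "set", "retrieval"},
    {"rough", "set", "probability"},
    {"retrieval", "ranking"},
]
df = Counter(t for d in tokenized for t in d)

def cond_prob(ti, tj):
    """P(ti | tj): fraction of documents containing tj that also contain ti."""
    both = sum(1 for d in tokenized if ti in d and tj in d)
    return both / df[tj]

THRESHOLD = 0.6
vocab = sorted(df)
concept_pairs = [(a, b) for a, b in combinations(vocab, 2)
                 if cond_prob(a, b) >= THRESHOLD and cond_prob(b, a) >= THRESHOLD]
print(concept_pairs)   # [('rough', 'set')]
```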

3.
杨为民, 李龙澍. 《计算机工程》, 2011, 37(15): 134-136
Traditional information retrieval models retrieve information with relatively low accuracy. To address this, a high-precision retrieval model based on field theory is proposed. Taking the correlations between index terms into account, the number of normalized index terms is used as a document's information content, and a document information field is constructed. In this field, the magnitude of the force acting between documents measures their relevance, which yields more precise retrieval results. Experimental results confirm the model's advantage over comparable models.

4.
Ranking Web documents in search engine results (Cited by: 1; self: 0; others: 1)
How retrieval results are ranked largely determines what results the user actually obtains. This paper analyzes typical existing ranking techniques based on term-frequency statistics and on hyperlink analysis and, with the help of the vector space model, proposes a query-document similarity ranking method based on concept semantics.

5.
A Web document retrieval method based on semantic features (Cited by: 2; self: 0; others: 2)
Web document clustering plays an important role in Web information retrieval. This paper proposes a new algorithm for Web document clustering and retrieval. Using ordered clustering, the algorithm summarizes semantic paragraphs from the physical structure of a Web document and extracts the corresponding semantic features, which then serve as the basis for retrieval. On this basis, similarity to a user's query is computed directly at the level of the document's semantic paragraphs, which considerably improves retrieval precision and efficiency. Experimental results show that the proposed algorithm is practical.

6.
Improving weight computation in latent semantic analysis (Cited by: 12; self: 0; others: 12)
Since its introduction, latent semantic analysis has been widely applied in information retrieval, text classification, automatic question answering, and other areas. An important step in latent semantic analysis is the weighting transformation applied to the term-document matrix; the weighting function directly affects the quality of the analysis results. This paper first reviews the traditional, well-established weighting schemes, which consist of a local weight component and a global term weight component, then points out their shortcomings and extends the weighting scheme by introducing the notion of a document global weight. In the concluding experiments, a new way of assessing the quality of latent semantic analysis results, the document self-retrieval matrix, is proposed; the results show that the improved weighting scheme raises retrieval effectiveness.
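For reference, a minimal sketch of the conventional weighting the abstract builds on: a log local weight and an entropy-based global term weight applied to a toy term-document matrix, followed by a truncated SVD. The paper's proposed document global weight and the document self-retrieval evaluation are not reproduced here.

```python
# Standard log-local / entropy-global weighting of a term-document matrix,
# then a truncated SVD; the matrix and k are toy values.
import numpy as np

# rows = terms, columns = documents
counts = np.array([
    [2, 0, 1],
    [0, 3, 1],
    [1, 1, 0],
], dtype=float)
n_docs = counts.shape[1]

# local weight: log(1 + tf)
local = np.log1p(counts)

# global weight: 1 + (sum_j p_ij log p_ij) / log(n_docs), i.e. 1 - normalized entropy
p = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
with np.errstate(divide="ignore", invalid="ignore"):
    ent = np.where(p > 0, p * np.log(p), 0.0).sum(axis=1)
global_w = 1.0 + ent / np.log(n_docs)

weighted = local * global_w[:, None]

# keep the k strongest latent dimensions
U, s, Vt = np.linalg.svd(weighted, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # documents in latent space
print(doc_vectors.round(3))
```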

7.
Effective retrieval of HTML documents (Cited by: 22; self: 1; others: 21)
Most resources on the WWW are stored as HTML documents. Unlike plain documents, the tags of an HTML document give it a degree of structure. We adopt a retrieval approach that extends traditional retrieval and exploits the structure of HTML documents to improve retrieval effectiveness in the WWW environment. This paper introduces the structure of HTML and traditional vector-space information retrieval, proposes using a clustering method to group tags, and finally discusses in detail how the document structure can be used to extend the weighting scheme so that index terms describe documents more faithfully, thereby improving retrieval accuracy.

8.
In many information retrieval tasks, the retrieved documents are usually re-ranked to further improve retrieval performance. Current learning-to-rank methods concentrate mainly on constructing the loss function and do not consider the relationships among features. This paper applies a multi-channel deep convolutional neural network to listwise learning to rank, yielding ListCNN, which performs precise re-ranking for information retrieval. Because some of the many features extracted from documents exhibit local correlation and redundancy, a convolutional neural network is used to re-extract features and thereby improve the performance of the listwise approach. The ListCNN architecture takes the local correlation of the original document features into account and can effectively re-extract representative features. Experiments on the public LETOR 4.0 dataset show that ListCNN outperforms existing listwise methods.
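A minimal listwise re-ranking sketch in PyTorch, offered only to illustrate the idea (it is not the ListCNN architecture from the paper): a 1-D convolution re-extracts features from each candidate document's feature vector, and a ListNet-style top-one cross entropy serves as the listwise loss. Layer sizes, hyperparameters, and the synthetic data are assumptions.

```python
# Illustrative convolutional listwise re-ranker; not the paper's ListCNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvListRanker(nn.Module):
    def __init__(self, n_features: int, channels: int = 8, kernel: int = 3):
        super().__init__()
        # treat one document's feature vector as a length-n_features signal
        self.conv = nn.Conv1d(1, channels, kernel_size=kernel, padding=kernel // 2)
        self.score = nn.Linear(channels * n_features, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (list_size, n_features) -> one relevance score per document
        x = F.relu(self.conv(feats.unsqueeze(1)))     # (list_size, channels, n_features)
        return self.score(x.flatten(1)).squeeze(-1)   # (list_size,)

def listnet_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # cross entropy between top-one probability distributions (ListNet-style)
    return -(F.softmax(labels, dim=0) * F.log_softmax(scores, dim=0)).sum()

# one query with 5 candidate documents, 46 LETOR-style features each (toy data)
feats = torch.randn(5, 46)
labels = torch.tensor([2.0, 0.0, 1.0, 0.0, 1.0])   # graded relevance judgments

model = ConvListRanker(n_features=46)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    loss = listnet_loss(model(feats), labels)
    loss.backward()
    opt.step()
print(model(feats).detach())   # re-ranking scores after a few updates
```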

9.
Ontology-based intelligent Web retrieval (Cited by: 1; self: 0; others: 1)
尹焕亮, 孙四明, 张峰. 《计算机工程》, 2009, 35(23): 44-46, 4
To address the problems of traditional keyword-based information retrieval, a semantic retrieval model based on a domain ontology is proposed. On the basis of associations established between ontology concepts and document content, the user's query input is preprocessed, the ontology is used to compute the degree of similarity between query and documents, and a ranked list of documents relevant to the query request is returned. A prototype ontology-based intelligent Web retrieval system was built, verifying the model's effectiveness.

10.
Ranking keyword search results over XML document collections (Cited by: 1; self: 0; others: 1)
This paper studies the ranking of content-only keyword search results over an XML document collection and, based on the characteristics of XML documents, proposes a new ranking model that differs from the ranking of search results over Web-oriented XML pages. An inverted-list index structure and a search-engine architecture that satisfy this ranking model are designed.

11.
Feature extraction and classification of Chinese Web texts (Cited by: 16; self: 0; others: 16)
许建潮, 胡明. 《计算机工程》, 2005, 31(8): 24-25, 39
Many methods exist for feature extraction from English Web pages, but comparatively few are suitable for Chinese pages. This paper designs a term-weighting formula for Chinese Web text that jointly considers three factors: position, frequency, and word length, and proposes a method that uses a genetic algorithm with variable-length chromosomes to extract Web text features. Experiments show that the method is effective in reducing the number of features.
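A toy illustration of a weight combining the three factors named in the abstract; the actual formula and the genetic-algorithm feature selection are not given in the abstract, so the coefficients and position bonuses below are assumptions.

```python
# Assumed weight: grows with frequency, with the prominence of the position
# in the page, and mildly with the length of the word.
POSITION_BONUS = {"title": 3.0, "heading": 2.0, "body": 1.0}

def term_weight(freq: int, position: str, word_length: int,
                a: float = 1.0, b: float = 0.5, c: float = 0.3) -> float:
    return (a * freq) * (b * POSITION_BONUS.get(position, 1.0)) * (1.0 + c * (word_length - 1))

print(round(term_weight(freq=4, position="title", word_length=2), 2))   # 7.8
```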

12.
Semantic search has been one of the motivations of the semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases to improve search over large document repositories. In our view of information retrieval on the semantic Web, a search engine returns documents rather than, or in addition to, exact values in response to user queries. For this purpose, our approach includes an ontology-based scheme for the semiautomatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm and a ranking algorithm. Semantic search is combined with conventional keyword-based retrieval to achieve tolerance to knowledge base incompleteness. Experiments are shown where our approach is tested on corpora of significant scale, showing clear improvements with respect to keyword-based search.
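A minimal sketch, under stated assumptions, of how a semantic score and a keyword score might be combined so that keyword retrieval takes over when the knowledge base has no coverage; the paper's actual annotation weighting and ranking algorithms are not reproduced.

```python
# Assumed linear combination of the two evidence sources.
from typing import Optional

def combined_score(semantic_score: Optional[float], keyword_score: float,
                   alpha: float = 0.7) -> float:
    """Fall back to the keyword score alone when the knowledge base yields no
    annotation-based evidence for the document."""
    if semantic_score is None:          # tolerance to knowledge-base incompleteness
        return keyword_score
    return alpha * semantic_score + (1 - alpha) * keyword_score

print(round(combined_score(0.8, 0.4), 2))   # 0.68
print(combined_score(None, 0.4))            # 0.4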

13.
DF or IDF? Using the main-feature model in Web information retrieval (Cited by: 11; self: 0; others: 11)
张敏, 马少平, 宋睿华. 《软件学报》, 2005, 16(5): 1012-1020
One of the difficulties of Web information retrieval is the mismatch between short, ambiguous user queries and documents full of redundancy and noise. Analyzing the information characteristics of Web documents, this paper introduces the notions of a document's main feature terms, main feature fields, and main feature space, computes weights on that space using document frequency (DF) information instead of the traditional inverse document frequency (IDF), and gives an improved similarity model. Three standard test runs on two large-scale Web document collections of 10 GB and 19 GB show that, compared with the traditional IDF approach, the DF-based main-feature weighting consistently improves system performance by a large margin on all evaluation measures, with a maximum improvement of 18.6%.
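A simplified sketch of the DF-versus-IDF contrast on a main-feature field: weights are computed only over an assumed main-feature field (here, titles), and a document-frequency weight replaces the usual IDF. The field choice and similarity function are illustrative, not the paper's exact model.

```python
# DF-based versus IDF-based term weights over a toy "main feature" field.
import math
from collections import Counter

titles = [
    "web information retrieval",
    "retrieval of web documents",
    "image compression methods",
]
tokenized = [t.split() for t in titles]
N = len(tokenized)
df = Counter(term for doc in tokenized for term in set(doc))

def idf_weight(term):
    return math.log(N / df[term])

def df_weight(term):
    # terms frequent in the main feature space are treated as informative
    # rather than penalized
    return df[term] / N

def similarity(query_terms, doc_terms, weight):
    return sum(weight(t) for t in query_terms if t in doc_terms)

query = ["web", "retrieval"]
for name, w in (("IDF", idf_weight), ("DF", df_weight)):
    scores = [similarity(query, d, w) for d in tokenized]
    print(name, [round(s, 3) for s in scores])
```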

14.
The diffusion of the World Wide Web (WWW) and the consequent increase in the production and exchange of textual information demand the development of effective information retrieval systems. The HyperText Markup Language (HTML) constitutes a common basis for generating documents over the Internet and intranets. By means of HTML the author is allowed to organize the text into subparts delimited by special tags; these subparts are then visualized by the HTML browser in distinct ways, i.e. with distinct typographical formats. In this paper a model for indexing HTML documents is proposed which exploits the role of tags in encoding the importance of their delimited text. Central to our model is a method to compute the significance degree of a term in a document by weighting the term instances according to the tags in which they occur. The indexing model proposed is based on a contextual weighted representation of the document under consideration, by means of which a set of (normalized) numerical weights is assigned to the various tags containing the text. The weighted representation is contextual in the sense that the set of numerical weights assigned to the various tags and the respective text depend (other than on the tags themselves) on the particular document considered. By means of the contextual weighted representation our indexing model reflects not only the general syntactic structure of the HTML language but also the information conveyed by the particular way in which the author instantiates that general structure in the document under consideration. We discuss two different forms of contextual weighting: the first is based on a linear weighted representation and is closer to the standard model of universal (i.e. non contextual) weighting; the second is based on a more complex non linear weighted representation and has a number of novel and interesting features.
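A small sketch of the underlying idea of weighting term occurrences by the tag that contains them, using Python's standard HTML parser. The fixed tag weights below are assumptions; the paper derives contextual, per-document weights rather than universal ones.

```python
# Tag-weighted term significance over a toy HTML fragment; TAG_WEIGHTS are
# illustrative, universal weights, not the paper's contextual ones.
from collections import defaultdict
from html.parser import HTMLParser

TAG_WEIGHTS = {"title": 4.0, "h1": 3.0, "b": 2.0, "p": 1.0}

class TagWeightedIndexer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []
        self.term_weight = defaultdict(float)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        tag = self.stack[-1] if self.stack else "p"
        w = TAG_WEIGHTS.get(tag, 1.0)
        for term in data.lower().split():
            self.term_weight[term] += w

indexer = TagWeightedIndexer()
indexer.feed("<html><title>retrieval models</title>"
             "<p>retrieval of <b>html</b> documents</p></html>")
print(dict(indexer.term_weight))
# {'retrieval': 5.0, 'models': 4.0, 'of': 1.0, 'html': 2.0, 'documents': 1.0}
```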

15.
Liu Mengchi, Ling Tok Wang. World Wide Web, 2001, 4(1-2): 49-77
Most documents available over the Web conform to the HTML specification. Such documents are hierarchically structured in nature. The existing data models for the Web either fail to capture the hierarchical structure within the documents or can only provide a very low level representation of such hierarchical structure. How to represent and query HTML documents at a higher level is an important issue. In this paper, we first propose a novel conceptual model for HTML. This conceptual model has only a few simple constructs but is able to represent the complex hierarchical structure within HTML documents at a level that is close to human conceptualization/visualization of the documents. We also describe how to convert HTML documents based on this conceptual model. Using the conceptual model and conversion method, one can capture the essence (i.e., semistructure) of HTML documents in a natural and simple way. Based on this conceptual model, we then present a rule-based language to query HTML documents over the Internet. This language provides a simple but very powerful way to query both intra-document structures and inter-document structures and allows the query results to be restructured. Being rule-based, it naturally supports negation and recursion and therefore is more expressive than SQL-based languages. A logical semantics is also provided.

16.
Applying XML technology to data extraction from the chemistry deep Web (Cited by: 1; self: 1; others: 0)
Chemical databases on the Internet are valuable chemical information resources, and making effective use of their data is the problem the chemistry deep Web must solve. This paper summarizes the characteristics of the chemistry deep Web and uses XML technology to extract data from the semi-structured HTML pages returned by database searches, turning them into data that programs can consume directly for further computation. In the extraction process, JTidy is first used to normalize the HTML, producing a structurally complete and well-formed XHTML document; data conversion and extraction are then performed with an XSLT transformation template that embeds XPath expressions. The quality of those XPath expressions determines whether the XSLT template can keep extracting chemical data reliably over time, so the paper focuses on how to write robust XPath expressions: they should locate data in the source tree by content and attribute features, keep the coupling between expressions as low as possible, and anticipate likely changes to chemistry sites, with corresponding measures taken in the XSLT template to prolong the expressions' validity. This provides methodological guidance for building XSLT data-extraction templates for the chemistry deep Web.
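A sketch of the same kind of extraction using Python's lxml in place of JTidy plus an XSLT template: the HTML is parsed leniently, and an XPath expression anchored on content and attribute features rather than on brittle positional paths pulls out a value. The page fragment and field names are invented for illustration.

```python
# Content- and attribute-anchored XPath extraction over a toy results page.
from lxml import html

page = """
<html><body>
  <table class="props">
    <tr><td>Molecular weight</td><td>180.16</td></tr>
    <tr><td>Melting point</td><td>146 C</td></tr>
  </table>
</body></html>
"""

tree = html.fromstring(page)   # lenient parse: repairs missing or implied markup

# anchor on the label text and the table's class attribute, not on row position
mw = tree.xpath('//table[@class="props"]//td[normalize-space()="Molecular weight"]'
                '/following-sibling::td[1]/text()')
print(mw)   # ['180.16']
```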

17.
吕锋, 余丽. 《微机发展》, 2007, 17(6): 53-55
This paper introduces three common methods of Web data extraction: directly parsing the HTML document, XML-based methods (also described as analyzing the hierarchical structure of HTML), and methods based on conceptual modeling. It focuses on the XML-based approach, whose basic procedure is to pass the original HTML document through a filter that checks and corrects its syntactic structure, producing an XML-conformant XHTML document that can then be processed with XML tools. This implements the preprocessing step that converts unstructured HTML documents into structured XML documents, creating favorable conditions for applying traditional data-extraction methods in Web mining.

18.
The Web contains a large amount of documents and an increasing quantity of structured data in the form of RDF triples. Many of these triples are annotations associated with documents. While structured queries constitute the principal means to retrieve structured data, keyword queries are typically used for document retrieval. Clearly, a form of hybrid search that seamlessly integrates these formalisms to query both textual and structured data can address more complex information needs. However, hybrid search on the large scale Web environment faces several challenges. First, there is a need for repositories that can store and index a large amount of semantic data as well as textual data in documents, and manage them in an integrated way. Second, methods for hybrid query answering are needed to exploit the data from such an integrated repository. These methods should be fast and scalable, and in particular, they shall support flexible ranking schemes to return not all but only the most relevant results. In this paper, we present CE2, an integrated solution that leverages mature information retrieval and database technologies to support large scale hybrid search. For scalable and integrated management of data, CE2 integrates off-the-shelf database solutions with inverted indexes. Efficient hybrid query processing is supported through novel data structures and algorithms which allow advanced ranking schemes to be tightly integrated. Furthermore, a concrete ranking scheme is proposed to take features from both textual and structured data into account. Experiments conducted on DBpedia and Wikipedia show that CE2 can provide good performance in terms of both effectiveness and efficiency.

19.
One of the most important research topics in Information Retrieval is term weighting for document ranking and retrieval, such as TFIDF, BM25, etc. We propose a term weighting method that utilizes past retrieval results consisting of the queries that contain a particular term, the retrieved documents, and their relevance judgments. A term's Discrimination Power (DP) is based on the difference between the term's average weights in the relevant and non-relevant retrieved document sets. The difference-based DP performs better than the ratio-based DP introduced in previous research. Our experimental results show that a term weighting scheme based on the discrimination power method outperforms a TF*IDF-based scheme.
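A simplified sketch of the discrimination-power computation as described above, with plain term frequencies standing in for the per-document term weights (an assumption).

```python
# DP as the difference between a term's average weight in the relevant and in
# the non-relevant documents retrieved for past queries containing the term.
from statistics import mean

def discrimination_power(term, past_results):
    """past_results: list of (doc_terms, is_relevant) pairs for documents
    retrieved by past queries that contained `term`."""
    rel = [doc.count(term) for doc, is_rel in past_results if is_rel]
    nonrel = [doc.count(term) for doc, is_rel in past_results if not is_rel]
    if not rel or not nonrel:
        return 0.0
    return mean(rel) - mean(nonrel)

history = [
    (["rough", "set", "retrieval", "retrieval"], True),
    (["retrieval", "ranking"], True),
    (["image", "retrieval"], False),
]
print(discrimination_power("retrieval", history))   # 1.5 - 1.0 = 0.5
```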
