Similar documents
 Found 19 similar documents; search time: 328 ms
1.
With the rapid development of the Internet, the Web has gradually become an important resource for knowledge acquisition, and acquiring part-whole relations is an important part of that task. This paper proposes a method for acquiring part-whole relations from the Web using a search engine. First, intent queries based on a classification of part-whole relations are constructed; these queries retrieve, in a targeted way, as much Web text containing part-whole relations as possible. The corpus is then filtered according to the HTML markup of the pages and the format of the intent queries, and candidate part-whole relations are extracted from it. Finally, drawing on how part-whole relations are expressed in natural language and on Chinese word-formation patterns, metrics are proposed to validate the candidate relations. Experimental results show that the method achieves high precision and F-measure: 86% precision on the top 20 results and a best F-measure of 64%.
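As a rough illustration of the extraction step only, the sketch below applies a few hand-written Chinese part-whole patterns to search-result snippets and counts candidate (whole, part) pairs; the patterns, snippets, and frequency counting are assumptions for illustration and are not the paper's intent-query templates or validation metrics.

```python
import re
from collections import Counter

# Illustrative Chinese part-whole patterns (not the paper's intent-query templates).
PATTERNS = [
    re.compile(r"(\w{1,6})由(\w{1,6})组成"),      # "W consists of P"
    re.compile(r"(\w{1,6})包括(\w{1,6})"),        # "W includes P"
    re.compile(r"(\w{1,6})是(\w{1,6})的一部分"),  # "P is a part of W"
]

def extract_candidates(snippets):
    """Match the patterns against search-result snippets and count
    how often each candidate (whole, part) pair is observed."""
    counts = Counter()
    for text in snippets:
        for i, pat in enumerate(PATTERNS):
            for m in pat.finditer(text):
                # The third pattern names the part first, so swap the groups.
                whole, part = (m.group(2), m.group(1)) if i == 2 else (m.group(1), m.group(2))
                counts[(whole, part)] += 1
    return counts

if __name__ == "__main__":
    snippets = ["汽车由发动机组成。", "车轮是汽车的一部分。", "电脑包括主板。"]
    for (whole, part), freq in extract_candidates(snippets).items():
        print(f"whole={whole}  part={part}  freq={freq}")
```

In the paper, candidates like these are further validated by the proposed metrics; the raw pattern matches are noisy on real Web text.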

2.
余蕾  曹存根 《计算机科学》2007,34(2):161-165
Web pages contain a large amount of domain knowledge, and how to acquire knowledge from these resources has been an important research topic for more than a decade. Concepts and the relations between them are basic components of knowledge, so acquiring and validating concepts is a key step on the path from text to knowledge. This paper proposes and implements a method for automatically acquiring concepts from Web corpora that combines rules, statistics, and contextual information. Experimental results show that the method performs well.

3.
姜琳  李宇  卢汉  曹存根 《计算机科学》2007,34(12):151-156
Knowledge acquisition from text (KAT) is an important research topic in knowledge engineering. This paper focuses on acquiring geographic-entity concepts and their location relations from large-scale Web page text. It first describes how to acquire, automatically and semi-automatically, grammar patterns for these concepts and relations and build a grammar pattern library; it then uses the pattern library to collect example sentences, extract candidate concepts, and validate them; finally, it builds a location-relation graph with graph-theoretic methods and analyzes and validates it using rules specific to the geographic domain. As an important part of the concept space managed under a unified concept graph, geographic-entity concepts and their location relations are not only a significant part of the knowledge base in their own right but also support knowledge in other domains of the knowledge base.

4.
Automatic Annotation of Deep Web Query Results Based on Multiple Annotation Sources
Semantic annotation of Deep Web query results is one of the key problems in Deep Web data integration. This paper proposes an automatic annotation framework based on multiple annotation sources, with several annotators designed around different features. The search-engine-based annotator extends question-answering techniques commonly used in AI: it constructs validation queries, submits them to a search engine, and uses the returned results to choose the most suitable terms for annotation, which effectively improves annotation precision and recall. Tests on Web databases from several domains demonstrate the effectiveness of the method.
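A toy sketch of the validation-query idea follows: candidate labels are scored by how many search hits their pairing with sample column values would return. The hit_count function is a stub standing in for a real search-engine API, and the canned counts and labels are invented.

```python
def hit_count(query: str) -> int:
    """Stub for a search-engine call that would return the number of hits
    for `query`; canned counts keep the demo self-contained."""
    canned = {
        '"author" "Hemingway"': 120000,
        '"price" "Hemingway"': 300,
        '"author" "19.99"': 500,
        '"price" "19.99"': 90000,
    }
    return canned.get(query, 0)

def best_label(values, candidate_labels):
    """Score each candidate label by the total hits of validation queries
    pairing the label with sample values from one result column."""
    scores = {
        label: sum(hit_count(f'"{label}" "{v}"') for v in values)
        for label in candidate_labels
    }
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    print(best_label(["Hemingway"], ["author", "price"]))   # -> ('author', {...})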

5.
孙琳  王忠民  李鑫 《计算机应用》2006,26(Z2):169-171
To improve the information-seeking experience in Web retrieval, an effective query suggestion method, LDART, is proposed for Web search interaction, providing an intelligent human-machine interface. The method combines current document-based and log-based approaches: it extracts query topics from logs, obtains related document sets from the Web, generates transactions by object filtering, and extracts relations through association rule mining. The resulting related-topic rules were applied to a real search engine and an evaluation model was designed; experimental results show that the method can provide users with highly relevant query topics.
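The sketch below illustrates only the association-rule step under simplified assumptions: transactions are already sets of query topics, and only pairwise rules with minimum support and confidence are mined; the thresholds and the toy log are invented.

```python
from itertools import permutations
from collections import Counter

transactions = [
    {"python", "pandas"}, {"python", "flask"}, {"python", "pandas"},
    {"java", "spring"}, {"python", "pandas", "numpy"},
]

def pairwise_rules(transactions, min_support=2, min_conf=0.6):
    """Mine rules a -> b between query topics by counting co-occurrence."""
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        item_count.update(t)
        pair_count.update(permutations(sorted(t), 2))
    rules = []
    for (a, b), n in pair_count.items():
        conf = n / item_count[a]
        if n >= min_support and conf >= min_conf:
            rules.append((a, b, conf))   # suggest topic b when the user queries a
    return rules

print(pairwise_rules(transactions))
```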

6.
Part-whole relations are a fundamental and important kind of semantic relation, and automatically acquiring them from text is a basic research topic in knowledge engineering. This paper presents a graph-based method for acquiring part-whole relations from the Web. It first downloads corpus text from Google using part-whole relation patterns, then matches pairs of part concepts with coordinate-structure patterns, builds a graph from these pairs, and applies a hierarchical clustering algorithm so that correct part concepts cluster together. On top of the hierarchical clustering, we exploit properties of coordinate structures, of the graph, and of the Chinese language, adjusting edge weights with five techniques: penalizing comma edges, removing low-frequency edges, rewarding cycles, and boosting edges between words with the same suffix or the same prefix. This greatly improves recall without sacrificing the high precision of the hierarchical clustering.
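A simplified sketch of the graph construction and edge reweighting follows; it implements only two of the five heuristics (dropping low-frequency edges and boosting edges whose endpoints share a final character), and the thresholds, bonus factor, and toy pairs are assumptions.

```python
from collections import defaultdict

def build_graph(pairs):
    """Build an undirected weighted graph from co-occurring part-concept pairs."""
    g = defaultdict(lambda: defaultdict(float))
    for a, b in pairs:
        g[a][b] += 1.0
        g[b][a] += 1.0
    return g

def reweight(g, min_freq=2.0, suffix_bonus=1.5):
    """Drop low-frequency edges and boost edges whose endpoints end in the
    same character (a crude stand-in for the shared-suffix heuristic)."""
    for a in list(g):
        for b in list(g[a]):
            w = g[a][b]
            if w < min_freq:
                del g[a][b]
                continue
            if a[-1] == b[-1]:          # shared suffix, e.g. 发动机 / 起重机
                g[a][b] = w * suffix_bonus
    return g

if __name__ == "__main__":
    pairs = [("发动机", "变速器"), ("发动机", "变速器"), ("发动机", "起重机"),
             ("发动机", "起重机"), ("变速器", "逗号噪声")]
    g = reweight(build_graph(pairs))
    for a in g:
        for b, w in g[a].items():
            print(a, b, w)
```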

7.
Abbreviations are an important part of the natural-language lexicon, and acquiring them is a basic yet crucial problem in natural language processing. This paper proposes a method for acquiring Chinese abbreviations from the Web given their full forms. The method consists of an acquisition step and a validation step. The acquisition step obtains a set of candidate abbreviations from the Web through selected query patterns. To validate the candidates, full-form/abbreviation constraints are defined that express, qualitatively and quantitatively, the constraints between a full form and its abbreviation, and a full-form/abbreviation relation graph is built to represent the links among all full forms and abbreviations. During validation, candidates are first filtered with the constraint axioms and the relation graph, then classified with constraint functions; a decision tree is built with the class label, corpus markers, and constraint-function values as attributes and is used to validate the candidates. Experimental results show a final precision of 94.63% and recall of 84.09% for acquisition, and a precision of 94.81% for validation.
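The snippet below sketches only the decision-tree validation step, with invented features standing in for the paper's class labels, corpus markers, and constraint-function values; it is a minimal scikit-learn example, not the authors' implementation.

```python
from sklearn.tree import DecisionTreeClassifier

# features: [constraint_class, has_corpus_marker, constraint_score]  (all invented)
X = [
    [0, 1, 0.92],   # candidate that satisfies the constraints well
    [0, 1, 0.85],
    [1, 0, 0.40],   # weak candidate
    [2, 0, 0.10],
    [0, 0, 0.55],
    [2, 1, 0.15],
]
y = [1, 1, 0, 0, 1, 0]   # 1 = valid abbreviation, 0 = rejected

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0, 1, 0.8], [2, 0, 0.2]]))   # expected: [1 0]
```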

8.
Identifying the query intent of search-engine users attracts much attention in information retrieval. This paper proposes a method for recognizing Web query intent by fusing multiple types of features. Web query intent recognition is treated as a classification problem, and effective classification features are extracted from different kinds of resources, including the query text, the content returned by the search engine, and Web query logs. In query intent recognition experiments on manually annotated real Web queries, each type of feature helps improve recognition, and with all features combined, 88.5% of the test queries receive correct intent labels.
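A toy sketch of treating intent recognition as classification is shown below; the three intent labels, the hand-made features, and the tiny training set are invented, and the paper's actual feature set drawn from result pages and query logs is much richer.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def featurize(query, result_has_official_site):
    """Mix query-text features with one (stubbed) search-result signal."""
    return {
        "len": len(query.split()),
        "has_how": int("how" in query.lower()),
        "has_buy": int("buy" in query.lower()),
        "official_site_in_results": int(result_has_official_site),
    }

train = [
    (featurize("facebook login", True), "navigational"),
    (featurize("how to boil eggs", False), "informational"),
    (featurize("buy cheap laptop", False), "transactional"),
    (featurize("youtube", True), "navigational"),
    (featurize("how does tcp work", False), "informational"),
    (featurize("buy concert tickets", False), "transactional"),
]
X, y = zip(*train)

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(X), list(y))
print(model.predict([featurize("how to tie a tie", False)]))  # likely informational
```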

9.
李彦志  朱红梅 《软件》2020,(4):10-13
To address Web visualization of Neo4j knowledge graphs, this paper studies a visualization method using a pesticide knowledge graph as an example. Based on an analysis of the structural model of the pesticide knowledge graph, a Flask-based query website is built: it connects to the Neo4j graph database, collects query conditions from the Web page, generates query statements in the Cypher language, queries the pesticide knowledge graph through py2neo, constructs dynamic graph data for the site, and uses cytoscape.js to visualize the query results on the Web.
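A minimal sketch of such a query endpoint is shown below, assuming a local Neo4j instance, placeholder credentials, and a hypothetical (:Pesticide) node label with a name property; it returns results in the elements format that cytoscape.js expects on the page. It is a sketch of the general wiring, not the paper's site.

```python
from flask import Flask, jsonify, request
from py2neo import Graph

app = Flask(__name__)
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # assumed connection

@app.route("/query")
def query():
    name = request.args.get("name", "")
    # Cypher statement generated from the page's query condition
    cypher = (
        "MATCH (p:Pesticide {name: $name})-[r]->(m) "
        "RETURN p.name AS src, type(r) AS rel, m.name AS dst LIMIT 50"
    )
    rows = graph.run(cypher, name=name).data()
    # Reshape rows into the elements list cytoscape.js renders on the page
    nodes = {row["src"] for row in rows} | {row["dst"] for row in rows}
    elements = [{"data": {"id": n}} for n in nodes] + [
        {"data": {"source": r["src"], "target": r["dst"], "label": r["rel"]}}
        for r in rows
    ]
    return jsonify(elements)

if __name__ == "__main__":
    app.run(debug=True)
```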

10.
The Deep Web holds a vast amount of information that existing search engines can hardly reach, so fully acquiring the valuable information in it is a hard problem. This paper proposes a Deep Web data query method based on semantic similarity computation: a similarity-computation middleware calculates the similarity between keywords and the corresponding columns of a database attribute dictionary, thereby restricting the keyword search to one or more relevant domains, and finally generates the corresponding SQL query statements. Experiments show that the method effectively improves the efficiency of Deep Web data querying.
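The sketch below illustrates the keyword-to-column mapping and SQL generation with a stand-in similarity measure (difflib character overlap instead of the paper's semantic similarity); the schema, threshold, and example are invented.

```python
import difflib

# Hypothetical attribute dictionary: table -> columns
SCHEMA = {"books": ["title", "author", "price"], "cars": ["model", "maker", "cost"]}

def similarity(a: str, b: str) -> float:
    """Stand-in for semantic similarity: a simple character-overlap ratio."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def to_sql(keyword: str, value: str, threshold: float = 0.5):
    """Map a keyword to its most similar column and emit a parameterized
    SQL statement restricted to that table."""
    table, column, score = max(
        ((t, c, similarity(keyword, c)) for t, cols in SCHEMA.items() for c in cols),
        key=lambda x: x[2],
    )
    if score < threshold:
        return None, ()
    return f"SELECT * FROM {table} WHERE {column} = ?", (value,)

if __name__ == "__main__":
    print(to_sql("authors", "Hemingway"))   # ('SELECT * FROM books WHERE author = ?', ('Hemingway',))
```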

11.
A Knowledge-Based Approach to Effective Document Retrieval
This paper presents a knowledge-based approach to effective document retrieval. The approach is based on a dual document model that consists of a document type hierarchy and a folder organization. A predicate-based document query language is proposed to enable users to precisely and accurately specify the search criteria and their knowledge about the documents to be retrieved. A guided search tool is developed as an intelligent, natural-language-oriented user interface to assist users in formulating queries. Supported by an intelligent question generator, an inference engine, a question base, and a predicate-based query composer, the guided search collects the most important information known to the user to retrieve the documents that satisfy the user's particular interests. A knowledge-based query processing and search engine is devised as the core component of this approach. Algorithms are developed for the search engine to effectively and efficiently retrieve the documents that match the query.

12.
Research in content-based 3D retrieval has already started, and several approaches have been proposed that use a similarity assessment in different ways to match the shape of the query against the shapes of the objects in the database. However, the success of these solutions is still far from that of their textual counterparts. A major drawback of most existing 3D retrieval solutions is their inability to support partial queries, that is, queries that need not be formulated by specifying a whole query shape but just a part of it, for example a detail of its overall shape, just as documents are retrieved by specifying words rather than whole texts. Recently, researchers have focused their investigation on 3D retrieval solved by partial shape matching. However, to the best of our knowledge, there is still no 3D search engine that indexes 3D models based on all of their interesting subparts. In this paper we present a novel approach to 3D shape retrieval that uses a collection-aware shape decomposition combined with a shape thesaurus and inverted indexes to describe and retrieve 3D models using part-in-whole matching. The proposed method clusters similar segments obtained through a multilevel decomposition of the models and constructs the shape thesaurus from this partition. Then, to retrieve a model containing a subpart similar to a given query, instead of searching over a large set of subparts or executing partial matching between the query and all models in the collection, we simply perform a fast global matching between the query and the few entries in the thesaurus. With this technique we overcome the time-complexity problems associated with partial queries in large collections.
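An index-only sketch of the part-in-whole lookup is given below: segment descriptors are assigned to a tiny "shape thesaurus" of centroids, an inverted index maps each entry to the models containing such a part, and a query part retrieves models through its nearest entry. The descriptors, models, and centroids are invented, and the multilevel decomposition itself is not shown.

```python
from collections import defaultdict

# (model_id, segment descriptor) pairs produced by some shape decomposition
segments = [
    ("chair_01", (0.9, 0.1)), ("chair_01", (0.2, 0.8)),
    ("chair_02", (0.88, 0.12)), ("table_01", (0.15, 0.85)),
]
thesaurus = {"leg-like": (0.9, 0.1), "flat-top": (0.2, 0.8)}   # cluster centroids

def nearest_entry(desc):
    """Assign a descriptor to the closest thesaurus centroid."""
    return min(thesaurus, key=lambda k: sum((a - b) ** 2 for a, b in zip(thesaurus[k], desc)))

inverted = defaultdict(set)
for model, desc in segments:
    inverted[nearest_entry(desc)].add(model)

def retrieve(query_part_desc):
    """Part-in-whole retrieval: match the query part to one thesaurus entry
    and return the models listed under it."""
    return sorted(inverted[nearest_entry(query_part_desc)])

print(retrieve((0.92, 0.08)))   # -> ['chair_01', 'chair_02']
```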

13.
Web information retrieval is widely used in today's Internet society, but its accuracy leaves much to be desired, largely because the conceptual links between search keywords are lost. Starting from a class of user needs in a restricted domain, this work uses a search engine as the access interface to Web corpus resources and combines rule-based and statistical methods to generate a semantic concept graph for the query need. The graph can serve as the result of requirements analysis, guiding the subsequent semantic retrieval process and improving the relevance between user queries and returned results. Experimental results show that the generation method is effective and feasible and provides a useful exploration of concept-graph-based semantic retrieval.

14.
Kwong  Linus W.  Ng  Yiu-Kai 《World Wide Web》2003,6(3):281-303
To retrieve Web documents of interest, most Web users rely on Web search engines. All existing search engines provide a query facility for users to search for desired documents using search-engine keywords. However, when a search engine retrieves a long list of Web documents, the user may need to browse through each retrieved document to determine which ones are of interest. We observe two kinds of problems in the retrieval of Web documents: (1) an inappropriate selection of keywords specified by the user; and (2) poor precision in the retrieved Web documents. To address these problems, we propose an automatic binary-categorization method for recognizing multiple-record Web documents of interest, which appear often in advertisement Web pages. Our categorization method uses application ontologies and is based on two information retrieval models, the Vector Space Model (VSM) and the Clustering Model (CM). We analyze and cull Web documents to just those applicable to a particular application ontology. The culling analysis (i) uses CM to find a virtual centroid for the records in a Web document, (ii) computes a vector in a multi-dimensional space for this centroid, and (iii) compares the vector with the predefined ontology vector of the same multi-dimensional space using VSM, in which we consider the magnitudes of the vectors as well as the angle between them. Our experimental results show an average of 90% recall and 97% precision in recognizing Web documents belonging to the same category (i.e., domain of interest). Thus our categorization discards very few documents it should have kept and keeps very few it should have discarded.
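A small numerical sketch of the culling test follows: a centroid of record vectors is compared with an ontology vector by both its angle (cosine) and its magnitude, as the abstract describes; the vectors and thresholds are invented.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

def matches(record_vectors, ontology_vec, min_cos=0.8, mag_ratio=0.5):
    """Keep a document if its record centroid points in roughly the same
    direction as the ontology vector and is not too short."""
    c = centroid(record_vectors)
    return cosine(c, ontology_vec) >= min_cos and norm(c) >= mag_ratio * norm(ontology_vec)

if __name__ == "__main__":
    records = [[3, 1, 0], [4, 2, 1], [5, 1, 0]]   # toy term-weight vectors of the records
    ontology = [4, 1, 0]                          # toy application-ontology vector
    print(matches(records, ontology))             # True for this input
```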

15.
A common task of Web users is querying structured information from Web pages. To realize this scenario we propose a novel query processor for systematically discovering instances of semantic relations in Web search results and joining these relation instances into complex result tuples with conjunctive queries. Our query processor transforms a structured user query into keyword queries that are submitted to a search engine, forwards search results to a relation extractor, and then combines relations into complex result tuples. The processor automatically learns discriminative and effective keywords for different types of semantic relations. Thereby, our query processor leverages the index of a search engine to query potentially billions of pages. Unfortunately, relation extractors may fail to return a relation for a result tuple. Moreover, user-defined data sources may not return at least k complete result tuples. We therefore propose an adaptive routing model based on information theory for retrieving missing attributes of incomplete result tuples. The model determines the most promising next incomplete tuple and attribute type for returning any-k complete result tuples at any point during query execution. We report a thorough experimental evaluation over multiple relation extractors. Our query processor returns complete result tuples while processing only very few Web pages.
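A schematic sketch of the pipeline's first half is shown below: a conjunctive query is turned into keyword queries, a stubbed extractor stands in for the relation extractor run over search results, and the extracted instances are joined into result tuples; the relations, hint keywords, and canned data are invented, and the adaptive routing model is not shown.

```python
from itertools import product

def keyword_queries(relation, entity):
    """Learned discriminative keywords are assumed; here a fixed template."""
    hints = {"ceo_of": "CEO of", "headquartered_in": "headquarters"}
    return [f'"{entity}" {hints[relation]}']

def extract(relation, queries):
    """Stub for the relation extractor applied to search results for `queries`."""
    canned = {
        "ceo_of": [("Acme", "J. Doe")],
        "headquartered_in": [("Acme", "Berlin"), ("Acme", "Munich")],
    }
    return canned[relation]

def answer(entity):
    """Conjunctive query: ceo_of(entity, x) AND headquartered_in(entity, y)."""
    ceos = extract("ceo_of", keyword_queries("ceo_of", entity))
    hqs = extract("headquartered_in", keyword_queries("headquartered_in", entity))
    return [(c, h) for (e1, c), (e2, h) in product(ceos, hqs) if e1 == e2 == entity]

print(answer("Acme"))   # [('J. Doe', 'Berlin'), ('J. Doe', 'Munich')]
```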

16.
Text Database Selection for Metasearch Engines
The information a user needs is often scattered across the databases of multiple search engines. For an ordinary user, visiting several search engines and picking out the genuinely useful pages from the returned results is time-consuming and laborious. A metasearch engine gives users an integrated environment for accessing multiple search engines at once: it submits the user queries it receives to multiple underlying search engines. As a search tool, a metasearch engine offers advantages such as broader Web query coverage than a traditional engine and better scalability. This paper discusses several techniques for the database selection problem in metasearch engines: given a user query, database selection picks the underlying search engines most likely to return useful information.
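The sketch below shows a generic document-frequency-based scoring scheme for database selection, not any specific technique discussed in the paper; the per-engine term statistics and collection sizes are invented.

```python
import math

# engine -> {term: document frequency in that engine's sampled database}
stats = {
    "engineA": {"protein": 5000, "folding": 1200, "recipe": 10},
    "engineB": {"protein": 800, "folding": 20, "recipe": 9000},
}
sizes = {"engineA": 200000, "engineB": 300000}

def score(engine, query_terms):
    """Sum of smoothed log scores favoring engines whose sampled statistics
    cover the query terms, lightly penalized by collection size."""
    return sum(
        math.log(1 + stats[engine].get(t, 0)) - math.log(1 + sizes[engine] / 10000)
        for t in query_terms
    )

def select(query_terms, k=1):
    """Pick the k underlying engines most likely to return useful results."""
    return sorted(stats, key=lambda e: score(e, query_terms), reverse=True)[:k]

print(select(["protein", "folding"]))   # -> ['engineA']
```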

17.
More people than ever before have access to information through the World Wide Web, and both the volume of information and the number of users continue to grow. Traditional search methods based on keywords are not effective, resulting in long lists of documents, many of which are unrelated to users' needs. One way to improve information retrieval is to associate meaning with users' queries by using ontologies: knowledge bases that encode a set of concepts about one domain and their relationships. Encoding a knowledge base with a single ontology is usual, but a document collection can deal with different domains, each organized into its own ontology. This work presents a novel way to represent and organize knowledge from distinct domains using multiple ontologies that can be related. The model allows the ontologies, as well as the relationships between concepts from distinct ontologies, to be represented independently. Additionally, fuzzy set theory techniques are employed to deal with the subjectivity and uncertainty of knowledge. This approach to organizing knowledge, together with an associated query expansion method, is integrated into a fuzzy model for information retrieval based on multi-related ontologies. The performance of a search engine using this model is compared with another fuzzy-based approach for information retrieval and with the Apache Lucene search engine. Experimental results show that the model improves precision and recall measures.
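A toy sketch of fuzzy query expansion across two ontologies follows: related concepts with membership degree above a cut-off are added to the query with their degrees as weights. The concepts, degrees, and 0.6 threshold are invented, and the paper's full fuzzy retrieval model is not reproduced.

```python
# (concept, related concept) -> membership degree; relations may cross ontologies
fuzzy_rel = {
    ("jaguar", "felidae"): 0.9,        # animal ontology
    ("jaguar", "sports car"): 0.7,     # vehicle ontology
    ("felidae", "mammal"): 0.8,
}

def expand(terms, threshold=0.6):
    """Return the query terms plus fuzzily related concepts, weighted by degree."""
    weighted = {t: 1.0 for t in terms}
    for (a, b), mu in fuzzy_rel.items():
        if a in terms and mu >= threshold:
            weighted[b] = max(weighted.get(b, 0.0), mu)
    return weighted

print(expand(["jaguar"]))   # {'jaguar': 1.0, 'felidae': 0.9, 'sports car': 0.7}
```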

18.
刘高军  方晓  段建勇 《计算机应用》2020,40(11):3192-3197
With the arrival of the Internet era, search engines have come into widespread use. For rarely searched (long-tail) data, users' search terms cover too narrow a range and the search engine cannot retrieve the data they need; in such cases a query expansion system can effectively help the search engine provide reliable service. Building on query expansion methods based on global document analysis, this paper proposes a semantic relevance model that combines a neural network model with a corpus containing semantic information, in order to extract deeper semantic information between words. This deep semantic information gives the query expansion system more comprehensive and effective feature support for analyzing the expandability relations between words. Local expandable-word distributions are extracted from semantic resources such as the synonym thesaurus (Cilin) and the sememe annotations of the HowNet language knowledge base, and the deep fitting capability of the neural network model is used to fit the local expandable-word distribution of every word in the corpus space into a global expandable-word distribution. In comparison experiments against query expansion methods based on a language model and on the synonym thesaurus, the query expansion method based on the semantic relevance model achieves higher query expansion efficiency; for long-tail search data in particular, its recall is 11.1 and 5.29 percentage points higher than that of the two baselines, respectively.
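The heavily simplified stand-in below only illustrates the shape of the data flow: a tiny MLP is fitted to map a word's feature vector to its locally observed expansion distribution, so that it can also emit a distribution for words with sparse evidence. The vocabulary, features, and counts are invented, and this is not the paper's semantic relevance model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

expansions = ["laptop", "notebook", "computer"]          # candidate expansion words
# word -> (feature vector, locally observed expansion distribution from a thesaurus)
local = {
    "laptop":   ([1.0, 0.0], [0.2, 0.5, 0.3]),
    "notebook": ([0.9, 0.1], [0.4, 0.2, 0.4]),
    "pc":       ([0.8, 0.2], [0.3, 0.3, 0.4]),
}
X = np.array([v[0] for v in local.values()])
Y = np.array([v[1] for v in local.values()])

# Fit the local distributions so the model can generalize to unseen feature vectors
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, Y)

def expand(feature_vec, top_k=2):
    """Predict a (global) expansion distribution and return the top-k words."""
    dist = model.predict(np.array([feature_vec]))[0]
    order = np.argsort(dist)[::-1][:top_k]
    return [expansions[i] for i in order]

print(expand([0.85, 0.15]))   # expansion words for an unseen, rarely searched term
```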
