Similar Documents
20 similar documents found (search time: 62 ms)
1.
The present article is concerned with the problem of automatic database population via information extraction (IE) from web pages obtained from heterogeneous sources, such as those retrieved by a domain crawler. Specifically, we address the task of filling single multi-field templates from individual documents, a common scenario that involves free-format documents with the same communicative goal, such as job adverts, CVs, or meeting/seminar announcements. We discuss challenges that arise in this scenario and propose solutions to them at different levels of the processing of web page content. Our main focus is on the issue of information extraction, which we address with a two-step machine learning approach that first aims to determine segments of a page that are likely to contain relevant facts and then delimits specific natural language expressions with which to fill template fields. We also present a range of techniques for the enrichment of web pages with semantic annotations, such as recognition of named entities, domain terminology and coreference resolution, and examine their effect on the information extraction method. We evaluate the developed IE system on the task of automatically populating a database with information on language resources available on the web.
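The two-step pipeline described in this abstract (first filter page segments likely to contain facts, then delimit field values inside them) can be sketched as follows. The keyword list, field patterns, and job-advert example are illustrative assumptions, not the paper's learned models:

```python
import re

# Hedged sketch of a two-step template-filling pipeline:
# step 1 scores page segments for relevance, step 2 delimits field values.
SEGMENT_KEYWORDS = {"salary", "deadline", "location", "speaker"}

def relevant_segments(segments, threshold=1):
    """Step 1: keep segments containing at least `threshold` domain keywords."""
    return [seg for seg in segments
            if sum(1 for w in SEGMENT_KEYWORDS if w in seg.lower()) >= threshold]

FIELD_PATTERNS = {
    "salary": re.compile(r"salary[:\s]*\$?([\d,]+)", re.I),
    "location": re.compile(r"location[:\s]*([A-Z][\w ]+)", re.I),
}

def fill_template(segments):
    """Step 2: fill each template field from the first relevant match."""
    template = {}
    for seg in relevant_segments(segments):
        for field, pat in FIELD_PATTERNS.items():
            m = pat.search(seg)
            if m and field not in template:
                template[field] = m.group(1).strip()
    return template

page = ["About our company history.",
        "Salary: $55,000. Location: Boston office."]
print(fill_template(page))
```

In the real system both steps are learned classifiers; regexes merely stand in for the learned field delimiters here.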

2.
Automatic Extraction of Multi-Record Information from Semi-Structured Web Pages [Cited: 1; self: 0; others: 1]
朱明, 王庆伟. 《计算机仿真》, 2005, 22(12): 95-98
Accurately and automatically extracting the desired information from multi-record web pages is an important research topic in Web information processing. To address the noise sensitivity of existing methods, this paper proposes discovering the record pattern through maximal similarity among record subtrees, so that records of the same type can be identified correctly even when their presentation patterns differ somewhat. On this basis, an automatic extraction system for multi-record web pages was implemented; it automatically obtains result pages from multiple academic-paper search sites and extracts the records they contain. Experiments on common paper search sites demonstrate the system's effectiveness and accuracy.
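A minimal sketch of the idea of grouping record instances by subtree similarity, assuming subtrees are already serialized as tag sequences. The similarity measure (ratio of matching tags) and the threshold are stand-ins for the paper's maximal-similarity criterion:

```python
from difflib import SequenceMatcher

# Hedged sketch: sibling subtrees whose tag sequences are mutually similar
# are taken as instances of one record pattern, tolerating extra markup.
def tag_similarity(a, b):
    """Similarity of two subtrees serialized as tag sequences."""
    return SequenceMatcher(None, a, b).ratio()

def find_record_group(subtrees, min_sim=0.8):
    """Return indices of the largest, most similar group of sibling subtrees."""
    best, best_score = [], 0.0
    for seed in subtrees:
        group = [j for j, t in enumerate(subtrees)
                 if tag_similarity(seed, t) >= min_sim]
        score = sum(tag_similarity(seed, subtrees[j]) for j in group)
        if len(group) > len(best) or (len(group) == len(best) and score > best_score):
            best, best_score = group, score
    return best

records = [
    ["div", "a", "span"],          # result record
    ["div", "a", "span", "em"],    # same pattern with extra markup
    ["div", "a", "span"],
    ["table", "tr"],               # navigation noise
]
print(find_record_group(records))
```

The noisy subtree is excluded even though the matching records are not byte-identical, which is the tolerance the abstract emphasizes.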

3.
A Semantic-Matching-Based Web Information Extraction Method [Cited: 1; self: 0; others: 1]
To address the problems of information overload, ambiguous segmentation of Chinese words, and the inconsistent, hard-to-identify forms of Web information, this paper proposes a Web information extraction method based on semantic matching. The method combines web page classification, Chinese word segmentation, and semantic information matching; it defines a sememe-level similarity measure and, on that basis, a semantics-based matching procedure for recognizing and extracting information items from web pages. Web-MIND, an online drug-information supervision system built on this extraction method, can extract the information items of online drug advertisements with high accuracy.

4.
Learning Information Extraction Rules for Semi-Structured and Free Text [Cited: 47; self: 0; others: 47]
Soderland, Stephen. Machine Learning, 1999, 34(1-3): 233-272
A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences. Such semi-structured text has largely been beyond the scope of previous systems. When used in conjunction with a syntactic analyzer and semantic tagging, WHISK can also handle extraction from free text such as news stories.
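A hedged sketch of how a WHISK-style rule, with "*" wildcards that skip tokens and slots for semantic classes, might be compiled to a regular expression. The rule syntax below is a simplification of WHISK's actual rule language, and the rental-ad text is invented for illustration:

```python
import re

# Compile a simplified WHISK-style rule to a regex:
#   "*"        skips tokens lazily until the next anchor
#   "(Digit)"  captures a run of digits
#   "(Number)" captures digits with thousands separators
def compile_rule(pattern):
    rx = re.escape(pattern)
    rx = rx.replace(r"\*", r".*?")
    rx = rx.replace(r"\(Digit\)", r"(\d+)")
    rx = rx.replace(r"\(Number\)", r"([\d,]+)")
    return re.compile(rx, re.S)

rule = compile_rule("* (Digit) br * $(Number)")
ad = "Capitol Hill - 2 br apartment, nice view, $675 per month"
m = rule.search(ad)
print(m.groups())
```

Real WHISK induces such rules from hand-tagged instances rather than taking them as input; only the rule-application step is sketched here.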

5.
An Automatic Web Page Data Extraction System [Cited: 6; self: 0; others: 6]
The Internet contains a vast number of semi-structured HTML pages. Making use of this rich page data requires extracting it from the pages. This paper presents a new tree-structure-based information extraction method and DAE (DOM-based Automatic Extraction), a system that generates wrappers automatically and converts HTML page data into XML. The extraction process requires essentially no manual intervention, so extraction is fully automated. The method can be applied in information-search agents, data integration systems, and similar settings.
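A minimal standard-library illustration of the HTML-to-XML idea: parse the page into a tree, collect the repeated structures, and re-emit them as XML records. Real DAE infers the repeated region from the full DOM automatically, whereas the `<li>` target here is a hard-coded assumption:

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class ItemCollector(HTMLParser):
    """Collect the text of each <li> element as one record."""
    def __init__(self):
        super().__init__()
        self.items, self._buf, self._in_li = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._in_li, self._buf = True, []
    def handle_endtag(self, tag):
        if tag == "li" and self._in_li:
            self.items.append("".join(self._buf).strip())
            self._in_li = False
    def handle_data(self, data):
        if self._in_li:
            self._buf.append(data)

def html_to_xml(page):
    """Re-emit the collected records as an XML document."""
    p = ItemCollector()
    p.feed(page)
    root = ET.Element("records")
    for text in p.items:
        ET.SubElement(root, "record").text = text
    return ET.tostring(root, encoding="unicode")

page = "<html><ul><li>Paper A</li><li>Paper B</li></ul></html>"
print(html_to_xml(page))
```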

6.
An Information Extraction Model Based on Semantic Roles and Conceptual Graphs [Cited: 3; self: 0; others: 3]
杨选选, 张蕾. 《计算机应用》, 2010, 30(2): 411-414
Traditional information extraction methods achieve limited accuracy because they lack semantic support. To address this, an extraction method based on semantic understanding is proposed. On one hand, the shallow semantic information produced by semantic role labeling is converted into conceptual graphs, formalizing the basic semantics of the information to be extracted without ambiguity. On the other hand, scenes are distinguished by computing the similarity of conceptual graphs, and extraction patterns are acquired from the semantic roles to improve extraction quality. Experimental results show that the method performs well.

7.
General-purpose search engines neither extract temporal expressions from web page content accurately nor support semantic queries over them. To address this, a Temporal Semantic Relevance Ranking (TSRR) algorithm is proposed. Temporal information extraction and temporal ranking are added on top of a general search engine: temporal regular-expression rules extract temporal points and intervals from both the query keywords and the web documents, and the final ranking score of a page combines its textual relevance with its temporal semantic relevance. Experiments show that TSRR matches keyword queries involving temporal expressions accurately and effectively.
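A toy sketch of temporal-expression extraction and matching in the spirit of TSRR. The date grammar below (year-month points only) and the overlap measure are illustrative assumptions, not the paper's rules or scoring formula:

```python
import re

# Toy temporal grammar: "2010-05", "2010.5", or "2010年5" all yield (2010, 5).
DATE = re.compile(r"(\d{4})[./年-](\d{1,2})")

def temporal_points(text):
    """Extract (year, month) points matched by the toy date grammar."""
    return [(int(y), int(m)) for y, m in DATE.findall(text)]

def temporal_overlap(query, doc):
    """Fraction of the query's time points also mentioned in the document."""
    q, d = temporal_points(query), set(temporal_points(doc))
    if not q:
        return 0.0
    return sum(1 for p in q if p in d) / len(q)

doc = "世博会于2010年5月开幕"  # "The Expo opened in May 2010"
print(temporal_points(doc))
print(temporal_overlap("2010年5月", doc))
```

In TSRR a score like this would be combined with ordinary textual relevance to produce the final ranking; the combination weights are not reproduced here.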

8.
A Multi-Knowledge-Based Web Page Information Extraction Method [Cited: 10; self: 1; others: 9]
Automatically extracting the desired content from Web pages is an important research topic in intelligent Internet information retrieval. To solve the problem of acquiring the descriptive knowledge that page information extraction requires, this paper proposes a multi-knowledge-based Web page information extraction method (MKIE). The method divides the knowledge needed for extraction into two classes: deterministic pattern knowledge, which describes how the page content itself is represented and identifies each information object, and non-deterministic pattern knowledge, which describes the page's information-record blocks and their information objects. MKIE dynamically derives the latter class of knowledge from the former, and uses both classes to extract the desired information from pages whose content is similar but whose presentation varies. Extraction experiments on American university faculty publication pages show that MKIE has strong capabilities for automatic recognition and extraction of web page information.

9.
10.
In wrapper-based Web information extraction, extraction rules play a central role. Because web pages are frequently redesigned, the rules must be updated continually, and writing them by hand is time-consuming and laborious. This paper therefore proposes a method for generating extraction rules automatically: the HTML source is scanned to build a TABLE tree annotated with semantic information, which is used to recognize data tables in the page; on this basis, a greedy algorithm generates the extraction rules. Experimental results show that the method achieves high precision and F-measure, and a high rule-generation rate for the recognized tables.
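The data-table recognition step might be approximated with simple heuristics like these; both checks (consistent column count, non-numeric header row) are assumptions standing in for the paper's semantically annotated TABLE tree:

```python
# Hedged sketch: treat a TABLE as a data table when its rows share a
# consistent column count and the first row looks like a header.
def is_data_table(rows):
    """rows: list of lists of cell strings."""
    if len(rows) < 2:
        return False
    if len({len(r) for r in rows}) != 1:   # ragged rows suggest layout use
        return False
    header = rows[0]
    # header cells should be short labels, not numbers
    return all(c and not c.replace(".", "").isdigit() for c in header)

layout = [["Menu"], ["Home", "About"]]                        # layout table
data = [["Title", "Year"], ["WHISK", "1999"], ["DAE", "2003"]]  # data table
print(is_data_table(layout), is_data_table(data))
```

Rule generation would then run only over tables that pass this filter.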

11.
Traditional web crawlers serve keyword-based general search engines and cannot capture page category information, which hurts the computational efficiency and accuracy of text clustering and topic detection. This paper proposes web page classification and extraction based on hierarchical site structure: a virtual hierarchical site category tree is built and the real site's hierarchical structure is extracted, and hierarchy-aware page crawling is designed and implemented on top of them. For sites without category information, a title-based page classification technique is given, covering domain knowledge base construction and word semantic similarity computation based on HowNet (《知网》). Experimental results show that the method classifies pages well.
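A minimal sketch of title-based classification for sites without category information. Plain lexicon overlap stands in for the paper's HowNet-based word semantic similarity, and the category lexicons are invented:

```python
# Hedged sketch: assign a title to the category whose domain lexicon it
# overlaps most; return None when no lexicon word appears.
LEXICON = {
    "sports": {"比赛", "球队", "冠军"},
    "finance": {"股票", "基金", "利率"},
}

def classify_title(title):
    scores = {cat: sum(1 for w in words if w in title)
              for cat, words in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(classify_title("央行下调利率,股票市场大涨"))  # a finance headline
```

With HowNet-based similarity, near-synonyms of lexicon words would also score, which is what makes the paper's classifier more robust than this exact-match sketch.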

12.
Content in numerous Web data sources, designed primarily for human consumption, is not directly amenable to machine processing. Automated semantic analysis of such content facilitates their transformation into machine-processable and richly structured semantically annotated data. This paper describes a learning-based technique for semantic analysis of schematic data which are characterized by being template-generated from backend databases. Starting with a seed set of hand-labeled instances of semantic concepts in a set of Web pages, the technique learns statistical models of these concepts using light-weight content features. These models direct the annotation of diverse Web pages possessing similar content semantics. The principles behind the technique find application in information retrieval and extraction problems. Focused Web browsing activities require only selective fragments of particular Web pages but are often performed using bookmarks which fetch the contents of the entire page. This results in information overload for users of constrained interaction modality devices such as small-screen handheld devices. Fine-grained information extraction from Web pages, which is typically performed using page-specific syntactic expressions known as wrappers, suffers from lack of scalability and robustness. We report on the application of our technique in developing semantic bookmarks for retrieving targeted browsing content and semantic wrappers for robust and scalable information extraction from Web pages sharing a semantic domain. This work was conducted while the author was at Stony Brook University.

13.
A Web-Based Platform for Automatic Collection of Enterprise Competitor Intelligence [Cited: 4; self: 1; others: 4]
Accurately, effectively, and promptly searching out the desired information from the Internet automatically is an important research topic in Web information processing. Building on the previously proposed search-path-based web page search and multi-knowledge-based page information extraction methods, this paper presents the implementation of a Web-based platform for automatic collection of enterprise competitor intelligence. The platform can automatically locate the target pages across multiple enterprise portal sites and extract the multi-record information they contain. Experiments on automatic collection of enterprise recruitment information confirm the platform's effectiveness and accuracy in automatic information gathering.

14.
Web information may currently be acquired by activating search engines. However, our daily experience is not only that web pages are often either redundant or missing but also that there is a mismatch between information needs and the web's responses. If we wish to satisfy more complex requests, we need to extract part of the information and transform it into new interactive knowledge. This transformation may either be performed by hand or automatically. In this article we describe an experimental agent-based framework designed to help the user both in managing achieved information and in personalizing web searching activity. The first process is supported by a query-formulation facility and by a friendly structured representation of the searching results. On the other hand, the system provides proactive support to searching on the web by suggesting pages, which are selected according to the user's behavior shown in his navigation activity. A basic role is played by an extension of a classical fuzzy-clustering algorithm that provides a prototype-based representation of the knowledge extracted from the web. These prototypes lead both the proactive suggestion of new pages, mined through web spidering, and the structured representation of the searching results. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 1101-1122, 2007.
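The fuzzy-clustering step can be illustrated with textbook fuzzy c-means on one-dimensional feature values; this stands in for the paper's extended algorithm, and the data and initialization below are illustrative assumptions:

```python
# Hedged sketch of fuzzy c-means (FCM): alternate between computing fuzzy
# memberships and updating cluster prototypes as fuzzily weighted means.
def fcm(points, centers, m=2.0, iters=25):
    """Return refined cluster prototypes for 1-D points."""
    k = len(centers)
    for _ in range(iters):
        # membership degree of each point in each cluster
        u = []
        for x in points:
            d = [abs(x - c) or 1e-9 for c in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(k))
                      for i in range(k)])
        # prototype update: fuzzily weighted mean
        centers = [
            sum((u[p][i] ** m) * points[p] for p in range(len(points)))
            / sum(u[p][i] ** m for p in range(len(points)))
            for i in range(k)
        ]
    return centers

pts = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]   # e.g. page-feature values
print(sorted(fcm(pts, [min(pts), max(pts)])))
```

The resulting prototypes play the role the abstract describes: compact representatives of the knowledge extracted from browsed pages, against which new pages can be matched.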

15.
A Tree-Structure-Based Method for Automatic Web Data Extraction [Cited: 8; self: 2; others: 8]
This paper presents a tree-structure-based method for automatically extracting data from HTML pages. On top of the page's tree structure, a semantic-block model of HTML pages is proposed: the data values of an HTML page mainly reside in semantic blocks, and different HTML pages differ chiefly in their semantic blocks. Under this model, automatic extraction proceeds in four steps: discovering semantic blocks by comparing HTML pages; distinguishing the roles of the data values within each block; deriving the data schema; and deriving the extraction rules. Experiments on real HTML pages show that the method achieves high precision while maintaining linear time complexity as documents grow.

16.
Most web pages contain, besides the main text, noise such as navigation, advertisements, and disclaimers. To improve the accuracy of main-content extraction, an extraction method based on text-block density and tag-path coverage (CETD-TPC) is proposed. Combining the advantages of text-block density features and tag-path features, a new fused feature is designed and used to select the best text block in a page; the main content is then extracted from that block. The method effectively filters noise blocks and handles hard-to-extract short texts, and requires no training or manual processing. Experiments on the CleanEval dataset and on news pages randomly selected from well-known sites show that CETD-TPC adapts well to different data sources and outperforms algorithms such as CETR, CETD, and CEPR.
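A toy version of the density idea: score each candidate block by characters of visible text per tag and keep the densest block. The formula is a stand-in for CETD-TPC's fused density/tag-path feature, and the blocks are invented examples:

```python
import re

# Hedged sketch: navigation blocks are tag-heavy and text-light, so
# text density (characters per tag) separates them from the main content.
def text_density(block_html):
    """Characters of visible text per tag in the block (tags counted >= 1)."""
    tags = re.findall(r"<[^>]+>", block_html)
    text = re.sub(r"<[^>]+>", "", block_html)
    return len(text.strip()) / max(len(tags), 1)

def best_block(blocks):
    """Pick the block with the highest text density as the main content."""
    return max(blocks, key=text_density)

blocks = [
    "<a>Home</a><a>News</a><a>Sports</a>",   # navigation noise: many tags
    "<p>The committee announced the final decision today after a long debate.</p>",
]
main = best_block(blocks)
print(re.sub(r"<[^>]+>", "", main))
```

CETD-TPC additionally weighs tag-path coverage, which is what lets it keep short main-text blocks that raw density alone would miss.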

17.
In this paper we present a graphical software system that provides automatic support to the extraction of information from web pages. The underlying extraction technique exploits the visual appearance of the information in the document, and is driven by the spatial relations occurring among the elements in the page. However, the usual information extraction modalities based on the web page structure can be used in our framework, too. The technique has been integrated within the Spatial Relation Query (SRQ) tool. The tool is provided with a graphical front-end which allows one to define and manage a library of spatial relations, and to use a SQL-like language for composing queries driven by these relations and by further semantic and graphical attributes.

18.
Mining semantic relations between concepts underlies many fundamental tasks including natural language processing, web mining, information retrieval, and web search. In order to describe the semantic relation between concepts, in this paper, the problem of automatically generating a spatial temporal relation graph (STRG) of semantic relations between concepts is studied. The spatial temporal relation graph includes relation words, relation sentences, relation factor, relation graph, faceted feature, temporal feature, and spatial feature. The proposed method can automatically generate the STRG of semantic relations between concepts, which differs from manually generated annotation repositories such as WordNet and Wikipedia. Moreover, the proposed method does not need any prior knowledge such as an ontology or a hierarchical knowledge base such as WordNet. Empirical experiments on a real dataset show that the proposed algorithm is effective and accurate.

19.
Automatically extracting useful information from large-scale unstructured text is a major goal of natural language processing and artificial intelligence. Open information extraction has become the inevitable trend for efficiently mining web text; by the number of relation arguments it divides into binary and n-ary entity relation extraction, and this paper analyzes and summarizes the state and open problems of representative methods along this line. Most current open entity-relation extraction still performs only shallow semantic processing and rarely touches implicit relation extraction. Joint inference methods such as Markov logic and ontology-structure reasoning can combine multiple features and effectively infer subtle, complete information, opening new ground for deep text understanding.

20.
Web Page Information Extraction Based on an Extended DOM Tree [Cited: 1; self: 0; others: 1]
With the growth of the Internet, web pages provide ever more information at ever higher density. Most web pages contain multiple information blocks, laid out compactly and following similar patterns in their HTML syntax. For pages containing multiple information blocks, an information extraction method is proposed: first, an extended DOM (Document Object Model) tree is created and the page is decomposed into discrete information strips; then, guided by the hierarchy of the extended DOM tree together with the necessary visual features and semantic information, the discrete strips are re-integrated; finally, the subtrees containing the information blocks are identified and a depth-first traversal of the DOM tree performs the extraction. The algorithm can extract information from web pages with multiple information blocks.

