Similar Documents (20 results)
1.
Most web pages contain, besides the main text, noise such as navigation bars, advertisements, and disclaimers. To improve the accuracy of web content extraction, this paper proposes an extraction method based on text-block density and tag-path coverage (CETD-TPC). Combining the strengths of text-block density features and tag-path features, it designs a new fused feature, uses that feature to select the best text block in a page, and finally extracts the main content from that block. The method effectively filters noise blocks out of page content and handles short texts that are otherwise hard to extract, and it requires neither training nor manual processing. Experiments on the CleanEval dataset and on news pages randomly sampled from well-known websites show that CETD-TPC adapts well to different data sources and outperforms algorithms such as CETR, CETD, and CEPR.
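The text-block density idea in item 1 can be sketched minimally in Python: score each top-level block by the ratio of text characters to tag count, so link-heavy navigation blocks score low and the article body scores high. This is an illustrative stand-in, not the CETD-TPC implementation; the block granularity (top-level `<div>`s) and the `chars / (tags + 1)` score are simplifying assumptions.

```python
from html.parser import HTMLParser

class BlockDensity(HTMLParser):
    """Accumulate, per top-level <div>, the amount of text and the number
    of inner tags, so blocks can be scored by text density = chars / tags."""
    def __init__(self):
        super().__init__()
        self.blocks = []   # one [text_chars, tag_count] entry per block
        self.depth = 0     # nesting depth inside the current <div> block

    def handle_starttag(self, tag, attrs):
        if tag == "div" and self.depth == 0:
            self.blocks.append([0, 0])   # open a new block
            self.depth = 1
        elif self.depth:
            if tag == "div":
                self.depth += 1
            self.blocks[-1][1] += 1      # count inner tags (links, etc.)

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and self.blocks:
            self.blocks[-1][0] += len(data.strip())

def densest_block_index(html):
    """Index of the block with the highest text-to-tag density."""
    p = BlockDensity()
    p.feed(html)
    scores = [chars / (tags + 1) for chars, tags in p.blocks]
    return scores.index(max(scores))
```

On a page with a link-only navigation `<div>` followed by a text-rich article `<div>`, the second block wins by a wide margin.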

2.
Based on how textual information is stored in web pages, this paper proposes a strategy for extracting text from text-rich pages, and uses it to extract the main textual content effectively. The extraction method adapts well in both space and time.

3.
Web Content Extraction Based on Content Similarity
This paper proposes a method that simplifies complex web page markup and maps it onto an easily manipulated tree structure. The method does not depend on a DOM tree and needs no HTMLParser-style package; instead, it uses text-similarity computation: the usefulness of small text fragments is judged by the similarity between the text in a tree node and the headings at each level, and the page is cleaned and its main content extracted accordingly. Experimental results show that the method achieves good generality and accuracy for content extraction.
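The similarity test above can be sketched with a bag-of-words cosine between each text fragment and the page headings: fragments that share no vocabulary with the headings are treated as noise. All names and the 0.1 threshold here are illustrative assumptions, not the paper's model.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two strings under a bag-of-words model."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    na = sqrt(sum(v * v for v in wa.values()))
    nb = sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def filter_blocks(heading, blocks, threshold=0.1):
    """Keep only the text blocks whose similarity to the heading
    clears the threshold; the rest are discarded as noise."""
    return [b for b in blocks if cosine(heading, b) >= threshold]
```

For example, given the heading "local elections results announced", a sentence restating the event survives the filter while a "click here to subscribe" fragment is dropped.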

4.
To address web page noise and the high complexity of extracting unstructured information from pages, this paper proposes a text extraction algorithm based on tag-path (XPath) clustering. The algorithm first preprocesses page noise, clusters tag paths according to the page's DOM tree structure, quickly identifies the key part of a page using an automatically trained threshold and a page segmentation algorithm, and derives a text extraction template from the nested structure of the data blocks. Experiments on different types of websites show that the method is fast and fairly accurate.
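A minimal illustration of the tag-path idea: record the root-to-leaf tag path of every text node, then group text nodes by path. Paths that repeat many times (list items, result records) form the candidate data region. This is a toy grouping step only, not the paper's clustering or threshold training; the helper names are hypothetical.

```python
from collections import defaultdict
from html.parser import HTMLParser

class TagPathCollector(HTMLParser):
    """Map each text node to its root-to-leaf tag path (e.g.
    /html/body/ul/li), so repeated paths can be grouped together."""
    def __init__(self):
        super().__init__()
        self.stack = []                      # currently open tags
        self.paths = defaultdict(list)       # path -> text snippets

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # pop up to and including the matching open tag
            while self.stack.pop() != tag:
                pass

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.paths["/" + "/".join(self.stack)].append(text)

def most_repeated_path(html):
    """The tag path carrying the most text nodes: a crude proxy
    for the page's repeated data region."""
    c = TagPathCollector()
    c.feed(html)
    return max(c.paths, key=lambda p: len(c.paths[p]))
```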

5.
This paper proposes a text extraction method based on the features of static web pages. The method first uses URL features to decide whether a page is static, then extracts the title and body text according to the page's structural and content features, and finally stores the results in a uniform, ordered format for further processing. Experiments show recall of 96.2% and precision of 95.9% for page content extraction. The method is computationally light, fast, and accurate, and can be applied in practice to large-scale web content security analysis.

6.
A Fast Template-Based Algorithm for Automatic Web Text Extraction
To address web page noise and the high complexity of generating templates for extracting unstructured information, this paper proposes a fast algorithm for obtaining such extraction templates. The algorithm first preprocesses page noise, hash-maps the tags of the page's DOM tree, quickly identifies the main part of the page using an automatically trained threshold, and derives a text extraction template from the nested structure of the data blocks. Experiments on different types of websites show that the method is fast and fairly accurate.

7.
罗永莲, 赵昌垣. 计算机应用 (Journal of Computer Applications), 2014, 34(10): 2865-2868
For corpus processing of emergency-event news pages, this paper proposes a method that exploits the characteristics of such news together with HTML markup to extract and locate news content. The method treats page markup and text similarity as machine-learning features and extracts the news title with a Bayesian classifier. Exploiting the lexical stability of event news and the nesting of page markup reduces the amount of text to process and the dimensionality of the text vectors; vector similarity is then computed to locate the beginning and end of the article. Experiments show title-extraction accuracy of 86.5% and average body-extraction accuracy above 78%. The method extracts news content effectively, is easy to implement, and offers a useful reference for combining markup information with the text itself in other web text processing tasks.

8.
Correct extraction of web page titles matters greatly in web text information extraction. This paper proposes a real-time title extraction method. It first parses index-type pages in real time, then traverses hyperlinks and exploits the correspondence between titles and publication times to obtain the URLs and anchor texts of the article pages an index page links to. If an anchor text is not the title of the article body, the method fetches the HTML of the article page and builds its DOM tree; guided by the visual characteristics of titles, it then traverses the tree depth-first to extract the correct body title. Experiments show that the proposed real-time title extraction method is simple to implement and highly accurate.
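One common heuristic behind body-title extraction, sketched here as an illustration: compare each on-page heading against the `<title>` element and keep the best match. This regex-based sketch stands in for the paper's DOM traversal with visual features; the tag range `h1`-`h3` and the word-overlap score are assumptions.

```python
import re

def extract_body_title(html):
    """Pick the on-page heading whose words best overlap the <title>
    text: a rough stand-in for DOM-tree traversal with visual cues."""
    m = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
    title_words = set(m.group(1).lower().split()) if m else set()
    best, best_score = None, -1
    # scan h1..h3 headings; the backreference \1 matches the close tag
    for tag, text in re.findall(r"<(h[1-3])[^>]*>(.*?)</\1>", html, re.S | re.I):
        score = len(title_words & set(text.lower().split()))
        if score > best_score:
            best, best_score = text.strip(), score
    return best
```

The `<title>` often carries a site-name suffix ("... - Example News") that the body heading lacks, which is why overlap rather than equality is used.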

9.
Web Content Extraction Using Feature Text Density
Given that web pages are becoming ever more diverse, complex, and non-standard, this paper proposes a content extraction method based on feature text density. The method classifies the text in a page by purpose and features, and builds a mathematical model for proportional density analysis, thereby identifying the topical text precisely. Both its time and space complexity are low. Experiments show that it effectively extracts the main content of complex pages and of pages with multiple topical sections, and generalizes well.

10.
梁正友, 欧杰, 俞闽敏. 计算机工程 (Computer Engineering), 2011, 37(23): 276-278
In existing web extraction techniques, content-locating methods consider only the textual information of a page; when a page carries many images and little text, they tend to go astray, and locating accuracy is low. To address this problem, this paper takes an information-theoretic view: combining the textual and image information in a page, it designs a way to estimate the amount of image information and effective information, and on that basis proposes a content-locating algorithm based on combined text and image information. Experiments show that the algorithm locates content accurately across pages with different amounts of body text.

11.
Information Extraction from Free Text Based on Frame-Semantic Annotation
Information extraction is an effective way to build databases from free-text corpora and collect information automatically. This paper proposes an extraction method whose rules are built on frame-semantic annotation, so that a uniform approach guides the whole extraction process. The method works at a fine granularity and generalizes reasonably to domains with regular semantics. A frame-semantics-based BAIE system (book abstract information extraction) was designed and applied to extracting information from book abstracts. The results show that frame-semantics-based extraction is feasible and applicable.

12.
Implementation of an Information Extraction System Based on Rule Induction
Facing the rapid growth of web information, information extraction is well suited to pulling the needed factual data out of large document collections. Through Document Object Model (DOM) parsing and the definition of retrieval, extraction, and mapping rules, this paper designs and implements an information extraction system with rule-induction capability for automatic web retrieval. Within the rule-induction framework, the WHISK learning algorithm used to generate extraction patterns is analyzed experimentally; the results show that the system induces rules well for both single-slot and multi-slot data.
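WHISK-style extraction rules (item 20 below describes the original algorithm) interleave wildcard gaps with literal anchors and capture slots. A heavily simplified sketch, not WHISK's actual rule semantics: each `*` becomes a non-greedy regex gap and each parenthesized token becomes a capture group; the rental-ad example is the informal running example from that literature.

```python
import re

def compile_rule(rule):
    """Compile a simplified WHISK-style rule into a regex.
    '*' is a non-greedy gap; '(...)' tokens are capture slots;
    everything else is a literal anchor."""
    parts = []
    for token in rule.split():
        if token == "*":
            parts.append(r".*?")
        elif token.startswith("(") and token.endswith(")"):
            parts.append("(" + token[1:-1] + ")")   # keep slot pattern
        else:
            parts.append(re.escape(token))          # literal anchor
    return re.compile(r"\s*".join(parts))

# Multi-slot rule: extract bedroom count and price from a rental ad.
rule = compile_rule(r"* (\d+) br * $ (\d+)")
match = rule.search("Capitol Hill 2 br apartment, $ 900 per month")
bedrooms, price = match.groups()
```

Real WHISK learns such rules from tagged examples by progressively specializing an empty rule; this sketch only shows how a finished rule can be applied.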

13.
Event extraction is one kind of information extraction task; it aims to identify the trigger word and the arguments of an event. Because it is easily affected by data sparsity, argument extraction is a difficult point in Chinese event extraction, and research has focused on feature engineering. Chinese grammar is much more complex than English, so methods that capture English text features are less effective on Chinese tasks, while the neural network models in common use consider only contextual information and cannot account for both lexical and syntactic...

14.
A Survey of Web Information Extraction Systems
The Internet presents a huge amount of useful information which is usually formatted for its users, making it difficult to extract relevant data from various sources. Therefore, the availability of robust, flexible Information Extraction (IE) systems that transform Web pages into program-friendly structures such as a relational database will become a great necessity. Although many approaches for data extraction from Web pages have been developed, there has been limited effort to compare such tools. Unfortunately, in only a few cases can the results generated by distinct tools be directly compared, since the addressed extraction tasks are different. This paper surveys the major Web data extraction approaches and compares them in three dimensions: the task domain, the techniques used, and the degree of automation. The criteria of the first dimension explain why an IE system fails to handle some Web sites of particular structures. The criteria of the second dimension classify IE systems based on the techniques used. The criteria of the third dimension measure the degree of automation for IE systems. We believe these criteria provide qualitative measures to evaluate various IE approaches.

15.

The volume of electronic text in different languages, particularly on the World Wide Web, is growing significantly, and it is becoming ever harder for users who read only a limited number of languages to obtain information from this text. This article investigates some of the issues involved in achieving multilingual information extraction (IE), describes the approach adopted in the M-LaSIE-II IE system, which addresses these problems, and presents the results of evaluating the approach against a small parallel corpus of English/French newswire texts. The approach is based on the assumption that it is possible to construct a language-independent representation of concepts relevant to the domain, at least for the small, well-defined domains typical of IE tasks, allowing multilingual IE to be carried out successfully without requiring full machine translation.

16.
17.
This paper proposes a solution for understanding military documents that combines ontologies with natural language understanding. Through two modules, information extraction and military-symbol ontology matching, and exploiting the correspondence between military documents and military symbols, the system automatically converts military documents into an unambiguous intermediate format that can be passed to other systems, improving command-and-control effectiveness.

18.
As the internet grows rapidly, millions of web pages are being added on a daily basis. The extraction of precise information is becoming more and more difficult as the volume of data on the internet increases. Several search engines and information-fetching tools are available on the internet, all of which claim to provide the best crawling facilities. For the most part, these search engines are keyword based. This poses a problem for visually impaired people who want to get the full use from online resources available to other users. Visually impaired users require special aid to get along with any given computer system. Interface and content management are no exception, and special tools are required to facilitate the extraction of relevant information from the internet for visually impaired users. The HOIEV (Heavyweight Ontology Based Information Extraction for Visually Impaired Users) architecture provides a mechanism for highly precise information extraction using heavyweight ontology and a built-in vocal command system for visually impaired internet users. Our prototype intelligent system not only integrates and communicates among different tools, such as voice command parsers, domain ontology extractors, and short message engines, but also introduces an autonomous mechanism for information extraction (IE) using heavyweight ontology. In this research we designed a domain-specific heavyweight ontology using OWL 2 (Web Ontology Language 2), and for axiom writing we used PAL (Protégé Axiom Language). We introduced a novel autonomous mechanism for IE by developing prototype software. A series of experiments was designed for the testing and analysis of the performance of heavyweight ontology in general, and of our information extraction prototype specifically.

19.
The most fascinating advantage of the semantic web would be its capability of understanding and processing the contents of web pages automatically. Basically, realizing the semantic web involves two main tasks: (1) representation and management of a large amount of data and metadata for web contents; (2) information extraction and annotation on web pages. On the one hand, recognition of named entities is regarded as a basic and important problem to be solved before deeper semantics of a web page can be extracted. On the other hand, semantic web information extraction is a language-dependent problem, which requires particular natural language processing techniques. This paper introduces VN-KIM IE, the information extraction module of the semantic web system VN-KIM that we have developed. The function of VN-KIM IE is to automatically recognize named entities in Vietnamese web pages, identifying their classes and, where available, their addresses in the knowledge base of discourse. That information is then annotated onto those web pages, providing a basis for NE-based searching on them, as compared to the current keyword-based approach. The design, implementation, and performance of VN-KIM IE are presented and discussed.

20.
Learning Information Extraction Rules for Semi-Structured and Free Text
Soderland, Stephen. Machine Learning, 1999, 34(1-3): 233-272
A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences. Such semi-structured text has largely been beyond the scope of previous systems. When used in conjunction with a syntactic analyzer and semantic tagging, WHISK can also handle extraction from free text such as news stories.
