Similar Documents
20 similar documents found (search time: 734 ms)
1.
This work implements automatic extraction of multi-record web page information stored using table markup. Starting from how the information is stored in the page, the method locates the useful content within tables, automatically finds structurally identical or similar record patterns in a page, analyzes the structural characteristics of each record pattern, and invokes the corresponding extraction template (an XSLT document) to extract the information automatically.
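The record-pattern idea in the abstract above can be illustrated with a small sketch: derive a structural signature for each table row and treat the largest group of rows sharing a signature as the repeated record pattern. This is an illustrative simplification under assumed names, not the paper's actual implementation (which emits XSLT).

```python
from html.parser import HTMLParser
from collections import defaultdict

class RowSignatureParser(HTMLParser):
    """Collect, for each <tr>, the sequence of cell tags it contains."""
    def __init__(self):
        super().__init__()
        self.rows = []          # list of (signature, cell_texts)
        self._cells = None      # cell-tag signature of the current row
        self._texts = None
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._cells, self._texts = [], []
        elif tag in ("td", "th") and self._cells is not None:
            self._cells.append(tag)
            self._texts.append("")
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
        elif tag == "tr" and self._cells is not None:
            self.rows.append((tuple(self._cells), self._texts))
            self._cells = None

    def handle_data(self, data):
        if self._in_cell and self._texts:
            self._texts[-1] += data.strip()

def group_records(html):
    """Group rows by structural signature; the largest group is taken
    as the page's repeated record pattern."""
    p = RowSignatureParser()
    p.feed(html)
    groups = defaultdict(list)
    for sig, texts in p.rows:
        groups[sig].append(texts)
    return max(groups.values(), key=len) if groups else []
```

On a table with a header row and several data rows, the data rows share a `("td", "td", …)` signature and form the largest group, so the header is excluded automatically.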

2.
Web Page Information Extraction Based on Keyword Clustering and Node Distance   (Total citations: 2; self-citations: 0; by others: 2)
Most web information extraction methods target specific websites, for example methods based on site-specific extraction rules or on training page samples. These methods work well on a given site, but when a new website is encountered, extraction rules must be added or a new set of training pages supplied by hand; and when a site's template changes, the rules must be redesigned or the training pages re-entered. Such methods are hard to maintain and therefore cannot be applied to information extraction from a large number of different websites. This paper proposes a new web information extraction method based on topic-specific keyword groups and node distance, which extracts page information automatically from different websites without distinguishing among them. Extraction experiments on pages from a large number of websites show that the method extracts the relevant information correctly and automatically regardless of a page's origin; it has been successfully applied in an e-commerce intelligent search and mining system.

3.
A Web-Based Platform for Automatically Gathering Competitive Intelligence on Enterprises   (Total citations: 4; self-citations: 1; by others: 4)
Accurately, effectively, and promptly locating needed information on the Internet is an important research topic in Web information processing. Building on the previously proposed search-path-based web page search method and multi-knowledge-based web information extraction method, this paper presents the implementation of a Web-based platform for automatically gathering competitive intelligence on enterprises. The platform can automatically locate target pages across multiple enterprise portal sites and extract multi-record information from those pages. The platform was used in an experiment on automatic search of enterprise recruitment information; the results confirm its effectiveness and accuracy in automatic information gathering.

4.
XML-Based Automatic Web Page Information Extraction   (Total citations: 4; self-citations: 0; by others: 4)
周津  朱明  郑全 《计算机应用》2004,24(Z1):225-227
This paper proposes an XML-based method and framework for automatic web page information extraction. By exploiting the structural similarity and lexical similarity of information in web pages, it automatically learns the record patterns of page information and induces the corresponding lexical patterns, avoiding both the heavy manual work of collecting and labeling samples and the need for hand-specified patterns, so the approach is highly automatic. The automatically induced lexical patterns can also be applied to other websites and to unstructured text.

5.
A Multi-Knowledge-Based Method for Web Page Information Extraction   (Total citations: 10; self-citations: 1; by others: 9)
Automatically extracting the needed content from Web pages is an important research topic in intelligent Internet information retrieval. To effectively solve the problem of acquiring the information-description knowledge needed for web page information extraction, this paper proposes a multi-knowledge-based web page information extraction method (MKIE). The method divides the knowledge needed for extraction into two classes: deterministic pattern knowledge, which describes the representational characteristics of the page content itself and identifies the information objects in a page; and non-deterministic pattern knowledge, which describes the information record blocks and information objects of a page. MKIE dynamically analyzes the former class of knowledge to obtain the latter, and uses both classes to extract the needed information from pages whose content is similar but whose presentation differs. Extraction experiments on university faculty publication pages show that MKIE has strong capabilities for automatic recognition and extraction of web page information.

6.
Automatic Web Information Extraction Based on Repeated Patterns   (Total citations: 3; self-citations: 2; by others: 1)
Many online shopping websites exist on the Internet, and extracting the product information from their pages can provide value-added services for e-commerce and Web search. This paper proposes an automatic Web information extraction method for such sites: it obtains the topic content of a page by detecting repeated patterns in the page and analyzing the characteristics of the topic content, requiring no manual intervention during extraction. Tests on 10 online shopping websites show that the proposed method is effective.
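A hedged sketch of the repeated-pattern detection idea in the abstract above: reduce a page to its sequence of opening-tag names and look for the contiguous tag sub-sequence that repeats most often, which on listing pages typically corresponds to the record template. The function names are illustrative; this is a simplification, not the paper's algorithm.

```python
import re
from collections import Counter

def tag_sequence(html):
    """Reduce HTML to its sequence of opening-tag names."""
    return re.findall(r"<([a-zA-Z][a-zA-Z0-9]*)", html)

def most_repeated_pattern(tags, max_len=5):
    """Return the tag sub-sequence (length >= 2) repeated most often,
    preferring higher repeat counts, then longer patterns."""
    best, best_count = (), 1
    for n in range(2, max_len + 1):
        counts = Counter(tuple(tags[i:i + n]) for i in range(len(tags) - n + 1))
        for pat, c in counts.items():
            if (c, n) > (best_count, len(best)):
                best, best_count = pat, c
    return best, best_count
```

For a product list such as `<ul><li><a>…</a></li><li><a>…</a></li>…</ul>`, the dominant pattern is the `(li, a)` pair, one occurrence per record.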

7.
This paper introduces a multi-strategy combined information extraction method, MSCIE (Multi-Strategy Combination Information Extraction). MSCIE divides information extraction from table-style web pages into extraction based on analysis of page structural features and extraction based on pattern matching. It proposes a method for pruning redundant information from the page's DOM (Document Object Model) tree and an entity feature pattern discovery algorithm, used by the two extraction strategies respectively, and combines the two strategies to complete the extraction. Applied in an Internet competitive intelligence monitoring system to extract supply and demand information for many kinds of goods from a large number of websites, it achieved high precision and recall (above 95% on average).

8.
Web Information Extraction Based on Subtree Breadth   (Total citations: 1; self-citations: 1; by others: 0)
王权  施韶亭 《计算机工程》2009,35(3):89-90,9
A new web page information extraction method is proposed that uses subtree breadth to automatically extract page information from different scientific literature websites without distinguishing among them. Information extraction experiments were carried out on a large number of scientific literature websites, and the method has been deployed in the Gansu provincial science and technology literature sharing platform. The experimental results show that the method extracts the relevant information automatically, independent of a page's origin, while maintaining high recall and precision.

9.
To address the excessive number of search results and the inconsistency of attribute values across information sources (web pages), this paper proposes a search strategy with attribute fusion/integration capability, aiming to replace manual sorting with an automatically generated search system. The retrieved pages undergo information extraction, comparison, and statistical analysis, and are then integrated/fused into a unified information view presented to the user, ensuring completeness and authority of the information. Trial runs and test data from a PC specification/price retrieval demonstration system built with this method show that the system largely frees users from heavy manual retrieval and raises the degree of automation.

10.
For large-scale heterogeneous web pages, information extraction methods based on visual features generally suffer from poor generality and low extraction efficiency. To address the generality problem, this paper proposes WEMLVF, a visual-feature-based web information extraction framework using supervised machine learning. The framework generalizes well; information extraction experiments on forum websites and news comment websites verify its effectiveness. Then, to address the low extraction efficiency caused by the high time cost of visual feature extraction, the paper uses WEMLVF to propose two methods for automatically generating information extraction templates, one based on XPath and one based on the classic wrapper induction algorithm SoftMealy. Both methods use visual features to generate the templates, but the templates themselves contain no visual features, so applying a template requires no visual feature extraction from the page. This exploits the value of visual features for information extraction while significantly improving extraction efficiency, as the experimental results confirm.
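The abstract above generates XPath templates from located nodes; the core of that step can be sketched as building an absolute, position-indexed XPath from the root down to a target element. This is a generic illustration with assumed helper names, not the WEMLVF template generator.

```python
import xml.etree.ElementTree as ET

def xpath_of(root, target):
    """Build an absolute, position-indexed XPath from root to target."""
    # ElementTree has no parent pointers, so build a child -> parent map
    parent = {c: p for p in root.iter() for c in p}
    steps = []
    node = target
    while node is not root:
        p = parent[node]
        # position among same-tag siblings (XPath is 1-indexed)
        siblings = [c for c in p if c.tag == node.tag]
        steps.append(f"{node.tag}[{siblings.index(node) + 1}]")
        node = p
    steps.append(root.tag)
    return "/" + "/".join(reversed(steps))
```

Once a data node has been located (e.g. via visual features, as in the paper), the resulting path string can be re-applied to similarly structured pages without recomputing any features.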

11.
An Automatic Web Page Data Extraction System   (Total citations: 6; self-citations: 0; by others: 6)
The Internet contains a large number of semi-structured HTML pages. To make use of this rich page data, the data must be re-extracted from the pages. This paper introduces a new tree-structure-based information extraction method and DAE (DOM-based Automatic Extraction), a system that automatically generates wrappers and converts HTML page data into XML. The extraction process requires essentially no manual intervention, so extraction is fully automated. The method can be applied in information search agents, in data integration systems, and in similar settings.
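A minimal sketch of the HTML-to-XML re-extraction idea described above (not the DAE system itself): pull repeated `<li>` records out of an HTML fragment and re-serialize them as structured XML. The element names `records`/`record` are assumptions for the example.

```python
import re
import xml.etree.ElementTree as ET

def html_list_to_xml(html, record_tag="record"):
    """Wrap the text of each <li>...</li> as an XML record element."""
    root = ET.Element("records")
    for text in re.findall(r"<li>(.*?)</li>", html, re.S):
        rec = ET.SubElement(root, record_tag)
        rec.text = text.strip()
    return ET.tostring(root, encoding="unicode")
```

A real wrapper would instead walk the DOM tree and map repeated subtrees to record schemas; the sketch only shows the serialization target.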

12.
In this paper, a model for websites is presented. The model is well-suited for the formal verification of dynamic as well as static properties of the system. A website is defined as a collection of web pages which are semantically connected in some way. External web pages (which are related pages not belonging to the website) are treated as the environment of the system. We also present the logic which is used to specify properties of websites, and illustrate the kinds of properties that can be specified and verified by using a model-checking tool on the system. In this setting, we discuss some interesting properties which often need to be checked when designing websites. We have encoded the model using the specification language Maude which allows us to use the Maude model-checking tool.

13.
The most fascinating advantage of the semantic web would be its capability of understanding and processing the contents of web pages automatically. Basically, the semantic web realization involves two main tasks: (1) representation and management of a large amount of data and metadata for web contents; (2) information extraction and annotation on web pages. On the one hand, recognition of named entities is regarded as a basic and important problem to be solved before deeper semantics of a web page can be extracted. On the other hand, semantic web information extraction is a language-dependent problem, which requires particular natural language processing techniques. This paper introduces VN-KIM IE, the information extraction module of the semantic web system VN-KIM that we have developed. The function of VN-KIM IE is to automatically recognize named entities in Vietnamese web pages by identifying their classes, and their addresses if existing, in the knowledge base of discourse. That information is then annotated to those web pages, providing a basis for NE-based searching on them, as compared to the current keyword-based one. The design, implementation, and performance of VN-KIM IE are presented and discussed.

14.
IPSMS: Design and Implementation of an Internet Public Sentiment Monitoring System   (Total citations: 3; self-citations: 0; by others: 3)
This paper describes IPSMS (Internet Public Sentiment Monitoring System), a system for monitoring online public opinion. The system searches online news and forum/BBS posts by keyword and clusters them by event, so that administrators can learn about ongoing or past events by reading the clustered events; it also provides automatic, continuous tracking of event development, helping administrators grasp the full picture of an event quickly and completely. The system consists of three parts: a web page crawler, a web page parser, and a tracking and detection subsystem. Because online public sentiment involves huge volumes of data, the system adopts web page cleaning techniques for efficiency and uses a k-d tree in the topic tracking process. Finally, future work on the system is discussed.
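The system above uses a k-d tree to speed up topic tracking; the data structure itself can be sketched generically as follows. This is a minimal illustration of k-d tree nearest-neighbour search, not the IPSMS implementation.

```python
def build_kdtree(points, depth=0):
    """Recursively partition points, cycling through the axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
        "axis": axis,
    }

def nearest(node, query, best=None):
    """Find the stored point closest (in squared distance) to query."""
    if node is None:
        return best
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    if best is None or dist2(node["point"]) < dist2(best):
        best = node["point"]
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # only descend into the far side if the splitting plane is closer
    # than the current best match
    if diff ** 2 < dist2(best):
        best = nearest(far, query, best)
    return best
```

In a topic-tracking setting the points would be document feature vectors, and `nearest` finds the closest existing topic for an incoming story without comparing against every stored document.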

15.
Web-Based Automatic Acquisition of Bilingual Parallel Sentence Pairs   (Total citations: 3; self-citations: 1; by others: 2)
Bilingual parallel sentence pairs are an important resource for machine translation, but because of limited acquisition channels, sentence-level parallel corpora are not only small in quantity but also often concentrated in particular domains, making it hard to meet the needs of real applications. This paper presents a Web-based system for automatically acquiring bilingual parallel sentence pairs. The system combines the strengths of existing systems and improves on their key techniques. A method is proposed for automatically discovering the URL naming conventions of bilingual websites, improving the extraction of bilingual parallel sentence pairs. Experimental results show that the proposed method greatly improves the recall of candidate bilingual website discovery; the acquired parallel sentence pairs reach 93% recall and 96% precision, demonstrating the method's effectiveness. In addition, the paper studies the extraction of parallel sentence pairs that occur inside bilingual mixed-language pages, with preliminary results.
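A hedged illustration of the URL-naming-convention idea above: candidate bilingual page pairs often differ only by a language token in the URL (e.g. "en" vs "cn"). The token list and function name here are assumptions for the example, not the paper's discovered conventions.

```python
import re

# assumed language-token pairs; real systems learn these per site
LANG_TOKENS = [("en", "cn"), ("english", "chinese"), ("e", "c")]

def pair_bilingual_urls(urls):
    """Pair URLs that become identical after swapping a language token."""
    urls = set(urls)
    pairs = []
    for url in sorted(urls):
        for a, b in LANG_TOKENS:
            candidate = re.sub(rf"\b{a}\b", b, url)
            if candidate != url and candidate in urls:
                pairs.append((url, candidate))
    return pairs
```

Pages paired this way are then candidates for sentence alignment; URLs with no counterpart (e.g. a page that exists only in one language) simply produce no pair.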

16.
A First Look at WWW Page Layout Rules   (Total citations: 1; self-citations: 0; by others: 1)
The number of users and sites connected to the WWW (World Wide Web) doubles every year. For a site to leave a deep impression on visitors among millions of others, designing web pages that are rich in information, easy to browse, and visually pleasing is indispensable. Drawing on Gestalt psychology, traditional typography, hypertext authoring, and human-computer interaction, this paper attempts to propose a set of interface layout rules for web documents, covering text, graphics, and static and dynamic web documents. The paper divides web documents into five basic types and then gives layout rules for each.

17.
This paper proposes a new method for obtaining bilingual web pages from the result pages returned by search engines, divided into two tasks. The first task automatically detects and collects the data records in the search engine result pages; this step identifies useful record summaries by clustering and provides effective features for the second task, the verification and acquisition of high-quality bilingual mixed-language pages. The paper treats the verification of bilingual mixed-language pages as a classification problem, and the method does not depend on any particular domain or search engine. On 2,516 manually annotated search result records collected from search engines, the proposed method achieves 81.3% precision and 94.93% recall.

18.
Web information may currently be acquired by activating search engines. However, our daily experience is not only that web pages are often either redundant or missing but also that there is a mismatch between information needs and the web's responses. If we wish to satisfy more complex requests, we need to extract part of the information and transform it into new interactive knowledge. This transformation may either be performed by hand or automatically. In this article we describe an experimental agent-based framework designed to help the user both in managing acquired information and in personalizing web searching activity. The first process is supported by a query-formulation facility and by a friendly structured representation of the searching results. On the other hand, the system provides proactive support for searching the web by suggesting pages, which are selected according to the user's behavior shown in his navigation activity. A basic role is played by an extension of a classical fuzzy-clustering algorithm that provides a prototype-based representation of the knowledge extracted from the web. These prototypes lead both the proactive suggestion of new pages, mined through web spidering, and the structured representation of the searching results. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 1101–1122, 2007.

19.
Accessibility of the Italian public administration web pages is ruled by the Stanca Act and in particular the Decree of the Minister issued on July 8, 2005. In this paper, an objective test is performed on the official web pages of the Italian province and region chief towns to check their compliance with the 22 technical requirements defined by the Stanca Act. A sample of 976 web pages belonging to the websites of the Italian chief towns was downloaded in the period October–December 2012. This data collection was submitted to AChecker, the worldwide recognized syntax and accessibility validation service. Several accessibility and syntax errors were found through the automatic analysis. These errors have been classified, statistics have been produced, and some graphs are included to offer an immediate view of the error distribution. Moreover, the most frequent errors are pointed out and explained in detail. Although the Stanca Act was promulgated some years ago, and contains precise indications about updating a web page to be compliant with the 22 technical requirements, none of the analyzed websites is fully compliant with the law. Updating web pages to comply with the Stanca Act is a slow process, and some grave errors are still present, both in terms of syntax and accessibility.

20.
The ability to automatically detect fraudulent escrow websites is important in order to alleviate online auction fraud. Despite research on related topics, such as web spam and spoof site detection, fake escrow website categorization has received little attention. The authentic appearance of fake escrow websites makes it difficult for Internet users to differentiate legitimate sites from phonies, making systems for detecting such websites an important endeavor. In this study we evaluated the effectiveness of various features and techniques for detecting fake escrow websites. Our analysis included a rich set of fraud cues extracted from web page text, image, and link information. We also compared several machine learning algorithms, including support vector machines, neural networks, decision trees, naïve Bayes, and principal component analysis. Experiments were conducted to assess the proposed fraud cues and techniques on a test bed encompassing nearly 90,000 web pages derived from 410 legitimate and fake escrow websites. The combination of an extended feature set and a support vector machines ensemble classifier enabled accuracies over 90% and 96% for page-level and site-level classification, respectively, when differentiating fake pages from real ones. Deeper analysis revealed that an extended set of fraud cues is necessary due to the broad spectrum of tactics employed by fraudsters. The study confirms the feasibility of using automated methods for detecting fake escrow websites. The results may also be useful for informing existing online escrow fraud resources and communities of practice about the plethora of fraud cues pervasive in fake websites.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号