Similar documents (20 results)
1.
Wrappers are an effective solution for automating information extraction from Web pages. A wrapper associates a Web page with an XML document that represents part of the information in that page in a machine-readable format. Most existing wrapping approaches have focused on how to generate extraction rules, while ignoring the potential benefits of using the schema of the extracted information during wrapper evaluation. In this paper, we investigate how the schema of the extracted information can be used effectively in both the design and the evaluation of a Web wrapper. We define a clean declarative semantics for schema-based wrappers by introducing the notion of a (preferred) extraction model, which is essential for computing a valid XML document containing the information extracted from a Web page. We developed the SCRAP (SChema-based wRAPper for web data) system for the proposed schema-based wrapping approach, which also provides visual support tools for the wrapper designer. Moreover, we present a wrapper generalization framework that speeds up the design of schema-based wrappers. Experimental evaluation has shown that SCRAP wrappers not only successfully extract the required data but are also robust to changes that may occur in the source Web pages.
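The abstract gives no code, so the following is only a minimal sketch of the general schema-based idea, not the SCRAP system itself: candidate values are pulled from a page with XPath, assembled into an XML record, and the record is accepted only if it validates against the target schema. It assumes lxml is installed; the sample page, the extraction rules, and the XSD are all invented.

```python
# Sketch of schema-guided wrapping (illustrative only, not SCRAP itself).
# Assumptions: lxml is installed; the page, XPath rules, and XSD are invented.
from lxml import etree, html

PAGE = "<html><body><div class='item'><h2>Widget</h2><span>9.99</span></div></body></html>"

XSD = etree.XMLSchema(etree.XML("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="product">
    <xs:complexType><xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="price" type="xs:decimal"/>
    </xs:sequence></xs:complexType>
  </xs:element>
</xs:schema>"""))

RULES = {"name": "//div[@class='item']/h2/text()",
         "price": "//div[@class='item']/span/text()"}

def wrap(page_source):
    """Extract the fields, build an XML record, and keep it only if it is schema-valid."""
    dom = html.fromstring(page_source)
    record = etree.Element("product")
    for field, xpath in RULES.items():
        hits = dom.xpath(xpath)
        etree.SubElement(record, field).text = hits[0].strip() if hits else ""
    if not XSD.validate(record):          # the schema rejects malformed extractions
        raise ValueError(XSD.error_log.last_error)
    return record

print(etree.tostring(wrap(PAGE), pretty_print=True).decode())
```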

2.
Several commercial applications, such as online comparison shopping and process automation, require integrating information that is scattered across multiple websites or XML documents. Much research has been devoted to this problem, resulting in several research prototypes and commercial implementations. Such systems rely on wrappers that provide relational or other structured interfaces to websites. Traditionally, wrappers have been constructed by hand on a per-website basis, constraining the scalability of the system. We introduce a website structure inference mechanism called compact skeletons that is a step in the direction of automated wrapper generation. Compact skeletons provide a transformation from websites or other hierarchical data, such as XML documents, to relational tables. We study several classes of compact skeletons and provide polynomial-time algorithms and heuristics for automated construction of compact skeletons from websites. Experimental results show that our heuristics work well in practice. We also argue that compact skeletons are a natural extension of commercially deployed techniques for wrapper construction.
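The compact-skeleton construction itself is not reproduced here; the toy sketch below only illustrates the target transformation, flattening a small hierarchical (XML-like) document into relational rows, with invented element names.

```python
# Toy illustration of the hierarchical-to-relational target of compact skeletons
# (not the paper's algorithm). The element names below are invented.
import xml.etree.ElementTree as ET

DOC = ET.fromstring("""
<catalog>
  <maker name="Acme">
    <model name="A1"><price>100</price></model>
    <model name="A2"><price>150</price></model>
  </maker>
  <maker name="Beta">
    <model name="B7"><price>90</price></model>
  </maker>
</catalog>""")

def flatten(doc):
    """Emit one (maker, model, price) row per leaf record."""
    rows = []
    for maker in doc.findall("maker"):
        for model in maker.findall("model"):
            rows.append((maker.get("name"), model.get("name"),
                         model.findtext("price")))
    return rows

for row in flatten(DOC):
    print(row)   # e.g. ('Acme', 'A1', '100')
```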

3.
Given the ever-increasing scale and diversity of information and applications on the Internet, improving information retrieval technology is an urgent research objective. Retrieved information is either semi-structured or unstructured in format, and its sources are extremely heterogeneous. Consequently, gathering and extracting information from documents efficiently can be both difficult and tedious. Given this variety of sources and formats, many choose to use a mediator/wrapper architecture (Y. Papakonstantinou, A. Gupta, H. Garcia-Molina, J. Ullman, A Query Translation Scheme for Rapid Implementation of Wrappers, International Conference on Deductive and Object-Oriented Databases, Singapore, 1995), but its use demands a fast means of generating efficient wrappers. In this paper, we present a design for an automatic eXtensible Markup Language (XML)-based framework for generating wrappers rapidly. Wrappers created with this framework support a unified interface for a meta-search information retrieval system based on the Internet Search Service using the Common Object Request Broker Architecture (CORBA) standard. Greatly advantaged by the compatibility of CORBA and XML, a user can quickly and easily develop information-gathering applications, such as a meta-search engine or any other information source retrieval method. Our design provides two main things: a method of wrapper generation that is fast, simple, and efficient, and a wrapper generator that is CORBA- and XML-compliant and supports a unified interface.
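The CORBA and XML machinery is omitted; the sketch below illustrates only the unified-interface idea in plain Python, with hypothetical wrappers for two sources answering one meta-search call.

```python
# Unified wrapper interface for a meta-search front end (plain-Python sketch;
# the CORBA/XML machinery of the paper is not reproduced). Source names are invented.
from abc import ABC, abstractmethod

class Wrapper(ABC):
    """Every source-specific wrapper exposes the same search() signature."""
    @abstractmethod
    def search(self, query: str) -> list[dict]: ...

class SourceAWrapper(Wrapper):
    def search(self, query):
        return [{"source": "A", "title": f"A result for {query}"}]

class SourceBWrapper(Wrapper):
    def search(self, query):
        return [{"source": "B", "title": f"B result for {query}"}]

def meta_search(query: str, wrappers: list[Wrapper]) -> list[dict]:
    """Fan the query out to every registered wrapper and merge the answers."""
    results = []
    for w in wrappers:
        results.extend(w.search(query))
    return results

print(meta_search("xml wrapper", [SourceAWrapper(), SourceBWrapper()]))
```

Adding a new source then only requires registering another Wrapper subclass, which is the extensibility point the abstract emphasizes.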

4.
Web wrappers convert Web page content into XML for system integration, and XSLT, the standard technology for XML transformation, supports a wrapper's information extraction and organization well. Starting from a wrapper description file (in XML) that contains the query interface, the result schema, and the mapping rules, this paper presents a technical scheme for automatically generating executable wrapper code. Both the execution of the wrapper and its generation are based entirely on XSLT, so the system is highly portable. A "metadata alignment" method is proposed for content-assisted locating of target data, which improves tolerance to page changes. A prototype implementation verifies the feasibility of these techniques.
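As a rough illustration of the XSLT-driven extraction step (the paper's wrapper description file and its "metadata alignment" method are not reproduced), the sketch below applies a made-up stylesheet to a page fragment with lxml and produces the target XML.

```python
# Minimal XSLT-driven extraction step (illustrative; the wrapper description
# file and metadata-alignment method of the paper are not reproduced here).
from lxml import etree

STYLESHEET = etree.XSLT(etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <records>
      <xsl:for-each select="//li">
        <record><xsl:value-of select="."/></record>
      </xsl:for-each>
    </records>
  </xsl:template>
</xsl:stylesheet>"""))

PAGE = etree.XML("<html><body><ul><li>item one</li><li>item two</li></ul></body></html>")

result = STYLESHEET(PAGE)     # run the transform: HTML-like input -> structured XML
print(str(result))
```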

5.
Web information extraction has given rise to large-scale applications. Wrapper-based Web information extraction involves two research areas: wrapper generation and wrapper maintenance. This paper proposes a new algorithm for automatic wrapper maintenance. It is based on the observation that, although pages change in many different ways, many important page features are preserved in the new pages, such as text patterns, annotation information, and hyperlinks. The new algorithm makes full use of these preserved features to locate the target information in changed pages and can automatically repair broken wrappers. Experiments on information extraction from real Web sites show that the new algorithm effectively maintains wrappers so that information can be extracted more accurately.
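A rough sketch of the underlying idea, not the paper's algorithm: when a stored extraction rule stops matching, search the changed page for a preserved feature (here, a price-like text pattern) and rebuild the rule from the node that carries it. The pages, the XPath, and the pattern are invented; lxml is assumed.

```python
# Rough sketch of feature-based wrapper repair (not the paper's algorithm):
# when the stored XPath no longer matches, scan the new page for a preserved
# text pattern and rebuild the path from the node that was found.
import re
from lxml import html

OLD_XPATH = "//td[@class='price']/text()"      # rule learned on the old page layout
PRICE_PATTERN = re.compile(r"\$\d+\.\d{2}")    # text feature preserved across redesigns

NEW_PAGE = html.fromstring(
    "<html><body><div><span class='cost'>$19.99</span></div></body></html>")

def extract_with_repair(dom):
    hits = dom.xpath(OLD_XPATH)
    if hits:                                   # the old rule still works
        return hits[0], OLD_XPATH
    for node in dom.iter():                    # otherwise look for the preserved feature
        if node.text and PRICE_PATTERN.search(node.text):
            repaired = dom.getroottree().getpath(node)   # new location of the target
            return PRICE_PATTERN.search(node.text).group(), repaired
    return None, None

value, xpath = extract_with_repair(NEW_PAGE)
print(value, xpath)   # '$19.99' plus the repaired path, e.g. /html/body/div/span
```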

6.
A J2EE-Based Integrated Management Information System
With the development of network and information technology, and in order to improve the level of administrative work in our bureau and provide technical support for electronic government-affairs handling and management, this paper introduces the structure, functions, and implementation principles of a J2EE-based integrated management information system for Fujian Province and explains the key technologies of the system in detail. Practice has shown that this J2EE-based integrated management system is highly reliable and stable and has greatly improved the integrated administrative management capability and the degree of information sharing of the province's meteorological departments.

7.
At present, harmful websites are detected mainly with machine learning methods that use page-content features such as the URL (Uniform Resource Locator), keywords, and images. However, the operators of such websites evade detection by changing URLs, avoiding common objectionable keywords, and hiding images from search crawlers, so content-based detection can miss them. To detect such websites more accurately, this paper proposes features related to domain registration and DNS resolution and builds detection models with mainstream machine learning methods. Predictions on a new data set show that detection based on resolution and registration features can effectively identify the evasive websites described above within a collection of websites, while still accurately recognizing ordinary harmful websites. This study provides another line of approach for harmful-website detection.
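The abstract does not name its exact features or classifier, so the sketch below only shows the general shape of such a model: a random forest trained on made-up registration and resolution features (domain age, number of resolved IPs, DNS TTL), assuming scikit-learn and NumPy. The data are toy values, not the paper's data set.

```python
# Shape of a registration/resolution-feature detector (illustrative only; the
# features and data below are invented, and the paper's exact model is unknown).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# columns: domain age in days, number of resolved IPs, DNS TTL in seconds
X = np.array([[3000, 1, 86400], [2500, 2, 43200], [15, 8, 60],
              [30, 6, 120], [4000, 1, 86400], [7, 10, 30]])
y = np.array([0, 0, 1, 1, 0, 1])          # 1 = harmful site in this toy set

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(model.predict(X_test))              # predicted labels for the held-out domains
```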

8.
Building intelligent Web applications using lightweight wrappers
The Web so far has been incredibly successful at delivering information to human users. So successful, in fact, that there is now an urgent need to go beyond a human browsing the Web. Unfortunately, the Web is not yet a well-organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages.

To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.


9.
A System for Automatic Extraction of Web Page Data
A large number of semi-structured HTML pages exist on the Internet. To use this rich page data, the data must be re-extracted from the pages. This paper introduces a new tree-structure-based information extraction method and DAE (DOM-based Automatic Extraction), a system that automatically generates wrappers and converts HTML page data into XML data. The extraction process requires essentially no manual intervention, so the extraction is automated. The method can be applied in information-search agents, in data integration systems, and elsewhere.
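The DAE algorithm is not described in enough detail to reproduce, so the sketch below uses a common DOM-tree stand-in: find the parent whose children most often repeat the same tag signature and emit those children as XML records. The sample page is invented; lxml is assumed.

```python
# A common DOM heuristic for record detection (a stand-in, not the DAE algorithm):
# pick the parent whose children most often repeat the same tag signature, then
# serialize those children as XML records. The sample page is invented.
from collections import Counter
from lxml import etree, html

PAGE = html.fromstring("""
<html><body><div id="list">
  <div><b>Alpha</b><i>2001</i></div>
  <div><b>Beta</b><i>2003</i></div>
  <div><b>Gamma</b><i>2005</i></div>
</div><p>footer</p></body></html>""")

def signature(node):
    return tuple(child.tag for child in node)        # child-tag sequence as a "shape"

def best_record_parent(dom):
    """The parent whose children repeat one signature the most is taken as the record container."""
    best, best_count = None, 0
    for parent in dom.iter():
        sigs = Counter(signature(c) for c in parent if signature(c))
        if sigs and max(sigs.values()) > best_count:
            best, best_count = parent, max(sigs.values())
    return best

records = etree.Element("records")
for child in best_record_parent(PAGE):
    rec = etree.SubElement(records, "record")
    for field in child:
        etree.SubElement(rec, field.tag).text = field.text
print(etree.tostring(records, pretty_print=True).decode())
```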

10.
Based on an analysis of the advantages of combining XML with information integration, a framework for metadata-supported information integration is proposed. A parser and wrapper technology provide users with a unified query interface and data view, and metadata support is used to judge the validity of query operations. XQuery is used to perform integration operations on XML documents, and XSL is used to present the query results for browsing. This approach largely solves the problems of transparent access, federated querying, and data conversion in information integration, and it enables fast querying of multiple heterogeneous data sources and rapid presentation of results.

11.
This paper proposes a framework for an elevator-control simulation system in which a single-chip microcomputer (MCU) system acquires simulated passenger command signals and host software on a PC processes the command signals and feeds the results back for control. On the basis of the software design of the simulation system, a modular design method is used to study the control strategy and the hardware design of the MCU-based elevator-control simulation system. The system is easy to use and to extend, and it provides a platform for further research on new elevator control technologies.

12.
Integrating a large number of Web information sources may significantly increase the utility of the World-Wide Web. A promising solution to the integration is the use of a Web information mediator that provides seamless, transparent access for clients. Information mediators need wrappers to access a Web source as a structured database, but building wrappers by hand is impractical. Previous work on wrapper induction is too restrictive to handle a large number of Web pages that contain tuples with missing attributes, multiple values, variant attribute permutations, exceptions and typos. This paper presents SoftMealy, a novel wrapper representation formalism. This representation is based on a finite-state transducer (FST) and contextual rules. This approach can wrap a wide range of semistructured Web pages because FSTs can encode each different attribute permutation as a path. A SoftMealy wrapper can be induced from a handful of labeled examples using our generalization algorithm. We have implemented this approach in a prototype system and tested it on real Web pages. The performance statistics show that the sizes of the induced wrappers as well as the required training effort are linear with regard to the structural variance of the test pages. Our experiments also show that the induced wrappers can generalize over unseen pages.
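The following is a heavily simplified illustration of the FST idea, with states for attributes and transitions fired by separator tokens; it is not SoftMealy's induced contextual rules. The states, rules, and input line are invented, and no induction is performed.

```python
# Heavily simplified finite-state extraction in the spirit of SoftMealy
# (states = attributes, transitions fired by separator tokens); the states,
# rules, and input below are invented, and no induction is performed.
import re

# Separator pattern -> next state; text read while in a non-"skip" state is
# collected under that state's attribute name.
TRANSITIONS = [
    (re.compile(r"<b>"),  "name"),
    (re.compile(r"</b>"), "skip"),
    (re.compile(r"<i>"),  "affiliation"),
    (re.compile(r"</i>"), "skip"),
]

def extract(line):
    state, record = "skip", {}
    for token in re.finditer(r"</?\w+>|[^<]+", line):    # split into tags and text runs
        text = token.group()
        for pattern, next_state in TRANSITIONS:
            if pattern.fullmatch(text):
                state = next_state                       # a separator: change state
                break
        else:                                            # anything else: emit under the current state
            if state != "skip":
                record[state] = record.get(state, "") + text.strip()
    return record

print(extract("<li><b>A. Author</b> , <i>Some University</i></li>"))
# {'name': 'A. Author', 'affiliation': 'Some University'}
```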

13.
14.
To populate a data warehouse specifically designed for Web data, i.e., a web warehouse, it is imperative to harness relevant documents from the Web. In this paper, we describe a query mechanism called the coupling query for gleaning relevant Web data in the context of our web warehousing system, the Warehouse Of Web Data (WHOWEDA). Coupling queries may be used to query both HTML and XML documents. Important features of our query mechanism include the ability to query the metadata, content, and internal and external (hyperlink) structure of Web documents based on partial knowledge; the ability to express constraints on tag attributes and on tagless segments of data; the ability to express conjunctive as well as disjunctive query conditions compactly; the ability to control the execution of a web query; and the preservation of the topological structure of hyperlinked documents in the query results. We also discuss how to formulate queries graphically and in textual form using the coupling graph and the coupling text, respectively.
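The coupling query language itself is not reproduced; the sketch below only illustrates the kind of condition it can express, a constraint on document content combined with a hyperlink between documents, over a tiny in-memory web graph. It assumes networkx, and the pages are invented.

```python
# Illustration of a coupling-query-style condition over a toy web graph
# (not WHOWEDA's language): find linked page pairs (u, v) where u's content
# matches one keyword and v's title matches another. Requires networkx; data invented.
import networkx as nx

web = nx.DiGraph()
web.add_node("a.html", title="Wrapper survey", body="wrapper induction methods")
web.add_node("b.html", title="XML tools", body="parsers and schemas")
web.add_node("c.html", title="Cooking tips", body="recipes")
web.add_edge("a.html", "b.html")
web.add_edge("a.html", "c.html")

def coupling_like_query(graph, src_keyword, dst_keyword):
    """Return (source, target) pairs satisfying both node conditions and the link."""
    matches = []
    for u, v in graph.edges():
        if src_keyword in graph.nodes[u]["body"] and dst_keyword in graph.nodes[v]["title"]:
            matches.append((u, v))
    return matches

print(coupling_like_query(web, "wrapper", "XML"))   # [('a.html', 'b.html')]
```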

15.
Since the beginning of the Semantic Web initiative, significant efforts have been invested in finding efficient ways to publish, store, and query metadata on the Web. RDF and SPARQL have become the standard data model and query language, respectively, to describe resources on the Web. Large amounts of RDF data are now available either as stand-alone datasets or as metadata over semi-structured (typically XML) documents. The ability to apply RDF annotations over XML data emphasizes the need to represent and query data and metadata simultaneously. We propose XR, a novel hybrid data model capturing the structural aspects of XML data and the semantics of RDF, also enabling us to reason about XML data. Our model is general enough to describe pure XML or RDF datasets, as well as RDF-annotated XML data, where any XML node can act as a resource. This data model comes with the XRQ query language that combines features of both XQuery and SPARQL. To demonstrate the feasibility of this hybrid XML-RDF data management setting, and to validate its interest, we have developed an XR platform on top of well-known data management systems for XML and RDF. In particular, the platform features several XRQ query processing algorithms, whose performance is experimentally compared.
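XRQ itself is not shown in the abstract; the sketch below merely glues SPARQL over RDF annotations (rdflib) to XPath over XML (lxml) to illustrate the kind of combined access the XR model targets. All URIs and data are invented.

```python
# Combined RDF + XML access in the spirit of the XR setting (not the XRQ language):
# SPARQL selects the annotated resources, and their URIs lead back to XML nodes
# read with XPath. Requires rdflib and lxml; all URIs and data are invented.
from lxml import etree
from rdflib import Graph, Literal, Namespace

DOC = etree.XML('<papers><paper id="p1"><title>XR model</title></paper>'
                '<paper id="p2"><title>Other work</title></paper></papers>')

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX["p1"], EX["topic"], Literal("hybrid data models")))   # RDF annotations on XML nodes
g.add((EX["p2"], EX["topic"], Literal("unrelated")))

# SPARQL over the annotations: which resources are about hybrid data models?
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?s WHERE { ?s ex:topic "hybrid data models" . }""")

for (s,) in rows:
    node_id = str(s).rsplit("/", 1)[-1]           # .../p1 -> the XML node with id="p1"
    title = DOC.xpath(f'//paper[@id="{node_id}"]/title/text()')
    print(node_id, title)                         # p1 ['XR model']
```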

16.
Several techniques have been recently proposed to automatically generate Web wrappers, i.e., programs that extract data from HTML pages, and transform them into a more structured format, typically in XML. These techniques automatically induce a wrapper from a set of sample pages that share a common HTML template. An open issue, however, is how to collect suitable classes of sample pages to feed the wrapper inducer. Presently, the pages are chosen manually. In this paper, we tackle the problem of automatically discovering the main classes of pages offered by a site by exploring only a small yet representative portion of it. We propose a model to describe abstract structural features of HTML pages. Based on this model, we have developed an algorithm that accepts the URL of an entry point to a target Web site, visits a limited yet representative number of pages, and produces an accurate clustering of pages based on their structure. We have developed a prototype, which has been used to perform experiments on real-life Web sites.
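The paper's structural model is not given here, so the sketch below uses a common stand-in: fingerprint each page by its set of root-to-leaf tag paths and group pages whose fingerprints are Jaccard-similar. The pages and the threshold are invented; lxml is assumed.

```python
# Structure-based page clustering stand-in (not the paper's model): fingerprint
# each page by its set of root-to-leaf tag paths and group pages whose
# fingerprints are Jaccard-similar. The pages and threshold are invented.
import re
from lxml import html

PAGES = {
    "list1.html":   "<html><body><ul><li>a</li><li>b</li></ul></body></html>",
    "list2.html":   "<html><body><ul><li>x</li><li>y</li><li>z</li></ul></body></html>",
    "detail1.html": "<html><body><h1>t</h1><table><tr><td>v</td></tr></table></body></html>",
}

def fingerprint(source):
    """Set of root-to-leaf tag paths, with positional indices removed."""
    dom = html.fromstring(source)
    paths = {dom.getroottree().getpath(node) for node in dom.iter() if len(node) == 0}
    return {re.sub(r"\[\d+\]", "", p) for p in paths}   # /html/body/ul/li[2] -> /html/body/ul/li

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(pages, threshold=0.8):
    groups = []                                         # each group keeps one representative fingerprint
    for name, source in pages.items():
        fp = fingerprint(source)
        for group in groups:
            if jaccard(fp, group["fp"]) >= threshold:
                group["members"].append(name)
                break
        else:
            groups.append({"fp": fp, "members": [name]})
    return [g["members"] for g in groups]

print(cluster(PAGES))   # [['list1.html', 'list2.html'], ['detail1.html']]
```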

17.
In this paper, we propose a unified interface and a flexible architecture for querying various information sources on the Internet and the WWW using both a popular object model and a data model. We propose an Integrated Information Retrieval (IIR) service based on the Common Object Service Specification (COSS) for Common Object Request Broker Architecture (CORBA) and apply the Document Type Definition (DTD) of eXtensible Markup Language (XML) to define the metadata of information sources for sharing the ontology between mediator and wrappers. The objective of using the IIR design is not only to provide programmers with a uniform interface for coding a software application that can query a variety of information sources on the Internet, but also to create a flexible and extensible environment that easily allows system developers to add new or updated wrappers to the system.

18.
When querying and access to a data source are restricted, the design and implementation of a Web page wrapper have some new characteristics. This paper analyzes the structure of VIP (维普) Web pages, describes the markup and content features of the page elements, gives a set of regular expressions that match the content of VIP pages, and implements a wrapper in Visual C#, pointing out the key points and difficulties of the implementation. Practical use shows that the fully automatic wrapper implemented here can accurately extract all of the retrieved content from VIP pages.
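The paper's regular expressions target 维普 result pages and are written for Visual C#; the Python sketch below only shows the same style of regex-driven extraction on an invented HTML fragment.

```python
# Regex-driven extraction in the same style (illustrative; the fragment and the
# patterns are invented, and the paper's C# expressions for VIP pages are not shown).
import re

FRAGMENT = """
<div class="result"><a href="/doc/1">Title One</a><span>Journal A, 2006</span></div>
<div class="result"><a href="/doc/2">Title Two</a><span>Journal B, 2007</span></div>
"""

RECORD = re.compile(
    r'<div class="result"><a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
    r'<span>(?P<source>[^<]+)</span></div>')

for m in RECORD.finditer(FRAGMENT):
    print(m.group("url"), "|", m.group("title"), "|", m.group("source"))
# /doc/1 | Title One | Journal A, 2006
# /doc/2 | Title Two | Journal B, 2007
```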

19.
This paper studies the problem of automatically extracting structured data from data-intensive Web pages and building a knowledge representation system from it. Dynamic pages are fetched with the support of a knowledge database and, after preprocessing, converted into XML documents; a PAT-array-based pattern-discovery algorithm automatically finds repeated patterns; an ontology-based keyword lexicon automatically identifies the structural model in which the page data are displayed; and XML object-relational mapping stores the data in the knowledge database, thereby achieving automatic Web data extraction. At the same time, the existing knowledge in the knowledge database is used to extract new knowledge from the Internet, so the knowledge database extends itself. Experiments on automatic extraction of traffic information and on generating and presenting mixed-mode travel plans show that the system achieves high extraction accuracy and good adaptability.
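The PAT-array algorithm is not reproduced; as a crude stand-in, the sketch below scores repeated tag n-grams in a page's linearized tag sequence and reports the best one, which conveys the idea of discovering the repeated record template of a data-intensive page. The page and the scoring heuristic are invented.

```python
# Crude stand-in for repeated-pattern discovery (the paper uses a PAT-array;
# this just scores tag n-grams): report the n-gram of page tags that repeats
# most often, preferring longer ones on ties, as the likely record template.
from collections import Counter
from lxml import html

PAGE = html.fromstring("""
<html><body><table>
  <tr><td>R1 name</td><td>R1 price</td></tr>
  <tr><td>R2 name</td><td>R2 price</td></tr>
  <tr><td>R3 name</td><td>R3 price</td></tr>
</table></body></html>""")

tags = [node.tag for node in PAGE.iter()]               # linearized tag sequence

def best_repeated_ngram(seq, max_n=10):
    """Most frequently repeated n-gram of tags; longer n-grams win ties."""
    best, best_key = None, (1, 0)                       # (occurrence count, length)
    for n in range(2, min(max_n, len(seq) // 2) + 1):
        counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
        gram, count = counts.most_common(1)[0]
        if count >= 2 and (count, n) > best_key:
            best, best_key = gram, (count, n)
    return best

print(best_repeated_ngram(tags))   # ('tr', 'td', 'td') -- the repeated record shape
```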

20.
Based on an analysis of website structure, pages carrying the same kind of information are grouped into page groups and a corresponding library of XML templates is built for Web information mining; this provides a good way to support fast querying of page information and information classification.
