Found 20 similar documents; search took 187 ms.
1.
Deep Web query interfaces are the interfaces to Web databases and are crucial to Deep Web database integration. This paper defines query interfaces in terms of the structural features of Web forms. For the non-submission approach, it gives a set of rules for identifying Deep Web query interfaces; it also proposes a submission-based approach that locates pages containing query interfaces by examining the characteristics of link attributes. Candidates are classified with the C4.5 decision-tree algorithm, and a Deep Web query-interface system is implemented in Java.
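The non-submission screening described above can be sketched as a set of structural rules applied before the decision-tree stage. This is a minimal Python sketch (the paper's system is in Java); the form representation, feature names, and thresholds are illustrative assumptions, not the paper's actual rules:

```python
# Rule-based pre-filter over parsed <form> control counts, assumed to run
# before the C4.5 classifier. The "form" representation (a dict of control
# counts) and every threshold below are invented for illustration.

def form_features(form):
    """form: dict mapping control type -> count, parsed from a <form> element."""
    return {
        "text_inputs": form.get("text", 0),
        "selects": form.get("select", 0),
        "checkboxes": form.get("checkbox", 0),
        "has_password": form.get("password", 0) > 0,
        "has_submit": form.get("submit", 0) > 0,
    }

def is_candidate_query_interface(form):
    """Heuristic rule: forms with a password field (logins) and forms with
    no searchable control are rejected; the rest go on to the classifier."""
    f = form_features(form)
    if f["has_password"]:          # login/registration form, not a query interface
        return False
    searchable = f["text_inputs"] + f["selects"] + f["checkboxes"]
    return f["has_submit"] and searchable >= 1

# A flight-search-like form passes; a login form is filtered out.
search_form = {"text": 2, "select": 3, "submit": 1}
login_form = {"text": 1, "password": 1, "submit": 1}
print(is_candidate_query_interface(search_form))  # True
print(is_candidate_query_interface(login_form))   # False
```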
2.
Automatic Deep Web classification is the premise and foundation of building a deep-web data integration system. This paper proposes a Deep Web classification method based on domain feature text. First, ontology knowledge is used to abstract different words expressing the same semantics into a single concept; a definition of domain relevance is then given and used as the quantitative criterion for feature-text selection, avoiding the subjectivity and uncertainty of manual selection. In constructing the interface vector model, the differing contributions of feature texts to classification are taken into account, and an improved weighting scheme, W-TFIDF, is proposed. Finally, the interface vectors are classified with the KNN algorithm. Comparative experiments show that the feature text selected by the proposed method is accurate and effective, that the new feature-text weighting scheme significantly improves classification accuracy, and that it is notably stable under the KNN algorithm.
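The idea of weighting term frequency by which part of the interface a feature text comes from can be illustrated as follows. This is a hedged sketch: the paper's actual W-TFIDF formula is not reproduced here, so a field-weighted TF multiplied by standard IDF stands in for it, and the field weights are invented:

```python
import math
from collections import Counter

def w_tfidf(docs, field_weight):
    """docs: list of documents, each a list of (term, field) tokens.
    field_weight: field name -> multiplier (stand-in for W-TFIDF's
    differentiated feature-text weights). Returns per-doc term weights:
    field-weighted TF times log IDF."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        for term in {t for t, _ in doc}:
            df[term] += 1
    out = []
    for doc in docs:
        tf = Counter()
        for term, field in doc:
            tf[term] += field_weight.get(field, 1.0)
        out.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return out

# Two toy interface documents; label text is assumed twice as informative.
docs = [[("price", "label"), ("from", "text")],
        [("author", "label"), ("title", "label")]]
weights = w_tfidf(docs, {"label": 2.0, "text": 1.0})
print(round(weights[0]["price"], 3))  # 1.386
```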
3.
The diversity and volatility of Web information make its management and filtering extremely difficult. To improve the speed and accuracy of Web information classification, a Web text classification method based on fuzzy rough sets is proposed, using a machine-learning approach. In the training phase, Web texts are first preprocessed, represented with the vector space model to generate the initial feature attribute space, and weighted; a fuzzy rough set algorithm then filters the information, and an attribute-reduction algorithm based on fuzzy rough sets generates the classification rules; finally, documents are classified against the knowledge base. In the testing phase, unpreprocessed texts are matched directly on key attributes, weighted by fuzzy rough factors, and classified by spatial distance. Experimental comparison shows that the method achieves good classification results.
4.
5.
This paper discusses recent techniques for accurately classifying Deep Web databases, builds a text-feature extraction model based on word frequency and the DOM tree, and proposes a weight-based K-NN (K Nearest Neighbors) classification optimization algorithm for Deep Web databases. Experiments use the TEL-8 dataset provided by UIUC and the algorithms in the WEKA platform, comparing classification results on precision, recall, and the combined F-measure. The results show that the proposed model performs well on all three metrics.
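A similarity-weighted K-NN vote of the kind the abstract describes can be sketched as below; the cosine measure over term-frequency dictionaries and the toy interface vectors are assumptions for illustration, not the paper's exact weighting:

```python
import math
from collections import defaultdict

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency dicts."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def weighted_knn(query, labeled, k=3):
    """labeled: list of (vector, label). Each neighbor's vote is weighted
    by its similarity to the query, rather than counted equally."""
    neighbors = sorted(labeled, key=lambda p: cosine(query, p[0]), reverse=True)[:k]
    votes = defaultdict(float)
    for vec, label in neighbors:
        votes[label] += cosine(query, vec)
    return max(votes, key=votes.get)

# Toy interface vectors from two domains.
labeled = [({"author": 1, "title": 1}, "books"),
           ({"title": 1, "isbn": 1}, "books"),
           ({"from": 1, "to": 1, "date": 1}, "flights")]
print(weighted_knn({"author": 1, "isbn": 1}, labeled))  # books
```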
6.
With the rapid development of Internet technology, Web databases are already numerous and still growing fast. To organize and exploit the information hidden in Web databases effectively, they must be classified and integrated by domain. Since the query interfaces on Web pages are the only way for users to access Web databases, Deep Web data sources can be classified by classifying their query interfaces. This paper therefore proposes a classification method based on a VSM (Vector Space Model) of query-interface text. First, a vector space model is built from the query interface's textual information; classifiers are then trained with standard data-mining classification algorithms to assign each query interface to its domain. Experimental results show that the method achieves good classification performance.
7.
8.
Research on Techniques for Identifying Deep Web Query Interfaces   Cited by: 1 (self-citations: 0, other citations: 1)
李齐会 《计算机与数字工程》2009,37(3):131-134
The rapid development of the Internet has brought a massive amount of accessible information, yet most of what today's search engines index belongs to the surface Web; for technical reasons, they can index almost nothing of the Deep Web. Query interfaces are the sole entry point to the Deep Web, but not every Web form is a query interface. To make full use of the information in Deep Web back-end databases, the entry points to those databases must first be found, so correctly identifying query interfaces is crucial. This paper presents a method that uses the C4.5 decision-tree classification algorithm to automatically judge whether a Web form is a Deep Web query interface.
9.
The Deep Web holds vast amounts of information that existing search engines can hardly reach, and fully retrieving its valuable content remains a difficult problem. This paper proposes a Deep Web data-query method based on semantic similarity computation: a similarity-computation middleware measures the similarity between query keywords and the corresponding columns of a database attribute dictionary, restricting the keyword search to one or more relevant domains and finally generating the corresponding SQL statement. Experiments show that the method effectively improves the efficiency of Deep-Web-based data queries.
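The keyword-to-column matching step can be sketched as follows. The similarity function here is a trivial token-overlap stand-in for the paper's semantic measure, and the attribute dictionary, table name, and threshold are all invented for illustration:

```python
# Hypothetical sketch: map a keyword onto the most similar column of a
# table's attribute dictionary, then build a parameterised SQL query.

def similarity(word, synonyms):
    """Jaccard overlap between a keyword and a column's synonym set
    (a crude stand-in for a real semantic-similarity measure)."""
    w = set(word.lower().split())
    return len(w & synonyms) / len(w | synonyms)

def build_query(keyword, table, column_dict, threshold=0.2):
    """column_dict: column name -> set of synonym tokens. Returns
    (sql, params) for the best-matching column, or None if no column
    is similar enough."""
    scored = {col: similarity(keyword, syns) for col, syns in column_dict.items()}
    col, score = max(scored.items(), key=lambda kv: kv[1])
    if score < threshold:
        return None
    # The keyword becomes the bound search value, never interpolated directly.
    return f"SELECT * FROM {table} WHERE {col} LIKE ?", (f"%{keyword}%",)

columns = {
    "author": {"author", "writer", "name"},
    "title": {"title", "headline"},
}
sql, params = build_query("writer", "books", columns)
print(sql)  # SELECT * FROM books WHERE author LIKE ?
```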
10.
As the volume of Web information expands rapidly, Web text classification has become a research focus. Traditional Web text classification largely ignores the abundant noise in Web pages during preprocessing, which affects classification results; moreover, the high dimensionality of the text vector space model also strongly affects classification performance. This paper proposes a Web text classification method based on rough set theory: pages are first de-noised, the vector space model then undergoes attribute reduction, and finally a classifier is constructed. Experiments show that the method not only reduces dimensionality but also improves classification results.
11.
Chemical databases on the Internet are a valuable chemical information resource, and using them effectively is the problem the chemical deep Web must solve. This paper summarizes the characteristics of the chemical deep Web and uses XML technology to extract data from the semi-structured HTML pages returned by database searches, turning them into data that programs can call directly for further computation. During extraction, JTidy first normalizes the HTML into well-formed, error-free XHTML; an XSLT transformation template containing XPath expressions then performs the data conversion and extraction. Because the quality of the XPath expressions determines whether the XSLT template can keep extracting chemical data over the long term, the paper focuses on how to write robust XPath expressions: they should locate data in the source tree by content and attribute features, minimize coupling between expressions, and anticipate likely changes to chemical sites so that countermeasures can be built into the XSLT template, improving the expressions' long-term validity. The paper thus provides methodological guidance for building XSLT data-extraction templates for the chemical deep Web.
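The principle of anchoring extraction on content and attribute features rather than absolute positions can be shown with the Python standard library. The paper's pipeline uses JTidy and XSLT; `xml.etree.ElementTree` supports only a small XPath subset, so the content-based location logic is written in code here, and the XHTML fragment is an invented example:

```python
import xml.etree.ElementTree as ET

# Once HTML has been normalised to well-formed XHTML (JTidy's job in the
# paper), extraction should anchor on content/attribute features, not on
# row positions, so site layout changes are less likely to break it.

xhtml = """
<table>
  <tr><td class="label">Molecular weight</td><td class="value">18.015</td></tr>
  <tr><td class="label">Boiling point</td><td class="value">100.0</td></tr>
</table>
"""

root = ET.fromstring(xhtml)

def extract(root, label):
    """Locate a value cell by the text of its sibling label cell, instead
    of by row index, so reordering or inserting rows does not break it."""
    for row in root.iterfind(".//tr"):
        cells = row.findall("td")
        if cells and cells[0].text == label:
            return cells[1].text
    return None

print(extract(root, "Boiling point"))  # 100.0
```

With full XPath (e.g. via lxml or inside an XSLT template) the same idea becomes a single expression such as `//tr[td='Boiling point']/td[2]`.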
12.
Databases deepen the Web   Cited by: 2 (self-citations: 0, other citations: 2)
The Web has become the preferred medium for many database applications, such as e-commerce and digital libraries. These applications store information in huge databases that users access, query, and update through the Web. Database-driven Web sites have their own interfaces and access forms for creating HTML pages on the fly. Web database technologies define the way that these forms can connect to and retrieve data from database servers. The number of database-driven Web sites is increasing exponentially, and each site creates pages dynamically, pages that are hard for traditional search engines to reach. Such search engines crawl and index static HTML pages; they do not send queries to Web databases. The information hidden inside Web databases is called the "deep Web," in contrast to the "surface Web" that traditional search engines access easily. We expect deep Web search engines and technologies to improve rapidly and to dramatically affect how the Web is used by providing easy access to many more information resources.
13.
Erdinç Uzun Edip Serdar Güner Yılmaz Kılıçaslan Tarık Yerlikaya Hayri Volkan Agun 《Software》2014,44(10):1181-1199
Classical Web crawlers make use of only hyperlink information in the crawling process. However, focused crawlers are intended to download only Web pages that are relevant to a given topic by utilizing word information before downloading the Web page. But, Web pages contain additional information that can be useful for the crawling process. We have developed a crawler, iCrawler (intelligent crawler), the backbone of which is a Web content extractor that automatically pulls content out of seven different blocks: menus, links, main texts, headlines, summaries, additional necessaries, and unnecessary texts from Web pages. The extraction process consists of two steps, which invoke each other to obtain information from the blocks. The first step learns which HTML tags refer to which blocks using the decision tree learning algorithm. Being guided by numerous sources of information, the crawler becomes considerably effective. It achieved a relatively high accuracy of 96.37% in our experiments of block extraction. In the second step, the crawler extracts content from the blocks using string matching functions. These functions along with the mapping between tags and blocks learned in the first step provide iCrawler with considerable time and storage efficiency. More specifically, iCrawler performs 14 times faster in the second step than in the first step. Furthermore, iCrawler significantly decreases storage costs by 57.10% when compared with the texts obtained through classical HTML stripping. Copyright © 2013 John Wiley & Sons, Ltd.
14.
To improve the accuracy and speed of Web document services, information extraction has become an essential means of handling massive Web information, yet effectively extracting large volumes of heterogeneous information is very difficult. To improve the recall and precision of extraction over massive heterogeneous Web pages, a new extraction method is proposed. Exploiting the strength of hidden Markov models in handling rule knowledge, the algorithm builds an HTML tree for each page, uses Shannon entropy to locate the data region, and then constructs the hidden Markov model by maximum-likelihood estimation to extract the Web information. Simulations on extracting the header-structure information of a large number of academic papers show that the algorithm clearly improves extraction recall and precision.
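The entropy step can be illustrated directly: template text repeats across pages (low entropy) while data fields vary (high entropy), which is what lets Shannon entropy point at the data region before the HMM is trained. The sample node texts below are invented:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """H(X) = sum p(x) * log2(1/p(x)) over the observed value distribution."""
    counts = Counter(values)
    n = len(values)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Illustrative assumption: the text of the same HTML-tree node sampled
# across several result pages. A template node repeats verbatim; a data
# node differs on every page, so its entropy is high.
template_node = ["Abstract:", "Abstract:", "Abstract:", "Abstract:"]
data_node = ["HMM survey", "Web mining", "IR models", "NLP intro"]
print(shannon_entropy(template_node))  # 0.0
print(shannon_entropy(data_node))      # 2.0
```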
15.
This paper presents a new method that eliminates noise in Web page classification. It first describes a representation of a Web page based on HTML tags. Then, through a novel distance formula, it eliminates noise in the similarity measure. After carefully analyzing Web pages, we design an algorithm that distinguishes related hyperlinks from noisy ones, so that the non-noisy hyperlinks can be used to improve the performance of Web page classification (the AWN algorithm). Any page can then be classified through the text and categories of the neighbor pages related to it. The experimental results show that our approach improves classification accuracy.
16.
Many databases have become Web-accessible through form-based search interfaces (i.e., HTML forms) that allow users to specify complex and precise queries to access the underlying databases. In general, such a Web search interface can be considered as containing an interface schema with multiple attributes and rich semantic/meta-information; however, the schema is not formally defined in HTML. Many Web applications, such as Web database integration and deep Web crawling, require the construction of the schemas. In this paper, we first propose a schema model for representing complex search interfaces, and then present a layout-expression based approach to automatically extract the logical attributes from search interfaces. We also rephrase the identification of different types of semantic information as a classification problem, and design several Bayesian classifiers to help derive semantic information from extracted attributes. A system, WISE-iExtractor, has been implemented to automatically construct the schema from any Web search interfaces. Our experimental results on real search interfaces indicate that this system is highly effective.
17.
18.
This paper proposes a graph-based semi-supervised learning algorithm for Web page classification. A weighted graph is built with the k-nearest-neighbor algorithm, whose nodes are labelled or unlabelled Web pages and whose edge weights represent class-propagation probabilities, formalizing Web page classification as probabilistic propagation of classes over the graph. To make effective use of the unlabelled nodes, link weights between pages are computed from both page content and link information, and class information spreads with a certain probability from labelled nodes to unlabelled ones. Experiments show that the proposed algorithm effectively improves Web page classification results.
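A minimal sketch of the propagation step follows, assuming a hand-made weighted graph in place of the kNN graph built from content and link similarity; seed labels are clamped and class scores spread to unlabelled nodes on each iteration:

```python
# Iterative label propagation over a weighted page graph. The graph, seed
# labels, and iteration count below are invented; edge weights stand in for
# the class-propagation probabilities the abstract describes.

def propagate(labels, edges, n, iters=50):
    """labels: {node: class} for seed pages; edges: {(u, v): weight},
    treated as undirected. Returns the winning class per node."""
    classes = sorted(set(labels.values()))
    score = {u: {c: 0.0 for c in classes} for u in range(n)}
    for u, c in labels.items():
        score[u][c] = 1.0
    neigh = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        neigh[u].append((v, w))
        neigh[v].append((u, w))
    for _ in range(iters):
        new = {}
        for u in range(n):
            if u in labels:                  # clamp labelled seed nodes
                new[u] = score[u]
                continue
            total = sum(w for _, w in neigh[u]) or 1.0
            new[u] = {c: sum(w * score[v][c] for v, w in neigh[u]) / total
                      for c in classes}
        score = new
    return {u: max(score[u], key=score[u].get) for u in range(n)}

labels = {0: "news", 3: "sports"}
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 0.2, (3, 4): 1.0}
print(propagate(labels, edges, 5))
# {0: 'news', 1: 'news', 2: 'news', 3: 'sports', 4: 'sports'}
```

Node 2 ends up labelled "news" despite touching the "sports" seed, because its edge toward node 3 is weak (0.2), which is exactly the behaviour the propagation probabilities are meant to encode.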
19.
Traditional search engines can index only surface Web pages, yet the deep reaches of the network hide a large amount of high-quality information that, for technical reasons, they cannot index; these pages are called the Deep Web. Since query interfaces are the sole entry point to the Deep Web, obtaining Deep Web information requires determining which Web forms are Deep Web query interfaces. This paper presents a method that uses the naive Bayes classification algorithm to automatically judge whether a Web form is a Deep Web query interface, and experiments verify the method's effectiveness.
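The judgement step can be sketched with a small multinomial naive Bayes over form tokens; the training forms, token features, and Laplace smoothing below are illustrative assumptions rather than the paper's actual feature set:

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (tokens, label). Returns class priors, per-class
    word counts, and the shared vocabulary."""
    prior, words, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in samples:
        prior[label] += 1
        words[label].update(tokens)
        vocab.update(tokens)
    return prior, words, vocab

def classify(tokens, prior, words, vocab):
    """Pick argmax_label log P(label) + sum log P(token | label),
    with Laplace smoothing over the shared vocabulary."""
    n = sum(prior.values())
    best, best_lp = None, -math.inf
    for label in prior:
        total = sum(words[label].values())
        lp = math.log(prior[label] / n)
        for t in tokens:
            lp += math.log((words[label][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented training forms: query interfaces vs. other forms (logins etc.).
samples = [
    (["search", "title", "author", "submit"], "query"),
    (["keyword", "isbn", "search"], "query"),
    (["username", "password", "login"], "other"),
    (["email", "password", "signup"], "other"),
]
prior, words, vocab = train(samples)
print(classify(["title", "keyword", "search"], prior, words, vocab))  # query
```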
20.
Several techniques have been recently proposed to automatically generate Web wrappers, i.e., programs that extract data from HTML pages, and transform them into a more structured format, typically in XML. These techniques automatically induce a wrapper from a set of sample pages that share a common HTML template. An open issue, however, is how to collect suitable classes of sample pages to feed the wrapper inducer. Presently, the pages are chosen manually. In this paper, we tackle the problem of automatically discovering the main classes of pages offered by a site by exploring only a small yet representative portion of it. We propose a model to describe abstract structural features of HTML pages. Based on this model, we have developed an algorithm that accepts the URL of an entry point to a target Web site, visits a limited yet representative number of pages, and produces an accurate clustering of pages based on their structure. We have developed a prototype, which has been used to perform experiments on real-life Web sites.
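The structural clustering the paper describes can be approximated by abstracting each page to its set of root-to-leaf tag paths and grouping pages greedily by Jaccard similarity; the page data, threshold, and single-pass strategy below are simplifying assumptions, not the paper's actual model:

```python
# Cluster pages by structure: two pages sharing most of their tag paths
# likely come from the same HTML template. Page data is invented.

def jaccard(a, b):
    """Set overlap between two tag-path sets."""
    return len(a & b) / len(a | b)

def cluster(pages, threshold=0.5):
    """pages: {url: set of root-to-leaf tag paths}. Greedy single pass:
    join the first cluster whose representative is similar enough,
    else start a new cluster."""
    clusters = []   # list of (representative path set, [member urls])
    for url, paths in pages.items():
        for rep, members in clusters:
            if jaccard(paths, rep) >= threshold:
                members.append(url)
                break
        else:
            clusters.append((paths, [url]))
    return [members for _, members in clusters]

pages = {
    "/book/1": {"html/body/h1", "html/body/table/tr/td", "html/body/p"},
    "/book/2": {"html/body/h1", "html/body/table/tr/td", "html/body/p"},
    "/list":   {"html/body/ul/li/a", "html/body/h2"},
}
print(cluster(pages))  # [['/book/1', '/book/2'], ['/list']]
```

The two detail pages share a template and cluster together, while the structurally different listing page forms its own class, which is the input a wrapper inducer would need.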