Similar Documents
20 similar documents found.
1.
The dynamism, autonomy, and sheer volume of deep Web data sources make it difficult for third-party applications to crawl all of their Web data completely. This paper studies the problem of crawling deep Web data sources under query-type constraints (only top-k queries are allowed) and limited query resources, and proposes an incremental deep Web crawling method based on the top-k query constraint, which combines historical data and domain knowledge to optimize overall data quality. First, valid queries are obtained from a query tree, and query changes and query costs are estimated from historical data and domain knowledge. Then, based on the estimated query costs and data quality, an approximately optimal subset of queries is selected to maximize overall data quality. Experiments show that the method substantially improves the efficiency and data quality of crawling dynamic Web databases.
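A minimal sketch of the budgeted query selection the abstract describes: candidate queries are chosen greedily by estimated quality gain per unit cost until the query budget is exhausted. The candidate queries, costs, and gains below are illustrative placeholders, not the paper's actual estimators.

```python
# Greedy budgeted query selection: pick queries maximizing estimated
# quality gain per unit cost until the crawl budget is exhausted.
# (Hypothetical stand-in for the paper's cost/quality estimators.)

def select_queries(candidates, budget):
    """candidates: list of (query, est_cost, est_quality_gain)."""
    chosen, spent = [], 0
    # Sort by gain-to-cost ratio, best first.
    for query, cost, gain in sorted(candidates, key=lambda c: c[2] / c[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(query)
            spent += cost
    return chosen

candidates = [("price<10", 5, 40), ("brand=lego", 3, 30), ("year=2024", 8, 35)]
print(select_queries(candidates, budget=10))  # ['brand=lego', 'price<10']
```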

2.
Research on Techniques for Identifying Deep Web Query Interfaces
The rapid growth of the Internet has made a vast amount of information available, yet today's search engines index mostly the surface Web; for technical reasons, they can hardly index the information in the deep Web. The query interface is the sole entry point to the deep Web, but not every Web form is a query interface. To make full use of the information in deep Web back-end databases, the entry points to those databases must first be found, so correctly identifying query interfaces is essential. This paper presents a method that uses the C4.5 decision tree classification algorithm to automatically determine whether a Web form is a deep Web query interface.
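A minimal sketch of decision-tree form classification in this spirit. scikit-learn's CART tree with entropy splitting stands in for C4.5 (which scikit-learn does not implement), and the form features and labels are toy examples, not the paper's feature set.

```python
# Sketch: classify Web forms as deep-Web query interfaces from simple
# structural form features. CART with entropy splitting approximates C4.5.
from sklearn.tree import DecisionTreeClassifier

# Features per form: [num_text_inputs, num_selects, num_checkboxes, has_password_field]
X = [[1, 0, 0, 1],   # login form
     [2, 3, 1, 0],   # flight search form
     [1, 0, 0, 0],   # site search box
     [3, 2, 4, 0]]   # product search form
y = [0, 1, 0, 1]     # 1 = query interface, 0 = not

clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(clf.predict([[2, 1, 0, 0]]))  # likely a query interface
```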

3.
Deep Web query interfaces are the interfaces to Web databases and are essential for deep Web database integration. This paper defines query interfaces in terms of the structural features of Web forms; for the non-submission approach, it gives a set of rules for delimiting deep Web query interfaces; it proposes a submission-based approach that uses the characteristics of link attributes to locate pages containing query interfaces; it classifies forms with the C4.5 decision tree algorithm; and it implements a deep Web query interface system in Java.
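A minimal sketch of the non-submission, rule-based side of such a system: reject forms whose structure suggests login, registration, or subscription before any classifier runs. The rules and thresholds are illustrative assumptions, not the paper's exact rule set.

```python
# Non-submission heuristics for delimiting query interfaces:
# reject forms whose structure suggests login/registration/subscription.

def looks_like_query_interface(form):
    """form: dict of counts/flags parsed from the HTML form."""
    if form.get("has_password_field"):        # login/registration form
        return False
    if form.get("num_inputs", 0) == 0:        # nothing to fill in
        return False
    if form.get("num_inputs", 0) == 1 and form.get("input_type") == "email":
        return False                          # newsletter subscription
    # Query interfaces typically mix text boxes with selects/radios.
    return form.get("num_text_inputs", 0) + form.get("num_selects", 0) >= 1

print(looks_like_query_interface(
    {"num_inputs": 3, "num_text_inputs": 1, "num_selects": 2}))  # True
```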

4.
This paper examines in detail the application of Web data mining techniques to the toy market on Taobao. By crawling Web page data from Taobao's toy market and applying data mining techniques to analyze the data, it uncovers knowledge that can guide sellers' decision-making.

5.
A Graph-Model-Based Sampling Method for Web Databases
刘伟, 孟小峰, 凌妍妍 《软件学报》(Journal of Software), 2008, 19(2): 179-193
In Web databases, vast amounts of information are hidden behind query interfaces with specific query capabilities, which makes it impossible to learn the characteristics of a Web database's content, such as its topic distribution or update frequency; this poses a major challenge for deep Web data integration. To address this problem, this paper proposes a graph-model-based sampling method for Web databases that incrementally obtains approximately random samples through the query interface: each query retrieves a batch of sample records, and the sample records already stored locally are used to generate the next query. An important property of the method is that it is not limited by how attributes are presented in the query interface, making it a general Web database sampling method. Extensive experiments, both in local simulations and on real Web databases, show that the method obtains high-quality samples at low cost.
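A minimal sketch of the incremental sampling loop the abstract describes: each round, a value drawn from the locally stored sample records seeds the next query. `query_web_db` is a hypothetical stand-in for submitting a query through the site's interface; the toy backend exists only to make the sketch runnable.

```python
import random

def sample_web_db(query_web_db, seed_term, rounds=100, batch=10):
    samples = list(query_web_db(seed_term, limit=batch))
    for _ in range(rounds):
        record = random.choice(samples)        # pick a stored record
        term = random.choice(record["terms"])  # reuse one of its values
        for rec in query_web_db(term, limit=batch):
            if rec not in samples:             # keep samples distinct
                samples.append(rec)
    return samples

# Toy backend for demonstration: records matched by term.
DB = [{"id": i, "terms": [f"t{i % 3}", f"t{i % 5}"]} for i in range(20)]
def toy_query(term, limit):
    return [r for r in DB if term in r["terms"]][:limit]

print(len(sample_web_db(toy_query, seed_term="t1", rounds=20)))
```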

6.
Design and Implementation of a Crowdsourcing-Based Social Network Data Collection Model
Social network data is voluminous and strongly topical, holds great data mining value, and is an important component of Internet big data. Since traditional search engines cannot directly index social network platform content with keyword retrieval techniques, this paper designs a social network data collection model based on the crowdsourcing paradigm with a client/server (C/S) architecture, comprising four modules: server, client, storage system, and a topical deep Web crawler system. Distributed machine nodes of the topical deep Web crawler automatically request crawl tasks from the server and upload the crawled data, and the Hadoop distributed file system is used to process the crawled data quickly and store the results. Experimental results show that the topical deep Web crawler system is simple to configure, supports functional extension and direct acquisition of target information, and that the data collection model achieves fast data acquisition and high information retrieval efficiency.
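A minimal single-process sketch of the task-distribution idea: crawler nodes pull a task from the server's queue and push results back. A real deployment would use RPC/HTTP between machines and HDFS for storage; all names here are illustrative.

```python
from queue import Queue

task_queue = Queue()
results = []

for topic in ["#bigdata", "#deepweb"]:     # server enqueues crawl tasks
    task_queue.put(topic)

def crawler_node(fetch):
    """One client worker: request a task, crawl, upload the result."""
    while not task_queue.empty():
        topic = task_queue.get()
        results.append({"topic": topic, "posts": fetch(topic)})

crawler_node(lambda topic: [f"post about {topic}"])  # stub fetcher
print(results)
```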

7.
A focused crawler is the program a search engine uses to fetch Web pages automatically, and it is a key step in discovering and indexing deep Web data. This paper presents a focused crawler that uses the PageRank algorithm to assess page importance and filters out off-topic URLs through site-structure-graph pruning and a page classification algorithm, effectively improving the quality and efficiency of deep Web data integration.
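A minimal sketch of PageRank by power iteration, as one way to score page importance when prioritizing a crawler's frontier; the tiny link graph is a toy example.

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict page -> list of outgoing pages (every page has outlinks)."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)  # distribute rank over outlinks
        rank = new
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```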

8.
The deep Web contains a wealth of high-quality data, but it can only be obtained by submitting queries to Web databases through Web query interfaces, so automatically acquiring the schema of a Web query interface is the key to Web database integration. This paper treats query interface schema extraction as a lexical analysis process and performs it by constructing an EGLM-FA (element grouping and label matching finite state automaton). First, an HTML rendering engine parses the page containing the query interface, and an NSS (node spatial structure) is built for each DOM node in the query form from the node and its coordinates; the NSSs are then assembled into an NSS list, which is fed to the EGLM-FA to extract the schema of the Web query interface.
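A minimal sketch of the coordinate-based label matching idea: pair each form field with the nearest text label to its left or above it, using rendered positions. The node dicts are illustrative stand-ins for the paper's NSS, and the distance heuristic is an assumption, not the automaton itself.

```python
def match_labels(labels, fields):
    """labels/fields: lists of dicts with 'text'/'name' and x, y coordinates."""
    schema = {}
    for f in fields:
        def distance(lab):
            # Penalize labels that sit to the right of and below the field.
            dx, dy = f["x"] - lab["x"], f["y"] - lab["y"]
            return abs(dx) + abs(dy) + (1000 if dx < 0 and dy < 0 else 0)
        schema[f["name"]] = min(labels, key=distance)["text"]
    return schema

labels = [{"text": "Title", "x": 10, "y": 20}, {"text": "Author", "x": 10, "y": 50}]
fields = [{"name": "q1", "x": 80, "y": 20}, {"name": "q2", "x": 80, "y": 50}]
print(match_labels(labels, fields))  # {'q1': 'Title', 'q2': 'Author'}
```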

9.
This paper combines deep Web discovery with topical (focused) crawling and studies the key techniques of a deep Web vertical search engine in depth. It first designs a deep Web topical crawling framework that extends the traditional focused crawling framework with a front-end classifier that executes the crawl strategy and is incrementally retrained at regular intervals. Topical crawling is then used to guide deep Web discovery, and the information gathered by the topical crawler is organized with the open-source component Lucene so that it can serve queries through a search interface. When a user submits query terms, Lucene ranks the results with its default relevance algorithm. With the crawler, indexer, and query interface designed, a prototype deep Web vertical search engine is implemented.
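A minimal sketch of the front-end-classifier idea: the frontier only admits pages the topic classifier accepts, and accepted pages contribute their outlinks. `fetch` and `classify` are hypothetical stand-ins for the page fetcher and the trained classifier.

```python
from collections import deque

def focused_crawl(seeds, fetch, classify, limit=100):
    frontier, seen, accepted = deque(seeds), set(seeds), []
    while frontier and len(accepted) < limit:
        url = frontier.popleft()
        page = fetch(url)
        if not classify(page):          # off-topic: prune this branch
            continue
        accepted.append(url)
        for link in page["links"]:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return accepted

pages = {"s": {"text": "deep web", "links": ["a"]},
         "a": {"text": "sports", "links": []}}
print(focused_crawl(["s"], pages.get, lambda p: "web" in p["text"]))  # ['s']
```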

10.
Traditional search engines can index only surface Web pages, yet large amounts of high-quality information lie hidden deep in the Web; for technical reasons, traditional search engines cannot index these so-called deep Web pages. Since the query interface is the sole entry point to the deep Web, obtaining deep Web information requires determining which Web forms are deep Web query interfaces. This paper presents a method that uses the naive Bayes classification algorithm to automatically determine whether a Web form is a deep Web query interface, and experiments verify the method's effectiveness.
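A minimal sketch of naive Bayes over the terms appearing in a form (field names, labels, button text) to decide whether it is a query interface. The training strings and labels are toy examples, not the paper's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

forms = ["username password login",
         "search title author year submit",
         "email subscribe newsletter",
         "keyword category price range search"]
labels = [0, 1, 0, 1]   # 1 = deep Web query interface

# Bag-of-words features feed a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(forms, labels)
print(model.predict(["author title search"]))  # [1]
```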

11.
Databases deepen the Web
Ghanem, T.M.; Aref, W.G. Computer, 2004, 37(1): 116-117
The Web has become the preferred medium for many database applications, such as e-commerce and digital libraries. These applications store information in huge databases that users access, query, and update through the Web. Database-driven Web sites have their own interfaces and access forms for creating HTML pages on the fly. Web database technologies define the way that these forms can connect to and retrieve data from database servers. The number of database-driven Web sites is increasing exponentially, and each site is creating pages dynamically, pages that are hard for traditional search engines to reach. Such search engines crawl and index static HTML pages; they do not send queries to Web databases. The information hidden inside Web databases is called the "deep Web" in contrast to the "surface Web" that traditional search engines access easily. We expect deep Web search engines and technologies to improve rapidly and to dramatically affect how the Web is used by providing easy access to many more information resources.

12.
Time plays an important role in Web search, because most Web pages contain temporal information and many Web queries are time-related. How to integrate temporal information into Web search engines has been a research focus in recent years. However, traditional search engines offer little support for processing temporal-textual Web queries. To address this problem, in this paper we concentrate on extracting the focused time of Web pages, which refers to the most appropriate time associated with a Web page, and then use the focused time to improve search efficiency for time-sensitive queries. In particular, three critical issues are studied in depth. The first issue is to extract implicit temporal expressions from Web pages. The second is to determine the focused time among all the extracted temporal information, and the last is to integrate focused time into a search engine. For the first issue, we propose a new dynamic approach to resolve the implicit temporal expressions in Web pages. For the second issue, we present a score model to determine the focused time of Web pages. Our score model takes into account both the frequency of temporal information in Web pages and the containment relationship among temporal expressions. For the third issue, we combine the textual similarity and the temporal similarity between queries and documents in the ranking process. To evaluate the effectiveness and efficiency of the proposed approaches, we build a prototype system called Time-Aware Search Engine (TASE). TASE is able to extract both explicit and implicit temporal expressions from Web pages, calculate the relevance score between Web pages and each temporal expression, and re-rank search results based on the temporal-textual relevance between Web pages and queries. Finally, we conduct experiments on real data sets. The results show that our approach has high accuracy in resolving implicit temporal expressions and extracting focused time, and has better ranking effectiveness for time-sensitive Web queries than its competitor algorithms.
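A minimal sketch of the two scoring steps the abstract outlines: a focused-time score mixing frequency with containment (a year "contains" its months), and a final rank interpolating textual and temporal similarity. The weights and the containment bonus are illustrative assumptions, not the paper's score model.

```python
def focused_time(times, alpha=0.7):
    """times: dict like {'2008': 3, '2008-05': 2} of extracted time mentions."""
    total = sum(times.values())
    def score(t):
        freq = times[t] / total
        # Containment: credit coarser times for the finer ones they contain.
        contained = sum(n for u, n in times.items() if u != t and u.startswith(t))
        return alpha * freq + (1 - alpha) * contained / total
    return max(times, key=score)

def rank(text_sim, time_sim, lam=0.5):
    return lam * text_sim + (1 - lam) * time_sim

print(focused_time({"2008": 3, "2008-05": 2, "1999": 1}))  # '2008'
print(round(rank(0.8, 1.0), 2))                            # 0.9
```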

13.
A common task of Web users is querying structured information from Web pages. To realize this scenario, we propose a novel query processor for systematically discovering instances of semantic relations in Web search results and joining these relation instances into complex result tuples with conjunctive queries. Our query processor transforms a structured user query into keyword queries that are submitted to a search engine, forwards search results to a relation extractor, and then combines relations into complex result tuples. The processor automatically learns discriminative and effective keywords for different types of semantic relations. Thereby, our query processor leverages the index of a search engine to query potentially billions of pages. Unfortunately, relation extractors may fail to return a relation for a result tuple. Moreover, user-defined data sources may not return at least k complete result tuples. Therefore, we propose an adaptive routing model based on information theory for retrieving missing attributes of incomplete result tuples. The model determines the most promising next incomplete tuple and attribute type for returning any-k complete result tuples at any point during the query execution process. We report a thorough experimental evaluation over multiple relation extractors. Our query processor returns complete result tuples while processing only very few Web pages.
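A minimal sketch of the adaptive-routing intuition: among incomplete tuples, pick the tuple/attribute pair whose lookup is most informative, here approximated by the entropy of the extractor's success probability. The probabilities and the scoring rule are illustrative assumptions, not the paper's routing model.

```python
import math

def entropy(p):
    return 0.0 if p in (0, 1) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_lookup(tuples, success_prob):
    """tuples: list of dicts with possibly-None attribute values."""
    candidates = [(i, a) for i, t in enumerate(tuples)
                  for a, v in t.items() if v is None]
    # Prefer lookups whose outcome is most uncertain (highest entropy).
    return max(candidates, key=lambda c: entropy(success_prob[c[1]]))

tuples = [{"company": "Acme", "ceo": None, "hq": None}]
print(next_lookup(tuples, {"ceo": 0.5, "hq": 0.9}))  # (0, 'ceo')
```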

14.
Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are enwrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Many techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language-dependent. Because Web pages are a popular two-dimensional medium, their contents are always displayed regularly for users to browse. This motivates us to seek a different way to extract deep Web data that overcomes the limitations of previous works by utilizing common visual features of deep Web pages. In this paper, a novel vision-based approach that is Web-page-programming-language-independent is proposed. This approach primarily utilizes the visual features of deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure, "revision", to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based approach is highly effective for deep Web data extraction.
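A minimal sketch of the vision-based intuition: data records in a result page tend to be rendered blocks that share a left edge and have similar heights, so grouping blocks by those visual features approximates data record extraction. The block dicts and tolerances are illustrative assumptions.

```python
from collections import defaultdict

def group_records(blocks, x_tol=5, h_tol=10):
    groups = defaultdict(list)
    for b in blocks:
        # Bucket by quantized left edge and height.
        key = (round(b["x"] / x_tol), round(b["height"] / h_tol))
        groups[key].append(b)
    # The largest aligned group is the most likely set of data records.
    return max(groups.values(), key=len)

blocks = [{"x": 20, "height": 80}, {"x": 21, "height": 78},
          {"x": 22, "height": 82}, {"x": 300, "height": 40}]
print(len(group_records(blocks)))  # 3
```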

15.
Our current understanding of Web structure is based on large graphs created by centralized crawlers and indexers. They obtain data almost exclusively from the so-called surface Web, which consists, loosely speaking, of interlinked HTML pages. The deep Web, by contrast, is information that is reachable over the Web, but that resides in databases; it is dynamically available in response to queries, not placed on static pages ahead of time. Recent estimates indicate that the deep Web has hundreds of times more data than the surface Web. The deep Web gives us reason to rethink much of the current doctrine of broad-based link analysis. Instead of looking up pages and finding links on them, Web crawlers would have to produce queries to generate relevant pages. Creating appropriate queries ahead of time is nontrivial without understanding the content of the queried sites. The deep Web's scale would also make it much harder to cache results than to merely index static pages. Whereas a static page presents its links for all to see, a deep Web site can decide whose queries to process and how well. It can, for example, authenticate the querying party before giving it any truly valuable information and links. It can build an understanding of the querying party's context in order to give proper responses, and it can engage in dialogues and negotiate for the information it reveals. The Web site can thus prevent its information from being used by unknown parties. What's more, the querying party can ensure that the information is meant for it.

16.
Conventional approaches to finding related search engine queries rely on the common terms shared by two queries to measure their relatedness. However, search engine queries are usually short, and the term overlap between two queries is very small. Using query terms as a feature space cannot accurately estimate relatedness. Alternative feature spaces are needed to enrich term-based search queries. In this paper, given a search query, we first extract the Web pages accessed by users from Japanese Web access logs, which record users' individual and collective behavior. From these accessed Web pages we can usually derive two kinds of feature spaces, i.e., content-sensitive (e.g., nouns) and content-ignorant (e.g., URLs), to enrich the representations of search queries. The relatedness between search queries can then be estimated on their enriched representations. Our experimental results show that the URL feature space produces much lower precision scores than the noun feature space, which, however, is not applicable to non-text pages, dynamic pages, and so on. It is crucial to improve the quality of the URL (content-ignorant) feature space since it is generally available in all types of Web pages. We propose a novel content-ignorant feature space, called Web community, which is created from a Japanese Web page archive by exploiting link analysis. Experimental results show that the proposed Web community feature space generates much better results than the URL feature space.
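A minimal sketch of the relatedness computation: each query is represented by the features (nouns, URLs, or Web communities) of the pages its users clicked, and relatedness is the cosine similarity of those feature vectors. The feature counts are toy data.

```python
from collections import Counter
import math

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Noun counts from clicked pages enrich the short queries themselves.
q1 = Counter({"hotel": 4, "tokyo": 3, "booking": 2})
q2 = Counter({"hotel": 2, "kyoto": 3, "ryokan": 1})
print(round(cosine(q1, q2), 3))  # ~0.397
```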

17.
Many experts predict that the next huge step forward in Web information technology will be achieved by adding semantics to Web data, and will possibly consist of (some form of) the Semantic Web. In this paper, we present a novel approach to Semantic Web search, called Serene, which allows for a semantic processing of Web search queries, and for evaluating complex Web search queries that involve reasoning over the Web. More specifically, we first add ontological structure and semantics to Web pages, which then allows for both attaching a meaning to Web search queries and Web pages, and for formulating and processing ontology-based complex Web search queries (i.e., conjunctive queries) that involve reasoning over the Web. Here, we assume the existence of an underlying ontology (in a lightweight ontology language) relative to which Web pages are annotated and Web search queries are formulated. Depending on whether we use a general or a specialized ontology, we thus obtain a general or a vertical Semantic Web search interface, respectively. That is, we are actually mapping the Web into an ontological knowledge base, which then allows for Semantic Web search relative to the underlying ontology. The latter is then realized by reduction to standard Web search on standard Web pages and logically completed ontological annotations. That is, standard Web search engines are used as the main inference motor for ontology-based Semantic Web search. We develop the formal model behind this approach and also provide an implementation in desktop search. Furthermore, we report on extensive experiments, including an implemented Semantic Web search on the Internet Movie Database.

18.
Local searching tools search Web pages that are on or close to a Web site in order to answer specific queries. A Web query taxonomy based on WebSQL can help to enhance local searching techniques.

19.
The Web is flooded with data. While the crawler is responsible for accessing Web pages and handing them to the indexer, which makes them available to search engine users, the rate at which Web pages change forces the crawler to employ refresh strategies so that search engine users see updated content. Furthermore, the deep Web is the part of the Web that holds abundant high-quality data (compared with the surface Web) but is not technically accessible to a search engine's crawler. Existing deep Web crawl methods access deep Web data from the result pages generated by filling forms with a set of queries and reaching the Web databases through them. However, these methods cannot maintain the freshness of the local databases. Both the surface Web and the deep Web need an incremental crawl alongside the normal crawl architecture to overcome this problem. Crawling the deep Web requires selecting an appropriate set of queries so that they cover almost all the records in the data source while keeping record overlap low, which reduces network utilization. An incremental crawl increases network utilization with every increment, so a reduced query set as described above should be used to minimize it. Our contributions in this work are: an incremental crawler based on a probabilistic approach that handles dynamic changes of surface Web pages; an adaptation of this method to handle dynamic changes in deep Web databases; a new evaluation measure, the 'Crawl-hit rate', which evaluates the incremental crawler in terms of how often a crawl is actually necessary at the predicted time; and a semantic weighted set covering algorithm that reduces the queries so that network cost falls with every increment of the crawl without compromising the number of records retrieved. The evaluation of the incremental crawler shows a good improvement in the freshness of the databases and a good Crawl-hit rate (83% for Web pages and 81% for deep Web databases) with lower overhead than the baseline.
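A minimal sketch of greedy weighted set covering for query reduction: keep choosing the query that covers the most not-yet-covered records per unit weight until all records are covered. The queries, weights, and record sets are illustrative (the paper weights queries semantically), and greedy set cover is approximate, not optimal.

```python
def reduce_queries(queries, records):
    """queries: dict query -> (weight, set of record ids it returns)."""
    uncovered, chosen = set(records), []
    while uncovered:
        # Best marginal coverage per unit weight.
        best = max(queries, key=lambda q: len(queries[q][1] & uncovered) / queries[q][0])
        covered = queries[best][1] & uncovered
        if not covered:
            break                      # remaining records unreachable
        chosen.append(best)
        uncovered -= covered
    return chosen

queries = {"toys": (1.0, {1, 2, 3}), "lego": (1.0, {3, 4}), "dolls": (2.0, {4, 5})}
print(reduce_queries(queries, {1, 2, 3, 4, 5}))
```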

20.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Given that traditional search engines pay little attention to the location information in Web pages, in this paper we study a framework that utilizes location information for Web search. The proposed framework consists of an offline stage that extracts focused locations for crawled Web pages and an online stage that performs location-aware ranking of search results. The focused locations of a Web page refer to the most appropriate locations associated with the Web page. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, which forms a set of <keyword, location> pairs. In the online query processing stage, we extract keywords from the query and compute ranking scores based on location relevance and the location-constrained scores for each query keyword. Experiments on various real datasets crawled from nj.gov, the BBC, and The New York Times show that our focused location extraction algorithm is superior to previous methods and that the proposed ranking algorithm has the best performance across different spatial textual queries.
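A minimal sketch of the online ranking step: a page's score for a spatial textual query interpolates its textual score with the relevance of its focused locations to the query location. The scores, weights, and exact-match location relevance are toy assumptions, not the paper's scoring functions.

```python
def location_aware_rank(pages, q_keyword, q_location, lam=0.6):
    def score(page):
        text = page["text_score"].get(q_keyword, 0.0)
        # Location relevance: 1 if a focused location matches, else 0.
        loc = 1.0 if q_location in page["focused_locations"] else 0.0
        return lam * text + (1 - lam) * loc
    return sorted(pages, key=score, reverse=True)

pages = [{"url": "a", "text_score": {"flood": 0.9}, "focused_locations": ["NJ"]},
         {"url": "b", "text_score": {"flood": 0.95}, "focused_locations": ["NY"]}]
print([p["url"] for p in location_aware_rank(pages, "flood", "NJ")])  # ['a', 'b']
```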
