Similar Documents
20 similar documents were found for this query (search time: 603 ms).
1.
Existing web page segmentation methods are all implemented on top of the Document Object Model (DOM) and are relatively difficult to implement. To address this, a new segmentation method based on a string data model is proposed. The method learns the features of web page titles through machine learning and uses those titles to segment pages. First, title features are learned from the page's line-block distribution function and title tags; next, the page is split into content blocks at the titles; finally, the content blocks are merged according to block depth to complete the segmentation. Theoretical analysis and experimental results show that the algorithm runs in O(n) time and space, segments pages such as university portals, blogs, and resource sites well, and can serve a variety of web information management applications, giving it good prospects for practical use.
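As a rough illustration of the title-driven pass described above, here is a minimal sketch (not the paper's implementation): the learned title model is replaced by a hypothetical `looks_like_title` heuristic, and the page is treated as plain-text lines, so a single pass matches the claimed O(n) behavior.

```python
import re

def looks_like_title(line):
    """Hypothetical stand-in for the learned title model:
    a short line with no sentence-ending punctuation."""
    text = line.strip()
    return 0 < len(text) <= 40 and not re.search(r"[.。!?]$", text)

def segment_by_titles(lines):
    """Split a page (as plain-text lines) into content blocks,
    starting a new block at every detected title line.
    One pass over the lines, hence O(n) time and space."""
    blocks, current = [], []
    for line in lines:
        if looks_like_title(line) and current:
            blocks.append(current)
            current = []
        current.append(line)
    if current:
        blocks.append(current)
    return blocks
```

The depth-based merging step described in the abstract is omitted here; it would run as a second pass over `blocks`.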

2.
The complexity of web information environments and multiple-topic web pages are negative factors significantly affecting the performance of focused crawling. A highly relevant region in a web page may be obscured by the low overall relevance of that page; segmenting web pages into smaller units can significantly improve performance. Traversing an irrelevant page to reach a relevant one (tunneling) can also improve the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic-based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)-based page partition algorithm to segment a web page into content blocks with a hierarchical structure, and investigates how to exploit block-level evidence to enhance focused crawling through tunneling. Page segmentation can transform an uninteresting multi-topic web page into several single-topic content blocks, some of which may be interesting; the focused crawler can then pursue the interesting content blocks to retrieve relevant pages. Experimental results indicate that this approach outperforms the Breadth-First, Best-First, and Link-context algorithms in harvest rate, target recall, and target length. Copyright © 2007 John Wiley & Sons, Ltd.

3.
With the rapid changes in dynamic web pages, there is an increasing need for receiving instant updates for dynamic blocks on the Web. In this paper, we address the problem of automatically following dynamic blocks in web pages. Given a user-specified block on a web page, we continuously track the content of the block and report the updates in real time. This service can bring obvious benefits to users, such as the ability to track top-ten breaking news on CNN, the prices of iPhones on Amazon, or NBA game scores. We study 3,346 human labeled blocks from 1,127 pages, and analyze the effectiveness of four types of patterns, namely visual area, DOM tree path, inner content and close context, for tracking content blocks. Because of frequent web page changes, we find that the initial patterns generated on the original page could be invalidated over time, leading to the failure of extracting correct blocks. According to our observations, we combine different patterns to improve the accuracy and stability of block extractions. Moreover, we propose an adaptive model that adapts each pattern individually and adjusts pattern weights for an improved combination. The experimental results show that the proposed models outperform existing approaches, with the adaptive model performing the best.
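The weighted combination of patterns with adaptive weights might be sketched as follows; the pattern scorers and the simple update rule here are illustrative assumptions, not the authors' actual model.

```python
def combined_score(candidate, patterns, weights):
    """Score a candidate block as a weighted sum of per-pattern
    similarity scores, each assumed to lie in [0, 1]."""
    return sum(weights[name] * patterns[name](candidate) for name in patterns)

def adapt_weights(weights, successes, rate=0.1):
    """After each extraction, nudge each pattern's weight toward 1 if it
    agreed with the accepted block and toward 0 if it did not, then
    renormalize so the weights remain a convex combination."""
    for name, ok in successes.items():
        target = 1.0 if ok else 0.0
        weights[name] += rate * (target - weights[name])
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

A pattern that keeps matching the block the user confirmed gradually dominates the combination, which is the intuition behind the adaptive model's stability.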

4.
In this work we propose a model to represent the web as a directed hypergraph (instead of a graph), where links connect pairs of disjointed sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, in order to provide more reliable information for link analysis. We use the hypergraph model to create the hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. The hypergraph is derived from the web graph by grouping pages by two different partition criteria: grouping together the pages that belong to the same web host or to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions of the algorithms, considering a combination of the page reputation, the textual content of the pages and the anchor text. Experimental results using three distinct web collections show that the HyperPagerank and HyperIndegree algorithms may yield better results than the original graph versions of the Pagerank and Indegree algorithms. We also show that the hypergraph versions of the algorithms were slightly less affected by noise links and spamming.
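A minimal sketch of the HyperIndegree idea, assuming blocks are formed by web host: each host block contributes at most one hyperarc to a given target page, however many page-level links it contains, and intra-block links form no hyperarc.

```python
from collections import defaultdict
from urllib.parse import urlparse

def hyper_indegree(links):
    """HyperIndegree sketch over (source_url, target_url) pairs:
    group source pages into blocks by host and count, for each target,
    the number of *distinct* source blocks linking to it."""
    block_targets = defaultdict(set)  # source host block -> targets it links to
    for src, dst in links:
        src_host, dst_host = urlparse(src).netloc, urlparse(dst).netloc
        if src_host != dst_host:      # only inter-block links create hyperarcs
            block_targets[src_host].add(dst)
    score = defaultdict(int)
    for targets in block_targets.values():
        for dst in targets:
            score[dst] += 1
    return dict(score)
```

This is what makes the hypergraph version harder to spam: many links from one host collapse into a single hyperarc.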

5.
Although caching has been shown to be an efficient technique for reducing the delay in generating web pages in response to user requests, it becomes less effective when the pages are dynamic and contain dynamic content. In this paper, instead of caching, we study the effectiveness of pre-fetching for handling dynamic web pages. Pre-fetching is a proactive caching scheme, since a page is cached before any request for it is received. Besides the problem of which pages to pre-fetch, an equally important question is when to perform the pre-fetching. To resolve these prediction and timing problems, we exploit the temporal properties of dynamic web pages and the timing of page accesses to determine which pages to pre-fetch and the best time to pre-fetch them, so as to maximize the cache hit probability of the pre-fetched pages. If the required pages are found valid in the cache, the response times of the requests can be greatly reduced. The proposed scheme, called temporal pre-fetching (TPF), prioritizes pre-fetching requests based on the predicted usability of the to-be-pre-fetched pages. To minimize the impact of incorrect predictions on the processing of on-demand page requests, a qualifying examination removes unnecessary and low-usability pre-fetching requests both while they wait to be processed and just before their processing. We implemented the proposed TPF scheme in a web server system and performed experiments to study its performance characteristics compared with a conventional cache-only scheme, using a benchmark auction application under different system and application settings. As the experimental results show, overall system performance, i.e., response time, improves because more page requests can be served immediately from pre-fetched pages.
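The prioritization and qualifying examination of queued pre-fetch requests could be sketched like this; the usability scores are assumed to come from an external prediction model, which is not shown, and the threshold is an invented parameter.

```python
import heapq

class PrefetchQueue:
    """Order pre-fetch requests by predicted usability (highest first)
    and drop low-usability ones in a qualifying examination just
    before they would be processed."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.heap = []
    def push(self, page, usability):
        # negate usability: heapq is a min-heap, we want the max first
        heapq.heappush(self.heap, (-usability, page))
    def pop_qualified(self):
        """Return the next request that passes the qualifying
        examination, silently discarding the ones that fail."""
        while self.heap:
            neg_u, page = heapq.heappop(self.heap)
            if -neg_u >= self.threshold:
                return page
        return None
```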

6.
Web Page Information Extraction Based on an Extended DOM Tree
With the development of the Internet, the amount of information on web pages keeps growing and its density keeps increasing. Most web pages contain multiple information blocks with compact layouts and similar HTML patterns. For pages containing multiple information blocks, an information extraction method is proposed: first, an extended DOM (Document Object Model) tree is built and the page is decomposed into discrete information strips; next, the strips are regrouped according to the hierarchy of the extended DOM tree together with the necessary visual and semantic features; finally, the subtrees containing information blocks are identified and the information is extracted by a depth-first traversal of the DOM tree. The algorithm can extract information from web pages with multiple information blocks.

7.
With the development of communication technology, people increasingly want to access websites conveniently from handheld mobile devices, but the small screens and low bandwidth of these devices have long kept this problem from being solved well. This paper proposes a web page blocking algorithm suited to small mobile screens: using the positions of objects on the page, it clusters information blocks layer by layer to build a page block tree, and then converts the tree into pages suitable for small-screen browsing according to the characteristics of the device's screen.

8.
Although many web pages consist of blocks of text surrounded by graphics, there is a lack of valid empirical research to aid the design of this type of page [D. Diaper, P. Waelend, Interact. Comput. 13 (2000) 163]. In particular, little is known about the influence of animations on interaction with web pages. Proportion, in particular the Golden Section, is known to be a key determinant of aesthetic quality of objects, and aesthetics have recently been identified as a powerful factor in the quality of human–computer interaction [N. Tractinsky, A.S. Katz, D. Ikar, Interact. Comput. 13 (2000) 127]. The current study aimed to establish the relative strength of the effects of graphical display and screen ratio of content and navigation areas in web pages, using an information retrieval task and a split-plot experimental research design. Results demonstrated the effect of screen ratio, but a lack of an effect of graphical display on task performance and two subjective outcome measures. However, there was an effect of graphical display on perceived distraction, with animated display leading to more distraction than static display, t(64) = 2.33. Results are discussed in terms of processes of perception and attention, and recommendations for web page design are given.

9.
The rapid growth of web data places enormous storage and service pressure on search engines, and large amounts of redundant, low-quality, and even spam data waste their storage and computing capacity. Building a web page quality assessment framework and algorithms suited to the real-world Web has therefore become an important research topic in information retrieval. Building on prior work, and with the participation of web users and page designers, this paper proposes a page quality evaluation framework covering four dimensions (authority and reputation, content, timeliness, and visual presentation) and thirteen factors. Annotation data show that the framework is practical and yields fairly consistent labels. Finally, an Ordinal Logistic Regression model is used to analyze the importance of each dimension, leading to several instructive conclusions, among them that whether a page's content and timeliness satisfy user needs is a key determinant of its quality.

10.
The amount of information in a single page far exceeds what a particular user needs from it. To retrieve the information a given user is interested in from the current page quickly and accurately, an active page-information retrieval model is proposed. In this model, the current web page is converted into an information tree according to the characteristics of its blocks, a user feature tree is built from the user's past browsing behavior, the feature tree is mined to produce the user's information-need set, and the needed information is then retrieved from the current page to obtain the user's set of interesting information. The basic principle of active retrieval is described in detail, the corresponding algorithms are given, and experiments demonstrate the feasibility of the model.

11.
This paper is concerned with the problem of boosting social annotations using propagation, which is also called social propagation. In particular, we focus on propagating social annotations of web pages (e.g., annotations in Del.icio.us). Social annotations are novel resources and valuable in many web applications, including web search and browsing. Although they are developing fast, social annotations of web pages cover only a small proportion (<0.1%) of the World Wide Web. To alleviate the low coverage of annotations, a general propagation model based on Random Surfer is proposed. Specifically, four steps are included, namely basic propagation, multiple-annotation propagation, multiple-link-type propagation, and constraint-guided propagation. The model is evaluated on a dataset of 40,422 web pages randomly sampled from 100 most popular English sites and ten famous academic sites. Each page’s annotations are obtained by querying the history interface of Del.icio.us. Experimental results show that the proposed model is very effective in increasing the coverage of annotations while still preserving novel properties of social annotations. Applications of propagated annotations on web search and classification further verify the effectiveness of the model.
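One step of the basic propagation could look like the following sketch; the damping value and the uniform split over out-links are assumptions standing in for the paper's Random Surfer formulation.

```python
def propagate_annotations(annotations, links, damping=0.5):
    """One step of basic annotation propagation: each page keeps its
    own annotation weights and receives a damped share of the weights
    of every page linking to it, split evenly over that page's
    out-links."""
    out = {page: dict(tags) for page, tags in annotations.items()}
    for src, dst in links:
        out.setdefault(dst, {})
        out_degree = sum(1 for s, _ in links if s == src)
        share = damping / max(1, out_degree)
        for tag, weight in annotations.get(src, {}).items():
            out[dst][tag] = out[dst].get(tag, 0.0) + weight * share
    return out
```

Iterating this step spreads annotations from the small tagged fraction of pages to untagged neighbors, which is how the model raises coverage.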

12.
A Related-Link Extraction Method Based on Link Blocking
Every web page contains a large number of hyperlinks, among which are both related links and many noise links. A related-link extraction method based on link blocking is proposed. First, the page is divided into blocks according to HTML tags, and the links in each block are extracted to form link blocks. Then, exploiting features such as related links appearing together in blocks and related-link anchor text sharing words with the page title, a combination of rules and statistics is used to extract the related-link blocks. In tests, the method achieves precision above 85% and recall around 70%, showing that it is effective.
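The title-word-overlap rule for keeping link blocks might be sketched as follows, with the block construction (splitting by HTML tags) assumed to have happened already; the `min_overlap` threshold is an invented parameter.

```python
def related_link_blocks(link_blocks, page_title, min_overlap=1):
    """Keep link blocks whose anchor texts share words with the page
    title, following the observation that related links cluster in
    blocks and echo the title's vocabulary.
    `link_blocks` is a list of blocks, each a list of anchor texts."""
    title_words = set(page_title.lower().split())
    kept = []
    for block in link_blocks:
        # count anchors in this block that share at least one title word
        overlap = sum(
            1 for anchor in block
            if title_words & set(anchor.lower().split())
        )
        if overlap >= min_overlap:
            kept.append(block)
    return kept
```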

13.
Research and Design of a Lucene-Based Chinese Full-Text Retrieval System
A Lucene-based Chinese full-text retrieval system model is proposed. After analyzing Lucene's architecture, the system adopts statistics-based extraction of web page body text and adds a Chinese word segmentation module and an index-document preprocessing module to improve retrieval efficiency and precision. Retrieval results are processed by text clustering and displayed by category, improving the efficiency of users' searches. Experimental data show that, when retrieving Chinese web pages, the system clearly improves efficiency, precision, and result presentation.

14.
Web users can bookmark pages in their browser's favorites and quickly revisit their content. Studying bookmark-based user behavior can inform work on personalization, web page quality assessment, and the construction of large-scale web directories. Using favorites data from nearly 270,000 users, this paper studies bookmarking behavior from three perspectives: organizational structure, bookmarked content, and user interests. First, we propose a favorites browse-and-click model and analyze the structural characteristics and usage efficiency of favorites; second, by comparing against PageRank values, we find that users tend to bookmark high-quality web resources; finally, we analyze the interest distribution of favorites users with the help of ODP.

15.
It is common to browse web pages via mobile devices. However, most of the web pages were designed for desktop computers equipped with big screens. When browsing on mobile devices, a user has to scroll up and down to find the information they want because of the limited screen size. Some commercial products reformat web pages. However, the result pages still contain irrelevant information. We propose a system to personalize users’ mobile web pages. A user can determine which blocks in a web page should be retained. The sequence of these blocks can also be altered according to individual preferences.

16.
袁莹静  陈婷  陈龙  周芷仪  谢鹏辉 《软件》2020,(4):195-199
With continued economic and social development, data has grown explosively, and every field now involves a very wide range of data. In today's society, web design and databases are closely intertwined. In the web design, SQL statements issued over a database connection implement functions such as adding, deleting, modifying, and querying information. This paper describes the web design of a used-car trading system built with Visual Studio and database software, and introduces the related functions of its pages.

17.
Graphs are widely used for modeling complicated data such as social networks, chemical compounds, protein interactions and semantic web. To effectively understand and utilize any collection of graphs, a graph database that efficiently supports elementary querying mechanisms is crucially required. For example, Subgraph and Supergraph queries are important types of graph queries which have many applications in practice. A primary challenge in computing the answers of graph queries is that pair-wise comparisons of graphs are usually hard problems. Relational database management systems (RDBMSs) have repeatedly been shown to be able to efficiently host different types of data such as complex objects and XML data. RDBMSs derive much of their performance from sophisticated optimizer components which make use of physical properties that are specific to the relational model such as sortedness, proper join ordering and powerful indexing mechanisms. In this article, we study the problem of indexing and querying graph databases using the relational infrastructure. We present a purely relational framework for processing graph queries. This framework relies on building a layer of graph features knowledge which capture metadata and summary features of the underlying graph database. We describe different querying mechanisms which make use of the layer of graph features knowledge to achieve scalable performance for processing graph queries. Finally, we conduct an extensive set of experiments on real and synthetic datasets to demonstrate the efficiency and the scalability of our techniques.

18.
Contents, layout styles, and parse structures of web news pages differ greatly from one page to another. In addition, the layout style and the parse structure of a web news page may change from time to time. For these reasons, how to design features with excellent extraction performance for massive and heterogeneous web news pages is a challenging issue. Our extensive case studies indicate that there is potential relevancy between web content layouts and their tag paths. Inspired by this observation, we design a series of tag path extraction features to extract web news. Because each feature has its own strength, we fuse all those features with DS (Dempster-Shafer) evidence theory, and then design a content extraction method, CEDS. Experimental results on both CleanEval datasets and web news pages selected randomly from well-known websites show that the F1-score with CEDS is 8.08% and 3.08% higher than the existing popular content extraction methods CETR and CEPR-TPR, respectively.
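The Dempster-Shafer fusion step can be illustrated with a generic implementation of Dempster's rule of combination; the two-element frame {content, noise} in the test below is a made-up example, not the paper's actual feature set.

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination. Each mass function maps focal
    elements (frozensets over the frame of discernment) to masses
    summing to 1. Mass products whose focal elements have an empty
    intersection form the conflict K and are renormalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict  # assumes the two sources are not totally conflicting
    return {k: v / norm for k, v in combined.items()}
```

Fusing a third feature's evidence is just another call to `ds_combine`, since the rule is associative and commutative.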

19.
We propose a novel partition path-based (PPB) grouping strategy to store compressed XML data in a stream of blocks. In addition, we employ a minimal indexing scheme called block statistic signature (BSS) on the compressed data, which is a simple but effective technique to support evaluation of selection and aggregate XPath queries of the compressed data. We present a formal analysis and empirical study of these techniques. The BSS indexing is first extended into effective cluster statistic signature (CSS) and multiple-cluster statistic signature (MSS) indexing by establishing more layers of indexes. We analyze how the response time is affected by various parameters involved in our compression strategy such as the data stream block size, the number of cluster layers, and the query selectivity. We also gain further insight about the compression and querying performance by studying the optimal block size in a stream, which leads to the minimum processing cost for queries. The cost model analysis provides a solid foundation for predicting the querying performance. Finally, we demonstrate that our PPB grouping and indexing strategies are not only efficient enough to support path-based selection and aggregate queries of the compressed XML data, but they also require relatively low computation time and storage space when compared with other state-of-the-art compression strategies.  相似文献   

20.
Research on an Intelligent-Agent-Based Chinese Meta Search Engine Model
The paper discusses the shortcomings of existing search engine technology, compares Chinese and English word segmentation methods, and describes a dictionary-free information extraction method for Chinese documents. By analyzing a user's search history, a personalized search model is built; the retrieved documents are classified, organized, and stored on a local server. The key techniques involved are described: extracting class keywords from documents, building user profiles, and an algorithm for ranking page value. Finally, directions for further research are indicated.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号