Similar Documents
19 similar documents found (search time: 218 ms)
1.
Clustering techniques can partition large-scale data into clusters, by similarity, that users can grasp quickly, letting them understand the content of a large document collection faster; clustering has therefore become an indispensable component of search engines and an active research topic. Weakly-linked documents on the Web, such as AJAX applications and PowerPoint files, lack sufficient hyperlink information, so searches over such documents produce poorly ranked results. To address this problem, this paper presents a search-engine framework for weakly-linked documents and focuses on a ranking algorithm for them based on web search results. The clustering-based ranking algorithm uses clustering to extract query-related topics from high-quality web search results, determines each topic's importance from the ranks of its associated pages, and computes a ranking score for each weakly-linked document from the identified, weighted topics. Experimental results show that the algorithm produces good rankings for weakly-linked documents.
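The ranking idea in this abstract can be sketched in a few lines (illustrative only, not the authors' implementation): the clustering step is assumed already done, topics are weighted by the ranks of their supporting web results, and weakly-linked documents are scored against the weighted topics. The 1/rank weighting and all names below are assumptions.

```python
# Toy sketch: weight topics by the ranks of web results supporting them,
# then score weakly-linked documents (e.g. PowerPoint files) by topic overlap.

def topic_weights(ranked_results):
    """ranked_results: list of (rank, topic) from a web search; rank starts at 1."""
    w = {}
    for rank, topic in ranked_results:
        w[topic] = w.get(topic, 0.0) + 1.0 / rank  # higher-ranked pages count more
    return w

def score_document(doc_topics, weights):
    return sum(weights.get(t, 0.0) for t in doc_topics)

web_results = [(1, "ajax"), (2, "ajax"), (3, "rest"), (4, "soap")]
weights = topic_weights(web_results)
docs = {"slides.ppt": {"ajax"}, "notes.ppt": {"soap"}}
ranking = sorted(docs, key=lambda d: score_document(docs[d], weights), reverse=True)
print(ranking[0])  # slides.ppt: it matches the dominant "ajax" topic
```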

2.
To address the inefficiency of the serial PageRank algorithm on massive web data, a parallel PageRank algorithm based on link classification is proposed. First, pages are classified by the website they belong to, and pages from different sites are given different weights. Second, page ranks are computed in parallel on the Hadoop framework, exploiting the divide-and-conquer nature of MapReduce. Finally, the parallel algorithm is optimized with a three-layer data-compression scheme comprising a data layer, a preprocessing layer, and a computation layer. Experimental results show that, compared with the serial PageRank algorithm, the proposed algorithm improves result accuracy by up to 12% and computational efficiency by 33% in the best case.
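The core computation can be illustrated with a plain power-iteration PageRank where the teleport distribution is biased by per-page weights, standing in for the paper's site-based weighting. The weights and graph are made up, and the Hadoop/MapReduce parallelisation is not reproduced.

```python
# Minimal weighted PageRank by power iteration. `weights` biases the teleport
# (random-jump) distribution, a simple stand-in for per-site page weighting.

def pagerank(links, weights, d=0.85, iters=50):
    """links: {page: [outlinked pages]}; weights: teleport weight per page."""
    pages = list(links)
    n = len(pages)
    total_w = sum(weights[p] for p in pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) * weights[p] / total_w for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank by the teleport weights
                for q in pages:
                    new[q] += d * rank[p] * weights[q] / total_w
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
weights = {"a": 1.0, "b": 1.0, "c": 2.0}  # e.g. c's site is trusted more
r = pagerank(links, weights)
print(max(r, key=r.get))  # c receives the most rank here
```

In a MapReduce setting the inner loop becomes a map step (each page emits `rank/len(outs)` to its outlinks) and a reduce step (each page sums the shares it received).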

3.
To retrieve topic-relevant resources efficiently, this paper studies vertical search engines. First, an improved PageRank algorithm, built on the existing algorithm, is proposed to measure the link similarity of pages. Second, at the level of a single page, a content-based similarity measure is defined over each page's URL, title, and body text. Finally, combining content similarity with link similarity, a topic-crawling algorithm based on links and content (BLCT) is proposed. Experimental results show that the algorithm significantly improves the average harvest rate and target recall, and that the crawled pages are more topically relevant.
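The final combination step can be sketched as a weighted mix of a content score (here, cosine similarity over term counts drawn from URL/title/body) and a link-similarity score. The 0.6/0.4 mix and all inputs are illustrative assumptions, not the BLCT paper's tuned values.

```python
# Sketch of a crawl priority combining content similarity and link similarity.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term lists."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def priority(page_terms, topic_terms, link_sim, alpha=0.6):
    """Mix content similarity with a link-based similarity score in [0, 1]."""
    return alpha * cosine(page_terms, topic_terms) + (1 - alpha) * link_sim

topic = ["web", "crawler", "search"]
p = priority(["web", "crawler", "tutorial"], topic, link_sim=0.5)
print(round(p, 3))  # ≈ 0.6
```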

4.
A TREC Study of Link-Based Methods for Web Information Retrieval (cited by 1)
This paper uses TREC experiments to study how link-based retrieval affects Web information retrieval, covering the use of link description (anchor) text, link structure, and the combination of link-based methods with traditional content-based retrieval. The conclusions are as follows. First, link description documents summarize a page's topic with high precision but describe its content very incompletely. Second, compared with traditional retrieval methods, using anchor text improves system performance on page-finding tasks by 96%, but does not help on ad hoc information-seeking tasks. Finally, combining link-based retrieval with traditional content-based techniques consistently improves performance on entry-page finding by 48% to 124.8%, and also improves retrieval effectiveness to some extent on specific information-seeking tasks.

5.
随着网络的高速发展,如何在海量信息中找到用户需求的高质量信息变得非常重要,技术难度较大.网页在搜索结果中排名是否靠前与巨大的商业利润相关联,这使得大量的垃圾网页出现在网络中.过滤Spam页面、给用户提供高质量的搜索结果成为当前Web搜索引擎的面临的一个巨大挑战.大量研究工作显示Spam页面之间存在着勾结的现象,分析Spam页面链接结构特性成为过滤Spam页面的重要方法.根据Spam网页链接结构存在的共性,提出了一种基于链接分析的Web Spam过滤方法.在标准检测数据集上进行实验,并与相关工作进行比较.实验结果表明,提出的方法能有效地对Spam网页进行过滤,提高搜索结果的质量.  相似文献   

6.
Web Information Retrieval Based on Anchor Text and Its Context (cited by 20)
The hyperlink structure between documents is one of the biggest differences between Web information retrieval and traditional information retrieval, and it has given rise to retrieval techniques based on hyperlink structure. This paper describes the concept of the link description document and, on that basis, studies the role of anchor text and its context in retrieval. Tests on a large-scale real-world dataset of more than 1.69 million web pages, using the relevant documents and evaluation methodology provided by TREC 2001, lead to the following conclusions. First, link description documents summarize a page's topic with high precision but describe its content very incompletely. Second, compared with traditional retrieval methods, using anchor text improves system performance on known-page finding by 96%, but anchor text and its context cannot improve retrieval performance on ad hoc information-seeking tasks. Finally, combining the anchor-text-based method with traditional methods improves retrieval performance by nearly 16%.

7.
An Improved PageRank Algorithm Resistant to Link Spamming (cited by 3)
Numerous link-based search engine spamming techniques severely affect the traditional PageRank algorithm; for example, link farms, link exchanges, and paid "golden" or "wealth" link schemes rob pages' PageRank values of fairness and authority. After analyzing the adverse effects of various spamming techniques on traditional PageRank, this paper proposes a three-stage PageRank algorithm, TSPageRank, that resists link spamming. The principles of TSPageRank are analyzed in detail, and experiments show that it improves on traditional PageRank by 59.4% in effectiveness, raising the PageRank of important pages while lowering that of spam pages.

8.
Spam pages profit mainly by using link manipulation to inflate their search rankings. Based on the characteristics of link spamming, this paper introduces two indicators, link similarity and a cheating coefficient, to judge how likely a page is to be spam. Borrowing the idea of the BadRank algorithm, link similarity and the cheating coefficient are computed iteratively from a seed set of spam pages, weights are set according to link relationships with the seed set, and candidate pages are scored. Comparison experiments against Anti-Trust Rank and other algorithms confirm that the proposed algorithm outperforms them in accuracy and adaptability.
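The BadRank idea the abstract borrows can be sketched as follows: spam scores start at 1.0 on a seed set of known spam pages and propagate backwards, so pages that link to spam accumulate suspicion. The decay factor is an illustrative assumption, and the paper's link-similarity and cheating-coefficient refinements are not reproduced.

```python
# BadRank-style propagation from a spam seed set (toy sketch).

def badrank(links, seeds, decay=0.85, iters=30):
    """links: {page: [pages it links to]}; seeds: set of known spam pages."""
    score = {p: (1.0 if p in seeds else 0.0) for p in links}
    for _ in range(iters):
        new = {}
        for p, outs in links.items():
            if p in seeds:
                new[p] = 1.0  # seed scores stay fixed
            elif outs:
                # suspicion flows backwards: linking to spam raises your score
                new[p] = decay * sum(score[q] for q in outs) / len(outs)
            else:
                new[p] = 0.0
        score = new
    return score

links = {"spam1": ["spam2"], "spam2": ["spam1"],
         "farm": ["spam1", "spam2"], "honest": ["news"], "news": []}
s = badrank(links, seeds={"spam1", "spam2"})
print(s["farm"] > s["honest"])  # True: "farm" links only to spam pages
```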

9.
王有为, 汪定伟. 《控制与决策》(Control and Decision), 2002, 17(Z1): 695-698
This paper defines link reachability and page reachability. To compute page reachability, a path tree spanning algorithm (PTSA) is designed to compute the paths reaching a page. Based on the design principle of maximizing the correlation between page importance and page reachability, a mathematical model for optimal website link-structure design is established, and a solution method that embeds PTSA in tabu search is proposed. Experimental results show that the method can help website designers build e-supermarket websites with well-structured links.
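The reachability notion can be illustrated with a small stand-in: in a site's link structure (assumed acyclic here), a page's reachability is proxied by the number of distinct click paths from the home page to it, computed recursively with memoization. The real PTSA builds explicit path trees and handles more general structures.

```python
# Count distinct navigation paths from a root page to a target page (toy proxy
# for page reachability; assumes the link structure is acyclic).
from functools import lru_cache

def path_count(links, root, target):
    """links: {page: [linked pages]}."""
    @lru_cache(maxsize=None)
    def paths_from(page):
        if page == target:
            return 1
        return sum(paths_from(c) for c in links.get(page, []))
    return paths_from(root)

site = {"home": ["cat1", "cat2"], "cat1": ["item"], "cat2": ["item"]}
print(path_count(site, "home", "item"))  # → 2 (via cat1 and via cat2)
```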

10.
Attackers' use of URL-shortening services defeats malicious web page detection that relies solely on URL features, and malicious and benign pages are highly imbalanced in detection data. To address both problems, this paper proposes a malicious web page detection method that combines hierarchical semantic-tree features of page content with cost-sensitive learning. The method builds a hierarchical semantic tree over the page's content and links and extracts tree-based features, solving the feature failure caused by URL shortening, while a cost-sensitive detection model addresses the class imbalance. Experimental results show that, compared with existing methods, the proposed method not only copes with shortening services but also achieves a low miss rate of 2.1% and a false-positive rate of 3.3% on the imbalanced malicious-page detection task. In addition, on a dataset of 250,000 unlabeled samples, its recall is 38.2% higher than that of the anti-virus tool VirusTotal.
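One simple way to see the cost-sensitive idea: when malicious pages are rare, the decision threshold can be chosen to minimise expected cost rather than raw error, with a missed malicious page costing far more than a false alarm. The costs and scores below are made-up toy values, not the paper's learned model.

```python
# Pick the classification threshold that minimises expected cost when a
# false negative (missed malicious page) is much costlier than a false positive.

def best_threshold(scored, c_fn=10.0, c_fp=1.0):
    """scored: list of (malicious_probability, true_label); label 1 = malicious."""
    candidates = sorted({p for p, _ in scored})
    best, best_cost = 0.5, float("inf")
    for t in candidates:
        cost = sum(c_fn if (p < t and y == 1) else
                   c_fp if (p >= t and y == 0) else 0.0
                   for p, y in scored)
        if cost < best_cost:
            best, best_cost = t, cost
    return best

data = [(0.9, 1), (0.3, 1), (0.35, 0), (0.1, 0), (0.05, 0)]
t = best_threshold(data)
print(t)  # → 0.3: a low threshold, tolerating a false alarm to catch both spam pages
```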

11.
In this work we propose a model to represent the web as a directed hypergraph (instead of a graph), where links connect pairs of disjoint sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, in order to provide more reliable information for link analysis. We use the hypergraph model to create the hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. The hypergraph is derived from the web graph by grouping pages by two different partition criteria: grouping together the pages that belong to the same web host or to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions of the algorithms, considering a combination of the page reputation, the textual content of the pages and the anchor text. Experimental results using three distinct web collections show that the HyperPagerank and HyperIndegree algorithms may yield better results than the original graph versions of the Pagerank and Indegree algorithms. We also show that the hypergraph versions of the algorithms were slightly less affected by noise links and spamming.
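The HyperIndegree idea can be sketched concretely: pages are grouped into blocks (here, by host), a hyperarc connects a whole source block to a target page, and the score counts distinct linking blocks rather than raw in-links, which damps the effect of many links from a single (possibly spamming) host. This is a minimal sketch, not the paper's full algorithm.

```python
# HyperIndegree sketch: count distinct source hosts (blocks) linking to a page.
from urllib.parse import urlparse

def hyper_indegree(links):
    """links: list of (source_url, target_url); returns {target: #linking blocks}."""
    blocks = {}
    for src, dst in links:
        s_host, d_host = urlparse(src).netloc, urlparse(dst).netloc
        if s_host != d_host:  # only cross-block links form hyperarcs
            blocks.setdefault(dst, set()).add(s_host)
    return {dst: len(hosts) for dst, hosts in blocks.items()}

links = [("http://a.com/1", "http://t.com/x"),
         ("http://a.com/2", "http://t.com/x"),   # same host: same hyperarc
         ("http://b.com/1", "http://t.com/x"),
         ("http://t.com/y", "http://t.com/x")]   # intra-host link: ignored
deg = hyper_indegree(links)
print(deg["http://t.com/x"])  # → 2, although the page has 4 raw in-links
```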

12.
To extract topic information from web pages, a topic-information extraction algorithm based on support vector machines (SVM) is proposed. The algorithm first partitions a page into several information blocks; it then builds a feature vector for each block from its text, images, links, and position, and obtains the optimal SVM classification function by training. Finally, the sign of the classification function determines whether a given block contains topic information. In closed tests, precision and gain reach up to 98% and 96%; in open tests, the two measures are 92% and 87%, respectively.
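The classify-blocks-by-sign idea can be illustrated with a pure-stdlib stand-in: a perceptron replaces the SVM (both yield a linear decision function whose sign labels a block), each block is a small feature vector, and the training data is made up. This is a sketch of the scheme, not the paper's trained model or feature set.

```python
# Toy linear classifier over block features; sign of the decision function
# says whether a block is main (topic) content. A perceptron stands in for
# the SVM so the example stays dependency-free.

def train_perceptron(samples, epochs=20):
    """samples: list of (features, label) with label +1 (topic) / -1 (noise)."""
    w = [0.0] * (len(samples[0][0]) + 1)  # last weight is the bias
    for _ in range(epochs):
        for x, y in samples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + w[-1]) <= 0:
                for i, xi in enumerate(x):
                    w[i] += y * xi
                w[-1] += y
    return w

def is_topic_block(w, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + w[-1] > 0

# made-up features: (normalised text length, link density)
train = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), -1), ((0.2, 0.8), -1)]
w = train_perceptron(train)
print(is_topic_block(w, (0.85, 0.15)))  # long text, few links: main content
```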

13.
The complexity of web information environments and multiple‐topic web pages are negative factors significantly affecting the performance of focused crawling. A highly relevant region in a web page may be obscured because of low overall relevance of that page. Segmenting the web pages into smaller units will significantly improve the performance. Conquering and traversing an irrelevant page to reach a relevant one (tunneling) can improve the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic‐based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)‐based page partition algorithm to segment a web page into content blocks with a hierarchical structure and investigates how to take advantage of block‐level evidence to enhance focused crawling by tunneling. Page segmentation can transform an uninteresting multi‐topic web page into several single‐topic context blocks, some of which may be interesting. Accordingly, the focused crawler can pursue the interesting content blocks to retrieve the relevant pages. Experimental results indicate that this approach outperforms the Breadth‐First, Best‐First and Link‐context algorithms in harvest rate, target recall and target length. Copyright © 2007 John Wiley & Sons, Ltd.
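The block-level tunneling decision can be reduced to a toy rule: segmentation itself is assumed already done (e.g. by a DOM partition), and the crawler follows a page if any single block is sufficiently relevant, even when the page-level average would reject it. Scores and thresholds below are illustrative assumptions.

```python
# Toy block-level tunneling: one highly relevant block rescues a page whose
# overall (page-level) relevance is low.

def page_relevance(blocks):
    """blocks: per-block relevance scores in [0, 1]."""
    return sum(blocks) / len(blocks)

def should_crawl(blocks, page_threshold=0.5, block_threshold=0.7):
    return page_relevance(blocks) >= page_threshold or max(blocks) >= block_threshold

mixed_page = [0.1, 0.05, 0.9, 0.2]       # multi-topic page, one relevant block
print(page_relevance(mixed_page) < 0.5)  # page-level score alone would reject it
print(should_crawl(mixed_page))          # block-level evidence keeps it
```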

14.
We present in this paper a model for indexing and querying web pages, based on the hierarchical decomposition of pages into blocks. Splitting up a page into blocks has several advantages in terms of page design, indexing and querying, such as: (i) blocks of a page most similar to a query may be returned instead of the page as a whole; (ii) the importance of a block can be taken into account; as well as (iii) the permeability of the blocks to neighbor blocks: a block b is said to be permeable to a block b′ in the same page if b′'s content (text, image, etc.) can be (partially) inherited by b upon indexing. An engine implementing this model is described, including: the transformation of web pages into block hierarchies, the definition of a dedicated language to express indexing rules, and the storage of indexed blocks in an XML repository. The model is assessed on a dataset of electronic news and a dataset drawn from web pages of the ImagEval campaign, where it improves the mean average precision of the baseline by 16%.

15.
A Method for Measuring the Available Bandwidth of Arbitrary Links (cited by 3)
何莉, 余顺争. 《软件学报》(Journal of Software), 2009, 20(4): 997-1013
Available-bandwidth measurement plays an important role in network behavior analysis and the verification of network quality of service (QoS). Existing measurement work focuses mainly on end-to-end path available bandwidth, which only provides information about the path's tight link and cannot characterize other key links. This paper therefore proposes a novel link available-bandwidth measurement algorithm, LinkPPQ (trains of pairs of packet-quartets used to measure available bandwidth of arbitrary links), which uses probe trains built from pairs of four-packet structures to measure the available bandwidth of any link in the network and to track changes in the background traffic on that link. The performance of LinkPPQ is studied in both simulated and real network environments. Simulation results show that, under several background-traffic scenarios, LinkPPQ can effectively measure each link's available bandwidth on paths with either a single narrow link or multiple narrow links; in the vast majority of cases the measurement error is below 30%, with good stability. Testbed experiments also show that LinkPPQ can accurately measure link available bandwidth in the following cases: (a) measuring a 100 Mbps link from a link with 10 Mbps capacity; (b) measuring a link whose capacity is ten times that of the narrow link immediately downstream; (c) measuring each narrow link on a path with multiple narrow links.
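For background, the simplest relative of this family of techniques is the classic packet-pair estimate, which recovers a bottleneck link's capacity from the dispersion two back-to-back probe packets acquire crossing it: C = L / Δt. LinkPPQ builds on richer four-packet probe structures to get per-link available bandwidth, which this toy does not attempt.

```python
# Classic packet-pair capacity estimate (background sketch, not LinkPPQ).

def packet_pair_capacity(packet_bits, dispersion_s):
    """Capacity in bits/s from probe size (bits) and inter-arrival gap (s)."""
    return packet_bits / dispersion_s

# 1500-byte probes arriving 1.2 ms apart => ~10 Mbps bottleneck
cap = packet_pair_capacity(1500 * 8, 0.0012)
print(round(cap / 1e6, 1))  # → 10.0
```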

16.
Existing web page segmentation methods are all based on the Document Object Model and are relatively difficult to implement. This paper proposes a new segmentation method that works on a string data model, using page titles to drive the segmentation. First, title features are learned via machine learning from the page's line-block distribution function and title tags; then the page is segmented into content blocks at each title; finally, blocks are merged by block depth to complete the segmentation. Theoretical analysis and experimental results show that the algorithm has O(n) time and space complexity, segments pages from university portals, blogs, and resource sites well, and can serve many web information management applications, giving it good application prospects.
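The string-model idea can be sketched without any DOM: heading tags stand in for the learned title features, and the page string is split into blocks at each heading. The regex and the heading-as-title assumption are illustrative; the paper learns its title features rather than hard-coding tags.

```python
# Segment a raw HTML string into (title, content) blocks at heading tags.
import re

TITLE_RE = re.compile(r"<h[1-3][^>]*>(.*?)</h[1-3]>", re.I | re.S)

def segment(html):
    """Return a list of (title, content) blocks from the raw page string."""
    blocks, last_title, last_end = [], None, 0
    for m in TITLE_RE.finditer(html):
        if last_title is not None:
            blocks.append((last_title, html[last_end:m.start()].strip()))
        last_title, last_end = m.group(1).strip(), m.end()
    if last_title is not None:
        blocks.append((last_title, html[last_end:].strip()))
    return blocks

page = "<h1>News</h1><p>today...</p><h2>Sports</h2><p>scores...</p>"
b = segment(page)
print([t for t, _ in b])  # → ['News', 'Sports']
```

A single left-to-right pass over the string keeps the linear-time flavour the abstract claims for its algorithm.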

17.
Strategies in searching a link from a web page can rely either on expectations of prototypical locations or on memories of earlier visits to the page. What is the nature of these expectations, how are locations of web objects remembered, and how do the expectations and memories control search? These questions were investigated in an experiment where, in the experimental group, nine experienced users searched links. To obtain information about expectations, users' eye movements were recorded. Memory for locations of web objects was tested immediately afterwards. In the control group, nine matched users had to guess the locations of web objects without seeing the page. Eye-movement data and control group's guesses both indicated a robust expectation of links residing on the left side of the page. Only the location of task-relevant web objects could be recollected, indicating that deep processing is required for memories to become consciously accessible. A comparison between the experimental group and the control group revealed that what was represented in memory was not an individual link's location but the approximate locations of link panels. We argue that practice-related decreases in reaction time were caused by semantic priming. Roles for the different types of memory in link search are discussed.

18.
A path-based approach for web page retrieval (cited by 1)
Use of links to enhance page ranking has been widely studied. The underlying assumption is that links convey recommendations. Although this technique has been used successfully in global web search, it produces poor results for website search, because the majority of the links in a website are used to organize information and convey no recommendations. By distinguishing these two kinds of links, respectively for recommendation and information organization, this paper describes a path-based method for web page ranking. We define the Hierarchical Navigation Path (HNP) as a new resource for improving web search. HNP is composed of multi-step navigation information in visitors' website browsing. It provides indications of the content of the destination page. We first classify the links inside a website. Then, the links for web page organization are exploited to construct the HNPs for each page. Finally, the PathRank algorithm is described for web page retrieval. The experiments show that our approach results in significant improvements over existing solutions.
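The HNP construction step can be sketched concretely: once links are classified, the information-organisation links form a tree, and each page's HNP is the navigation path from the home page down to it. The site tree, separator, and names below are illustrative; the paper's link classifier and PathRank scoring are not reproduced.

```python
# Build Hierarchical Navigation Paths (HNPs) from organisation links.

def build_hnps(nav_links, root):
    """nav_links: {parent_page: [child pages]} (organisation links only)."""
    hnp, stack = {root: root}, [root]
    while stack:
        page = stack.pop()
        for child in nav_links.get(page, []):
            hnp[child] = hnp[page] + " > " + child  # extend parent's path
            stack.append(child)
    return hnp

site = {"home": ["products", "about"], "products": ["laptops"]}
paths = build_hnps(site, "home")
print(paths["laptops"])  # → home > products > laptops
```

The resulting path strings describe the destination page ("laptops" is something under "products") and can be indexed alongside page text for retrieval.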

19.
Dynamic Traffic Assignment with More Flexible Modelling within Links (cited by 1)
Traffic network models tend to become very large even for medium-size static assignment problems. Adding a time dimension, together with time-varying flows and travel times within links and queues, greatly increases the scale and complexity of the problem. In view of this, to retain tractability in dynamic traffic assignment (DTA) formulations, especially in mathematical programming formulations, additional assumptions are normally introduced. In particular, the time varying flows and travel times within links are formulated as so-called whole-link models. We consider the most commonly used of these whole-link models and some of their limitations. In current whole-link travel-time models, a vehicle's travel time on a link is treated as a function only of the number of vehicles on the link at the moment the vehicle enters. We first relax this by letting a vehicle's travel time depend on the inflow rate when it enters and the outflow rate when it exits. We further relax the dynamic assignment formulation by stating it as a bi-level program, consisting of a network model and a set of link travel time sub-models, one for each link. The former (the network model) takes the link travel times as bounded and assigns flows to links and routes. The latter (the set of link models) does the reverse, that is, takes link inflows as given and finds bounds on link travel times. We solve this combined model by iterating between the network model and link sub-models until a consistent solution is found. This decomposition allows a much wider range of link flow or travel time models to be used. In particular, the link travel time models need not be whole-link models and can be detailed models of flow, speed and density varying along the link. In our numerical examples, algorithms designed to solve this bi-level program converged quickly, but much remains to be done in exploring this approach further.
The algorithms for solving the bi-level formulation may be interpreted as traveller learning behaviour, hence as a day-to-day traffic dynamics. Thus, even though in our experiments the algorithms always converged, their behaviour is still of interest even if they cycled rather than converged. Directions for further research are noted. The bi-level model can be extended to handle issues and features similar to those addressed by other DTA models.
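The iterate-between-levels idea can be shown on a static toy: a network model assigns demand to the currently faster of two parallel links, link sub-models recompute travel times from the new flows (BPR-style functions with made-up parameters), and averaging the flows (method of successive averages) makes the iteration settle near equilibrium, echoing the day-to-day learning interpretation. This is a drastic simplification of the paper's dynamic bi-level program.

```python
# Toy bi-level iteration: network model (all-or-nothing assignment) alternates
# with link sub-models (BPR travel-time functions); MSA damps the oscillation.

def travel_time(flow, free_time, capacity):
    return free_time * (1 + 0.15 * (flow / capacity) ** 4)  # standard BPR form

def assign(demand=100.0, iters=200):
    links = [(10.0, 60.0), (12.0, 90.0)]  # (free-flow time, capacity) per link
    flows = [demand / 2, demand / 2]
    for k in range(1, iters + 1):
        # link sub-models: travel times from current flows
        times = [travel_time(f, t0, c) for f, (t0, c) in zip(flows, links)]
        # network model: send all demand to the currently faster link
        target = [demand if times[0] <= times[1] else 0.0]
        target.append(demand - target[0])
        # method of successive averages keeps the iteration from cycling
        flows = [f + (t - f) / k for f, t in zip(flows, target)]
    return flows, [travel_time(f, t0, c) for f, (t0, c) in zip(flows, links)]

flows, times = assign()
print([round(f, 1) for f in flows])  # most demand on the faster link
```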
