Similar Articles
20 similar articles found.
1.
In this article we first explain the knowledge extraction (KE) process from the World Wide Web (WWW) using search engines. We then explore the PageRank algorithm of the Google search engine (a well-known link-based search engine) through a hidden Markov analysis. We also examine one of the problems of link-based ranking algorithms: hanging or dangling pages, i.e., pages without any forward links. The presence of these pages affects the ranking of Web pages, and some hanging pages may contain important information that the search engine cannot neglect during ranking. We propose methodologies to handle hanging pages and compare them. We also introduce the TrustRank algorithm (an algorithm for handling spam in link-based search engines) and incorporate it into our proposed methods so that they can combat Web spam. We implemented the PageRank and TrustRank algorithms and modified them to realize our proposed methodologies.
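The hanging-page issue raised above lends itself to a brief illustration. The following is a minimal sketch, not the authors' method: a plain power-iteration PageRank in which the rank mass of pages without forward links is redistributed uniformly over all pages (one common remedy); the example graph and parameters are invented.

```python
# Minimal PageRank sketch with uniform redistribution of the rank mass
# held by hanging (dangling) pages. Illustrative graph and parameters.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Rank mass currently sitting on pages with no forward links.
        dangling = sum(rank[p] for p in pages if not links[p])
        new = {p: (1 - damping) / n + damping * dangling / n for p in pages}
        for p in pages:
            for q in links[p]:
                new[q] += damping * rank[p] / len(links[p])
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": [], "D": ["A"]}  # C is a hanging page
print(pagerank(graph))
```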

2.
Web spam denotes the manipulation of web pages with the sole intent of raising their position in search engine rankings. Since a better position in the rankings directly and positively affects the number of visits to a site, attackers use different techniques to boost their pages to higher ranks. In the best case, web spam pages are a nuisance that provides undeserved advertising revenue to the page owners. In the worst case, these pages pose a threat to Internet users by hosting malicious content and launching drive-by attacks against unsuspecting victims; when successful, these attacks install malware on the victims' machines. In this paper, we introduce an approach to detect web spam pages in the list of results returned by a search engine. In the first step, we determine the importance of different page features to the ranking in search engine results. Based on this information, we develop a classification technique that uses the important features to distinguish spam sites from legitimate entries. By removing spam sites from the results, more slots become available for links that point to pages with useful content. More importantly, the threat posed by malicious web sites can be mitigated, reducing the risk for users of becoming infected by malicious code that spreads via drive-by attacks.

3.
Web spam has become one of the most pressing challenges and threats to web search engines. The adversarial relationship between search systems and those who try to manipulate them gave rise to the field of adversarial information retrieval. In this article, we set up several experiments comparing HostRank and TrustRank to show how effective TrustRank is at combating web spam, and we report a comparison of different link-based web spam detection algorithms.
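TrustRank, in essence, is a PageRank biased toward a manually vetted seed set of trusted pages, so that trust decays with distance from the seeds and isolated spam clusters receive little of it. A minimal sketch under that reading; the graph, seed set, and parameters are invented for illustration:

```python
# Sketch of TrustRank as a PageRank biased toward a trusted seed set.
# The graph, seed set, and parameters are illustrative assumptions.

def trustrank(links, seeds, damping=0.85, iters=50):
    pages = list(links)
    # Teleport distribution: uniform over trusted seeds, zero elsewhere.
    d = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(d)
    for _ in range(iters):
        new = {p: (1 - damping) * d[p] for p in pages}
        for p in pages:
            for q in links[p]:
                new[q] += damping * trust[p] / len(links[p])
        trust = new
    return trust  # low scores mark pages unreachable from the trusted seeds

graph = {"good": ["a"], "a": ["b"], "b": ["good"],
         "spam": ["spam2"], "spam2": ["spam"]}
print(trustrank(graph, seeds={"good"}))
```

In this toy graph the two spam pages only link to each other and accumulate no trust, while pages reachable from the seed do.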

4.
Web spam refers to pages that deceive search engines through content spamming and inter-page link spamming in order to boost their own search rankings, degrading the accuracy and relevance of search results. This paper proposes a Web spam detection method based on the Co-Training model. It uses two mutually independent groups of page features, content-based statistical features and link features based on the Web graph, to build two independent base classifiers, and applies the Co-Training semi-supervised learning algorithm to improve classifier quality with the help of a large amount of unlabeled data. Experiments on the WEB SPAM-UK2007 dataset show that the algorithm improves the performance of the SVM classifier.
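A minimal sketch of the co-training loop described above, assuming scikit-learn is available; synthetic Gaussian data stands in for the two feature views (content statistics and link features), and the confidence threshold, batch size, and round count are arbitrary choices, not the paper's settings:

```python
# Sketch of co-training with two independent feature views.
# Synthetic data; requires numpy and scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)
view1 = y[:, None] + rng.normal(0.0, 1.0, (n, 5))  # stand-in: content features
view2 = y[:, None] + rng.normal(0.0, 1.0, (n, 5))  # stand-in: link features

L = set(range(40))                  # small labeled pool
unlabeled = set(range(40, n))

for _ in range(10):                 # co-training rounds
    lab = sorted(L)
    c1 = SVC(probability=True).fit(view1[lab], y[lab])
    c2 = SVC(probability=True).fit(view2[lab], y[lab])
    for clf, view in ((c1, view1), (c2, view2)):
        if not unlabeled:
            break
        idx = np.array(sorted(unlabeled))
        proba = clf.predict_proba(view[idx])
        conf = proba.max(axis=1)
        # Each view's classifier pseudo-labels its most confident unlabeled
        # examples and adds them to the shared labeled pool.
        for k in np.argsort(-conf)[:5]:
            if conf[k] < 0.95:
                continue
            i = int(idx[k])
            y[i] = int(proba[k].argmax())  # pseudo-label (may overwrite truth)
            L.add(i)
            unlabeled.discard(i)

print(f"labeled pool grew to {len(L)} examples")
```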

5.
Classification of spam pages based on intent analysis
With the rapid development of the Internet, spam pages produced by Web spamming have grown ever more numerous, severely harming the retrieval efficiency of search engines and the user experience. Anti-spam has become one of the most important challenges facing search engines, yet most current anti-spam research is based on page content or link features, and no generally applicable detection method exists. Based on an analysis of spamming intent, this paper presents an alternative taxonomy of spam pages, providing useful guidance for intent-based spam page detection.

6.
7.
Application of SEO techniques in website development
Starting from the basic principles of search engines, this paper first analyzes the main factors that affect a page's search ranking, then discusses SEO techniques for website development under three topics: URL rewriting, elimination of duplicate content, and HTML optimization.

8.
With the development of Web technology and the ever-growing volume of information on the Web, providing high-quality, relevant query results has become a major challenge for Web search engines. PageRank and HITS are the two most important link-based ranking algorithms and are used in commercial search engines. In the PageRank algorithm, however, each page's PR value is distributed evenly among all the pages it links to, completely ignoring quality differences between pages; such an algorithm is easily attacked by today's Web spam. Based on this observation, this paper proposes an improvement to PageRank called the Page Quality Based PageRank (QPR) algorithm. QPR dynamically evaluates the quality of each page and distributes each page's PR value fairly according to that quality. Comprehensive experiments on several datasets with different characteristics show that QPR greatly improves the ranking of query results and effectively mitigates the influence of spam pages on them.
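The core idea, splitting a page's PR value among its out-links in proportion to the quality of the target pages rather than evenly, can be sketched as follows. This is an illustration, not the paper's QPR implementation: the quality scores are supplied as placeholders, whereas QPR estimates them dynamically.

```python
# Sketch of quality-weighted PageRank: a page's rank is split among its
# out-links in proportion to an externally supplied quality score,
# instead of uniformly as in classic PageRank. Placeholder data.

def quality_pagerank(links, quality, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p in pages:
            total_q = sum(quality[q] for q in links[p])
            for q in links[p]:
                new[q] += damping * rank[p] * quality[q] / total_q
        rank = new
    return rank

graph   = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
quality = {"A": 0.9, "B": 0.8, "C": 0.1}  # placeholder quality estimates
print(quality_pagerank(graph, quality))
```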

9.
An improved PageRank algorithm resistant to link spam
Numerous link-based search engine spamming methods, such as link farms, link exchanges, "golden links", and "wealth links", have severely undermined the fairness and authority of traditional PageRank scores. After analyzing the harmful effects of various spamming methods on the traditional PageRank algorithm, this paper proposes a three-stage PageRank algorithm resistant to link spam, TSPageRank, and analyzes its principles in detail. Experiments show that TSPageRank improves on traditional PageRank by 59.4% in effectiveness: it can effectively raise the PageRank of important pages while lowering that of spam pages.

10.
This article introduces the important methods and events of the browser control in Visual Basic 6.0 and uses the control to design a browser that can load pages, go back to the previous page, go forward to the next page, refresh the page, and search for web pages or sites. The browser is compact and its code concise, so it can easily be embedded in applications to strengthen their networking capabilities.

11.
Currently, web spamming is a serious problem for search engines. It not only degrades the quality of search results by intentionally boosting undesirable web pages, but also causes the search engine to waste significant computational and storage resources on useless information. In this paper, we present a novel ensemble classifier for web spam detection, called USCS, which combines a clonal selection algorithm for feature selection with under-sampling for data balancing. The USCS ensemble classifier can automatically sample and select sub-classifiers. First, the system converts the imbalanced training dataset into several balanced datasets using under-sampling. Second, it automatically selects an optimal feature subset for each sub-classifier using a customized clonal selection algorithm. Third, it builds a C4.5 decision tree sub-classifier from each balanced dataset using the selected features. Finally, these sub-classifiers are combined into an ensemble decision tree classifier that classifies the examples in the test data. Experiments on the WEBSPAM-UK2006 web spam dataset show that our proposed approach, the USCS ensemble web spam classifier, achieves significantly better classification performance than several baseline systems and state-of-the-art approaches.
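A condensed sketch of the under-sampling-plus-ensemble part of this design; the clonal-selection feature search is omitted, scikit-learn's CART trees stand in for C4.5, and the data is synthetic. Several balanced subsets are drawn from the imbalanced training set, one tree is fit per subset, and predictions are combined by majority vote.

```python
# Sketch: under-sampled ensemble of decision trees for imbalanced spam data.
# Synthetic data; the clonal-selection feature selection step is omitted.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.5).astype(int)  # rare positives

pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
trees = []
for _ in range(11):  # odd number of trees -> clean majority vote
    sample_neg = rng.choice(neg, size=len(pos), replace=False)  # under-sample
    idx = np.concatenate([pos, sample_neg])                     # balanced subset
    trees.append(DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]))

votes = np.mean([t.predict(X) for t in trees], axis=0)
pred = (votes >= 0.5).astype(int)
print("recall on minority class:", (pred[pos] == 1).mean())
```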

12.
Search engine optimization (SEO) has developed rapidly in recent years and is now widely applied to the optimization of e-commerce websites. This paper analyzes the concept of SEO and proposes optimization strategies for page titles, Meta tags, keywords, page design, and related elements.

13.
This study investigates how a search interface that displays users' ultimate query (i.e., their current search goal) can cope with the age-related decline of fluid abilities and support older users' search behaviours. Thirty young and 18 older adults completed nine search problems with either a regular web browser or the experimental search interface. Results showed that older adults spent more time on the search engine result pages, needed more time to reformulate, and had more difficulty exploring the search paths they elaborated. Age differences also appeared as early as the beginning of the search. The support tool helped older users reformulate their queries more rapidly and elaborate more flexible search strategies at the start of the activity: older adults who interacted with the support tool switched to a new search path more rapidly instead of continuing to exploit their initial query (i.e., they visited fewer websites for the initial query and reformulated instead of investigating the initial result page more deeply). Implications of these findings for the design of effective support tools for older users are discussed.

14.
Drive-by-download malware exposes internet users to infection of their personal computers, which can occur simply by visiting a website containing malicious content, and can lead to a major threat to the user's most sensitive information. Popular browsers such as Firefox, Internet Explorer and Maxthon have extensions that block JavaScript, Flash and other executable content. Some extensions globally block all dynamic content; in others, the user must specifically enable the content for each site they trust. Since most web pages today contain dynamic content, disabling it damages user experience and page usability, which deters many users from installing security extensions. We propose a novel approach, based on Social Network Analysis parameters, that predicts the trustworthiness of the HTML page currently being viewed. Our system examines the reputation of the URL in the browser's address bar and of each inner HTML URL, and marks the webpage as trusted only if all of them have a reputation greater than a predetermined threshold. Each URL's reputation is calculated from the number and quality of the links across the whole web pointing back to it. The method was evaluated on a corpus of 44,429 malware domains and on the 2,000 most popular Alexa sites. Our system managed to enable dynamic content on 70% of the most popular websites and to block 100% of malware web pages, all without any user intervention. Our approach can augment most browser security applications and enhance their effectiveness, encouraging more users to install these important applications.
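The trust decision itself reduces to a conjunction over URL reputations. A minimal sketch with reputation() stubbed out; the real system scores each URL from the number and quality of inbound links, whereas the threshold and lookup table here are invented:

```python
# Sketch of the all-URLs-must-pass trust rule. reputation() is a stub;
# the real system derives each score from inbound-link count and quality.

THRESHOLD = 0.6  # assumed cut-off

def reputation(url: str) -> float:
    # Placeholder: a real implementation would query link-graph statistics.
    known = {"https://example.com": 0.9, "https://cdn.example.com/app.js": 0.8}
    return known.get(url, 0.0)

def page_is_trusted(address_bar_url: str, inner_urls: list[str]) -> bool:
    # Trusted only if the address-bar URL and every inner URL pass.
    return all(reputation(u) >= THRESHOLD for u in [address_bar_url, *inner_urls])

print(page_is_trusted("https://example.com",
                      ["https://cdn.example.com/app.js"]))       # True
print(page_is_trusted("https://example.com",
                      ["https://evil.example.net/payload.js"]))  # False
```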

15.
System performance assessment and comparison are fundamental to large-scale image search engine development. This article documents a set of comprehensive empirical studies exploring the effects of multiple query evidences on large-scale social image search. Search performance based on social tags, different kinds of visual features, and their combinations is systematically studied and analyzed. To quantify visual query complexity, a novel quantitative metric is proposed and applied to assess the influence of different visual queries according to their complexity levels. We also study the effect on retrieval performance of automatic text query expansion with social tags using a pseudo relevance feedback method. Our analysis of the experimental results yields several key findings: (1) social tag-based retrieval methods achieve much better results than content-based retrieval methods; (2) a combination of textual and visual features can significantly and consistently improve search performance; (3) the complexity of image queries is strongly correlated with the quality of retrieval results: more complex queries lead to poorer search effectiveness; and (4) query expansion based on social tags frequently causes search topic drift and consequently degrades performance.

16.
Spam pages have recently increased sharply, greatly harming the precision and efficiency of search engines, and defending against them has become a very important problem. This article combines content-based and link-based spam detection and proposes a two-step method for detecting spam pages: the first step is a filtering stage that generates a candidate list of spam pages; the second step detects the final spam pages from the candidates using an automatic classifier.

17.
In this paper, we present a new rule-based method to detect phishing attacks in internet banking. It relies on two novel feature sets proposed to determine webpage identity: four features that evaluate page-resource identity, and four that identify the access protocol of page-resource elements. In the first feature set, we use approximate string matching algorithms to determine the relationship between a page's content and its URL. Our features are independent of third-party services such as search engine results or browser history. We employ a support vector machine (SVM) to classify webpages. Experiments indicate that the proposed model detects phishing pages in internet banking with a 99.14% true positive rate and only a 0.86% false negative rate. A sensitivity analysis demonstrates the significant impact of the proposed features over traditional ones. We extracted the hidden knowledge from the SVM model using a related rule-extraction method and embedded the extracted rules into a browser extension named PhishDetector to make the method more functional and easy to use. Evaluation of the implemented extension shows that it detects phishing attacks in internet banking, including zero-day attacks, with high accuracy and reliability.
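As a rough illustration of the classification stage only (not the paper's feature extraction or rule extraction), the sketch below trains an SVM on synthetic eight-dimensional vectors standing in for the two proposed feature sets; the data and score semantics are invented:

```python
# Sketch: SVM over hand-crafted page features for phishing detection.
# Synthetic stand-ins for the two feature sets (page-resource identity
# and resource access protocol). Requires numpy and scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 500
# 8 features in [0, 1]: e.g., share of resources from the page's own
# domain, share fetched over HTTPS, URL/content similarity, ...
X_legit = rng.beta(5, 2, size=(n, 8))  # legitimate pages: higher scores
X_phish = rng.beta(2, 5, size=(n, 8))  # phishing pages: lower scores
X = np.vstack([X_legit, X_phish])
y = np.array([0] * n + [1] * n)        # 1 = phishing

clf = SVC(kernel="rbf").fit(X, y)
page = rng.beta(2, 5, size=(1, 8))     # a suspicious-looking page
print("phishing" if clf.predict(page)[0] == 1 else "legitimate")
```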

18.
Classification of Chinese spam web pages based on content and link features
As search engines become ever more widely used, Web spam has become a major challenge they face. Researchers at home and abroad have proposed many anti-spam techniques based on content, links, and other aspects, which can detect spam pages with some effectiveness. Building on this prior work, this paper proposes a method that combines content and link features of web pages and applies machine learning to classify and detect Chinese spam pages. Experimental results show that the method classifies Chinese spam pages effectively.

19.
Link spam is created with the intention of boosting a target's rank in exchange for business profit; this unethical way of deceiving Web search engines is known as Web spam. Many link-spam detection techniques have since been proposed. Web spam detection is a crucial task because of the devastation spam inflicts on Web search engines and its global cost of billions of dollars annually. In this paper, we propose a novel technique that incorporates weight properties to enhance Web spam detection algorithms, where a weight property can be defined as the influence of one Web node on another. We modified existing Web spam detection algorithms with this technique and evaluated their performance on a large public Web spam dataset, WEBSPAM-UK2007. Overall, the modified algorithms outperform the benchmark algorithms by up to 30.5% at the host level and 6.11% at the page level.

20.
As a new medium for user information dissemination, microblogging plays an important role in initiating and spreading online public opinion. The large numbers of noise microblogs and near-duplicate microblogs produced by users' deliberate (posting advertisements) and incidental (reposting) actions severely hamper public opinion analysis and user browsing, so detecting them and purifying microblog data has become a pressing problem. Based on a statistical analysis of the characteristics of noise and near-duplicate microblogs, this paper proposes a filtering method for microblog text streams that combines noise discrimination with content-similarity detection: noise microblogs are filtered out using features such as URL links, character rate, and high-frequency words, while near-duplicate microblogs are detected and removed through a two-level content filter of segment filtering and index filtering. Experiments show that the method purifies microblog data effectively, filtering out near-duplicate and noise microblogs efficiently and accurately.
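A toy version of the first-stage noise filter is sketched below. The URL, character-rate, and high-frequency-word checks follow the features named above, but the thresholds and the spam-word list are invented, and the second-stage similarity detection (segment and index filtering) is not reproduced.

```python
# Sketch: first-stage noise filtering of microblog posts using URL count,
# Chinese-character rate, and high-frequency ad words. Thresholds and the
# spam-word list are illustrative assumptions.
import re

SPAM_WORDS = {"促销", "代购", "加微信"}  # placeholder high-frequency ad words
URL_RE = re.compile(r"https?://\S+")

def is_noise(post: str) -> bool:
    urls = URL_RE.findall(post)
    text = URL_RE.sub("", post)
    chinese = sum("\u4e00" <= ch <= "\u9fff" for ch in text)
    char_rate = chinese / max(len(text), 1)  # share of Chinese characters
    spam_hits = sum(w in post for w in SPAM_WORDS)
    return len(urls) >= 2 or char_rate < 0.3 or spam_hits >= 2

print(is_noise("超低价促销,代购名表 http://t.cn/x http://t.cn/y"))  # True
print(is_noise("今天天气真好,出去走走。"))                           # False
```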
