Similar Documents
17 similar documents found (search time: 171 ms)
1.
A Deep Detection System for Phishing Web Pages Based on Ensemble Learning
Web phishing is a form of online fraud in which phishing pages imitate normal, legitimate web pages to steal users' sensitive information for illicit ends. This paper proposes an ensemble-learning-based deep detection method for phishing pages. Page rendering is used to counter common camouflage techniques; URL features, link features, and page-text features are then extracted from the rendered page. Using ensemble learning, a separate base classifier is constructed and trained for each type of feature, and a classifier-combination strategy merges the base classifiers' outputs into the final result. Detection experiments on PhishTank phishing pages show that the proposed method achieves good precision and recall.
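The classifier-combination step above can be sketched as a majority vote over per-feature base classifiers. The three base "classifiers" and the feature names (`url_len`, `ext_link_ratio`, `kw_hits`) are invented stand-ins, not the paper's actual models:

```python
def url_clf(page):
    # Base classifier over URL features: very long URLs are suspicious.
    return 1 if page["url_len"] > 75 else 0

def link_clf(page):
    # Base classifier over link features: mostly-external links are suspicious.
    return 1 if page["ext_link_ratio"] > 0.6 else 0

def text_clf(page):
    # Base classifier over page-text features: many sensitive keywords.
    return 1 if page["kw_hits"] >= 3 else 0

def ensemble_predict(page, classifiers=(url_clf, link_clf, text_clf)):
    """Majority vote over the base classifiers (1 = phishing)."""
    votes = sum(clf(page) for clf in classifiers)
    return 1 if votes > len(classifiers) / 2 else 0

page = {"url_len": 120, "ext_link_ratio": 0.8, "kw_hits": 1}
print(ensemble_predict(page))  # two of three base classifiers vote phishing
```

In practice each base model would be trained on its own feature view; majority voting is only one of several combination strategies the abstract leaves unspecified.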

2.
Computing web page similarity quickly and effectively is the key to discovering phishing pages, and existing detection methods still leave considerable room for improvement. This paper proposes a phishing-page detection model based on Hungarian matching. The model first extracts a text-feature signature, an image-feature signature, and a whole-page signature from the rendered page, characterizing the page fairly completely as it appears after loading. The Hungarian algorithm then computes an optimal matching of a bipartite graph to find matched feature pairs between the signatures of different pages, yielding a more objective measure of page similarity and improving phishing detection. A series of simulation experiments shows that the method is feasible and achieves high precision and recall.
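Matching signature elements between two pages is a minimum-cost bipartite assignment problem. The Hungarian algorithm solves it in O(n³); for clarity this toy sketch brute-forces all assignments, which is fine for tiny signature sets. The signature vectors and the Euclidean cost are illustrative assumptions:

```python
from itertools import permutations

def cost(a, b):
    # Euclidean distance between two feature signatures.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_matching(sigs_a, sigs_b):
    """Return (total_cost, pairing) minimizing the summed signature distances."""
    best = None
    for perm in permutations(range(len(sigs_b))):
        total = sum(cost(sigs_a[i], sigs_b[j]) for i, j in enumerate(perm))
        if best is None or total < best[0]:
            best = (total, list(enumerate(perm)))
    return best

page1 = [(0.1, 0.9), (0.8, 0.2)]
page2 = [(0.79, 0.21), (0.12, 0.88)]
total, pairs = best_matching(page1, page2)
# A low total cost indicates the two pages' signatures match closely.
```

A real implementation would use a polynomial-time solver (e.g. the Hungarian algorithm) instead of enumerating permutations.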

3.
Existing phishing-detection algorithms compare similarity using features such as page text, site structure, or images, and still leave much room for improvement in performance and efficiency. This paper therefore proposes a phishing-website detection algorithm that combines web page noise with n-grams: it extracts the noise of a suspicious page, represents it as pattern features using n-grams, and computes its similarity to protected websites to decide whether the suspicious page is a phishing page. Experiments on phishing-site sample data show that the page features the algorithm relies on are stable, the amount of page data required is small, and both detection performance and efficiency improve substantially over previous algorithms.
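The n-gram similarity step can be sketched by representing a page's noise text as a set of character n-grams and comparing sets with Jaccard similarity. The noise strings here are made up for illustration:

```python
def ngrams(text, n=3):
    # Set of character n-grams of the given text.
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    # Jaccard similarity of two n-gram sets.
    return len(a & b) / len(a | b) if a | b else 0.0

protected = ngrams("copyright examplebank all rights reserved")
suspect = ngrams("copyright examp1ebank all rights reserved")
score = jaccard(protected, suspect)
# A score near 1.0 flags the suspect page as imitating the protected site.
```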

4.
To counter the obfuscation techniques commonly used in phishing URLs, this paper proposes a phishing-page detection method based on rule matching and logistic regression (RMLR). First, a rule base built around obfuscation techniques such as violations of URL naming standards and hidden phishing target words classifies the given page; if the page can be judged a phishing URL directly, the subsequent feature extraction and detection are skipped to meet real-time requirements. Otherwise, relevant URL features are extracted and a logistic regression classifier performs a second-stage detection, improving adaptability and accuracy while reducing the false positives caused by an insufficiently large rule base. RMLR also introduces a Jaccard random-domain identification method based on string similarity to assist phishing-URL detection. Experimental results show that RMLR reaches 98.7% accuracy and detects phishing pages well.
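The two-stage idea can be sketched as cheap URL rules first, with only URLs the rules cannot decide falling through to the trained classifier. The specific rules below are illustrative, not the paper's actual rule base:

```python
import re
from urllib.parse import urlparse

def rule_check(url):
    """Return 'phishing' if a rule fires, else 'undecided'."""
    host = urlparse(url).hostname or ""
    if "@" in url:                                    # user-info trick hides the real host
        return "phishing"
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):  # raw-IP host
        return "phishing"
    if host.count(".") >= 4:                          # excessive subdomain nesting
        return "phishing"
    return "undecided"  # hand off to the logistic regression stage

print(rule_check("http://paypal.com.login.verify.x.example.com/"))  # phishing
print(rule_check("https://example.com/account"))                    # undecided
```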

5.
To address the low accuracy and high false-positive rates of most current phishing-website detection methods, this paper proposes a detection method based on feature selection and ensemble learning. The method first applies the FSIGR algorithm for feature selection: combining the strengths of filter and wrapper modes, it measures features jointly by information relevance and discriminative power, selects features with a forward-incremental, backward-recursive elimination strategy, and evaluates candidate subsets by classification accuracy to obtain the optimal feature subset. A random forest classifier is then trained on data restricted to the optimal subset. Experiments on a UCI dataset show that the method effectively improves detection accuracy and reduces the false-positive rate, making it practically useful.
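The filter side of such a scheme typically ranks features by an information measure; a minimal sketch of information gain on a tiny invented boolean dataset (the feature names are hypothetical):

```python
from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list.
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """H(Y) - H(Y|X) for a discrete feature column."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        sub = [y for x, y in zip(feature, labels) if x == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

labels = [1, 1, 0, 0]          # 1 = phishing
has_ip_host = [1, 1, 0, 0]     # perfectly predictive feature
has_https = [1, 0, 1, 0]       # uninformative feature
print(info_gain(has_ip_host, labels))  # 1.0
print(info_gain(has_https, labels))    # 0.0
```

FSIGR additionally wraps this ranking in a search over subsets scored by classifier accuracy; the sketch shows only the relevance measure.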

6.
A Phishing-Page Detection Algorithm Based on Nested EMD
Web phishing defrauds users with look-alike websites to steal confidential personal information, and has become a major threat to online financial activity. This paper proposes a phishing-page detection framework. As its core detection mechanism, it presents a page-similarity algorithm based on nested EMD (Nested Earth Mover's Distance): the web image is segmented, sub-image features are extracted to build the page's ARG (Attributed Relational Graph), and, after computing distances between the attributes of different ARGs, nested EMD is used to compute page similarity and detect phishing sites. Experimental results show that, compared with existing international work, the algorithm achieves higher precision and stronger adaptability.

7.
To prevent terminals from fetching objectionable content from the network, this paper analyzes common network access control and information-filtering methods and builds a terminal-side network access control model based on information filtering. The model identifies objectionable websites by jointly analyzing URL addresses, keywords, IP addresses, and protocol information, and identifies objectionable pages by analyzing keywords in page text. Based on Windows network filter-driver technology, a Windows terminal network information filtering and access-control application was developed. The software intercepts the Windows terminal's network traffic and applies the model to control access to objectionable websites and pages.

8.
熊忠阳  蔺显强  张玉芳  牙漫 《计算机工程》2013,(12):200-203,210
Web pages contain both main-text content and content unrelated to the main text, and the unrelated content negatively affects the classification, storage, and retrieval of Web pages. To reduce its impact, this paper proposes a main-text extraction method that combines page structure features with text features. Regular expressions remove irrelevant elements as a first-pass filter. The page is then linearly segmented into blocks according to its structural features; each block is labeled a link block or a text block based on its text features, and runs of consecutive noise blocks are used to locate the main-text region and obtain the page's content. Experimental results show that the method extracts main-text content quickly and accurately.
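The two steps above can be sketched as a regex first pass that strips irrelevant elements, followed by a line-block scan that keeps blocks whose text density is high relative to their link markup. The thresholds are illustrative assumptions:

```python
import re

def strip_noise(html):
    # First-pass filter: drop scripts, styles, and comments.
    html = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", "", html)
    return re.sub(r"(?is)<!--.*?-->", "", html)

def text_blocks(html, min_text=20):
    """Classify each line-block as main text or a link block; keep the text."""
    out = []
    for line in strip_noise(html).splitlines():
        links = len(re.findall(r"(?i)<a\b", line))
        text = re.sub(r"<[^>]+>", "", line).strip()
        if len(text) >= min_text and links <= 1:
            out.append(text)  # text block: keep as main content
    return " ".join(out)

html = """<script>var x=1;</script>
<div><a href=#>home</a><a href=#>about</a><a href=#>login</a></div>
<p>This paragraph is long enough to count as real article content.</p>"""
print(text_blocks(html))
```

The paper's method locates the main-text region via runs of consecutive noise blocks; this sketch shows only the per-block link/text classification.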

9.
A Filtering Algorithm for Objectionable Web Pages Based on Weighted Means
In traditional weighted page-filtering algorithms, term weights are mostly determined by term-frequency statistics, which poorly captures how well a keyword represents the topic and is easily evaded by sites employing anti-keyword-filtering tactics. Building on the traditional approach, this paper constructs a weighted keyword matrix dictionary and, starting from association rules and the synonym-class definitions in a Chinese corpus, proposes an association filtering algorithm based on the weighted mean over synonymous terms. Experimental results show that the algorithm filters more effectively, copes well with the anti-keyword strategies of pornographic websites, and is particularly effective at separating pornographic pages from medical pages.
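The core idea can be sketched as scoring a page by the weighted mean over a whole synonym class rather than a single keyword's frequency, so swapping one term for a synonym does not evade the filter. The dictionary weights and terms below are invented:

```python
weights = {"adult": 0.9, "xxx": 0.9, "explicit": 0.8, "clinic": 0.1}
synonym_class = ["adult", "xxx", "explicit"]  # one class of related terms

def class_score(text, terms, dictionary):
    """Weighted mean of dictionary weights over class terms present in the text."""
    hits = [dictionary[t] for t in terms if t in text]
    return sum(hits) / len(hits) if hits else 0.0

page = "explicit adult content"  # would evade a single-keyword filter
print(class_score(page, synonym_class, weights))  # mean of 0.9 and 0.8
```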

10.
To cope with the detection-evasion strategies of phishing websites, this paper proposes a detection algorithm based on the linguistic features of URLs. By analyzing how phishing and legitimate URLs differ across detection domains, it defines primitives and sensitivity to describe their linguistic features. The main domain name is first checked for similarity against the primitives; when similarity falls below a preset threshold, effective subdomain features are selected and a random forest learns and classifies the subdomains' linguistic features. Experimental results show that the algorithm reaches 95.6% accuracy with comparatively low running time and an average recognition time under 1 s.

11.
Discovering topic-specific information sources is a prerequisite for topical Web information integration. This paper proposes a discovery method that casts the problem as website topic classification and uses outbound links to find new sources. Content feature terms and structure feature terms reflecting a site's topic are extracted to build an improved vector space model that describes the site's topic. On this model, websites are classified by combining a class-centroid vector method with SVM. A crawling strategy that fetches as few pages as possible is proposed, so that the pages most representative of a site's topic are crawled while outbound links are discovered. The method was applied to forestry e-commerce information sources, and experiments verify its effectiveness.
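The class-centroid step can be sketched as averaging each topic's term-weight vectors into a centroid and assigning a new site to the nearest centroid by cosine similarity (the SVM refinement is omitted). The tiny vocabulary and vectors are invented:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    # Component-wise mean of a topic's site vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# term order: [forestry, timber, price, phone]
forestry_sites = [[3, 2, 1, 0], [4, 1, 0, 0]]
electronics_sites = [[0, 0, 2, 5], [0, 1, 3, 4]]
centroids = {"forestry": centroid(forestry_sites),
             "electronics": centroid(electronics_sites)}

def classify(vec):
    # Assign the site vector to the topic with the nearest centroid.
    return max(centroids, key=lambda c: cosine(vec, centroids[c]))

print(classify([2, 2, 0, 0]))  # → forestry
```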

12.
In web browsers, a variety of anti-phishing tools and technologies are available to help users identify phishing attempts and potentially harmful pages. Such tools provide Internet users with essential information, such as warnings about spoofed pages. To determine how well users can recognise and identify phishing web pages with anti-phishing tools, we designed and conducted usability tests for two types of phishing-detection applications: blacklist-based and whitelist-based anti-phishing toolbars. The results indicate no significant performance differences between the two application types. We also observed that, in many browsing situations, a significant amount of useful, practical information is absent, such as explanations of professional web page security certificates, which are crucial to user privacy and protection. We further found deficiencies in the web identities presented by web pages and browsers that challenge the design of anti-phishing toolbars; meeting these challenges will require more professional, illustrative, instructional, and reliable information to help users verify the authenticity of web pages and their content.

13.

One of the major challenges in cyberspace and Internet of Things (IoT) environments is the existence of fake or phishing websites that steal users' information. A website, as a multimedia system, provides access to different types of data such as text, images, video, and audio, each of which is prone to being exploited by phishers to mount an attack. In phishing attacks, people are directed to fake pages where their important information is stolen by a thief or phisher. Machine learning and data mining algorithms are widely used for classifying websites and detecting phishing attacks, and classification accuracy depends heavily on the feature selection method used to choose appropriate features. In this research, an improved spotted hyena optimization algorithm (the ISHO algorithm) is proposed to select proper features for classifying phishing websites with a support vector machine. The proposed ISHO algorithm outperformed the standard spotted hyena optimization algorithm in accuracy. In addition, the results indicate the superiority of the ISHO algorithm over three other meta-heuristics: particle swarm optimization, the firefly algorithm, and the bat algorithm. The proposed algorithm is also compared with a number of previously proposed classification algorithms on the same dataset.


14.
Distinguishing phishing websites from legitimate ones is a great challenge for web service providers because the users of the two are indistinguishable. Phishing websites also generate traffic across the entire network, and their spreading malware heightens the demand for detection methods that can process massive datasets (i.e., big data). Although boosting mechanisms have been applied to phishing detection, these methods are prone to significant output errors, specifically because all website features are combined during training. Emerging big data systems rely on MapReduce, a popular parallel programming model, to process massive datasets. To address these issues, a probabilistic latent semantic and greedy levy gradient boosting (PLS-GLGB) algorithm for website phishing detection using MapReduce is proposed. A feature-selection-based model using probabilistic intersective latent semantic preprocessing minimizes errors in website phishing detection: missing data in each URL are identified and discarded to ensure data quality. With the preprocessed features (URLs), feature vectors are then updated by the greedy levy divergence gradient model, which selects the optimal features in the URL and accurately detects the websites; greedy levy thus efficiently differentiates phishing websites from legitimate ones. Experiments are conducted using one of the largest public corpora, a website phish tank dataset. Results show that the PLS-GLGB algorithm outperforms state-of-the-art phishing detection methods, while saving significant amounts of detection time and reducing errors.

15.
Phishing attacks are growing significantly each year and are considered one of the most dangerous threats on the Internet, capable of causing people to lose confidence in e-commerce. In this paper, we present a heuristic method to determine whether a web page is legitimate or phishing. The scheme can detect new phishing pages that blacklist-based anti-phishing tools cannot. We first convert a web page into 12 features, well selected based on existing normal and phishing pages. A training set of web pages, including normal and phishing pages, is then input to a support vector machine for training, and a testing set is finally fed into the trained model. Compared with existing methods, the experimental results show that the proposed phishing detector achieves a high accuracy rate with relatively low false positive and false negative rates.

16.
A Fast Deduplication Algorithm for Large-Scale Chinese Web Pages Based on Feature Strings
In web search results, users frequently encounter redundant pages with identical content, largely caused by reposting between websites. Such pages waste storage resources and inconvenience users' retrieval. Based on the characteristics of redundant pages, this paper introduces fuzzy-matching ideas and, using both the content and the structure of page text, proposes a fast feature-string-based deduplication algorithm for Chinese web pages, along with optimizations of the algorithm. Experimental results show the algorithm is effective: in large-scale open tests, the recall of duplicate pages reaches 97.3% and deduplication precision reaches 99.5%.
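The fuzzy-matching idea can be sketched as deriving a short feature string from the page text and treating pages with equal fingerprints as duplicates, so that trivial edits by reposting sites do not break the match. The fingerprint rule below (first character of each sentence-like segment) is a simplified stand-in for the paper's feature strings:

```python
import hashlib

def feature_string(text):
    # Segment on sentence punctuation, keep each segment's first character.
    segs = [s.strip() for s in text.replace("!", ".").split(".") if s.strip()]
    return "".join(s[0] for s in segs)

def fingerprint(text):
    # Hash the feature string so fingerprints are fixed-size.
    return hashlib.md5(feature_string(text).encode("utf-8")).hexdigest()

a = "Breaking news today. Markets rose sharply. More at our site."
b = "Breaking news today. Markets rose sharply! More at our site."
print(fingerprint(a) == fingerprint(b))  # reposted copy detected: True
```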

17.
The ability to automatically detect fraudulent escrow websites is important for alleviating online auction fraud. Despite research on related topics, such as web spam and spoof site detection, fake escrow website categorization has received little attention. The authentic appearance of fake escrow websites makes it difficult for Internet users to distinguish legitimate sites from phonies, making systems for detecting such websites an important endeavor. In this study we evaluated the effectiveness of various features and techniques for detecting fake escrow websites. Our analysis included a rich set of fraud cues extracted from web page text, image, and link information. We also compared several machine learning algorithms, including support vector machines, neural networks, decision trees, naïve Bayes, and principal component analysis. Experiments assessed the proposed fraud cues and techniques on a test bed of nearly 90,000 web pages derived from 410 legitimate and fake escrow websites. The combination of an extended feature set and a support vector machine ensemble classifier enabled accuracies over 90% and 96% for page- and site-level classification, respectively, when differentiating fake pages from real ones. Deeper analysis revealed that an extended set of fraud cues is necessary due to the broad spectrum of tactics employed by fraudsters. The study confirms the feasibility of using automated methods for detecting fake escrow websites; the results may also inform existing online escrow fraud resources and communities of practice about the plethora of fraud cues pervasive in fake websites.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号