Similar Documents
19 similar documents retrieved (search time: 16 ms)
1.
To counter the forged HTTPS sites and other obfuscation techniques commonly used by phishing attackers, this work builds on RMLR and PhishDef, two mainstream phishing-detection approaches based on machine learning and rule matching, adds feature extraction for page text keywords and page sub-links, and proposes the Nmap-RF classification method. Nmap-RF is an ensemble phishing-detection method combining rule matching with random forests. Websites are first pre-filtered by page protocol; if a site is judged to be phishing at this stage, the subsequent feature-extraction steps are skipped. Otherwise, text-keyword confidence, sub-link confidence, phishing-vocabulary similarity and PageRank are used as key features, with common URL, Whois, DNS and HTML-tag information as auxiliary features, and a random forest classifier gives the final decision. Experiments show that the Nmap-RF ensemble can detect a phishing page in 9–10 μs on average, filter out 98.4% of illegitimate pages, and reach an average overall precision of 99.6%.
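A minimal sketch of the two-stage pipeline described in this abstract, assuming scikit-learn: a rule-based protocol pre-filter followed by a random forest over key/auxiliary features. The pre-filter rule, the 12-dimensional feature vector and the dummy training data are illustrative assumptions, not the paper's actual configuration.

# Hypothetical Nmap-RF-style two-stage pipeline: rule pre-filter + random forest.
from urllib.parse import urlparse
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def protocol_prefilter(url: str) -> bool:
    """Rule stage: flag pages whose protocol already looks suspicious.
    Placeholder rule -- the paper's actual rules are not reproduced here."""
    return urlparse(url).scheme not in ("http", "https")

def extract_features(url: str, html: str) -> np.ndarray:
    """Placeholder for keyword confidence, sub-link confidence, phishing-vocabulary
    similarity, PageRank, plus URL/Whois/DNS/tag auxiliary features."""
    return np.zeros(12)  # assumed 12-dimensional feature vector

rf = RandomForestClassifier(n_estimators=100, random_state=0)
# Dummy labelled feature vectors just to make the sketch runnable; in practice
# the forest is trained offline on extracted features of labelled pages.
X_train, y_train = np.random.rand(20, 12), np.random.randint(0, 2, 20)
rf.fit(X_train, y_train)

def classify(url: str, html: str) -> str:
    if protocol_prefilter(url):          # stage 1: rule match, skip feature extraction
        return "phishing"
    feats = extract_features(url, html)  # stage 2: random forest on extracted features
    return "phishing" if rf.predict(feats.reshape(1, -1))[0] == 1 else "legitimate"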

2.
To improve the accuracy and efficiency of phishing webpage detection, a hybrid deep learning model based on primary and auxiliary features is proposed. Thirty-nine features, including two new ones, are extracted from the URL, the HTML content and the document object model (DOM) structure to capture the diversity of phishing pages; based on information gain, these 39 features are divided by importance into primary and auxiliary features. The two feature vectors are fed through separate channels into a hybrid deep network composed of a convolutional neural network and a bidirectional long short-term memory network, and the outputs of the two channels are fused with weights to produce the classification. Experimental results show that the proposed model detects phishing pages effectively.
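One possible reading of the two-channel design described above, sketched with tensorflow.keras: the primary-feature channel goes through a CNN, the auxiliary-feature channel through a BiLSTM, and a dense layer learns the fusion weights. The 25/14 feature split and all layer sizes are assumptions for illustration only.

# Hypothetical two-channel CNN + BiLSTM hybrid with learned weighted fusion.
import tensorflow as tf
from tensorflow.keras import layers, Model

primary_in = layers.Input(shape=(25, 1), name="primary_features")
aux_in = layers.Input(shape=(14, 1), name="auxiliary_features")

# channel 1: convolutional network over the primary features
c = layers.Conv1D(32, 3, activation="relu", padding="same")(primary_in)
c = layers.GlobalMaxPooling1D()(c)

# channel 2: bidirectional LSTM over the auxiliary features
b = layers.Bidirectional(layers.LSTM(16))(aux_in)

# weighted fusion of the two channel outputs, learned by the final dense layer
fused = layers.Concatenate()([c, b])
out = layers.Dense(1, activation="sigmoid")(fused)

model = Model([primary_in, aux_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()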

3.
Computing page similarity quickly and effectively is the key to discovering phishing pages, and existing detection methods still leave considerable room for improvement. This paper proposes a phishing-detection model based on Hungarian matching. The model first extracts text-feature signatures, image-feature signatures and whole-page signatures from the rendered page, giving a fairly complete description of the page as actually visited; it then uses the Hungarian algorithm to compute the optimal matching of a bipartite graph and thereby find matching feature pairs between the signatures of different pages, which allows a more objective measure of page similarity and improves phishing detection. A series of simulation experiments shows the method is feasible and achieves high precision and recall.
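A small sketch of the Hungarian-matching step, assuming scipy: given two pages' signature feature sets, build a pairwise cost matrix and compute the optimal bipartite matching; the conversion of matched cost into a similarity score is an assumption.

# Hungarian matching between two pages' signature feature sets.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def page_similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """sig_a, sig_b: (n_a, d) and (n_b, d) matrices of signature feature vectors."""
    cost = cdist(sig_a, sig_b, metric="euclidean")   # pairwise feature distances
    rows, cols = linear_sum_assignment(cost)         # Hungarian algorithm
    matched_cost = cost[rows, cols].mean()           # average cost over matched pairs
    return 1.0 / (1.0 + matched_cost)                # higher = more similar (assumed mapping)

sig_a = np.random.rand(8, 16)    # e.g. text / image / whole-page signature vectors
sig_b = np.random.rand(10, 16)
print(page_similarity(sig_a, sig_b))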

4.
Building on the original classifier-based focused crawler, an adaptive focused crawler with online incremental learning is designed and implemented. The crawler consists of a base page classifier and an online incrementally learned adaptive link classifier. The base page classifier uses domain knowledge to classify the topical relevance of crawled page content; the adaptive link classifier adjusts its model on the fly from the pages and link information the crawler has fetched, so that link topical relevance is computed in a more reasonable way. The link-ranking module uses the TopicalRank relevance measure to decide the crawling priority of links. Applied to the agricultural domain, experimental results and analysis show that the adaptive focused crawler with online incremental learning outperforms the original classifier-based crawler that relies only on page relevance and link importance.
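A sketch of an online, incrementally updated link classifier in the spirit of the crawler described above, assuming scikit-learn (>= 1.1 for loss="log_loss"); hashing of anchor-text tokens and the update/scoring interface are illustrative assumptions, and the TopicalRank ranking itself is omitted.

# Incrementally updated link relevance classifier via partial_fit.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
link_clf = SGDClassifier(loss="log_loss")   # probabilistic output for relevance scores
classes = [0, 1]                            # 0 = off-topic link, 1 = topic-relevant link

def update(link_texts, labels):
    """Called after each crawl batch: adjust the model with newly labelled links."""
    X = vectorizer.transform(link_texts)
    link_clf.partial_fit(X, labels, classes=classes)

def relevance(link_texts):
    """Topic-relevance scores used to help order the crawl frontier."""
    return link_clf.predict_proba(vectorizer.transform(link_texts))[:, 1]

update(["organic fertilizer usage guide", "celebrity gossip news"], [1, 0])
print(relevance(["crop disease prevention in wheat"]))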

5.
Automatic Extraction of Web News with Semantic Features
施洋  张奇  黄萱菁 《计算机工程》2010,36(7):173-175
By analyzing the semantic features of news pages and the properties shared across pages, an automatic web-news extraction method with semantic features is proposed; it uses a semantic classifier to identify seed information in news pages together with local information within the page to complete the extraction. Adding semantic features to the classifier raises the F1 score to 94.2%, and combining the semantic classifier with local features raises it to 96.9%. Experimental results show that the method effectively improves the precision of web information extraction and reduces the annotation cost required for machine learning.

6.
For web pages carrying structured information, a learning-based deduplication method is proposed. A classifier is defined from previously prepared samples; it classifies the attribute fields of the structured information on a page and computes distances, yielding the distance between the whole information object and already-classified sample information, and the object is judged to be a duplicate or not by comparing these distances against a threshold.
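A minimal sketch of the distance-threshold duplicate check described above; the per-field weights, the string distance and the threshold value are illustrative assumptions.

# Weighted field-distance deduplication against classified samples.
from difflib import SequenceMatcher

FIELD_WEIGHTS = {"title": 0.5, "author": 0.2, "date": 0.1, "body": 0.2}
THRESHOLD = 0.25   # assumed: objects closer than this to a known sample are duplicates

def field_distance(a: str, b: str) -> float:
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def object_distance(record: dict, sample: dict) -> float:
    """Weighted distance between an extracted object and a classified sample."""
    return sum(w * field_distance(record.get(f, ""), sample.get(f, ""))
               for f, w in FIELD_WEIGHTS.items())

def is_duplicate(record: dict, samples: list) -> bool:
    return any(object_distance(record, s) <= THRESHOLD for s in samples)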

7.
唐寿洪  朱焱  杨凡 《计算机科学》2015,42(1):239-243
Web spam not only degrades the quality of information retrieval but also poses a serious challenge to Internet security. A web-spam detection method based on a Bagging-SVM ensemble classifier is proposed. In the preprocessing stage, K-means is first used to address class imbalance in the dataset, CFS feature selection then picks the optimal feature subset, and finally the subset is discretized by information entropy. In the training stage, Bagging builds multiple training sets and an SVM is learned on each to produce weak classifiers; in the detection stage, the weak classifiers vote to decide the class of a test sample. Experiments on the WEBSPAM-UK2006 dataset show that the method achieves very good detection performance while using a relatively small number of features.
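A sketch of the Bagging-SVM training stage described above, assuming scikit-learn; the K-means rebalancing, CFS selection and entropy discretization steps are omitted, and the synthetic data and hyperparameters are assumptions.

# Bagging over SVM weak classifiers with majority-vote prediction.
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=20, weights=[0.8, 0.2], random_state=0)

# Bagging builds bootstrap training sets and fits one SVM per set;
# prediction is by vote of the weak SVM classifiers.
bag_svm = BaggingClassifier(
    make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    n_estimators=10, random_state=0)
bag_svm.fit(X, y)
print(bag_svm.score(X, y))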

8.
Microblog Rumor Detection Based on Deep Features and an Ensemble Classifier
Microblogs carry large amounts of false information and even rumors, and the wide spread of microblog rumors affects social stability and harms individual and national interests. To detect microblog rumors effectively, a detection method based on deep features and an ensemble classifier is proposed. First, deep classification features are extracted from microblog sentiment polarity, the propagation process and user history; these features are then used to train an ensemble classifier, which finally performs rumor detection. Experimental results show that the proposed method effectively improves the performance of microblog rumor detection.

9.
To address the high resource consumption, long detection cycles and weak classification performance of current mainstream malicious-webpage detection techniques, a Stacking-based ensemble detection method is proposed, applying heterogeneous classifier ensembles to malicious-webpage detection. The detection model is obtained through webpage feature extraction, analysis of relevant factors and ensemble learning: the base classifiers are built with K-nearest neighbors (KNN), logistic regression and decision trees, and the meta classifier with a support vector machine (SVM). Compared with traditional detection approaches, the method raises recognition accuracy by 0.7 percentage points to a high 98.12% while consuming fewer resources and running faster. Experimental results show that the resulting detection model identifies malicious pages efficiently and accurately.
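A sketch of the stacking ensemble described above (KNN, logistic regression and decision tree as base classifiers, SVM as the meta classifier), assuming scikit-learn; the synthetic data and hyperparameters are assumptions.

# Heterogeneous stacking: KNN / logistic regression / decision tree base, SVM meta.
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(max_depth=8))],
    final_estimator=SVC(kernel="rbf"),
    cv=5)
stack.fit(X_tr, y_tr)
print("accuracy:", stack.score(X_te, y_te))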

10.
Anomaly detection systems play a critical role in cyberspace security and provide an effective safeguard for network security. For complex network traffic, a single traditional classifier often cannot offer both high detection precision and strong generalization, and anomaly-detection models built on the full feature set are often disturbed by redundant features, hurting detection efficiency and accuracy. To address these problems, this paper proposes a model combining feature selection based on average feature importance with ensemble learning: decision tree (DT), random forest (RF) and extra trees (ET) are chosen as base classifiers in a voting ensemble, and the average feature importance of the base classifiers, computed from the Gini index, drives feature selection. Evaluation on multiple datasets shows that the proposed ensemble outperforms classic ensemble-learning models and other well-known anomaly-detection ensembles, and that the proposed feature-selection method further raises ensemble accuracy by about 0.13% on average while cutting training time by about 30% on average.
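A sketch of the voting ensemble plus mean-feature-importance selection described above, assuming scikit-learn; the cutoff rule (keep features above the mean importance) and the synthetic data are assumptions.

# DT/RF/ET voting ensemble with Gini-based mean feature importance selection.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=40, n_informative=12, random_state=0)

base = [("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0))]

# 1) fit the base classifiers once to obtain Gini-based feature importances
importances = []
for _, clf in base:
    clf.fit(X, y)
    importances.append(clf.feature_importances_)
mean_importance = np.mean(importances, axis=0)

# 2) keep features whose mean importance exceeds the average importance (assumed rule)
selected = mean_importance > mean_importance.mean()
X_sel = X[:, selected]

# 3) train the voting ensemble on the reduced feature set
vote = VotingClassifier(estimators=base, voting="hard")
vote.fit(X_sel, y)
print(f"{selected.sum()} features kept, accuracy: {vote.score(X_sel, y):.3f}")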

11.
High-volume phishing attacks are encountered in increasing numbers every day because of the large financial returns to attackers. Recently, there has been significant interest in applying machine learning to phishing Web page detection. Unlike previous work, this paper introduces predicted labels of textual contents as part of the features and proposes a novel framework for phishing Web page detection using hybrid features consisting of URL-based, Web-based, rule-based and textual content-based features. The framework is realized with an efficient two-stage extreme learning machine (ELM). The first stage constructs classification models on the textual contents of Web pages using ELM; in particular, Optical Character Recognition (OCR) is used as an assistant tool to extract textual contents from image-format Web pages at this stage. In the second stage, a classification model on the hybrid features is built with a linear combination model-based ensemble of ELMs (LC-ELMs), whose weights are calculated by the generalized inverse. Experimental results indicate the proposed framework is promising for detecting phishing Web pages.
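A minimal single-hidden-layer extreme learning machine in the spirit of the ELM stages above: random input weights, output weights solved with the Moore-Penrose generalized inverse. The two-stage design, OCR step and LC-ELM ensemble weighting are omitted, and the toy data is an assumption.

# Minimal ELM: random hidden layer, pseudo-inverse output weights.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)       # random hidden-layer mapping

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y          # generalized-inverse solution
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

X = np.random.rand(300, 20)
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)
clf = SimpleELM().fit(X, y)
print("train accuracy:", (clf.predict(X) == y).mean())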

12.
As one of the main applications of the Internet, search engines retrieve and return relevant information from Internet resources according to user needs. However, the returned result lists often contain noise such as advertisements and dead pages, which interferes with users' searches and queries. To exploit complex page-structure features and rich semantic information, a webpage blacklist discrimination method based on an attention mechanism and ensemble learning is proposed, and an ensemble attention-based convolutional neural network (EACNN) model is built with it to filter useless pages. First, multiple attention-based convolutional neural network (CNN) base learners are constructed from the different kinds of HTML tag data on a page; then an ensemble-learning scheme driven by page-structure features assigns different weights to the outputs of the base learners to form EACNN; finally, the output of EACNN is taken as the page-content analysis result and used for blacklist discrimination. The method attends to the semantic information of pages through the attention mechanism and brings in page-structure features through ensemble learning. Experimental results show that, compared with baseline models such as support vector machine (SVM), K-nearest neighbors (KNN), CNN, long short-term memory (LSTM) networks, GRU and the attention-based CNN (ACNN), the proposed model achieves the highest accuracy (0.97), recall (0.95) and F1 score (0.96) on a purpose-built discrimination dataset for the geographic-information domain, confirming the advantage of EACNN for webpage blacklist discrimination.

13.
Phishing attacks are growing significantly each year and are considered one of the most dangerous threats on the Internet, as they may cause people to lose confidence in e-commerce. In this paper, we present a heuristic method to determine whether a webpage is a legitimate or a phishing page. The scheme can detect new phishing pages that blacklist-based anti-phishing tools cannot. We first convert a web page into 12 features that are carefully selected based on existing normal and phishing pages. A training set of web pages including normal and phishing pages is then input to a support vector machine for training, and a testing set is finally fed into the trained model for evaluation. Compared with existing methods, the experimental results show that the proposed phishing detector achieves a high accuracy rate with relatively low false positive and false negative rates.

14.
In this paper, we present a new rule-based method to detect phishing attacks in internet banking. Our rule-based method uses two novel feature sets, which are proposed to determine webpage identity. The proposed feature sets include four features to evaluate the page-resource identity and four features to identify the access protocol of page-resource elements. In the first proposed feature set, we use approximate string matching algorithms to determine the relationship between the content and the URL of a page. Our proposed features are independent of third-party services such as search engine results and/or web browser history. We employ the support vector machine (SVM) algorithm to classify webpages. Our experiments indicate that the proposed model can detect phishing pages in internet banking with an accuracy of 99.14% true positives and only 0.86% false negative alarms. The output of the sensitivity analysis demonstrates the significant impact of our proposed features over traditional features. We extracted the hidden knowledge from the proposed SVM model by adopting a related method and embedded the extracted rules into a browser extension named PhishDetector to make the proposed method more functional and easy to use. Evaluation of the implemented browser extension indicates that it can detect phishing attacks in internet banking with high accuracy and reliability. PhishDetector can detect zero-day phishing attacks as well.
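A sketch of the kind of URL-versus-content consistency feature suggested above, using approximate string matching; difflib's ratio stands in for the paper's matching algorithm, and the token choices (page titles, resource domains) and helper names are assumptions for illustration.

# Approximate-match features relating page content/resources to the URL host.
from difflib import SequenceMatcher
from urllib.parse import urlparse

def best_match_ratio(needle: str, candidates: list) -> float:
    """Highest approximate-match score of `needle` against any candidate string."""
    return max((SequenceMatcher(None, needle.lower(), c.lower()).ratio()
                for c in candidates), default=0.0)

def url_content_consistency(url: str, page_titles: list, resource_domains: list) -> dict:
    host = urlparse(url).netloc
    return {
        # does the page title resemble the host name? (legitimate bank pages usually do)
        "title_vs_host": best_match_ratio(host, page_titles),
        # do the page's resource elements come from domains resembling the host?
        "resources_vs_host": best_match_ratio(host, resource_domains),
    }

print(url_content_consistency("https://secure-bank-login.example.com/login",
                              ["Example Bank Internet Banking"],
                              ["cdn.example.com", "evil-tracker.net"]))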

15.
Existing webpage segmentation methods are all built on the document object model and are relatively difficult to implement. To address this, a new segmentation method based on a string data model is proposed: it learns the features of page titles through machine learning and uses the titles to segment the page. First, title features are learned from the page's text-block distribution function and its title tags; then the page is split into content blocks at the titles; finally, content blocks are merged according to block depth to complete the segmentation. Theoretical analysis and experimental results show that the algorithms have O(n) time and space complexity, that the method segments pages such as university portals, blogs and resource sites well, and that it can serve many applications of web information management, giving it good application prospects.

16.
As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information.

17.
In web browsers, a variety of anti-phishing tools and technologies are available to assist users to identify phishing attempts and potentially harmful pages. Such anti-phishing tools and technologies provide Internet users with essential information, such as warnings of spoofed pages. To determine how well users are able to recognise and identify phishing web pages with anti-phishing tools, we designed and conducted usability tests for two types of phishing-detection applications: blacklist-based and whitelist-based anti-phishing toolbars. The research results mainly indicate no significant performance differences between the application types. We also observed that, in many web browsing cases, a significant amount of useful and practical information for users is absent, such as information explaining professional web page security certificates. Such certificates are crucial in ensuring user privacy and protection. We also found other deficiencies in web identities in web pages and web browsers that present challenges to the design of anti-phishing toolbars. These challenges will require more professional, illustrative, instructional, and reliable information for users to facilitate user verification of the authenticity of web pages and their content.

18.
An effective approach to phishing Web page detection is proposed, which uses Earth mover's distance (EMD) to measure Web page visual similarity. We first convert the involved Web pages into low-resolution images and then use color and coordinate features to represent the image signatures. We use EMD to calculate the signature distances of the images of the Web pages, and we train an EMD threshold vector for classifying a Web page as either phishing or normal. Large-scale experiments with 10,281 suspected Web pages are carried out and show high classification precision, high phishing recall, and time performance applicable to an online enterprise solution. We also compare our method with two others to demonstrate its advantage. We have built a real system that is already in use online and has caught many real phishing cases.
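A simplified stand-in for the EMD visual-similarity idea above: compare low-resolution page screenshots via per-channel colour histograms and the 1-D Wasserstein distance from scipy. The paper's joint colour-plus-coordinate signatures and trained threshold vector are omitted; the histogram binning and averaging are assumptions.

# Per-channel colour-histogram EMD between two low-resolution page screenshots.
import numpy as np
from scipy.stats import wasserstein_distance

def channel_histograms(img: np.ndarray, bins: int = 32):
    """img: (H, W, 3) uint8 low-resolution screenshot."""
    return [np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
            for c in range(3)]

def visual_distance(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    centers = (np.arange(bins) + 0.5) * (255.0 / bins)
    dists = [wasserstein_distance(centers, centers, ha, hb)
             for ha, hb in zip(channel_histograms(img_a, bins), channel_histograms(img_b, bins))]
    return float(np.mean(dists))

a = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
print("visual distance:", visual_distance(a, b))  # small distance => candidate phishing copy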

19.
An Automatic Extraction Method for Multi-Record Information in Semi-Structured Web Pages
朱明  王庆伟 《计算机仿真》2005,22(12):95-98
Accurately and automatically extracting the required information from multi-record web pages is an important research topic in Web information processing. To overcome the noise sensitivity of existing methods, this paper proposes discovering record patterns from the maximum similarity of record subtrees, so that records can be identified correctly even when records of the same kind differ somewhat in presentation. On this basis, an automatic extraction system for multi-record pages is implemented; it automatically fetches result pages from several academic-paper search sites and extracts the records they contain. Experiments on common paper-search sites show that the system has good effectiveness and accuracy.

