Similar Documents
20 similar documents found.
1.
In this paper, we present a new rule-based method to detect phishing attacks in internet banking. Our method uses two novel feature sets proposed to determine webpage identity: four features that evaluate the identity of page resources, and four features that identify the access protocol of page resource elements. In the first feature set, we use approximate string matching algorithms to determine the relationship between the content and the URL of a page. Our proposed features are independent of third-party services such as search engine results or web browser history. We employ a support vector machine (SVM) to classify webpages. Our experiments indicate that the proposed model detects phishing pages in internet banking with a 99.14% true positive rate and only a 0.86% false negative rate. A sensitivity analysis demonstrates the significant impact of our proposed features over traditional ones. We extracted the hidden knowledge from the SVM model using a rule-extraction method and embedded the extracted rules into a browser extension named PhishDetector to make the method more functional and easy to use. Evaluation of the implemented browser extension shows that it detects phishing attacks in internet banking with high accuracy and reliability, and that it can detect zero-day phishing attacks as well.
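A minimal sketch (in Python, assuming scikit-learn) of the pipeline this abstract describes: hand-crafted identity features fed to an SVM. The two features shown — URL/title similarity via approximate string matching, and the share of resources loaded over HTTPS — are illustrative stand-ins for the paper's eight features, not its exact definitions.

```python
from difflib import SequenceMatcher
from sklearn.svm import SVC

def url_title_similarity(url: str, title: str) -> float:
    # Approximate string matching between the URL host and the page title.
    host = url.split("//")[-1].split("/")[0]
    return SequenceMatcher(None, host.lower(), title.lower()).ratio()

def https_resource_ratio(resource_urls: list[str]) -> float:
    # Fraction of page resource elements fetched over a secure protocol.
    if not resource_urls:
        return 1.0
    return sum(u.startswith("https://") for u in resource_urls) / len(resource_urls)

# Toy training data: [similarity, https_ratio] per page; label 1 = phishing.
X = [[0.9, 1.0], [0.85, 0.9], [0.2, 0.3], [0.1, 0.5]]
y = [0, 0, 1, 1]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.15, 0.4]]))  # expected [1]: classified as phishing
```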

2.
Web spam denotes the manipulation of web pages with the sole intent to raise their position in search engine rankings. Since a better position in the rankings directly and positively affects the number of visits to a site, attackers use different techniques to boost their pages to higher ranks. In the best case, web spam pages are a nuisance that provide undeserved advertisement revenues to the page owners. In the worst case, these pages pose a threat to Internet users by hosting malicious content and launching drive-by attacks against unsuspecting victims. When successful, these drive-by attacks then install malware on the victims’ machines. In this paper, we introduce an approach to detect web spam pages in the list of results that are returned by a search engine. In a first step, we determine the importance of different page features to the ranking in search engine results. Based on this information, we develop a classification technique that uses important features to successfully distinguish spam sites from legitimate entries. By removing spam sites from the results, more slots are available to links that point to pages with useful content. Additionally, and more importantly, the threat posed by malicious web sites can be mitigated, reducing the risk for users to get infected by malicious code that spreads via drive-by attacks.
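A rough sketch of the two-step approach, assuming scikit-learn: first estimate how strongly each page feature influences spam status, then classify using only the most informative features. The feature names, the toy labels, and the random-forest stand-in are illustrative, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["keyword_density", "page_rank", "num_inlinks", "hidden_text_ratio"]
X = rng.random((200, len(features)))
y = (X[:, 0] + X[:, 3] > 1.1).astype(int)  # toy rule standing in for spam labels

# Step 1: rank feature importance.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(model.feature_importances_, features), reverse=True)

# Step 2: classify spam vs. legitimate using only the top features.
top = [features.index(name) for _, name in ranked[:2]]
spam_clf = RandomForestClassifier(random_state=0).fit(X[:, top], y)
print(ranked)
```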

3.
Visual layout has a strong impact on performance and is a critical factor in the design of graphical user interfaces (GUIs) and Web pages. Many design guidelines employed in Web page design were inherited from human performance literature and GUI design studies and practices. However, few studies have investigated the more specific patterns of performance with Web pages that may reflect some differences between Web page and GUI design. We investigated interactions among four visual layout factors in Web page design (quantity of links, alignment, grouping indications, and density) in two experiments: one with pages in Hebrew, entailing right-to-left reading, and the other with English pages, entailing left-to-right reading. Some performance patterns (measured by search times and eye movements) were similar between languages. Performance was particularly poor in pages with many links and variable densities, but it improved with the presence of uniform density. Alignment was not shown to be a performance-enhancing factor. The findings are discussed in terms of the similarities and differences in the impact of layout factors between GUIs and Web pages. Actual or potential applications of this research include specific guidelines for Web page design.

4.
Memory reservations are used to provide real-time tasks with guaranteed access to a specified amount of physical memory. However, previous work on memory reservation focused primarily on private pages and paid little attention to shared pages, which are widely used in current operating systems. With previous schemes, a real-time task may experience unexpected timing delays caused by other tasks through pages shared with another process, even when the task has enough free pages in its own reservation. In this paper, we first describe the problems that arise when real-time tasks share pages. We then propose a shared-page management framework that enhances the temporal isolation provided by memory reservations in resource kernels that use the resource reservation approach. Our solution consists of two schemes, Shared-Page Conservation (SPC) and Shared-Page Eviction Lock (SPEL), each of which prevents timing penalties caused by the seemingly arbitrary eviction of shared pages. The framework can manage shared data for inter-process communication and shared libraries, as well as pages shared through the kernel's copy-on-write technique and file caches. We have implemented and evaluated our schemes on the Linux/RK platform, but they can also be applied to other operating systems with paged virtual memory.
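A toy user-space simulation of the eviction-lock idea, as I read it from the abstract: a page mapped by more than one reservation is treated as pinned, so one task's reclaim can never evict a page another task still depends on. The data structures are illustrative; the real framework lives inside the kernel's reservation code.

```python
class Reservation:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.pages = name, capacity, []

    def fault_in(self, page, sharers):
        if len(self.pages) >= self.capacity:
            self.evict(sharers)
        self.pages.append(page)
        sharers.setdefault(page, set()).add(self.name)

    def evict(self, sharers):
        # Evict the first resident page not mapped by any other reservation.
        for page in self.pages:
            if sharers.get(page, set()) <= {self.name}:  # not shared: safe
                self.pages.remove(page)
                return
        raise RuntimeError("all resident pages are shared-locked")

sharers = {}
a, b = Reservation("A", 2), Reservation("B", 2)
a.fault_in("libc.so:p0", sharers); b.fault_in("libc.so:p0", sharers)
a.fault_in("heapA:p1", sharers)
a.fault_in("heapA:p2", sharers)   # evicts heapA:p1, never the shared libc page
print(a.pages)
```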

5.
In this paper, we propose data space mapping techniques for storage and retrieval in multi-dimensional databases on multi-disk architectures. We identify the factors important for efficient multi-disk searching of multi-dimensional data and develop secondary storage organization and retrieval techniques that directly address these factors. We especially focus on high-dimensional data, where none of the current approaches are effective. In contrast to current declustering techniques, the storage techniques in this paper consider both inter- and intra-disk organization of the data. The data space is first partitioned into buckets; the buckets are then declustered across multiple disks while being clustered within each disk. Queries are executed through bucket identification techniques that locate the pages. One of the partitioning techniques we discuss is especially practical for high-dimensional data, and our disk and page allocation techniques are optimal with respect to the number of I/O accesses and seek times. We provide experimental results on two real high-dimensional datasets that support our claims.
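A minimal sketch of the inter- vs. intra-disk distinction: buckets from a grid partition are declustered across disks (so a range query touches many disks in parallel) while buckets on the same disk stay clustered by grid position (to keep seeks short). The skewed modular allocation rule shown here is a simple illustration, not the paper's optimal scheme.

```python
NUM_DISKS = 4
GRID = 8  # an 8x8 grid partition of a 2-D data space

def disk_of(bx: int, by: int) -> int:
    # Decluster: neighbouring buckets land on different disks.
    return (bx + 2 * by) % NUM_DISKS

def intra_disk_position(bx: int, by: int) -> int:
    # Cluster within a disk: order co-resident buckets row-major,
    # so nearby buckets are also nearby on the platter.
    return by * GRID + bx

placement = {}
for by in range(GRID):
    for bx in range(GRID):
        placement.setdefault(disk_of(bx, by), []).append(
            ((bx, by), intra_disk_position(bx, by)))

# A 2x2 range query hits 4 buckets spread across all 4 disks here.
query = [(x, y) for x in (2, 3) for y in (5, 6)]
print({b: disk_of(*b) for b in query})
```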

6.
Web information extraction has given rise to large-scale applications. Wrapper-based Web information extraction has two research areas: wrapper generation and wrapper maintenance. This paper proposes a new algorithm for automatic wrapper maintenance. It is based on the following observation: although pages change in many ways, many important page features are preserved in the new pages, such as text patterns, annotation information, and hyperlinks. The new algorithm makes full use of these preserved page features to locate the target information in changed pages and can automatically repair broken wrappers. Experiments on information extraction from real Web sites show that the new algorithm effectively maintains wrappers so that information is extracted more accurately.
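A sketch of the observation behind the algorithm: even after a layout change breaks a path-based wrapper, preserved features such as a nearby label and the text pattern of the target value can re-locate the data. The regex, label, and pages below are illustrative.

```python
import re

OLD_WRAPPER_XPATH = "/html/body/table[2]/tr[1]/td[3]"  # breaks after redesign

def relocate_price(html: str) -> str | None:
    # Preserved features: the label "Price" and a currency-like text pattern.
    m = re.search(r"Price[^$¥€]*([$¥€]\s?\d+(?:\.\d{2})?)", html)
    return m.group(1) if m else None

old_page = "<table><tr><td>Price:</td><td>$19.99</td></tr></table>"
new_page = "<div class='buy-box'><span>Price</span><b>$19.99</b></div>"
assert relocate_price(old_page) == relocate_price(new_page) == "$19.99"
print("wrapper repaired: target found at its new location")
```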

7.
A Multi-Knowledge-Based Method for Web Page Information Extraction   (Cited by: 10 in total; self-citations: 1; citations by others: 9)
Automatically extracting desired information content from Web pages is an important research topic in intelligent Internet information retrieval. To effectively solve the problem of acquiring the information-description knowledge needed for Web page information extraction, this paper proposes a multi-knowledge-based Web page information extraction method (MKIE). The method divides the knowledge needed for extraction into two classes: deterministic pattern knowledge, which describes the representational characteristics of page content and identifies the information objects on a page, and non-deterministic pattern knowledge, which describes a page's information record blocks and information objects. Based on the first class of knowledge, MKIE dynamically analyzes pages to acquire the second class, and then uses both classes to extract the desired information from pages whose content is similar but whose presentation differs. Experimental results on extracting information from the publication pages of American university faculty members show that MKIE has a strong ability to automatically identify and extract Web page information.
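A loose sketch of the two-class knowledge idea as described: deterministic pattern knowledge (regexes that recognize information objects) is applied to a page to derive non-deterministic knowledge (where the record blocks sit), and both are then used for extraction. The patterns and the line-based block heuristic are illustrative only.

```python
import re

DETERMINISTIC = {          # class 1: certain patterns for information objects
    "email": re.compile(r"[\w.]+@[\w.]+\.edu"),
    "year":  re.compile(r"\b(?:19|20)\d{2}\b"),
}

def derive_record_blocks(page: str) -> list[str]:
    # Class 2 knowledge, derived dynamically: group consecutive lines that
    # contain at least one recognized object into a record block.
    blocks, current = [], []
    for line in page.splitlines():
        if any(p.search(line) for p in DETERMINISTIC.values()):
            current.append(line)
        elif current:
            blocks.append(" ".join(current)); current = []
    if current:
        blocks.append(" ".join(current))
    return blocks

page = "Dr. Smith\nsmith@cs.example.edu\nPaper A, 2003\n\nAbout the lab\n"
for block in derive_record_blocks(page):
    print({k: p.findall(block) for k, p in DETERMINISTIC.items()})
```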

8.
A Detection Algorithm for Phishing Web Pages   (Cited by: 4 in total; self-citations: 0; citations by others: 4)
Phishing attacks are pervasive in e-commerce and e-finance. This paper analyzes features that are sensitive indicators of phishing pages and proposes a Web page detection algorithm for defending against phishing attacks. The algorithm extracts phishing-sensitive features by analyzing the Document Object Model (DOM) of a Web page, uses a BP (back-propagation) neural network to estimate the page's degree of anomaly, and applies a linear classifier to decide whether the page is a phishing page. The algorithm successfully filters phishing pages and effectively blocks phishing attacks.
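A minimal sketch of the described pipeline, assuming scikit-learn: DOM-derived features, a back-propagation network scoring the degree of anomaly, and a linear classifier making the final decision. The three DOM features and the tiny training set are illustrative stand-ins.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

# Features per page: [foreign_form_action, hidden_iframe_count, js_redirects]
X = [[0, 0, 0], [0, 1, 0], [1, 2, 3], [1, 3, 2], [0, 0, 1], [1, 2, 2]]
anomaly = [0.0, 0.1, 0.9, 1.0, 0.2, 0.8]   # training target: anomaly degree
labels = [0, 0, 1, 1, 0, 1]                # 1 = phishing

bp_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                      random_state=0).fit(X, anomaly)
scores = bp_net.predict(X).reshape(-1, 1)
linear = LogisticRegression().fit(scores, labels)  # final linear decision

new_page = [[1, 1, 3]]
print(linear.predict(bp_net.predict(new_page).reshape(-1, 1)))  # likely [1]
```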

9.
A New Method for Mining Users' Preferred Browsing Paths from Web Logs   (Cited by: 2 in total; self-citations: 0; citations by others: 2)
任永功  付玉  张亮  吕君义 《计算机科学》2008,35(10):192-196
This paper proposes a new method for mining users' preferred browsing paths from Web logs. The method first builds, on top of a cell-array storage structure (a storage matrix), a session matrix and a path matrix whose basic elements are browsing-interest degrees. It then clusters pages on the session matrix, using the cosine of the angle between two page vectors as the page-distance measure, to obtain the related page sets of similar users. Finally, it mines the preferred browsing paths of similar users on their path matrices using a path-selection preference degree. Experiments show that the method is sound and effective and yields more accurate preferred paths.
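A small sketch of the page-distance step: columns of the session matrix are page vectors of browsing-interest degrees, and the cosine between two page vectors measures how similarly sessions treat the two pages. The matrix values and the greedy threshold clustering are illustrative.

```python
import numpy as np

# session_matrix[i][j] = interest degree of session i in page j
session_matrix = np.array([
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.9, 0.0, 0.1],
    [0.1, 0.0, 0.8, 0.9],
])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

pages = session_matrix.T
threshold, clusters = 0.8, []
for j, vec in enumerate(pages):
    for cluster in clusters:
        # Join a cluster only if close to all of its members.
        if all(cosine(vec, pages[k]) >= threshold for k in cluster):
            cluster.append(j); break
    else:
        clusters.append([j])
print(clusters)  # -> [[0, 1], [2, 3]]: two related page sets
```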

10.
Investigative departments need a legal basis for combating fraud committed over the Internet; this is inherent in the strategy of governing the country according to law. However, Internet fraud has proliferated and drawn widespread attention only in recent years, as networks have developed. Consequently, the laws and regulations for punishing Internet fraud are still incomplete, intelligence information is applied insufficiently, and prevention mechanisms against Internet fraud have not yet been established, all of which directly hampers the investigation and solving of cases. It is therefore necessary to draw on foreign experience and, in light of China's national conditions, improve the measures for preventing and controlling Internet fraud.

11.
The rapid growth of Web data puts enormous storage and service pressure on search engines, and the large volume of redundant, low-quality, and even spam data greatly wastes their storage and computing capacity. Under these circumstances, how to build a Web page quality evaluation framework and evaluation algorithms suited to the practical environment of the World Wide Web has become an important research topic in information retrieval. Building on previous work, and with the participation of Web users and page designers, this paper proposes a Web page quality evaluation framework comprising four dimensions (authority and reputation, content, timeliness, and visual presentation) and thirteen factors. Annotation data show that the framework is practical to apply and that annotation results are fairly consistent. Finally, the paper analyzes the importance of each dimension using an ordinal logistic regression model and reaches some instructive conclusions, among them that whether a page's content and timeliness satisfy user needs is a decisive factor in its quality.
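A sketch of the final analysis step, assuming statsmodels is available: regress ordinal quality labels on the four dimension scores with an ordinal (proportional-odds) logistic regression, then read the coefficients as dimension importance. The data here are synthetic; only the dimension names follow the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame(rng.random((n, 4)),
                 columns=["authority", "content", "timeliness", "appearance"])
# Synthetic 5-level quality label driven mostly by content and timeliness.
latent = (2.5 * X["content"] + 2.0 * X["timeliness"]
          + 0.5 * X["authority"] + 0.3 * X["appearance"]
          + rng.normal(0, 0.5, n))
y = pd.cut(latent, bins=5, labels=False)

model = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(model.params[X.columns])  # larger coefficient -> more important dimension
```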

12.
As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information.
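A rough sketch of the combination idea, assuming scikit-learn: classify the target page from its own text when possible, and average in predictions made from the text of selected neighbouring pages. GenreSim's graph-based neighbour selection is replaced here by a given neighbour list.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["breaking news report today", "buy now discount shop",
               "latest headlines politics", "cart checkout free shipping"]
train_genre = ["news", "shop", "news", "shop"]

vec = TfidfVectorizer().fit(train_texts)
clf = LogisticRegression().fit(vec.transform(train_texts), train_genre)

def predict_genre(page_text: str, neighbour_texts: list[str]) -> str:
    # Pool On-Page text (if any) with neighbour text and average the votes.
    texts = ([page_text] if page_text.strip() else []) + neighbour_texts
    probs = clf.predict_proba(vec.transform(texts))
    return clf.classes_[np.argmax(probs.mean(axis=0))]

# An image-heavy page with almost no text is rescued by its neighbours.
print(predict_genre("", ["headlines politics report", "breaking news today"]))
```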

13.
Future many-core chip multiprocessors (CMPs) will integrate hundreds of processor cores on chip. Two cache coherence protocols are the mainstream in current CMPs. The token-based protocol (Token) provides high performance, but it generates a prohibitive amount of network traffic, which translates into excessive power consumption. The directory-based protocol (Directory) reduces network traffic, but it incurs the storage overhead of the directory and comparatively low performance caused by indirection, limiting its applicability to many-core CMPs. In this work, we present DP&TB, a novel cache coherence protocol particularly suited to future many-core CMPs. In DP&TB, cache coherence is maintained at the granularity of a page, making it easy to filter out unnecessary coherence inspections for blocks inside private pages and network traffic for blocks inside shared pages. We employ Directory to detect private and shared pages and Token to maintain the coherence of the blocks inside shared pages. DP&TB thus inherits the merits of Directory and Token while overcoming their problems. Experimental results show that DP&TB comprehensively outperforms Directory and Token, improving performance by 9.1% over Token and network traffic by 13.8% over Directory. In addition, the storage overhead of DP&TB is less than half that of Directory. Our proposal fulfills the requirements of many-core CMPs for high performance and power and area efficiency.
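A toy model of the page-granularity filter, under my reading of the abstract: a directory-style table records whether a page is private to one core or shared, so accesses to blocks inside private pages skip coherence inspection entirely, while blocks inside shared pages would go to the token protocol. This is an illustration, not the protocol's actual state machine.

```python
page_table = {}  # page number -> set of cores that have touched it

def access(core: int, addr: int, page_size: int = 4096) -> str:
    page = addr // page_size
    owners = page_table.setdefault(page, set())
    owners.add(core)
    if owners == {core}:
        return "private page: no coherence traffic"
    return "shared page: run token coherence for this block"

print(access(0, 0x1000))   # first touch -> private
print(access(0, 0x1040))   # same core, same page -> still private
print(access(1, 0x1010))   # second core touches the page -> shared
```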

14.
Classical Web crawlers make use of only hyperlink information in the crawling process. However, focused crawlers are intended to download only Web pages that are relevant to a given topic by utilizing word information before downloading the Web page. But, Web pages contain additional information that can be useful for the crawling process. We have developed a crawler, iCrawler (intelligent crawler), the backbone of which is a Web content extractor that automatically pulls content out of seven different blocks: menus, links, main texts, headlines, summaries, additional necessaries, and unnecessary texts from Web pages. The extraction process consists of two steps, which invoke each other to obtain information from the blocks. The first step learns which HTML tags refer to which blocks using the decision tree learning algorithm. Being guided by numerous sources of information, the crawler becomes considerably effective. It achieved a relatively high accuracy of 96.37% in our experiments of block extraction. In the second step, the crawler extracts content from the blocks using string matching functions. These functions along with the mapping between tags and blocks learned in the first step provide iCrawler with considerable time and storage efficiency. More specifically, iCrawler performs 14 times faster in the second step than in the first step. Furthermore, iCrawler significantly decreases storage costs by 57.10% when compared with the texts obtained through classical HTML stripping.
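A small sketch of the first step, assuming scikit-learn: learn which HTML tags mark which block type with a decision tree. The tag features, the link-density feature, and the five labels are illustrative; the paper distinguishes seven block types.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

samples = [
    {"tag": "nav",    "link_density": 0.9},   # menus
    {"tag": "h1",     "link_density": 0.0},   # headline
    {"tag": "p",      "link_density": 0.05},  # main text
    {"tag": "ul",     "link_density": 0.8},   # links
    {"tag": "footer", "link_density": 0.6},   # unnecessary text
]
labels = ["menu", "headline", "main_text", "links", "unnecessary"]

vec = DictVectorizer()                 # one-hot encodes the tag names
X = vec.fit_transform(samples)
tree = DecisionTreeClassifier(random_state=0).fit(X, labels)

block = tree.predict(vec.transform([{"tag": "p", "link_density": 0.02}]))
print(block)  # -> ['main_text']
```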

15.
An ordinary Web page can be divided into information blocks and noise blocks. The first step of Web-based information retrieval is to filter out the noise blocks in pages. Web page characteristics show that pages at the same level of a site mostly share similar display styles and noise blocks. Building on the VIPS algorithm, this paper proposes a matching algorithm based on the similarity of same-level pages, which can be used to filter the noise blocks out of pages. Experiments show that the algorithm achieves an accuracy of over 95%.
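A sketch of the core matching idea: blocks that recur across same-level pages are treated as noise and removed, leaving the page-specific information blocks. Real VIPS-style visual block segmentation is replaced by a trivial delimiter split for illustration.

```python
def blocks(page: str) -> list[str]:
    # Stand-in for visual block segmentation.
    return [b.strip() for b in page.split("|") if b.strip()]

def filter_noise(page: str, sibling: str) -> list[str]:
    # Blocks shared with a same-level sibling page are noise.
    noise = set(blocks(sibling))
    return [b for b in blocks(page) if b not in noise]

page_a = "Top Nav | Ad banner | Article: how CPUs work | Footer links"
page_b = "Top Nav | Ad banner | Article: garbage collection | Footer links"
print(filter_noise(page_a, page_b))  # -> ['Article: how CPUs work']
```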

16.
In this paper, we propose a data-mining-based approach to public buffer management for a multiuser database system, where database buffers are organized into two areas – public and private. While the private buffer areas contain pages to be updated by particular users, the public buffer area contains pages shared among different users. Unlike traditional buffer management strategies, where only limited knowledge of user access patterns is used, the proposed approach discovers knowledge from the page access sequences of user transactions and uses it to guide public buffer placement and replacement. A prefetch strategy is also exploited based on the discovered page access knowledge. In practice, to make such a data-mining-based buffer management approach tractable, we present a soft variation that approximates our absolute best buffer replacement solution. The knowledge to be discovered and the discovery methods are discussed in the paper. The effectiveness of the proposed approach was investigated through a simulation study. The results indicate that, with the help of the discovered knowledge, the public buffer hit ratio can be improved significantly, while the added computational cost is small compared with the gain in hit ratio. In some situations, the time cost of the data-mining-based buffer management policy is even lower than that of the simplest buffer management policy.
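A sketch of sequence-driven placement: frequent page-access pairs mined from past transactions drive prefetch into the public buffer, alongside an ordinary LRU replacement. The mining is reduced to pair counting for illustration; the paper's discovery methods are richer.

```python
from collections import Counter, OrderedDict

history = ["p1 p2 p5", "p1 p2 p7", "p3 p1 p2"]  # past transactions
pairs = Counter()
for txn in history:
    seq = txn.split()
    pairs.update(zip(seq, seq[1:]))             # mined access-pair knowledge

def successor(page: str) -> str | None:
    cands = [(count, b) for (a, b), count in pairs.items() if a == page]
    return max(cands)[1] if cands else None

buffer, CAPACITY = OrderedDict(), 3
def access(page: str):
    for p in (page, successor(page)):           # demand page + mined prefetch
        if p is None:
            continue
        buffer[p] = True
        buffer.move_to_end(p)
        if len(buffer) > CAPACITY:
            buffer.popitem(last=False)          # evict least-recently used

access("p1")
print(list(buffer))  # -> ['p1', 'p2']: p2 prefetched from mined knowledge
```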

17.
Various attacks are designed to gain access to the assets of Java Card Platforms. These attacks use software, hardware, or a combination of both. Manufacturers have improved their countermeasures to protect card assets from these attacks. In this paper, we attempt to gain access to the assets of a recent Java Card Platform by combining various logical attacks. As we did not have any information about the internal structure of the targeted platform, we had to execute various attacks and analyze the results. Our investigation of the targeted Java Card Platform led us to introduce two generic methods for gaining access to the assets of Java Card Platforms. The first method we present is based on misusing the Java Card API to build a type confusion and gain access to the objects (including cryptographic keys) of a Java Card applet. The other is a new approach to accessing the return addresses of methods on Java Cards with the Separate Stack countermeasure. We also describe the pattern that the targeted platform uses to store the data and code of applets on the card, along with the ability to read and write in the data and code areas of applets in different security contexts. These attacks succeed even in the presence of countermeasures such as a Separate Stack for kernel and user data, indirect mapping for object addressing, and firewall mechanisms.

18.
We describe the evolution of a distributed shared memory (DSM) system, Mirage, and the difficulties encountered when moving the system from a Unix-based kernel on the VAX to a Unix-based kernel on personal computers. Mirage provides a network-transparent form of shared memory for a loosely coupled environment. The system hides network boundaries for processes that are accessing shared memory and is upward compatible with the Unix System V Interface Definition. This paper addresses the architectural dependencies in the design of the system and evaluates the performance of the implementation. The new version, MIRAGE+, performs well compared to Mirage even though eight times as much data is sent on each page fault because of the larger page size used in the implementation. We show that the performance of systems with a large ratio of page size to network packet size can be dramatically improved on conventional hardware by applying three well-known techniques: packet blasting, compression, and running at interrupt level. The measured time for a page fault in MIRAGE+ was reduced by 37 per cent by sending a page using packet blasting instead of using a handshake for each portion of the page. When compression was added to MIRAGE+, the time to fault a page across the network improved by a further 47 per cent when the page compressed into one network packet. Our measured performance compares favorably with the time it takes to fault a page from disk. Lastly, running at interrupt level may improve performance by 16 per cent when faulting pages without compression.
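A back-of-the-envelope illustration of why compression helped: if a faulted page compresses into a single network packet, the multi-packet transfer and its per-packet overhead disappear. The page and packet sizes below are illustrative, not Mirage's actual parameters.

```python
import zlib

PAGE_SIZE = 8192          # a larger page size, as on the PC port
PACKET_PAYLOAD = 1450     # roughly an Ethernet MTU minus headers

page = (b"struct entry { int key; int val; };\n" * 300)[:PAGE_SIZE]
compressed = zlib.compress(page, level=6)

packets_raw = -(-PAGE_SIZE // PACKET_PAYLOAD)        # ceiling division
packets_zip = -(-len(compressed) // PACKET_PAYLOAD)
print(f"raw: {packets_raw} packets; compressed: {len(compressed)} bytes "
      f"-> {packets_zip} packet(s)")
```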

19.
This article describes a method for recovering deleted Oracle data files on EXT3 file systems from the journal, studies methods for recovering and verifying Oracle data files based on the storage characteristics of data pages, and presents the overall execution flowchart of the EXT3-based Oracle data file recovery and verification software. The software can use the address pointers saved in the journal file to locate the data of a deleted file scattered across a disk partition and restore it into a complete database file. It can also exploit the storage characteristics of Oracle data pages to extract, to the greatest extent possible, the data-page fragments distributed over the partition and assemble them into a file, from which self-developed verification software then identifies the valid data records.

20.
In a network public-opinion analysis system, the Web page preprocessing scheme is implemented with information extraction techniques based on Web page structure analysis, together with data storage techniques. Drawing on the internal structure of HTML pages, a Web page information parsing template based on HTML DOM node paths is designed for page information extraction. A linkage mechanism between pages is established through a study of page URL features and applied to database access, improving efficiency.
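A minimal sketch of a node-path parsing template: the template maps field names to DOM node paths (XPath here), so extraction reduces to a dictionary of lookups. The template paths and the sample document are illustrative, and lxml is assumed available.

```python
from lxml import html

TEMPLATE = {            # field -> DOM node path
    "title": "/html/body/div[1]/h1",
    "date":  "/html/body/div[1]/span[@class='date']",
    "body":  "/html/body/div[2]/p",
}

def extract(page: str) -> dict:
    tree = html.fromstring(page)
    return {field: [n.text_content().strip() for n in tree.xpath(path)]
            for field, path in TEMPLATE.items()}

doc = """<html><body>
  <div><h1>Forum post title</h1><span class="date">2024-01-01</span></div>
  <div><p>First paragraph.</p><p>Second paragraph.</p></div>
</body></html>"""
print(extract(doc))
```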
