Similar Literature
20 similar documents retrieved.
1.
Drive-by-download malware exposes internet users to infection of their personal computers, which can occur simply by visiting a website containing malicious content, and poses a major threat to the user's most sensitive information. Popular browsers such as Firefox, Internet Explorer and Maxthon have extensions that block JavaScript, Flash and other executable content. Some extensions globally block all dynamic content, while in others the user must specifically enable the content for each site he or she trusts. Since most web pages today contain dynamic content, disabling it damages user experience and page usability, which prevents many users from installing security extensions. We propose a novel approach, based on Social Network Analysis parameters, that predicts the user trust perspective for the HTML page currently being viewed. Our system examines the reputation of the URL in the browser's address bar and of each URL inside the HTML, and marks the page as trusted only if all of them have a reputation above a predetermined threshold. Each URL's reputation is calculated from the number and quality of links on the whole web pointing back to that URL. The method was evaluated on a corpus of 44,429 malware domains and on the 2,000 most popular Alexa sites. Our system enabled dynamic content on 70% of the most popular websites and blocked 100% of the malware web pages, all without any user intervention. Our approach can augment most browser security applications and enhance their effectiveness, thus encouraging more users to install these important applications.
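A minimal sketch of the trust decision described in this abstract, assuming a hypothetical reputation lookup (get_reputation) and an illustrative threshold; it is not the authors' implementation, only the all-URLs-above-threshold rule they describe.

```python
import re

THRESHOLD = 0.6  # hypothetical reputation threshold; the paper's value is not given here

def get_reputation(url: str) -> float:
    """Placeholder for a link-based reputation score (e.g. derived from the
    number and quality of backlinks). A real system would query a reputation
    service or a precomputed index."""
    raise NotImplementedError

def page_is_trusted(address_bar_url: str, html: str) -> bool:
    # Collect the address-bar URL plus every URL referenced inside the page.
    inner_urls = re.findall(r'https?://[^\s"\'<>]+', html)
    candidates = [address_bar_url] + inner_urls
    # The page is marked trusted only if *all* URLs clear the threshold.
    return all(get_reputation(u) >= THRESHOLD for u in candidates)
```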

2.
Enhancing web browser security against malware extensions
In this paper we examine security issues of functionality extension mechanisms supported by web browsers. Extensions (or “plug-ins”) in modern web browsers enjoy unrestrained access at all times and thus are attractive vectors for malware. To substantiate this claim, we take on the role of malware writers looking to assume control of a user's browser space. Taking advantage of the lack of security mechanisms for browser extensions, we implemented a malware application for the popular Firefox web browser, which we call browserSpy, that requires no special privileges to be installed. browserSpy takes complete control of the user's browser space, can observe all activity performed through the browser and is undetectable. We then adopt the role of defenders to discuss defense strategies against such malware. Our primary contribution is a mechanism that uses code integrity checking techniques to control the extension installation and loading process. We describe two implementations of this mechanism: a drop-in solution that employs JavaScript and a faster, in-browser solution that makes use of the browser's native cryptography implementation. We also discuss techniques for runtime monitoring of extension behavior to provide a foundation for defending against threats posed by installed extensions.
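The following is a rough sketch of the code-integrity idea above, assuming the files of an installed extension are hashed and compared against a known-good digest; the authors' actual implementations (a JavaScript drop-in and one using the browser's native cryptography) are not reproduced here.

```python
import hashlib
from pathlib import Path

# Hypothetical whitelist mapping extension IDs to the expected digest of their code.
APPROVED_DIGESTS = {
    "example@extension": "9f2c..."   # placeholder value
}

def extension_digest(ext_dir: str) -> str:
    """Hash every file of the extension in a stable order."""
    h = hashlib.sha256()
    for path in sorted(Path(ext_dir).rglob("*")):
        if path.is_file():
            h.update(path.read_bytes())
    return h.hexdigest()

def may_load(ext_id: str, ext_dir: str) -> bool:
    # Refuse to load an extension whose code no longer matches the approved digest.
    return APPROVED_DIGESTS.get(ext_id) == extension_digest(ext_dir)
```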

3.
Control flow is often a key problem in current web applications: using the back button gives a POSTDATA error, using multiple windows books the wrong hotel, and sending a link to a friend does not work. Previous solutions used continuations as a model for user interaction, but continuations are insufficient as a model of all web interactions. We believe the protocol and browsers themselves are insufficiently powerful to represent the control flow desired in a web application. Our solution is to extend the protocol and browser sufficiently that these problems can be avoided. We seek to be agnostic about how web applications are written and instead recognise that many of the problems stem from underlying weaknesses in the protocol. As an example, the application ought to be able to inform the browser that pressing back on a payment confirmation page is not allowed; instead, the cached page can be displayed to the user in a read-only, archival fashion, or a new page consistent with the global state can be shown. We discuss how some of these ideas may be implemented within the existing HTTP/1.1 protocol, and what modest extensions to the protocol would enable a full implementation. We also discuss the interaction with Web 2.0 and the security and privacy implications of our extensions.
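Purely as an illustration of the kind of protocol extension argued for here, the sketch below attaches a hypothetical History-Control response header to a payment-confirmation page; neither the header name nor its values exist in HTTP/1.1.

```python
# Hypothetical response header for a payment-confirmation page, standing in for
# the kind of protocol extension discussed above. "History-Control" and its
# values ("archive" = show the cached copy read-only, "deny" = never re-show)
# are NOT part of HTTP/1.1; they only illustrate the idea.
def confirmation_response(body: str) -> tuple[int, dict, str]:
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        "History-Control": "archive",   # hypothetical extension header
        "Cache-Control": "no-store",    # real header: do not reuse the response
    }
    return 200, headers, body
```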

4.
Web users can use the browser bookmarks (favorites) folder to save web pages and quickly access their content. Studying user behavior based on bookmarks can inform work on user personalization, web page quality assessment, and the construction of large-scale web directories. Using bookmark data from nearly 270,000 users, this paper studies bookmarking behavior from three perspectives: organizational structure, bookmarked content, and user interests. First, we propose a bookmark browse-and-click model and analyze the structural characteristics and usage efficiency of bookmark collections; second, by comparing with PageRank values, we find that users tend to bookmark high-quality web resources; finally, we analyze the interest distribution of bookmark users with reference to the ODP.

5.
6.
Social recommender systems largely rely on user-contributed data to infer users' preferences. While this feature has enabled many interesting applications in social networking services, it also introduces unreliability to recommenders, since users are allowed to insert data freely. Although detecting malicious attacks from social spammers has been studied for years, little work has been done on detecting Noisy but Non-Malicious Users (NNMUs), i.e., genuine users who may provide some untruthful data due to their imperfect behaviors. Unlike colluded malicious attacks, which can be detected by finding similarly-behaved user profiles, NNMUs are more difficult to identify since their profiles are neither similar to nor correlated with one another. In this article, we study how to detect NNMUs in social recommender systems. Based on the assumption that the ratings provided by the same user on closely correlated items should have similar scores, we propose an effective method for NNMU detection by capturing and accumulating a user's “self-contradictions”, i.e., the cases in which a user provides very different rating scores on closely correlated items. We show that self-contradiction capturing can be formulated as a constrained quadratic optimization problem w.r.t. a set of slack variables, which can further be used to quantify the underlying noise in each test user profile. We adopt three real-world data sets to empirically test the proposed method. The experimental results show that our method (i) is effective in real-world NNMU detection scenarios, (ii) can significantly outperform other noisy-user detection methods, and (iii) can improve recommendation performance for other users after the detected NNMUs are removed from the recommender system.
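A simplified sketch of the self-contradiction idea, assuming an item-item correlation matrix and a correlation threshold are available; the paper's constrained quadratic optimization over slack variables is replaced here by a plain accumulation of disagreements.

```python
import numpy as np

def contradiction_score(user_ratings: dict, item_corr: np.ndarray,
                        corr_threshold: float = 0.8) -> float:
    """Simplified stand-in for the paper's slack-variable formulation:
    accumulate how far a user's ratings on highly correlated item pairs
    disagree. user_ratings maps item index -> rating score."""
    items = list(user_ratings)
    score = 0.0
    for a in range(len(items)):
        for b in range(a + 1, len(items)):
            i, j = items[a], items[b]
            if item_corr[i, j] >= corr_threshold:
                # Closely correlated items rated very differently -> contradiction.
                score += item_corr[i, j] * abs(user_ratings[i] - user_ratings[j])
    return score
```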

7.
This paper provides an empirical characterization of user actions at the web browser. The study is based on an analysis of 4 months of logged client-side data describing user actions with recent versions of Netscape Navigator. In particular, the logged data allow us to determine the title, URL and time of each page visit, how often users visited each page, how long they spent at each page, the growth and content of bookmark collections, and a variety of other aspects of user interaction with the web. The results update and extend prior empirical characterizations of web use. Among the results, we show that web page revisitation is a much more prevalent activity than previously reported (approximately 81% of pages have been previously visited by the user), that most pages are visited for a surprisingly short period of time, that users maintain large (and possibly overwhelming) bookmark collections, and that there is a marked lack of commonality in the pages visited by different users. These results have implications for a wide range of web-based tools, including the interface features provided by web browsers, the design of caching proxy servers, and the design of efficient web sites.

8.
Correlation-Based Web Document Clustering for Adaptive Web Interface Design
A great challenge for web site designers is ensuring that users can reach important web pages easily and efficiently. In this paper we present a clustering-based approach to this problem: we perform efficient and effective correlation analysis on web logs and construct clusters of web pages that reflect the co-visit behavior of web site users. We present a novel approach for adapting previous clustering algorithms, designed for databases, to the problem domain of web page clustering, and show that our new methods can generate high-quality clusters for very large web logs where previous methods fail. Based on the high-quality clustering results, we then apply the mined clustering knowledge to the problem of adapting web interfaces to improve users' performance. We develop an automatic method for web interface adaptation that introduces index pages minimizing overall user browsing costs. The index pages provide shortcuts that ensure users reach their target pages quickly, and we solve the previously open problem of determining an optimal number of index pages. We show empirically that our approach outperforms many previous algorithms in experiments on several realistic web log files. Received 25 November 2000 / Revised 15 March 2001 / Accepted in revised form 14 May 2001
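A toy sketch of the co-visit analysis and index-page idea, under the assumption that sessions are available as lists of page URLs; it ignores the paper's cost model and its method for choosing the optimal number of index pages.

```python
from collections import Counter
from itertools import combinations

def covisit_counts(sessions):
    """Count how often each pair of pages is visited in the same session.
    sessions: iterable of lists/sets of page URLs."""
    counts = Counter()
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            counts[(a, b)] += 1
    return counts

def build_index_page(sessions, size):
    """Toy index-page builder: pick the pages most often co-visited with others
    as shortcut candidates. The paper additionally optimizes the *number* of
    index pages against overall browsing cost; that step is omitted here."""
    popularity = Counter()
    for (a, b), c in covisit_counts(sessions).items():
        popularity[a] += c
        popularity[b] += c
    return [page for page, _ in popularity.most_common(size)]
```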

9.
We propose a new way of browsing bilingual web sites: concurrent browsing with automatic similar-content synchronization and viewpoint retrieval facilities. Our prototype browser system, called the Bilingual Comparative Web Browser (B-CWB), concurrently presents bilingual web pages in a way that enables their contents to be automatically synchronized. The B-CWB allows users to browse multiple web news sites concurrently and compare the viewpoints of news articles written in different languages (English and Japanese). Our viewpoint retrieval is based on detecting similarity and difference, and categorizes pages in terms of viewpoint: overall similarity, content difference, and subject difference. Content synchronization means that a user operation (scrolling or clicking) on one web page does not necessarily invoke the same operation on the other web page; rather, the operation that preserves similarity of content between the pages is invoked. For example, scrolling a web page may invoke passage-level viewpoint retrieval on the other web page, while clicking a web page (and obtaining a new web page) invokes page-level viewpoint retrieval within the other site's pages through the use of an English-Japanese dictionary.

10.
Distributed Denial of Service (DDoS) is one of the most damaging attacks on Internet security today. Recently, malicious web crawlers have been used to execute automated DDoS attacks on web sites across the WWW. In this study we examine the effect of applying seven well-established data mining classification algorithms to static web server access logs in order to: (1) classify user sessions as belonging to either automated web crawlers or human visitors and (2) identify which of the automated web crawler sessions exhibit ‘malicious’ behavior and are potentially participants in a DDoS attack. The classification performance is evaluated in terms of classification accuracy, recall, precision and F1 score. Seven of the nine vector (i.e. web-session) features employed in our work are borrowed from earlier studies on classifying user sessions as belonging to web crawlers. However, we also introduce two novel web-session features: the consecutive sequential request ratio and the standard deviation of page request depth. The effectiveness of the new features is evaluated in terms of the information gain and gain ratio metrics. The experimental results demonstrate the potential of the new features to improve the accuracy of data mining classifiers in identifying malicious and well-behaved web crawler sessions.
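A small sketch of how the two proposed web-session features could be computed from a session's ordered request paths; the depth and "consecutive sequential" definitions used here are assumptions, since the abstract does not spell them out.

```python
import statistics

def session_features(requests):
    """Compute the two features proposed above from an ordered list of
    requested paths in one session (e.g. ["/a", "/a/b", "/c"]).
    Depth = number of path segments; a request counts as 'consecutive
    sequential' here if it goes exactly one level deeper than the previous
    one -- a simplifying assumption."""
    depths = [len([p for p in path.split("/") if p]) for path in requests]
    sequential = sum(1 for prev, cur in zip(depths, depths[1:]) if cur == prev + 1)
    ratio = sequential / max(len(requests) - 1, 1)
    depth_std = statistics.pstdev(depths) if depths else 0.0
    return {"consecutive_sequential_request_ratio": ratio,
            "page_request_depth_std": depth_std}
```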

11.
The critical infrastructures of industrialized countries depend on an interconnected “system of systems” that physically spans the globe and exists virtually everywhere. Often, the governments and industries responsible for the critical infrastructures are unaware of the extension and integration of the “web” on which their infrastructures rely. The safety of the citizens and the economies of these countries rely on the continuous operation of these systems to provide the services vital to the country and its people. The proliferation of interconnectivity and dependencies of these critical infrastructure systems has led to vulnerabilities, with attacks possible against nearly every country's critical infrastructures from nearly anywhere in the world. The vulnerable infrastructures' systems include banking and financial institutions, transportation systems, electricity and natural gas generation and distribution, petroleum distribution, and phone and cell communications. The threats may originate from unintended incidents, attacks by criminals, terrorists, and adversarial foreign nations with malicious intent, any of which could cause significant damage, bringing the critical systems to a halt. This literature review looks at various aspects of critical infrastructures and the governmental approaches to address the problem of government dependency on the private sector's ownership and control of critical infrastructures.

12.
Web spam denotes the manipulation of web pages with the sole intent to raise their position in search engine rankings. Since a better position in the rankings directly and positively affects the number of visits to a site, attackers use different techniques to boost their pages to higher ranks. In the best case, web spam pages are a nuisance that provide undeserved advertisement revenue to the page owners. In the worst case, these pages pose a threat to Internet users by hosting malicious content and launching drive-by attacks against unsuspecting victims. When successful, these drive-by attacks install malware on the victims' machines. In this paper, we introduce an approach to detect web spam pages in the list of results returned by a search engine. In the first step, we determine the importance of different page features to the ranking in search engine results. Based on this information, we develop a classification technique that uses the important features to successfully distinguish spam sites from legitimate entries. By removing spam sites from the results, more slots are available for links that point to pages with useful content. Additionally, and more importantly, the threat posed by malicious web sites can be mitigated, reducing the risk of users getting infected by malicious code that spreads via drive-by attacks.
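A sketch of the two-step idea above (rank feature importance, then classify with the important features), using scikit-learn as an assumed tool; X, y and feature_names are placeholders for page-feature vectors and spam/legitimate labels, and the classifier choice is not prescribed by the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_spam_classifier(X, y, feature_names, top_k=10):
    """X: numpy array of page-feature vectors, y: spam/legitimate labels."""
    # Step 1: estimate how important each page feature is.
    ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    top = np.argsort(ranker.feature_importances_)[::-1][:top_k]
    print("most important features:", [feature_names[i] for i in top])
    # Step 2: train the final classifier on the most important features only.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, top], y)
    return clf, top
```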

13.
14.
Network information security bears on national security and social stability, and it is becoming increasingly important as global informatization accelerates. The main threats to network information security are: inherent security vulnerabilities, malicious attacks by hackers, shortcomings in the management of the network itself, traps set by malicious websites, user operation errors, and security problems caused by the misconduct of insiders. Countermeasures for ensuring network information security include: encrypting information, installing anti-virus software and firewalls, applying operating system patches, deploying intrusion detection systems, hiding IP addresses, changing the administrator account, and not casually replying to e-mail from strangers.

15.
In this paper, we present a new rule-based method to detect phishing attacks in internet banking. Our method uses two novel feature sets proposed to determine webpage identity: four features that evaluate the identity of page resources, and four features that identify the access protocol of page resource elements. In the first feature set, we use approximate string matching algorithms to determine the relationship between the content and the URL of a page. Our proposed features are independent of third-party services such as search engine results or web browser history. We employ a support vector machine (SVM) algorithm to classify webpages. Our experiments indicate that the proposed model can detect phishing pages in internet banking with a 99.14% true positive rate and only a 0.86% false negative rate. A sensitivity analysis demonstrates the significant impact of our proposed features compared with traditional features. We extracted the hidden knowledge from the proposed SVM model by adopting a related method and embedded the extracted rules into a browser extension named PhishDetector to make our method more functional and easy to use. Evaluation of the implemented browser extension indicates that it can detect phishing attacks in internet banking with high accuracy and reliability, and that PhishDetector can detect zero-day phishing attacks as well.
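A toy illustration of one of the proposed ideas: relating a page's content to its URL with approximate string matching. Here difflib stands in for the unspecified matching algorithm, and comparing the host name against the page title is an assumption, not the paper's exact feature definition.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

def url_content_similarity(url: str, page_title: str) -> float:
    """Approximate string similarity between the page's host name and its title."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    return SequenceMatcher(None, host, page_title.lower()).ratio()

# e.g. a page titled "Example Bank - Login" served from examplebank.com scores
# higher than the same title served from a look-alike phishing domain.
```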

16.
Web proxies provide a quick relay service; compared with other kinds of proxy services such as anonymity networks, VPN services, and SOCKS proxies, they can be used free of charge without installing any software. Web proxies therefore offer unmatched convenience for bypassing access restrictions and hiding one's identity. However, they also pose serious security threats to users' online lives by harvesting private information, pushing spam advertisements, and concealing traces of activity. How to quickly and effectively distinguish them from the large number of normal web pages has thus become an important challenge for cyberspace security. To address this problem, this paper proposes ProxyMiner, a web proxy discovery method based on multi-dimensional feature analysis. For active discovery, structural and content features specific to web proxies are introduced and used to train a machine-learning predictor. For passive discovery, based on the distinctive access patterns of users visiting web proxies, a bipartite graph is built and spectral clustering is applied to proxy users to obtain the top-level domains visited by groups of proxy users, thereby discovering web proxies. The method relies only on the client IP address and the target URL, and requires no information from HTTP headers (which are often maliciously modified) or packets (which are usually encrypted or unavailable). Experimental results show that, on the same data set, ProxyMiner significantly improves web proxy detection and reduces the average detection time compared with traditional detection methods.
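A minimal sketch of the passive-discovery step, clustering client IPs and visited domains jointly from (client IP, domain) pairs; scikit-learn's SpectralCoclustering is an assumed stand-in for the paper's bipartite-graph spectral clustering, not its implementation.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

def cocluster_visits(pairs, n_clusters=5):
    """pairs: iterable of (client_ip, domain) tuples extracted from traffic logs."""
    ips = sorted({ip for ip, _ in pairs})
    domains = sorted({d for _, d in pairs})
    ip_idx = {ip: i for i, ip in enumerate(ips)}
    dom_idx = {d: j for j, d in enumerate(domains)}
    A = np.zeros((len(ips), len(domains)))
    for ip, d in pairs:
        A[ip_idx[ip], dom_idx[d]] += 1          # bipartite visit matrix
    model = SpectralCoclustering(n_clusters=n_clusters, random_state=0).fit(A)
    # Domains that co-cluster with suspected proxy users are proxy candidates.
    return {c: [domains[j] for j in np.where(model.column_labels_ == c)[0]]
            for c in range(n_clusters)}
```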

17.
Based on the open-source Firefox browser, this paper analyzes and summarizes the behavior of browser extensions running inside Firefox, classifies the behaviors of malicious browser extensions, and builds a state-transition behavior model of malicious browser extensions to describe their malicious behavior comprehensively. This lays a foundation for subsequent work on detecting malicious browser extensions, with the aim of building and improving a secure browser.

18.
To help users quickly reach the pages they need under low-bandwidth, high-latency conditions and automatically obtain the parts of a page they are interested in, this paper proposes a page decomposition algorithm based on Web Components. The algorithm first normalizes the HTML page into XHTML, then generates an XML DOM tree from the XHTML page, from which it analyzes and extracts Web Components as independent entities and assigns identifiers to them; finally, it stores the page structure, the Web Components, and related information in a database for building personalized portals.
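A rough sketch of the decomposition pipeline, assuming the page has already been normalized to well-formed XHTML (the paper's first step); treating each top-level div/table/form as a Web Component is an assumption standing in for the paper's extraction rules, which are not given in the abstract.

```python
import uuid
import xml.etree.ElementTree as ET

def decompose(xhtml: str):
    """Extract coarse-grained components from a well-formed XHTML document."""
    root = ET.fromstring(xhtml)
    body = root.find(".//{*}body")
    if body is None:
        body = root
    components = []
    for element in list(body):                  # direct children of <body>
        if element.tag.split("}")[-1] in ("div", "table", "form"):
            comp_id = uuid.uuid4().hex          # assign an identifier
            components.append({"id": comp_id,
                               "tag": element.tag,
                               "xml": ET.tostring(element, encoding="unicode")})
    return components   # would be persisted to a database for portal building
```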

19.
Certificateless cryptography solves the certificate-management and key-escrow problems simultaneously, but its security model always assumes that a Type II adversary (a malicious key generation center, KGC) will not launch public key replacement attacks, an assumption with clear limitations in practice. The Chinese national SM9 signature scheme is an efficient identity-based scheme that uses the secure and fast R-ate bilinear pairing, but the KGC generates and manages users' keys, so the key-escrow problem remains. To address these issues, this paper proposes a certificateless signature scheme resistant to a malicious KGC, based on the blockchain and the SM9 signature scheme. Exploiting the decentralization and tamper-resistance of the blockchain, the scheme uses a smart contract to record on chain the partial public key corresponding to a user's secret value; during verification, the verifier queries the user's public key by calling the smart contract, which guarantees the key's authenticity. The user's private key consists of a partial private key generated by the KGC and a secret value chosen randomly by the user; the KGC generates the partial private key only when the user first obtains a key, endorsing the user's identifier. Afterwards the user can autonomously update the private key by changing the secret value and the corresponding partial public key recorded on chain, while the identifier remains unchanged, providing a key-management solution for decentralized application scenarios. The blockchain relies on a consensus mechanism to keep distributed data consistent, and the change log of users' partial public keys is stored on chain; thanks to the traceability of the blockchain, malicious public key replacement attacks can be traced, which prevents a malicious KGC from launching such attacks. Simulation results and security proofs show that the total cost of signing and verification in the proposed scheme is only 7.4 ms; compared with similar certificateless signature schemes, it effectively resists public key replacement attacks and has high computational efficiency.
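As a purely conceptual, off-chain stand-in for the smart-contract registry described above, the sketch below keeps an append-only log of (identity, partial public key) records so that any key replacement remains traceable; it contains no SM9, pairing, or blockchain code and only illustrates the registry's role in the scheme.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PartialKeyRegistry:
    log: List[Tuple[str, bytes]] = field(default_factory=list)   # append-only change log
    current: Dict[str, bytes] = field(default_factory=dict)

    def publish(self, identity: str, partial_pub: bytes) -> None:
        self.log.append((identity, partial_pub))   # old records are never overwritten
        self.current[identity] = partial_pub

    def lookup(self, identity: str) -> bytes:
        return self.current[identity]              # used by the verifier

    def history(self, identity: str) -> List[bytes]:
        # Traceability: every partial public key ever published for this identity.
        return [pk for ident, pk in self.log if ident == identity]
```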

20.
Embedded browsers in digital television sets usually handle broadcast data in a way that is tightly coupled with the browser, which breaks the browser's independence and leads to inefficient duplicated development. This paper designs a method that lets an embedded browser handle Internet data and broadcast data independently at the same time. The method defines a custom “cable” communication protocol, through which broadcast data can be downloaded and accessed in real time from Internet web pages, and Internet data can be accessed in real time from data-broadcast pages. The method supports downloading and storing broadcast data quickly in the application layer, outside the browser kernel, completely independent of the browser's handling of standard Internet data. Practical applications show that the design has a clear structure, processes data efficiently, extends well to new applications, and shortens development cycles.
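The sketch below illustrates the scheme-based dispatch that a custom “cable” protocol implies: URLs using a cable scheme are routed to a broadcast-data handler in the application layer, while ordinary URLs take the normal HTTP path. The cable:// syntax and the handler functions are assumptions, not the paper's code.

```python
from urllib.parse import urlparse

def fetch_http(url: str) -> bytes: ...       # ordinary Internet fetch (placeholder)
def fetch_broadcast(url: str) -> bytes: ...  # read from the broadcast data store (placeholder)

# Dispatch table: handlers live at the application layer, outside the browser kernel.
HANDLERS = {"http": fetch_http, "https": fetch_http, "cable": fetch_broadcast}

def fetch(url: str) -> bytes:
    scheme = urlparse(url).scheme
    try:
        return HANDLERS[scheme](url)
    except KeyError:
        raise ValueError(f"unsupported scheme: {scheme}")
```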
