Similar Documents
20 similar documents were found (search time: 31 ms).
1.
Worms pose an increasingly serious threat to the Internet. It is therefore essential to detect worm propagation at an early stage and to contain and isolate it effectively. Worm propagation generally relies on random scanning, which produces anomalous traffic in the network. Exploiting these common characteristics of worm propagation, we propose a novel worm detection mechanism that reduces computational complexity while achieving a low false alarm rate and real-time detection.

2.
Internet attacks such as distributed denial-of-service (DDoS) attacks and worm attacks are increasing in severity. Real-time detection and mitigation of attacks in Internet traffic is an important and challenging problem. For example, a compromised host doing fast scanning for worm propagation often makes an unusually high number of connections to distinct destinations within a short time. We call such a host a superpoint: a source that connects to a large number of distinct destinations. Detecting superpoints is very important in developing effective and efficient traffic engineering schemes. We propose two novel schemes for detecting superpoints and prove guarantees on their accuracy and memory requirements. These schemes are implemented with a reversible counting Bloom filter (RCBF), a special counting Bloom filter. The RCBF uses four hash functions, each of which selects consecutive bits of the original string as its value. Superpoints are recovered from the overlap of the RCBF's hash bit strings. Theoretical analysis and experimental results show that our schemes detect superpoints precisely and efficiently.
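The paper's reversible counting Bloom filter is not reproduced here, but the core idea of superpoint detection can be illustrated with a minimal sketch: keep a compact per-source Bloom-style sketch of the destinations seen and flag a source once its estimated distinct-destination count crosses a threshold. All class names, parameters, and thresholds below are illustrative assumptions, not the RCBF construction itself.

```python
import hashlib

class SuperpointDetector:
    """Toy superpoint detector: per-source destination sketches (not the paper's RCBF)."""

    def __init__(self, bits=1024, hashes=4, threshold=100):
        self.bits = bits            # size of each per-source destination sketch
        self.hashes = hashes        # hash positions set per destination
        self.threshold = threshold  # distinct-destination estimate that flags a superpoint
        self.state = {}             # src -> (set of set bit positions, distinct estimate)

    def _positions(self, dst):
        digest = hashlib.sha1(dst.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.bits
                for i in range(self.hashes)]

    def observe(self, src, dst):
        bitset, count = self.state.get(src, (set(), 0))
        pos = self._positions(dst)
        if not all(p in bitset for p in pos):  # destination probably not seen before
            bitset.update(pos)
            count += 1
        self.state[src] = (bitset, count)
        return count >= self.threshold         # True once src looks like a superpoint

# A fast scanner contacting many distinct destinations is flagged quickly.
det = SuperpointDetector(threshold=50)
flagged = False
for i in range(60):
    flagged = det.observe("192.168.1.5", f"10.0.{i // 256}.{i % 256}")
print("superpoint?", flagged)
```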

3.
Among the most serious security threats on the Internet are Distributed Denial of Service (DDoS) attacks, due to the significant service disruption they can create and the difficulty of preventing them. In this paper, we propose new deterministic packet marking models in order to characterize DDoS attack streams. Such a common characterization can be used to make filtering near the victim more effective. In this direction we propose a rate control scheme that protects destination domains by limiting the amount of traffic during an attack, while leaving a large percentage of legitimate traffic unaffected. These features enable providers to offer enhanced protection against such attacks as a value-added service to their customers, and hence give them positive incentives to deploy the proposed models. We evaluate the proposed marking models using a snapshot of the actual Internet topology, in terms of how well they differentiate attack traffic from legitimate traffic under full and partial deployment.
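The abstract does not specify the marking models themselves; the sketch below only illustrates the general pattern it describes: ingress routers stamp a deterministic identifier into each packet, and the victim's domain rate-limits traffic per mark during an attack so that streams sharing an implicated mark are throttled while the rest pass. The header field, the token-bucket limiter, and all parameters are assumptions for illustration.

```python
import time

class MarkingRouter:
    """Stamps a fixed, deterministic identifier into every forwarded packet."""
    def __init__(self, router_id):
        self.mark = router_id

    def forward(self, packet):
        packet["mark"] = self.mark   # hypothetical header field carrying the mark
        return packet

class VictimRateController:
    """Token-bucket rate limit applied per mark near the victim during an attack."""
    def __init__(self, pkts_per_second):
        self.rate = pkts_per_second
        self.buckets = {}            # mark -> (tokens, last refill time)

    def accept(self, packet):
        mark = packet.get("mark")
        now = time.monotonic()
        tokens, last = self.buckets.get(mark, (self.rate, now))
        tokens = min(self.rate, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[mark] = (tokens - 1.0, now)
            return True              # within this mark's allowance
        self.buckets[mark] = (tokens, now)
        return False                 # excess traffic from this ingress point is dropped

edge = MarkingRouter(router_id=0x2A)
limiter = VictimRateController(pkts_per_second=100)
accepted = sum(limiter.accept(edge.forward({"payload": b"..."})) for _ in range(1000))
print(f"{accepted} of 1000 back-to-back packets accepted from one mark")
```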

4.
The Denial-of-Service (DoS) attack is a challenging problem in the current Internet. Many schemes have been proposed to trace spoofed (forged) attack packets back to their sources. Among them, hop-by-hop schemes are less vulnerable to router compromise than packet marking schemes, but they require accurate attack signatures, high storage or bandwidth overhead, and cooperation of many ISPs. In this paper, we propose honeypot back-propagation, an efficient hop-by-hop traceback mechanism in which accurate attack signatures are obtained by a novel leverage of the roaming honeypots scheme. The reception of attack packets by a roaming honeypot (a decoy machine camouflaged within a server pool) triggers the activation of a tree of honeypot sessions rooted at the honeypot under attack toward the attack sources. The tree is formed hierarchically, first at the Autonomous System (AS) level and then at the router level. Honeypot back-propagation supports incremental deployment by providing incentives for ISPs even with partial deployment. Against low-rate attackers, most traceback schemes would take a long time to collect the needed number of packets. To address this problem, we also propose progressive back-propagation to handle low-rate attacks, such as on-off attacks with short bursts. Analytical and simulation results demonstrate the effectiveness of the proposed schemes under a variety of DDoS attack scenarios.

5.
《Parallel Computing》1997,23(6):777-781
In this paper we present a new bypass queue scheme for an input buffered nonblocking packet switch operating under bursty traffic. The proposed scheme uses first-in-first-out (FIFO) queues and is thus more efficient for implementation as compared to other schemes which use first-in-random-out (FIRO) queues. Maximum throughput comparison of the proposed scheme with the conventional scheme shows significant improvement.

6.
This article presents evaluations of an immunity-based anomaly detection method with dynamic updating of profiles. Our experiments showed that the updating of both self and nonself profiles markedly decreased both the false alarm and missed alarm rates in masquerader detection. In computer worm detection, all the random-scanning worms and simulated metaserver worms examined were detected. The detection accuracy of the simulated passive worm was markedly improved.

7.
《Computer Networks》2007,51(5):1256-1274
As next-generation computer worms may spread within minutes to millions of hosts, protection via human intervention is no longer an option. We discuss the implementation of SweetBait, an automated protection system that employs low- and high-interaction honeypots to recognise and capture suspicious traffic. After discarding whitelisted patterns, it automatically generates worm signatures. To provide a low response time, the signatures may be immediately distributed to network intrusion detection and prevention systems. At the same time the signatures are continuously refined for increased accuracy and lower false identification rates. By monitoring signature activity and predicting ascending or descending trends in worm virulence, we are able to sort signatures in order of urgency. As a result, the set of signatures to be monitored or filtered is managed in such a way that new and very active worms are always included in the set, while the size of the set is bounded. SweetBait is deployed on medium sized academic networks across the world and is able to react to zero-day worms within minutes. Furthermore, we demonstrate how globally sharing signatures can help immunise parts of the Internet.

8.
An Automated Signature-Based Approach against Polymorphic Internet Worms
Capable of infecting hundreds of thousands of hosts, worms represent a major threat to the Internet. However, the defense against them is still an open problem. This paper attempts to answer an important question: How can we distinguish polymorphic worms from normal background traffic? We propose a new worm signature, called the position-aware distribution signature (PADS), which fills the gap between traditional signatures and anomaly-based intrusion detection systems. The new signature is a collection of position-aware byte frequency distributions. It is more flexible than the traditional signatures of fixed strings while it is more precise than the position-unaware statistical signatures. We propose two algorithms based on expectation-maximization (EM) and Gibbs sampling to efficiently compute PADS from a set of polymorphic worm samples. We also discuss how to separate a mixture of different polymorphic worms such that their respective PADS signatures can be calculated. We perform extensive experiments to demonstrate the effectiveness of PADS in separating new worm variants from normal background traffic.
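As a rough illustration of what a position-aware distribution signature looks like, the hypothetical sketch below learns a per-position byte frequency distribution from already-aligned worm samples and scores a candidate byte string by its average log-likelihood under that distribution. The EM/Gibbs alignment and mixture-separation steps from the paper are omitted, and the function names and sample data are illustrative.

```python
import math
from collections import Counter

def learn_pads(samples, width, pseudocount=1.0):
    """Per-position byte distributions over the first `width` aligned byte positions."""
    dists = []
    for pos in range(width):
        counts = Counter(s[pos] for s in samples if len(s) > pos)
        total = sum(counts.values()) + pseudocount * 256
        dists.append({b: (counts.get(b, 0) + pseudocount) / total for b in range(256)})
    return dists

def pads_score(dists, data):
    """Average log-likelihood of `data` under the position-aware distributions."""
    width = min(len(dists), len(data))
    return sum(math.log(dists[i][data[i]]) for i in range(width)) / width

# Toy, pre-aligned "worm" samples; real use would first align polymorphic samples.
worm_samples = [b"GET /default.ida?XXXX", b"GET /default.ida?XYXX"]
sig = learn_pads(worm_samples, width=16)
print(pads_score(sig, b"GET /default.ida?XZZZ"))   # higher (less negative) score
print(pads_score(sig, b"POST /index.html HTTP"))   # lower score -> likely benign
```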

9.
Network worms have become a serious threat to network security. To defend against them effectively, the first task is to understand what scanning methods exist and how they affect worm propagation. To this end, this paper builds a simple discrete-time worm propagation model and validates it by comparison with real data from the Code Red worm outbreak. Based on this model, different worm scanning strategies are analyzed in detail, including uniform scanning, hit-list scanning, routable scanning, divide-and-conquer scanning, local subnet scanning, sequential scanning, and permutation scanning, and corresponding models are given for each.
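The paper's calibrated model is not reproduced here; the sketch below is a generic discrete-time model of a uniformly scanning worm of the kind the abstract describes: a vulnerable population, a 2^32 address space, and a fixed scan rate per infected host per time step. The parameter values are illustrative (loosely in the range reported for Code Red), not the paper's fitted values.

```python
# Minimal discrete-time model of a uniformly scanning worm. Each infected host
# sends `scans_per_tick` probes into a 2^32 address space per time step.
def simulate(vulnerable=360_000, scan_space=2**32, scans_per_tick=358,
             initial_infected=1, ticks=800):
    infected = float(initial_infected)
    history = [infected]
    for _ in range(ticks):
        susceptible = vulnerable - infected
        p_hit = susceptible / scan_space          # chance a single scan finds a susceptible host
        new = infected * scans_per_tick * p_hit   # expected new infections this tick
        infected = min(vulnerable, infected + new)
        history.append(infected)
    return history

curve = simulate()
print(f"infected after 400 ticks: {curve[400]:,.0f} of 360,000")
```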

10.
Traffic sampled from the network backbone using uniform packet sampling is commonly used to detect heavy hitters, estimate flow-level statistics, and identify anomalies such as DDoS attacks and worm scans. Previous work has shown, however, that this technique introduces flow bias and truncation, which yield inaccurate flow statistics and “drown out” information from small flows, leading to many false positives in anomaly detection. In this paper, we present a new sampling design: Fast Filtered Sampling (FFS), which comprises an independent low-complexity filter concatenated with any sampling scheme of choice. FFS preserves the integrity of small flows for anomaly detection, while still providing acceptable identification of heavy hitters. This is achieved through a filter design which suppresses packets from flows as a function of their size, “boosting” small flows relative to medium and large flows. FFS requires only one update operation per packet, has two simple control parameters, and can work in conjunction with existing sampling mechanisms without any additional changes. It therefore provides a lightweight online implementation of the “flow-size dependent” sampling method. Through extensive evaluation on traffic traces, we show the efficacy of FFS for applications such as portscan detection and traffic estimation.
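FFS's exact filter function is not specified in the abstract; the sketch below is a hypothetical flow-size-dependent pre-filter in the same spirit: a single counter update per packet, the first packets of every flow always pass, and the pass probability then decays with the flow's observed size, so small flows are preserved relative to heavy hitters. The decay rule and both control parameters are assumptions.

```python
import random
from collections import defaultdict

class FlowSizeFilter:
    """One counter update per packet; pass probability decays with flow size."""
    def __init__(self, keep_first=10, decay=0.05, seed=None):
        self.keep_first = keep_first   # always pass the first few packets of a flow
        self.decay = decay             # how fast the pass probability falls afterwards
        self.counts = defaultdict(int) # flow key -> packets seen so far
        self.rng = random.Random(seed)

    def admit(self, flow_key):
        self.counts[flow_key] += 1            # the single per-packet update
        n = self.counts[flow_key]
        if n <= self.keep_first:
            return True                       # protect small flows (e.g. portscan probes)
        p = 1.0 / (1.0 + self.decay * (n - self.keep_first))
        return self.rng.random() < p          # progressively suppress heavy flows

f = FlowSizeFilter(seed=1)
small = sum(f.admit(("scanner", dst)) for dst in range(50))        # 50 one-packet flows
big = sum(f.admit(("10.0.0.1", "10.0.0.2")) for _ in range(5000))  # one heavy flow
print(small, "of 50 small-flow packets kept;", big, "of 5000 heavy-flow packets kept")
```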

11.
《Computer Networks》2005,47(3):393-408
In this paper, we consider the problem of dynamic load balancing in wavelength division multiplexing (WDM)-based optical burst switching (OBS) networks. We propose a load balancing scheme based on adaptive alternate routing aimed at reducing burst loss. The key idea of adaptive alternate routing is to reduce network congestion by adaptively distributing the load between two pre-determined link-disjoint alternative paths based on the measurement of the impact of traffic load on each of them. We develop two alternative-path selection schemes to select link-disjoint alternative paths to be used by adaptive alternate routing. The path selection schemes differ in the way the cost of a path is defined and in the assumption made about the knowledge of the traffic demands. Through extensive simulation experiments for different traffic scenarios, we show that the proposed dynamic load balancing algorithm outperforms the shortest path routing and static alternate routing algorithms.
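As a rough sketch of adaptive alternate routing between two pre-determined link-disjoint paths, the toy code below keeps an exponentially weighted burst-loss estimate per path and routes each new burst to a path with probability proportional to how lightly loaded it appears. The feedback mechanism, smoothing factor, and path names are assumptions; the paper's path-cost definitions are not reproduced.

```python
import random

class AdaptiveAlternateRouter:
    """Split bursts between two link-disjoint alternative paths by measured loss."""
    def __init__(self, paths=("path_a", "path_b"), alpha=0.1, seed=None):
        self.loss = {p: 0.0 for p in paths}  # smoothed burst-loss estimate per path
        self.alpha = alpha
        self.rng = random.Random(seed)

    def report(self, path, lost):
        """Update the loss estimate for `path` from burst delivery feedback."""
        sample = 1.0 if lost else 0.0
        self.loss[path] = (1 - self.alpha) * self.loss[path] + self.alpha * sample

    def choose(self):
        """Pick a path for the next burst, favoring the less congested one."""
        weights = {p: 1.0 - l for p, l in self.loss.items()}
        total = sum(weights.values()) or 1.0
        r = self.rng.random() * total
        for path, w in weights.items():
            r -= w
            if r <= 0:
                return path
        return next(iter(weights))

router = AdaptiveAlternateRouter(seed=7)
router.report("path_a", lost=True)   # path_a just dropped a burst
router.report("path_a", lost=True)
picks = [router.choose() for _ in range(1000)]
print("share of bursts sent on path_b:", picks.count("path_b") / len(picks))
```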

12.
Peer-to-peer worms exploit inherent features of peer-to-peer networks (such as local routing tables and application-layer routing); they not only replicate quickly but are also stealthier and spread more easily, which makes them highly damaging and hard to defend against. Starting from an analysis of Internet worms and their propagation mechanisms, this paper gives a comprehensive analysis of worms on peer-to-peer networks (P2P worms) and their particular characteristics. On this basis, a benign-worm-based defense strategy with passive activation and active propagation (PAIFDP) is proposed, and the technical principles of the strategy and the functional modules of the corresponding defense system are designed in detail. Using the Peersim simulation platform, the defense effectiveness and resource consumption under different network parameters are analyzed experimentally. The results show that the benign-worm-based P2P worm defense technique converges quickly, consumes little network resource, and adapts well.

13.
Fast and accurate generation of worm signatures is essential to contain zero-day worms at the Internet scale. Recent work has shown that signature generation can be automated by analyzing the repetition of worm substrings (that is, fingerprints) and their address dispersion. However, at the early stage of a worm outbreak, individual edge networks are often short of enough worm exploits for generating accurate signatures. This paper presents both theoretical and experimental results on a collaborative worm signature generation system (WormShield) that employs distributed fingerprint filtering and aggregation over multiple edge networks. By analyzing real-life Internet traces, we discovered that fingerprints in background traffic exhibit a Zipf-like distribution. Due to this property, distributed fingerprint filtering reduces the amount of aggregation traffic significantly. WormShield monitors use a new distributed aggregation tree (DAT) to compute global fingerprint statistics in a scalable and load-balanced fashion. We simulated a spectrum of scanning worms, including CodeRed and Slammer, using realistic Internet configurations of about 100,000 edge networks. On average, 256 collaborative monitors generate the signature of CodeRedI-v2 135 times faster than the same number of isolated monitors. In addition to the speed gains, we observed fewer than 100 false signatures out of 18.7 Gbytes of Internet traces, yielding a very low false-positive rate. Each monitor generates only about 0.6 kilobit per second of aggregation traffic, which is 0.003 percent of the 18 megabits per second of link traffic sniffed. These results demonstrate that the WormShield system offers distinct advantages in speed, signature accuracy, and scalability for large-scale worm containment.
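A minimal local-monitor sketch of the fingerprint idea follows: count how often each content substring (fingerprint) repeats and how many distinct sources and destinations it appears between; a substring whose repetition and address dispersion both cross thresholds becomes a signature candidate. The distributed filtering and DAT aggregation layers are omitted, and all thresholds and names are illustrative.

```python
from collections import defaultdict

class FingerprintMonitor:
    """Flag substrings with high repetition and high address dispersion."""
    def __init__(self, window=8, rep_thresh=3, src_thresh=2, dst_thresh=2):
        self.window = window          # fingerprint (substring) length in bytes
        self.rep_thresh = rep_thresh
        self.src_thresh = src_thresh
        self.dst_thresh = dst_thresh
        self.reps = defaultdict(int)
        self.srcs = defaultdict(set)
        self.dsts = defaultdict(set)

    def observe(self, payload, src, dst):
        candidates = set()
        for i in range(len(payload) - self.window + 1):
            fp = payload[i:i + self.window]        # content fingerprint
            self.reps[fp] += 1
            self.srcs[fp].add(src)
            self.dsts[fp].add(dst)
            if (self.reps[fp] >= self.rep_thresh and
                    len(self.srcs[fp]) >= self.src_thresh and
                    len(self.dsts[fp]) >= self.dst_thresh):
                candidates.add(fp)
        return candidates                           # signature candidates so far

mon = FingerprintMonitor()
print(mon.observe(b".ida?NNNNNN", "1.1.1.1", "2.2.2.2"))
print(mon.observe(b".ida?NNNNNN", "3.3.3.3", "4.4.4.4"))
print(mon.observe(b".ida?NNNNNN", "5.5.5.5", "6.6.6.6"))  # repetition and dispersion thresholds met
```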

14.
15.
16.
Simulation Analysis of Contagion Worm Propagation
Contagion worms propagate over normal application traffic and therefore cause no network traffic anomalies; this high stealthiness makes them an important emerging threat to network security. Understanding the propagation characteristics of Contagion worms requires a suitable simulation model. Existing simulation models mainly target active worms and cannot dynamically simulate the application traffic on which Contagion worm propagation depends. We therefore propose a dynamic simulation model of Web and P2P traffic suited to Contagion worm simulation and, through selective abstraction, overcome the scale bottleneck of packet-level worm simulation, implementing a complete Contagion worm simulation system on a general-purpose network simulation platform. Using this system, we analyze the propagation characteristics of Contagion worms by simulation. The results show that the simulation system can be used effectively for analyzing Contagion worm propagation.

17.
Fail-stop signature (FSS) schemes protect a signer against a forger with unlimited computational power by enabling the signer to provide a proof of forgery if one occurs. In the decade since their invention, several FSS schemes have been proposed in the literature. Nonetheless, the notion of a short FSS scheme has not yet been addressed, and short signatures have mainly been obtained through the use of pairings. In this paper, we propose a construction of a short FSS scheme based on the factorization and discrete logarithm assumptions. In contrast to the known approaches in the literature, our signature scheme does not incorporate any pairing operations, yet it is the shortest FSS scheme among all existing schemes based on the same assumptions. The efficiency of our scheme is comparable to that of the best known FSS scheme based on the discrete logarithm assumption.

18.
Wireless Mesh Networks (WMNs) extend Internet access to areas where wired infrastructure is not available. Problems that arise are congestion around gateways, high access latency, and low throughput. Object replication and placement is therefore essential for multi-hop wireless networks. Many replication schemes have been proposed for the Internet, but they are designed for CDNs that have both high bandwidth and high server capacity, which makes them unsuitable for the wireless environment. Object replication has received comparatively little attention from the research community when it comes to WMNs. In this paper, we propose an object replication and placement scheme for WMNs in which each mesh router acts as a replica server in a peer-to-peer fashion. The scheme exploits graph partitioning to build a hierarchy from fine-grained to coarse-grained partitions. The challenge is to replicate content as close as possible to the requesting clients and thus reduce the access latency per object, while minimizing the number of replicas. Using simulation tests, we demonstrate that our scheme is scalable, performing well with respect to the number of replica servers and the number of objects. The simulation results show that our proposed scheme outperforms other replication schemes.

19.
《Computer Networks》2007,51(6):1421-1443
Efficient multicast congestion control (MCC) is one of the critical components required to enable IP multicast deployment over the Internet. Previously proposed MCC schemes fall into two categories: single-rate and multi-rate. Single-rate schemes make all recipients receive data at a common rate allowed by the slowest receiver, but are relatively simple. Multi-rate schemes allow heterogeneous receive rates and thus provide better scalability, but rely heavily on frequent updates to group membership state in the routers. A recent work by Kwon and Byers combined these two methods and provided a multi-rate scheme built from single-rate schemes with relatively low complexity. In this paper, we propose a new scheme called generalized multicast congestion control (GMCC). GMCC provides multi-rate features at low complexity by using a set of independent single-rate sub-sessions (a.k.a. layers) as building blocks. The scheme is named GMCC because single-rate MCC is just one of its special cases. Unlike the earlier work by Kwon and Byers, GMCC does not have the drawback of a static source configuration that may not match dynamic network conditions. GMCC is fully adaptive in that (i) it does not statically set a particular range for the sending rates of layers, and (ii) it eliminates redundant layers when they are not needed. Receivers can subscribe to different subsets of the available layers and hence can obtain different throughput. While no redundant layers are used, GMCC allows receivers to activate a new layer in case the existing layers do not accommodate their needs.

20.
Research on an AOI-Based Algorithm for Automatic Discovery of Unknown Worm Signatures
The large-scale network worms that have broken out frequently in recent years pose an enormous threat to the overall security of the Internet, and new variants continue to appear. Because the signatures of unknown worms cannot be obtained in advance, traditional signature-based intrusion detection mechanisms are ineffective against them. Current practice in worm monitoring is to capture the worm manually after a network anomaly is detected, analyze its signature, and then add the signature to a high-speed detection engine for monitoring. This paper proposes a new method for automatically extracting the signatures of unknown worms based on attribute-oriented induction (AOI). Building on the localization of suspicious worm sources, the algorithm automatically extracts frequent features, so it can detect a worm's signature at the early stage of an outbreak and then track the spread of the unknown worm through signature correlation at the console. Experiments show that the method is feasible and effective.
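A toy sketch of attribute-oriented induction over connection records from suspicious sources: raw attribute values are generalized up simple concept hierarchies (here, destination ports and payload lengths binned into coarser classes), and generalized tuples whose support crosses a threshold are reported as candidate worm features. The hierarchies, attributes, thresholds, and sample records are all illustrative, not the paper's.

```python
from collections import Counter

# Toy concept hierarchies: generalize raw attribute values to coarser classes.
def gen_port(p):
    return "well-known" if p < 1024 else "registered" if p < 49152 else "dynamic"

def gen_len(n):
    return "short" if n < 100 else "medium" if n < 1000 else "long"

def aoi_features(records, min_support=0.6):
    """Return generalized (proto, port class, length class) tuples whose share
    among the suspicious records is at least min_support."""
    generalized = Counter(
        (r["proto"], gen_port(r["dst_port"]), gen_len(r["payload_len"]))
        for r in records
    )
    total = len(records)
    return {t: c / total for t, c in generalized.items() if c / total >= min_support}

suspicious = [
    {"proto": "TCP", "dst_port": 80, "payload_len": 376},
    {"proto": "TCP", "dst_port": 80, "payload_len": 380},
    {"proto": "TCP", "dst_port": 80, "payload_len": 371},
    {"proto": "UDP", "dst_port": 53, "payload_len": 60},
]
print(aoi_features(suspicious))  # {('TCP', 'well-known', 'medium'): 0.75}
```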
