Similar Documents
20 similar documents retrieved.
1.
The recent deregulation of the telecommunications industry by the Taiwanese government has brought about acute competition among Internet Service Providers (ISPs). Taiwan's ISP industry is characterized by heavy pressure to raise revenue after the hefty capital investments of the last decade, and by a lack of knowledge for developing competitive strategies. To attract subscribers, all ISP dealers are making an all-out effort to improve their service management. This study proposes a Business Intelligence (BI) process for ISP dealers in Taiwan to assist management in developing effective service management strategies. We explore customers' usage characteristics and preference knowledge by applying the attribute-oriented induction (AOI) method to users' IP traffic data. Using the self-organizing map (SOM) method, we divide customers into clusters with different usage behavior patterns. We then apply RFM modeling to calibrate the customer value of each cluster, enabling management to develop direct and effective marketing strategies. For network resource management, this research mines facility utilization over the various administrative districts of the region, which can assist management in planning effective investment in network facilities. With actual data from one major ISP, we develop a BI decision support system with visual presentation, which has been well received by its management staff.
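A minimal sketch of the RFM (recency, frequency, monetary) scoring step mentioned above, assuming hypothetical per-customer usage summaries derived from IP traffic logs; the field names and thresholds are illustrative, not taken from the paper:

```python
from datetime import date

# Hypothetical usage summaries aggregated per customer from IP traffic logs.
customers = {
    "cust_a": {"last_use": date(2024, 5, 30), "sessions": 120, "volume_gb": 35.2},
    "cust_b": {"last_use": date(2024, 3, 12), "sessions": 8,   "volume_gb": 1.4},
}

def rfm_score(record, today=date(2024, 6, 1)):
    """Score recency, frequency and monetary value on a simple 1-3 scale."""
    recency_days = (today - record["last_use"]).days
    r = 3 if recency_days <= 7 else 2 if recency_days <= 30 else 1
    f = 3 if record["sessions"] >= 100 else 2 if record["sessions"] >= 20 else 1
    m = 3 if record["volume_gb"] >= 20 else 2 if record["volume_gb"] >= 5 else 1
    return r, f, m

for name, rec in customers.items():
    print(name, rfm_score(rec))   # high scores mark the most valuable cluster members
```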

2.
《Computer Networks》2005,47(1):47-61
In this work we consider the problem of routing bandwidth-guaranteed flows with time-variable bandwidth profiles on an MPLS network. We assume that each demand is routed in an explicitly routed LSP, and that the amount of bandwidth that must be reserved along the LSP varies during the day according to a piece-wise mask that is known in advance. The time-of-day bandwidth profiles can be explicitly declared by the VPN customers in the SLA, or alternatively predicted by the ISP based on past measurements. In this framework, we propose a simple on-line algorithm for the optimal selection of LSP paths. We also provide an ILP formulation for the associated off-line problem and adopt it as a reference performance bound for the on-line algorithm. Additionally, we compare the performance of fixed and variable routing in the presence of time-variable bandwidth profiles. The results presented here suggest that a priori knowledge of the per-demand traffic profiles can be exploited to achieve a fixed routing configuration that variable reconfigurations can only marginally improve. We relate our findings to a couple of previous works that, in different application contexts, reached similar results.
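A small sketch of how declared piece-wise constant daily profiles can be combined when sizing the reservation on a link, which is the information such an algorithm exploits; the demands, hourly granularity, and units are illustrative, and this is not the paper's on-line path-selection algorithm or its ILP:

```python
# Each demand declares a piece-wise constant daily profile: a list of
# (start_hour, end_hour, mbps) tuples known in advance (e.g. from the SLA).
demands_on_link = [
    [(0, 8, 10), (8, 20, 60), (20, 24, 25)],   # hypothetical office VPN
    [(0, 6, 5), (6, 24, 15)],                  # hypothetical backup traffic
]

def required_reservation(profiles, steps_per_day=24):
    """Peak of the summed profiles = bandwidth to reserve on the link."""
    peak = 0.0
    for hour in range(steps_per_day):
        total = 0.0
        for profile in profiles:
            for start, end, mbps in profile:
                if start <= hour < end:
                    total += mbps
                    break
        peak = max(peak, total)
    return peak

print(required_reservation(demands_on_link), "Mbps")  # 75.0 Mbps for this example
```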

3.
Virtual Worlds (VWs), such as Massive Multiplayer Online Social Games, have been gaining increasing attention in the last few years, mainly due to the new way users interact with them. However, little effort has been devoted to understanding their traffic profile and the implications for the traffic management area. At the current growth rate of VW usage, their traffic demand could eventually impose a significant burden on the operation of a typical Internet Service Provider (ISP) network. In this paper, we seek to understand the traffic behavior of an increasingly popular VW application, namely Second Life (SL), from both the connection- and network-level perspectives. We also show results of a traffic analysis of an SL client when an avatar performs different actions in the VW, at different places and under different network conditions. Our results show that SL makes intensive use of network resources (mostly bandwidth), since the capacity needed for a full SL experience (listening to live music) may reach 500 kbps. We then extend the traffic analysis work on SL by providing analytical models for its traffic profile. We aim to help the network management and planning area estimate the impact of intense use of such a VW on access links or core networks. Such synthetic models are also useful to the networking research community for use in network simulations.

4.
High-bandwidth DDoS attacks consume more resources and have a direct impact at the ISP level, in contrast to low-rate DDoS attacks, which lead to graceful degradation of the network and are mostly undetectable. Although an array of detection schemes has been proposed, the current requirement is a real-time DDoS detection mechanism that adapts itself to varying network conditions to give minimum false alarms. DDoS attacks that disturb the distribution of traffic features in an ISP domain are reflected by entropic variations in in-stream samples. We propose honeypot detection for attack traffic whose distribution features are statistically similar to those of legitimate traffic. Next, we propose to calibrate the detection mechanism for a minimum false alarm rate by varying the tolerance factor in real time. Simulations are carried out in ns-2 at different attack strengths. We also report our experimental results on the MIT Lincoln Lab dataset and its subset, the KDD 99 dataset. Results show that the proposed approach is comparable to previously reported approaches, with the advantage of variable-rate attack detection with minimum false positives and negatives.
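A minimal sketch of the entropy signal the abstract refers to: the entropy of a traffic feature (here, source addresses) collapses under a flooding attack, and a tunable tolerance factor decides when to raise an alarm. The feature choice and threshold are illustrative, not the paper's calibration procedure:

```python
from collections import Counter
from math import log2

def normalized_entropy(samples):
    """Shannon entropy of a traffic feature (e.g. source IPs), normalized to [0, 1]."""
    counts = Counter(samples)
    total = len(samples)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    max_h = log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def is_attack(window, baseline, tolerance=0.15):
    """Flag the window when its entropy deviates from the baseline by more than
    the tolerance factor; the tolerance can be tuned at run time to trade off
    false alarms against detection sensitivity."""
    return abs(normalized_entropy(window) - baseline) > tolerance

baseline = normalized_entropy(["10.0.0.%d" % i for i in range(1, 51)])
flood = ["10.0.0.1"] * 200 + ["10.0.0.%d" % i for i in range(1, 11)]
print(is_attack(flood, baseline))   # True: source-IP entropy collapses under the flood
```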

5.
The overlay/underlay topology mismatch affects the performance of existing P2P platforms, which can generate large volumes of unnecessary inter-ISP network traffic. Although recent works have shown the benefits of network-aware P2P solutions, no studies have focused on investigating ISP behavior and ISPs' cooperative/non-cooperative attitudes. This paper proposes a game-theoretic framework to help design techniques that promote ISP cooperation in P2P streaming platforms and decrease unnecessary inter-domain streaming traffic. We first analyze some simple scenarios to discuss the existence of Nash equilibria, Pareto optimality, and a fairness criterion to refine the equilibrium points. Moreover, we apply ideas from Evolutionary Game Theory to design a distributed scheme that ISPs can use to reach “socially acceptable” equilibrium points in a large ISP population. Furthermore, we develop a discrete event simulation to evaluate the effectiveness of the Evolutionary Game Theory framework. The study presented in the paper shows that the proposed strategies can effectively stimulate ISP cooperation aimed at minimizing inter-ISP traffic and help to provide a reliable P2P streaming service.
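Evolutionary game frameworks of this kind are typically built on replicator dynamics. The sketch below iterates two-strategy (cooperate/defect) replicator dynamics with an illustrative payoff matrix in which cooperation pays off once enough ISPs already cooperate; the payoffs and starting point are assumptions, not the paper's model:

```python
import numpy as np

# Fraction x of ISPs cooperate (localize streaming traffic), 1 - x defect.
# Illustrative coordination-style payoffs: row = own strategy, column = opponent's.
payoff = np.array([[4.0, 1.0],   # cooperator vs (cooperator, defector)
                   [3.0, 2.0]])  # defector  vs (cooperator, defector)

def replicator_step(x, dt=0.01):
    pop = np.array([x, 1.0 - x])
    fitness = payoff @ pop              # expected payoff of each strategy
    avg = pop @ fitness                 # population-average payoff
    return x + dt * x * (fitness[0] - avg)

x = 0.6                                 # start with a majority of cooperators
for _ in range(10000):
    x = replicator_step(x)
print(round(x, 3))                      # converges to 1.0: full cooperation
```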

6.
《Computer Networks》2008,52(15):2961-2974
Inter-ISP traffic flow determines the settlement between ISPs and affects the perceived performance of ISP services. In today's Internet, inter-ISP traffic flow patterns are controlled not only by ISPs' policy-based routing configuration and traffic engineering, but also by application-layer routing. The goal of this paper is to study the economic implications of this shift in Internet traffic control, assuming rational ISPs and subscribers. For this purpose, we build a general traffic model that predicts traffic patterns based on subscriber distribution and abstract traffic controls such as caching functions and performance sensitivity functions. We also build a game-theoretic model of subscribers picking ISPs, and of ISPs making provisioning and peering decisions. In particular, we apply this to a local market where two ISPs compete for market share of subscribers under two traffic patterns, “Web” and “P2P overlay”, which typify the transition the current Internet is going through. Our methodology can be used to quantitatively demonstrate that (1) while economy of scale is the predominant property of the competitive ISP market, P2P traffic may introduce an unfair distribution of peering benefit (i.e. free-riding); (2) the large ISP can restore more fairness by reducing its private capacity (bandwidth throttling), which has the drawback of hurting business growth; and (3) ISPs can reduce the level of peering (e.g. by reducing peering bandwidth) to restore more fairness, but this has the side effect of also reducing the ISPs' collective bargaining power towards subscribers.

7.
The monopoly of state ownership of the telecommunications industry in Taiwan was lifted in 1997. In choosing an ISP, pricing was and still is a main differentiating factor in the minds of customers; however, service quality has lately emerged as a major concern among users. ISP management has discovered that service quality is important not only for attracting new customers but, more importantly, for retaining existing customers who may otherwise be lured away by lower fees. Hence, it is essential to develop a CRM system that could help keep existing customers and explore further business opportunities at the same time. In this study, based on IP traffic data, we developed a systematic CRM approach for a major ISP company in Taiwan to enhance customers' longer-term loyalty. This approach employs the CRISP-DM methodology and applies Attribute-Oriented Induction as the mining technique to discover customers' network usage behaviors, which helps management identify usage patterns and pinpoint the times when usage is excessively heavy. The former allows management to make effective personal calls for services or maintenance, and the latter presents opportunities for management to offer personalized care and advanced products. Pixel-oriented visualization is applied to improve the understanding of the mining results.

8.
In this paper, we present one of the first and most extensive characterizations of closed community-based P2P systems. Such systems are organic groups of peer-to-peer (P2P) clients that can be joined only by users belonging to a certain network (e.g., connected to a given Internet Service Provider (ISP)). A number of factors motivate the growth of these communities, such as quality of content, anonymity of transfers, and the potential for better performance that enhances user experience. Our study is conducted in two contrasting environments, a campus network and a national ISP, located on different continents. In both cases, large-scale closed communities have been found to be the predominant P2P systems in use. We shed light on the factors motivating the growth of such communities, and present results characterizing the extensiveness of their usage, the performance achievable by the systems, and the implications of such communities for network providers. While our findings are interesting in their own right, they also offer important lessons for ongoing research that seeks to localize traffic within ISP boundaries. In particular, our results suggest that (i) in ISPs with heterogeneous access technologies, the performance benefit to users of localizing P2P traffic depends largely on the degree of seed-like behavior of peers behind high-bandwidth access technologies; and (ii) while localization can reduce the traffic on Internet peering links, it has the potential to cause a significant increase in traffic on the internal links of providers, potentially requiring upgrades of network links.

9.
As the total amount of traffic data in networks has been growing at an alarming rate, there is currently a substantial body of research that attempts to mine traffic data with the purpose of obtaining useful information. For instance, there are investigations into the detection of Internet worms and intrusions by discovering abnormal traffic patterns. However, since network traffic data contain information about the Internet usage patterns of users, network users' privacy may be compromised during the mining process. In this paper, we propose an efficient and practical method that preserves privacy during sequential pattern mining on network traffic data. In order to discover frequent sequential patterns without violating privacy, our method uses the N-repository server model, which operates as a single mining server, and the retention replacement technique, which changes the answer to a query probabilistically. In addition, our method accelerates the overall mining process by maintaining meta tables at each site so as to quickly determine whether candidate patterns have ever occurred at the site. Extensive experiments with real-world network traffic data demonstrate the correctness and efficiency of the proposed method.
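Retention replacement is a randomized-response style perturbation: each site keeps the true answer to a query with some probability and otherwise substitutes a random value from the domain, and the aggregate can still be estimated by inverting the perturbation. A minimal sketch for a boolean query, with an illustrative retention probability:

```python
import random

def retention_replacement(answer, retain_p=0.8, domain=(True, False)):
    """With probability retain_p keep the true answer, otherwise replace it
    with a value drawn uniformly from the domain (parameters are illustrative)."""
    if random.random() < retain_p:
        return answer
    return random.choice(domain)

def estimate_true_fraction(randomized, retain_p=0.8):
    """Invert the perturbation: observed = p * true + (1 - p) * 0.5 for a boolean domain."""
    observed = sum(randomized) / len(randomized)
    return (observed - (1 - retain_p) * 0.5) / retain_p

random.seed(0)
true_answers = [True] * 700 + [False] * 300            # 70% truly contain a pattern
noisy = [retention_replacement(a) for a in true_answers]
print(round(estimate_true_fraction(noisy), 2))         # close to 0.70, no individual exposed
```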

10.
The popularity of a new generation of smart peer-to-peer applications has resulted in several new challenges for accurately classifying network traffic. In this paper, we propose a novel two-stage p2p traffic classifier, called the Self-Learning Traffic Classifier (SLTC), which can accurately identify p2p traffic in high-speed networks. The first stage separates p2p traffic from the rest of the network traffic, and the second stage automatically extracts application payload signatures to accurately identify the p2p application that generated the p2p flow. For the first stage, we propose a fast, lightweight algorithm called the Time Correlation Metric (TCM), which exploits the temporal correlation of flows to clearly separate peer-to-peer (p2p) traffic from the rest of the traffic. Using real network traces from tier-1 ISPs located on different continents, we show that the detection rate of TCM is consistently above 95% while always keeping the false positives at 0%. For the second stage, we use the LASER signature extraction algorithm [20] to accurately identify signatures of several known and unknown p2p protocols with a very small false positive rate (<1%). Using our prototype on tier-1 ISP traces, we demonstrate that SLTC automatically learns signatures for more than 95% of both known and unknown traffic within 3 minutes.

11.
The discovery of moving-object trajectory patterns representing high traffic density has been covered in various works using diverse approaches. These models are useful in areas such as transportation planning, traffic monitoring, and advertising on public roads. However, although studies tend to recognize the practical importance of these types of patterns, they usually do not consider traffic congestion as a particular condition of high traffic. In this work, we present a model for the discovery of high-traffic-flow patterns in relation to traffic congestion. This relationship is represented in terms of traffic that is shared between different sectors of the pattern, making it possible to identify traffic flow situations causing congestion. We also complement this model by discovering alternative paths for the severe traffic depicted in these patterns. These alternative paths depend on the traffic level and on the location inside the road network. Depending on the traffic conditions, alternative paths are commonly sought by drivers approaching a traffic jam, in order to mitigate the effects of congestion. We compare these models with related work from similar areas and validate them by conducting experiments using real data. We describe the discovered patterns related to the main elements of the road network in the dataset and show their advantages in comparison to related models. Based on the reported metrics, the algorithms' implementation offers good execution performance for the given dataset volume. The results presented confirm the usefulness of the proposed patterns as a tool that helps to improve traffic, allowing the identification of problems and possible alternatives.

12.
The Internet is a hierarchical architecture comprising heterogeneous entities of privately owned infrastructures, where higher-level Internet service providers (ISPs) supply connectivity to the local ISPs and charge them for the transit service. One of the challenging problems facing service providers today is how profitability can be increased while maintaining good service quality as the network scales up. In this work, we seek to understand the fundamental issues in the "interplay" (or interaction) between ISPs at different tiers. Although the local ISPs (which we term peers) can communicate with each other by purchasing connectivity from transit ISPs, they also have the opportunity to set up private peering relationships. Under this competitive framework, we explore 1) the impact of the peering relationship; 2) resource distribution; 3) revenue maximization; and 4) the conditions for network upgrade. First, a generalized model is presented to characterize the behaviors of peers and the transit ISP, reflecting their economic interests. We study how a peer can distributively determine its optimal peering strategy. Furthermore, we show how a transit ISP is able to utilize the available information to infer its optimal pricing strategy, under which revenue maximization is achieved. Two distributed algorithms are proposed to help ISPs provide a fair and efficient bandwidth allocation to peers, avoiding a resource monopolization of the market. Last, we investigate the above issues in a "many-peers region," that is, when we scale up the network. We provide insightful evidence that ISPs can still gain profits as they upgrade their network infrastructures. Extensive simulations are carried out to support our theoretical claims.

13.
In individual-centered simulations, the variety and consistency of agents' behaviors reinforce the realism and validity of the simulation. Variety increases the diversity of the behaviors that users encounter during the simulation. Consistency ensures that these behaviors improve the users' feeling of immersion. In this work, we address the issue of the simultaneous influence of these two elements. We propose a formalization of the construction of populations for agent-based simulations, which provides the basis for a generic and non-intrusive tool allowing an out-of-the-agent design. First, the model uses behavioral patterns to describe standards of behavior for the agents. They provide a behavioral archetype during agent creation, and are also a compliance reference that allows deviant behaviors to be detected and addressed. Then, a specific process instantiates the agents using the specification provided by the patterns. Finally, inference makes it possible to automate the configuration of behavioral patterns from real or simulated data. This formalization allows for the easy introduction of variety in agents' behaviors, while controlling conformity to the specifications. We applied the model to traffic simulation, in order to introduce driving styles specified using behavioral patterns (e.g., cautious or aggressive drivers). The behavioral realism of the traffic was thereby improved, and the experiments we conducted show how the model contributes to increasing the variety and representativeness of the behaviors.
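A toy sketch of the pattern-as-archetype-and-compliance-reference idea for the driving-style example: a pattern holds admissible parameter ranges, is used to instantiate varied agents, and later checks whether an agent still conforms. The parameter names, ranges, and population split are assumptions for illustration only:

```python
import random
from dataclasses import dataclass

@dataclass
class DriverPattern:
    """Behavioral pattern: an archetype plus admissible parameter ranges."""
    name: str
    speed_factor: tuple      # multiplier on the speed limit (min, max)
    min_gap_s: tuple         # desired time gap to the leading vehicle (min, max)

    def instantiate(self, rng):
        """Create a varied agent that still respects the pattern's ranges."""
        return {"style": self.name,
                "speed_factor": rng.uniform(*self.speed_factor),
                "min_gap_s": rng.uniform(*self.min_gap_s)}

    def conforms(self, agent):
        """Compliance check used to detect deviant behaviors."""
        return (self.speed_factor[0] <= agent["speed_factor"] <= self.speed_factor[1]
                and self.min_gap_s[0] <= agent["min_gap_s"] <= self.min_gap_s[1])

rng = random.Random(42)
cautious = DriverPattern("cautious", (0.85, 1.0), (2.0, 3.0))
aggressive = DriverPattern("aggressive", (1.05, 1.3), (0.5, 1.2))
population = ([cautious.instantiate(rng) for _ in range(70)]
              + [aggressive.instantiate(rng) for _ in range(30)])
print(sum(cautious.conforms(a) for a in population))   # 70: only the cautious agents conform
```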

14.
The goal of network traffic classification is to identify the protocols or types of protocols in the network traffic. In particular, the identification of network traffic with high resource consumption, such as peer-to-peer (P2P) traffic, is a major concern for Internet Service Providers (ISPs) and network managers. Most current flow-based classification approaches report high accuracy without paying attention to the generalization ability of the classifier. However, without this ability, a classifier may not be suitable for on-line classification. In this paper, a number of experiments on real traffic help to elucidate the reason for this lack of generalization. It is also shown that one way to attain generalization ability is by using dynamic classifiers. From these results, a dynamic classification approach based on the pairing of flows according to a similarity criterion is proposed. The pairing method is not a classifier by itself. Rather, its goal is to determine quickly that two given flows are similar enough to conclude that they correspond to the same protocol. By combining this method with a classifier, most flows do not need to be explicitly evaluated by the latter, so the computational overhead is reduced without a significant reduction in accuracy. In this paper, as a case study, we explore complementing the pairing method with payload inspection. In the experiments performed, the pairing approach generalizes well to traffic obtained under different conditions and scenarios from those used for calibration. Moreover, a large portion of the traffic left unclassified by payload inspection is categorized with the pairing method.
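A toy sketch of pairing flows by a similarity criterion so that a paired flow inherits the label of an already classified one instead of going through the full classifier; the per-flow features, threshold, and values are assumptions for illustration, not the paper's criterion:

```python
def similar(flow_a, flow_b, threshold=0.15):
    """Average relative distance over a few per-flow statistics; True means the
    two flows look alike enough to be assumed to carry the same protocol."""
    features = ("mean_pkt_size", "mean_iat_ms", "flow_duration_s")
    dist = sum(abs(flow_a[k] - flow_b[k]) / max(flow_a[k], flow_b[k], 1e-9)
               for k in features)
    return dist / len(features) < threshold

known = {"mean_pkt_size": 1200.0, "mean_iat_ms": 12.0, "flow_duration_s": 300.0,
         "label": "bittorrent"}          # hypothetical flow already classified by payload inspection
new = {"mean_pkt_size": 1180.0, "mean_iat_ms": 13.0, "flow_duration_s": 280.0}

if similar(known, new):
    new["label"] = known["label"]        # inherit the label, skip deep inspection
print(new.get("label"))                  # "bittorrent"
```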

15.
《Computer Networks》2007,51(12):3595-3616
As mobile ad hoc network (MANET) systems research has matured and several testbeds have been built to study MANETs, research has focused on developing new MANET applications such as collaborative games, collaborative computing, messaging systems, distributed security schemes, MANET middleware, peer-to-peer file sharing systems, voting systems, resource management and discovery, vehicular computing, and collaborative education systems. The growing set of diverse applications developed for MANETs poses far more complex traffic patterns than the simple one-to-one pattern, and hence the one-to-one traffic pattern widely used in previous protocol studies has become inadequate for reflecting the relative performance of these protocols when deployed to support these emerging applications. As a first step towards effectively supporting newly developed and future diverse MANET applications, this paper studies the performance impact of diverse traffic patterns on routing protocols in MANETs. Specifically, we propose a new communication model that extends the previous communication model to include a more general traffic pattern that varies the number of connections per source node. We study the performance impact of traffic patterns on various routing protocols via detailed simulations of an ad hoc network of 112 mobile nodes. Our simulation results show that many of the conclusions drawn in previous protocol comparison studies no longer hold under the new traffic patterns. These results motivate the need for performance evaluations of ad hoc networks to include not only rich and diverse mobility models, as has been done in the past, but also diverse traffic patterns that stress a wide set of protocol design issues.

16.
Traffic behavior in large-scale networks is highly nonlinear and complex, and no mature methods for studying it are yet available at home or abroad. This paper adopts an additive model that decomposes complex large-scale network traffic into a trend term, a periodic term, and a random term. Based on this decomposition, different mathematical tools are used to model the three relatively simple subsystems separately in order to simulate the complex traffic. To validate the model, the decomposition is applied to analyze the long-term traffic behavior of the CERNET backbone network, and the results are compared with a traditional seasonal ARIMA model. The comparison shows that the additive model is both simple and highly accurate in describing the macroscopic behavior of the traffic.
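A small numpy sketch of the additive decomposition described above, traffic(t) = trend + periodic + random, on synthetic hourly data with a daily period; the moving-average and per-hour averaging scheme is a simple stand-in, not the paper's exact modeling of each subsystem:

```python
import numpy as np

# Synthetic daily-periodic traffic: linear growth + diurnal cycle + noise.
rng = np.random.default_rng(1)
hours = np.arange(24 * 14)                                   # two weeks of hourly samples
traffic = 0.5 * hours + 100 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

period = 24
# Trend term: moving average over one period.
trend = np.convolve(traffic, np.ones(period) / period, mode="same")
# Periodic term: average the detrended series over each hour of the day.
detrended = traffic - trend
seasonal = np.array([detrended[h::period].mean() for h in range(period)])
seasonal = np.tile(seasonal, hours.size // period)
# Random term: whatever the first two components do not explain.
residual = traffic - trend - seasonal

print(residual.std())   # small relative to the 100-unit diurnal swing (edge effects aside)
```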

17.
Building on a multi-stage analysis of complex network attack behavior, this paper proposes a method and model that trace back from the consequences of an attack to mine the statistical feature patterns of the whole attack. The model uses the abnormal network traffic collected by the network management system as its data source; by applying Granger causality tests to the attack feature data of the different stages, it extracts high-confidence attack patterns that describe the correlations between them. Finally, experiments with five DDoS tools are carried out, and the results confirm the feasibility of the proposed method and model.
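A minimal example of the Granger causality test used to link feature series from different attack stages, here with statsmodels on synthetic data in which one series leads the other by two time steps; the series themselves are illustrative:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical example: an early-stage anomaly series (e.g. scanning volume)
# leads the late-stage series (e.g. flood volume) by two time steps.
rng = np.random.default_rng(7)
early = rng.normal(size=300)
late = np.roll(early, 2) + 0.3 * rng.normal(size=300)

# grangercausalitytests checks whether the SECOND column Granger-causes the first.
data = np.column_stack([late, early])
results = grangercausalitytests(data, maxlag=4)      # prints per-lag test summaries
p_value = results[2][0]["ssr_ftest"][1]              # lag-2 F-test p-value
print(f"lag-2 p-value: {p_value:.4f}")               # small p-value -> high-confidence link
```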

18.
Extracting knowledge from big network traffic data is a matter of foremost importance for multiple purposes, including trend analysis, network troubleshooting, capacity planning, network forensics, and traffic classification. An extremely useful approach to profiling traffic is to extract and display to a network administrator the multi-dimensional hierarchical heavy hitters (HHHs) of a dataset. However, existing schemes for computing HHHs have several limitations: (1) they require significant computational resources; (2) they do not scale to high-dimensional data; and (3) they are not easily extensible. In this paper, we introduce a fundamentally new approach for extracting HHHs based on generalized frequent itemset mining (FIM), which allows traffic data to be processed much more efficiently and scales to much higher-dimensional data than present schemes. Based on generalized FIM, we build and thoroughly evaluate a traffic profiling system we call FaRNet. Our comparison with AutoFocus, the most closely related tool of a similar nature, shows that FaRNet is up to three orders of magnitude faster. Finally, we describe our experience of how generalized FIM is useful in practice after using FaRNet operationally for several months in the NOC of GÉANT, the European backbone network.
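The underlying idea is to generalize each flow record into items at several aggregation levels (address prefixes, ports, protocol) and then count frequent itemsets: frequent combinations across dimensions correspond to multi-dimensional hierarchical heavy hitters. A toy sketch with illustrative aggregation levels and a support threshold of 2 flows, not FaRNet's actual mining engine:

```python
from collections import Counter
from itertools import combinations

def generalize(flow):
    """Emit items at several aggregation levels for one flow record
    (the prefix lengths and dimensions are illustrative)."""
    octets = flow["src_ip"].split(".")
    items = set()
    for k in (1, 2, 3):                                   # /8, /16, /24 source prefixes
        prefix = ".".join(octets[:k] + ["0"] * (4 - k))
        items.add(f"src={prefix}/{8 * k}")
    items.add(f"dport={flow['dst_port']}")
    items.add(f"proto={flow['proto']}")
    return items

flows = [
    {"src_ip": "10.1.2.3", "dst_port": 80,  "proto": "tcp"},
    {"src_ip": "10.1.2.7", "dst_port": 80,  "proto": "tcp"},
    {"src_ip": "10.1.9.9", "dst_port": 443, "proto": "tcp"},
]

# Count every itemset of size <= 2 over the generalized flows.
counts = Counter()
for flow in flows:
    items = sorted(generalize(flow))
    for size in (1, 2):
        counts.update(combinations(items, size))

threshold = 2
for itemset, count in counts.items():
    if count >= threshold:
        print(itemset, count)      # e.g. ('dport=80', 'src=10.1.2.0/24') appears in 2 flows
```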

19.
《Computer Networks》2007,51(10):2450-2466
Wireless mesh networks (WMNs) consist of static wireless routers, some of which, called gateways, are directly connected to the wired infrastructure. User stations are connected to the wired infrastructure via the wireless routers. This paper presents a simple and effective management architecture for WMNs, termed the configurable access network (CAN). Under this architecture, the control function is separated from the switching function, so that the former is performed by a network operation center (NOC) located in the wired infrastructure. The NOC monitors the network topology and user performance requirements, from which it computes a path between each wireless router and a gateway, and allocates fair bandwidth for carrying the associated traffic along the selected route. By performing such functions in the NOC, we offload the network management overhead from the wireless routers and enable the deployment of simple, low-cost wireless routers. Our goal is to maximize network utilization by balancing the traffic load, while providing fair service and quality of service (QoS) guarantees to the users. Since this problem is NP-hard, we devise approximation algorithms that provide guarantees on the quality of the approximated solutions with respect to the optimal solutions. The simulations show that the results of our algorithms are very close to the optimal solutions.

20.
With the development of sensor technology and embedded systems, building large-scale, low-cost sensor networks, a critical step towards facilitating the application of pervasive sensing in the future, becomes possible. One of the major challenges in developing sensor network applications is to improve the execution efficiency of programs running on power-constrained embedded devices. While profiling-guided code optimization has been widely used as a compiler-level optimization technique for improving the performance of programs running on general-purpose computers, it has not been applied to sensor network programs because of several shortcomings. In this paper, we overcome these shortcomings and design a more effective profiling-guided code placement approach for sensor network programs. Specifically, we model the execution of sensor network programs taking nondeterministic inputs as discrete-time Markov processes, and propose a novel approach named Code Tomography to estimate the parameters of the Markov models that reflect the programs' dynamic execution behaviors, using only end-to-end timing information measured at the start and end points of each procedure in the source code. The parameters estimated by Code Tomography are fed back to compilers to optimize the code placement. The evaluation results demonstrate that Code Tomography achieves satisfactory estimation accuracy with low profiling overhead, and that the branch misprediction rate can be reduced after reorganizing the code placement based on the profiling results. Besides, Code Tomography can also be useful for purposes such as post-mortem analysis, debugging, and energy profiling of sensor network programs.
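A sketch of the forward direction of such a Markov model only (not the paper's estimation procedure): given transition probabilities between code blocks and a per-block cost, the expected end-to-end time is (I - Q)^{-1} c, which is exactly the kind of aggregate quantity that end-to-end timing measurements constrain. All numbers here are illustrative:

```python
import numpy as np

# Transient-state transition matrix Q over three hypothetical code blocks;
# probability mass not listed in a row flows to the absorbing "exit" state.
Q = np.array([[0.0, 0.7, 0.3],    # entry -> branch A (0.7) or branch B (0.3)
              [0.0, 0.0, 0.0],    # branch A -> exit
              [0.0, 0.2, 0.0]])   # branch B -> retries A with prob 0.2, else exit
cost = np.array([1.0, 4.0, 2.0])  # illustrative time spent in each block (microseconds)

# Expected remaining time t satisfies t = cost + Q t, i.e. (I - Q) t = cost.
expected = np.linalg.solve(np.eye(3) - Q, cost)
print(expected[0])                # expected end-to-end time starting from the entry block
```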
