Similar Documents
20 similar documents found (search time: 31 ms)
1.
The user clients for accessing the Internet are increasingly shifting from desktop computers to cellular devices. To be competitive in this rapidly changing market, operators, Internet service providers, and application developers need the capability to recognize cellular device models and to understand the traffic dynamics of cellular data networks. In this paper, we propose a novel Jaccard measurement-based method to recognize cellular device models from network traffic data. The method is implemented as a scalable parallel MapReduce program and achieves a high accuracy of 91.5% in an evaluation with 2.9 billion traffic records collected from a real network. Based on the recognition results, we conduct a comprehensive study of three characteristics of network traffic from the device model perspective: network access time, traffic volume, and diurnal patterns. The analysis shows that the distribution of network access time can be modeled by a two-component Gaussian mixture model, and that the distribution of traffic volumes is highly skewed and follows a power law. In addition, seven distinct diurnal patterns of cellular device usage are identified by applying an unsupervised clustering algorithm to the collected traffic data. Copyright © 2014 John Wiley & Sons, Ltd.
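The core of the paper's recognition step, matching an observed set of traffic features against per-model fingerprints by Jaccard similarity, can be sketched in a few lines. The feature names and fingerprints below are invented for illustration; the paper's actual feature set and its MapReduce implementation are not described in the abstract.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recognize_model(observed: set, fingerprints: dict) -> str:
    """Return the device model whose reference fingerprint is most
    similar (by Jaccard measure) to the observed traffic features."""
    return max(fingerprints, key=lambda m: jaccard(observed, fingerprints[m]))

# Hypothetical fingerprints: sets of traffic features (user-agent tokens,
# contacted hosts, TCP options) per device model.
fingerprints = {
    "model-A": {"ua:AppleWebKit", "host:itunes", "mss:1460"},
    "model-B": {"ua:Dalvik", "host:gplay", "mss:1400"},
}
observed = {"ua:Dalvik", "host:gplay", "mss:1460"}
print(recognize_model(observed, fingerprints))  # → model-B
```

In a MapReduce setting, each mapper would emit per-device feature sets and the reducer would run the comparison above against the fingerprint table.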

2.
Recently, the deployment of novel smart network concepts, such as the Internet of Things (IoT) and machine-to-machine communication, has gained attention owing to its role in providing communication among various smart devices. The IoT involves a set of IoT devices (IoTDs), such as actuators and sensors, that communicate with IoT applications via IoT gateways without human intervention. IoTDs carry different traffic types with various delay requirements and can be classified into two main groups: critical and massive IoTDs. A fundamental enabling technology for the IoT is Long-Term Evolution-Advanced (LTE-A). In the future, the number of IoTDs attempting to access an LTE-A network within a short period will increase rapidly, significantly reducing the performance of the LTE-A network and affecting the QoS required by the various IoT traffic types. Efficient resource allocation is therefore required. In this paper, we propose a priority-based allocation scheme for multiclass service in the IoT that efficiently shares resources between critical and massive IoTD traffic based on their specific characteristics, while protecting the critical IoTDs, which have priority over the massive IoTDs. The performance of the proposed scheme is analyzed using a Geo/G/1 queuing system, focusing on the QoS guarantees and resource utilization of both critical and massive IoTDs. The service-time distribution of the proposed system is determined, from which the average waiting and service times are derived. The results indicate that the performance of the massive IoTDs depends on the data traffic characteristics of the critical IoTDs. Furthermore, the results emphasize the importance of system delay analysis and demonstrate its effects on IoT configurations.
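The effect of giving critical IoTDs strict priority can be illustrated with a toy non-preemptive priority-queue simulation. This is only an illustrative sketch of the service discipline, not the paper's Geo/G/1 analysis; the job list and priorities are invented.

```python
import heapq

def simulate(jobs):
    """Single server, non-preemptive priority service: at each completion,
    pick the waiting job with the highest priority (0 = critical,
    1 = massive). jobs: list of (arrival_time, service_time, priority).
    Returns a dict mapping priority -> average waiting time."""
    jobs = sorted(jobs)              # process arrivals in time order
    ready, waits = [], {}
    t, i = 0.0, 0
    while i < len(jobs) or ready:
        if not ready and jobs[i][0] > t:
            t = jobs[i][0]           # server idles until the next arrival
        while i < len(jobs) and jobs[i][0] <= t:
            arr, svc, prio = jobs[i]
            heapq.heappush(ready, (prio, arr, svc))
            i += 1
        prio, arr, svc = heapq.heappop(ready)
        waits.setdefault(prio, []).append(t - arr)
        t += svc
    return {p: sum(w) / len(w) for p, w in waits.items()}

# A massive job in service delays a critical arrival (non-preemptive),
# but the critical job then overtakes the waiting massive job.
print(simulate([(0, 4, 1), (1, 1, 0), (1, 4, 1)]))
```

Running this shows the massive class absorbing the queueing delay, which mirrors the abstract's observation that massive IoTD performance depends on critical IoTD traffic.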

3.
In this paper, the characteristics of sub-network traffic are analyzed from the correlation point of view. The traffic histogram of a sub-network shows a 24-hour seasonal variation that reflects daily usage behavior. The autocorrelation function (ACF) and partial autocorrelation function (PACF) tests are first applied to examine the correlation of the traffic among consecutive hours and with a specific hour. A seasonal auto-regressive integrated moving average (ARIMA) model is then applied to characterize these properties of the network traffic. Modeling performance is evaluated by examining how closely the histogram and the moving average of traffic volume generated by the proposed model coincide with those of actual traffic collected from the network. The experimental results illustrate that the proposed model effectively captures the traffic behavior of the sub-network and can serve as a suitable traffic model for analyzing Internet performance. Copyright © 2002 John Wiley & Sons, Ltd.
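The ACF test that motivates the seasonal model can be sketched in pure Python on a synthetic series with a 24-hour cycle (this shows only the correlation diagnostic, not the ARIMA fitting itself; the sine-wave traffic is invented for illustration).

```python
import math

def acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

# Synthetic hourly traffic with a 24-hour seasonal cycle over two weeks.
traffic = [100 + 50 * math.sin(2 * math.pi * h / 24) for h in range(24 * 14)]
print(round(acf(traffic, 24), 3))   # near 1: strong daily seasonality
print(round(acf(traffic, 12), 3))   # near -1: half-cycle anti-correlation
```

A large spike at lag 24 in the ACF is exactly the evidence that justifies a seasonal ARIMA model with period 24.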

4.
The network management systems of the various telecom disciplines continually accumulate large volumes of network operating-status and performance data, but an effective tool for integrating and deeply analyzing these data has been lacking, so the real value of the historical data has not been realized. Using data warehouse and OLAP technology, a comprehensive analysis platform can be built that integrates the data, performs deep and flexible OLAP analysis, and presents the results via the Web, fully exploiting the value of the historical data.

5.
With the rapid development of Internet technology, the network has become an indispensable part of daily life. Network traffic monitoring is an important tool and technique for analyzing the massive volume of traffic data in a network. Traffic monitoring based on cloud computing makes it possible to analyze traffic data and user characteristics more effectively, to mine users' online behavior in depth, and to recommend content that users are more likely to want. Starting from the current state of Internet development, this paper analyzes techniques for massive network traffic data analysis, identifies several key technologies for cloud-based analysis of massive network traffic data, and studies them.

6.
Nowadays we see tremendous growth of the Internet, especially in the amount of data being transmitted and the new network protocols being introduced. This poses a challenge for network administrators, who need adequate tools for network management. Recent findings show that DNS can contribute valuable information on IP flows and improve traffic visibility in a computer network. In this paper, we apply these findings to propose a novel traffic classification algorithm with interesting features. We experimentally show that the information carried in domain names and port numbers is sufficient for immediate classification of a highly significant portion of the traffic. We present DNS-Class: an innovative, fast, and reliable flow-based traffic classification algorithm that on average yields 99.8% true positives and < 0.1% false positives on real traffic traces. The algorithm can work as a major element of a modular system in a cascade architecture. Additionally, we provide an analysis of how various network protocols depend on DNS in terms of flows, packets, and bytes. We release the complete source code implementing the presented system as open source. Copyright © 2014 John Wiley & Sons, Ltd.
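The core idea, classifying a flow immediately from the domain name the client resolved plus the destination port, can be sketched with a small rule table. The rules below are entirely hypothetical; DNS-Class's real ruleset and its cascade architecture are far more elaborate.

```python
# Hypothetical rule table: (domain suffix, port) -> application class.
RULES = [
    (".youtube.com", 443, "video"),
    (".netflix.com", 443, "video"),
    (".example-cdn.net", 80, "web"),
]

def classify(domain: str, port: int, default: str = "unknown") -> str:
    """Classify a flow from the DNS name the client resolved and the
    destination port, in the spirit of DNS-Class. Flows that match no
    rule fall through to the default label, where a later stage of a
    cascade classifier could handle them."""
    for suffix, p, label in RULES:
        if domain.endswith(suffix) and port == p:
            return label
    return default

print(classify("r3.youtube.com", 443))   # → video
print(classify("10.0.0.1", 22))          # → unknown
```

The `default` return value is what makes the algorithm suitable as the first element of a cascade: unmatched flows are simply passed downstream.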

7.
Accurate and real-time classification of network traffic is important for a number of network operation and management tasks such as quality-of-service differentiation, traffic shaping, and security surveillance. However, with emerging P2P applications using dynamic port numbers, IP masquerading techniques, and payload encryption, accurate and intelligent traffic classification remains a challenge despite a wide range of research on the topic. Since each classification method has its disadvantages and can hardly meet all the requirements of Internet traffic classification on its own, this paper presents a composite traffic classification system. The proposed lightweight system can accurately and effectively identify Internet traffic and scales well to accommodate both known and unknown/encrypted applications. Furthermore, it can satisfy various Internet uses and is feasible for real-time, line-speed applications. Our experimental results show the distinct advantages of the proposed classification system. Copyright © 2009 John Wiley & Sons, Ltd.

8.
In this paper, we present a deep neural network model to enhance intrusion detection performance. A deep learning architecture combining a convolutional neural network and long short-term memory learns spatio-temporal features of network flows automatically. Flow features are extracted from raw network traffic captures, flows are grouped, and each sequence of N consecutive flow records is transformed into a two-dimensional array, like an image. These two-dimensional feature vectors are normalized and fed to the deep learning model. This transformation of flow information makes deep learning computationally efficient. Overall, the convolutional neural network learns spatial features, and the long short-term memory learns temporal features from the sequence of raw network data. To maximize the detection performance of the deep neural network, we apply the tree-structured Parzen estimator to seek the optimum parameters in the hyperparameter space. Furthermore, we investigate the impact of the flow status interval, flow window size, convolution filter size, and number of long short-term memory units on detection performance. The presented flow-based intrusion detection method outperforms other publicly available methods, detecting abnormal traffic with 99.09% accuracy and a 0.0227 false alarm rate.
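The preprocessing step, turning N consecutive flow records into a normalized two-dimensional array suitable for a CNN+LSTM, can be sketched as below. The three features per record are illustrative only; the paper's exact feature set is not given in the abstract.

```python
def flows_to_image(flows, n):
    """Group n consecutive flow records into a 2-D feature array and
    min-max normalize each feature column to [0, 1], producing the
    image-like input a CNN+LSTM model would consume."""
    window = flows[:n]
    cols = list(zip(*window))            # transpose: one tuple per feature
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in window
    ]

# Each record: (duration_s, packets, bytes) — illustrative features only.
flows = [(0.5, 10, 1200), (1.0, 30, 4800), (0.0, 20, 2400)]
img = flows_to_image(flows, 3)
print(img[0])   # → [0.5, 0.0, 0.0]
```

Constant columns are mapped to 0.0 rather than dividing by zero, a common convention for degenerate min-max scaling.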

9.
Current network management needs an end-to-end overview of various flows rather than information that is purely local to individual devices. The typical manager-centric polling approach, however, is not suitable for understanding the network-wide behavior of the large-scale Internet. In this paper, we propose a new management information base (MIB) called the Service Monitoring MIB (SM MIB). The MIB provides a network manager with dynamic end-to-end management information by utilizing special packets. The special packet is an Internet control message protocol (ICMP) application that is sent to a remote network element to monitor Internet services. The SM MIB makes end-to-end management feasible while reducing management-related traffic and manager-to-manager interactions. Real examples show that the proposed SM MIB is useful for end-to-end QoS monitoring. We discuss the accuracy of the obtained data as well as the monitoring overhead. Copyright © 2004 John Wiley & Sons, Ltd.

10.
A traffic matrix describes the volume of network traffic from origin nodes to destination nodes. It is a critical input to network management and traffic engineering, so accurate traffic matrix estimates are necessary. Network tomography is widely used to reconstruct end-to-end network traffic from link loads and the routing matrix in large-scale IP backbone networks. This is challenging, however, because solving the network tomography model is an ill-posed, under-constrained inverse problem. Compressive sensing reconstruction algorithms are known to be efficient and precise approaches to under-constrained inference problems. Hence, in this paper, we propose a compressive sensing-based network traffic reconstruction algorithm. Taking into account the constraints of compressive sensing theory, we construct a novel network tomography model that obeys those constraints, including a framework for building the measurement matrix from the routing matrix. To obtain optimal traffic matrix estimates, we propose an iterative algorithm to solve the model. Numerical results demonstrate that our method can faithfully track each origin–destination flow. Copyright © 2014 John Wiley & Sons, Ltd.
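The abstract does not specify the paper's iterative algorithm, so as a stand-in, here is a classic iterative shrinkage-thresholding (ISTA) sketch for the same problem shape: recovering a sparse origin–destination flow vector x from under-determined link loads y = A x, where the rows of A come from the routing matrix. The toy matrix and flows are invented.

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

def ista(A, y, lam=0.01, step=0.1, iters=5000):
    """Iterative shrinkage-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1.
    step must be below 1/L, with L the largest eigenvalue of A^T A."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - sum(ai * xi for ai, xi in zip(row, x))
             for row, yi in zip(A, y)]                    # residual y - A x
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [soft(xj + step * gj, step * lam) for xj, gj in zip(x, g)]
    return x

# Toy routing matrix: 2 links observe 3 OD flows; true sparse x = [2, 0, 0].
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
y = [2.0, 0.0]
x = ista(A, y)
print([round(v, 2) for v in x])   # ≈ [2, 0, 0] up to slight l1 shrinkage
```

The l1 penalty is what lets two link measurements pin down three flows, which is the essence of the compressive sensing argument in the abstract.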

11.
As the Internet evolves from a packet network supporting a single best-effort service class towards an integrated infrastructure supporting several service classes, some with QoS guarantees, there is growing interest in the introduction of admission control and in devising bandwidth-sharing strategies that meet the diverse needs of QoS-assured and elastic services. In this paper we show that the classical multi-rate loss model can be extended in a way that makes it useful for the performance analysis of a future admission-control-based Internet supporting traffic with peak-rate guarantees as well as elastic traffic. After introducing the model, we apply it to the analysis of a single link, where it sheds light on the trade-off between blocking probability and throughput. To investigate this trade-off, we introduce the throughput-threshold constraint, which bounds the probability that the throughput of a traffic flow drops below a predefined threshold. Finally, we use the model to determine the optimal parameter set of the popular partial-overlap link allocation policy: we propose a computationally efficient algorithm that provides blocking probability and throughput guarantees. We conclude that the model and the numerical results provide important insights into traffic engineering in the Internet. Copyright © 2002 John Wiley & Sons, Ltd.
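The classical loss-model machinery that the paper extends can be illustrated in its simplest single-rate form, the Erlang B blocking probability, computed with the standard stable recursion (the multi-rate extension in the paper generalizes this to several traffic classes).

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of the classical Erlang loss model, via the
    numerically stable recursion:
        E(0) = 1,  E(n) = a*E(n-1) / (n + a*E(n-1))
    where a is the offered load in Erlangs and n the number of servers."""
    e = 1.0
    for n in range(1, servers + 1):
        e = offered_load * e / (n + offered_load * e)
    return e

# 2 Erlangs offered to 2 circuits: E = (a^2/2!) / (1 + a + a^2/2!) = 0.4
print(round(erlang_b(2, 2.0), 3))  # → 0.4
```

Adding capacity at fixed load drives the blocking probability down, which is the basic trade-off the paper's throughput-threshold constraint then refines for elastic traffic.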

12.
HTTP adaptive streaming (HAS) is becoming the de facto standard for video streaming services over the Internet. In HAS, each video is segmented and stored in different qualities. Rate adaptation heuristics, deployed at the client, dynamically request the most appropriate quality level based on current network conditions. It has been shown that state-of-the-art heuristics perform suboptimally when sudden bandwidth drops occur, leading to freezes in the video playout, the main factor influencing users' quality of experience (QoE). This issue is aggravated for live events, where the client-side buffer must be kept as small as possible to reduce the playout delay between the user and the live signal. In this article, we propose a framework capable of increasing the QoE of HAS clients by reducing video freezes. The framework is based on OpenFlow, a widely adopted protocol implementing the software-defined networking principle. An OpenFlow controller introduces prioritized delivery of HAS segments, based on the network conditions and the HAS clients' status. The clients' status is obtained without any explicit client-to-controller communication, so no extra signaling is introduced into the network. Moreover, the OpenFlow controller is transparent to the quality decision process of the clients: it assists the delivery of the segments but does not determine the quality to be requested. To provide a comprehensive analysis of the proposed approach, we investigate the performance of the OpenFlow-based framework in the presence of realistic Internet cross-traffic. In particular, we model two types of applications, HTTP web browsing and progressive-download video streaming, which together with HAS currently represent the majority of Internet traffic. By evaluating this approach through emulation in several multi-client scenarios, we show that it can reduce freeze time for HAS clients under network congestion by up to 10 times compared with state-of-the-art heuristics, without impacting the performance of the cross-traffic applications. Copyright © 2016 John Wiley & Sons, Ltd.
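A toy client-side rate-adaptation heuristic of the kind the article builds on can be sketched as follows. The bitrate ladder, safety margin, and buffer threshold are all invented for illustration and do not reflect any specific published heuristic.

```python
def pick_quality(bitrates, bandwidth_kbps, buffer_s,
                 safety=0.8, low_buffer=4.0):
    """Toy HAS heuristic: request the highest bitrate below a safety
    margin of the measured bandwidth; fall back to the lowest quality
    when the buffer is nearly empty, to avoid playout freezes."""
    if buffer_s < low_buffer:
        return bitrates[0]                      # panic mode: protect playout
    usable = bandwidth_kbps * safety
    candidates = [b for b in bitrates if b <= usable]
    return candidates[-1] if candidates else bitrates[0]

ladder = [300, 750, 1500, 3000]                 # kbit/s, sorted ascending
print(pick_quality(ladder, 2000, 10.0))         # → 1500
print(pick_quality(ladder, 2000, 2.0))          # → 300
```

The buffer-driven fallback is exactly the behavior the article's OpenFlow controller assists: by prioritizing segment delivery during congestion, the client is less likely to enter the low-buffer branch at all.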

13.
Simulation-based optimization is a decision-making tool that helps identify an optimal solution or design for a system. An optimal solution or design is more meaningful if it enhances a smart system having sensing, computing, and monitoring capabilities with improved efficiency. In situations where testing a physical prototype is difficult, computer-based simulation and its optimization processes provide low-cost, speedy, and less time- and resource-consuming solutions. In this work, the proposed heuristic simulation-optimization method for improving quality of service (QoS) is compared with generalized integrated optimization (a simulation approach based on genetic algorithms with evolutionary simulated annealing strategies and simplex search). In the proposed approach, feature-based local (group) and global (network) formation processes are integrated with Internet of Things (IoT) based solutions to find the optimum performance. The simulated annealing method is then applied to find local and global optimum values supporting minimum traffic conditions. A small-scale network of 50 to 100 nodes shows that genetic simulation optimization with multicriteria and multidimensional features performs better than other simulation-optimization approaches. Further, improvements between 3.4% and 16.2% are observed in faster route identification for small-scale IoT networks with the integrated simulation-optimization constraint model compared with the traditional method. The proposed approach also improves critical infrastructure monitoring performance compared with the generalized simulation-optimization process in complex transportation scenarios with heavy traffic, and its communication and computational cost complexities are the lowest.
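The simulated-annealing ingredient mentioned in the abstract can be sketched generically: accept worse moves with probability exp(-Δ/T) and cool the temperature geometrically. This is a minimal textbook sketch on an invented 1-D cost function, not the paper's integrated multicriteria method.

```python
import math
import random

def anneal(f, x0, step=1.0, t0=1.0, cooling=0.995, iters=2000, seed=7):
    """Minimal simulated-annealing sketch for minimizing f: accept
    uphill moves with probability exp(-delta/T), cool T geometrically,
    and remember the best point ever visited."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# A multimodal 1-D cost with many local minima; global minimum at x = 0.
cost = lambda x: x * x + 10 * (1 - math.cos(x))
best, fbest = anneal(cost, x0=8.0)
print(round(best, 2), round(fbest, 3))
```

Starting from x = 8, plain hill-climbing would get trapped in a local well near x ≈ 5; the temperature schedule is what lets the search escape toward the global minimum.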

14.
Kim, Meejoung. Wireless Networks 2020, 26(8): 6189–6202.

In this paper, we introduce the integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) process as a network traffic prediction model. Because INGARCH is a non-linear analytical model that can capture characteristics of network traffic such as Poisson packet arrivals and the long-range dependence property, it appears to be an adequate model for network traffic prediction. Based on an investigation of the traffic arrival process in various network topologies, including IoT and VANET, we confirm that assuming Poisson packet arrivals is reasonable for some networks and network environments. The prediction model is generated by estimating the parameters of the INGARCH process with the conditional maximum likelihood estimation method and then predicting the Poisson parameters of the process several steps ahead. Its performance is compared with three other models: autoregressive integrated moving average (ARIMA), GARCH, and a long short-term memory recurrent neural network. Anonymized passive traffic traces provided by the Center for Applied Internet Data Analysis (CAIDA) are used in the experiment. Numerical results show that the proposed model predicts better than the three models in terms of the measurements used in prediction models. Based on this study, we conclude the following: INGARCH captures the characteristics of network traffic better than the other statistical models; it is more tractable than neural networks (NNs), overcoming their black-box nature; and the performance of some statistical models is comparable or even superior to that of NNs, especially when the data are insufficient for deep NNs.
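The INGARCH(1,1) recursion for the conditional mean can be sketched directly. The parameters below are assumed constants for illustration, whereas the paper estimates them by conditional maximum likelihood; the packet counts are invented.

```python
def ingarch_predict(counts, omega, alpha, beta, lam0=None):
    """One-step-ahead conditional mean of an INGARCH(1,1) process:
        lambda_t = omega + alpha * x_{t-1} + beta * lambda_{t-1}
    where x_t are observed counts. Returns the predicted Poisson rate
    for the next count, seeding lambda_0 with the sample mean."""
    lam = lam0 if lam0 is not None else sum(counts) / len(counts)
    for x in counts:
        lam = omega + alpha * x + beta * lam
    return lam

packets = [10, 12, 9, 15, 11]        # per-interval packet counts (invented)
print(round(ingarch_predict(packets, omega=2.0, alpha=0.3, beta=0.5), 3))
```

The recursion makes the GARCH analogy visible: the predicted rate is a geometrically weighted mixture of all past counts, which is how the model captures persistence (long-range-dependence-like behavior) in the traffic.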

15.
Future heterogeneous networks with dense cell deployment may suffer high intercell interference. A number of interference coordination (IC) approaches have been proposed to reduce it. For dense small-cell deployments with high intercell interference, traditional forward-link IC approaches intended to improve edge-user throughput for best-effort traffic (e.g., FTP downloads) do not necessarily improve quality-of-service performance for delay-sensitive traffic such as voice over LTE (VoLTE). This study proposes a dynamic, centralized joint IC approach to improve forward-link performance for delay-sensitive traffic on densely deployed enterprise-wide LTE femtocell networks. The approach uses a two-level scheme: central and femtocell. At the central level, the algorithm aims to maximize network utility (the utility-based approach) and minimize network outage (the graphic-based approach) by partitioning the network into clusters and conducting an exhaustive search for optimized resource allocation among the femtocells (femto access points) within each cluster. At the femtocell level, the algorithm uses existing static approaches, such as conventional frequency reuse (ReUse3) or soft frequency reuse (SFR), to further improve user-equipment quality-of-service performance. The combined approach thus comprises utility- and graphic-based SFR and ReUse3 (USFR/GSFR and UReUse3/GReUse3, respectively). The cell and edge-user throughput of best-effort traffic and the packet loss rate of VoLTE traffic are characterized and compared for the proposed and traditional IC approaches.

16.
Internet protocol (IP) traffic connections arrive dynamically at wavelength-division multiplexing (WDM) network edges with low data rates compared with the wavelength capacity, and with availability and quality-of-service (QoS) constraints. This paper introduces a scheme, integrated into the control and management plane of IP/WDM networks, to satisfy the availability and QoS required by IP traffic connections bundled onto a single wavelength (lightpath) in WDM networks protected by shared-backup path protection (SBPP). The scheme consists of two main operations: (i) routing multi-granular connections under traffic grooming policies, and (ii) providing appropriate shared protection on the basis of subscribers' service-level agreements in terms of data rate, availability, and blocking probability. Using a Markov chain process, a probabilistic approach is developed to derive connection blocking probability models that quantify the blocking probability and service utilization of M:N and 1:N SBPP schemes. The proposed scheme and mathematical models are evaluated in terms of bandwidth blocking ratio, availability satisfaction rate, network utilization, and connection blocking probability. The results provide network operators with an operational parameter that controls the allocation of working and backup resources to dynamic IP traffic connections on the basis of their priority and data rate while satisfying their bandwidth and availability requirements. Copyright © 2013 John Wiley & Sons, Ltd.

17.
The Internet initially evolved as a resource sharing model in which resources are identified by IP addresses. With rapid technological advancement, however, hardware has become cheap, and the need to share hardware over the Internet has diminished. People now use the Internet mainly for information exchange, and it has gradually shifted from a resource sharing model to an information sharing model. To meet the growing demand for information exchange, the Content Centric Network (CCN) is envisaged as a clean-slate future network architecture designed for smooth content distribution over the Internet. In CCN, content is made easily available through network caching, which is misaligned with the existing business policies of content providers/publishers in the IP-based Internet. Hence, the transition from the contemporary IP-based Internet to CCN demands a redesign of the business policies of content publishers/providers. In this paper, we propose efficient and secure communication protocols for a flexible CCN business model that protect the existing business policies of the content publisher while maintaining salient CCN features such as in-network content caching and Interest packet aggregation. To enhance efficiency and security, Elliptic Curve Cryptography (ECC) is used. The proposed ECC-based scheme is shown to be resilient to relevant existing cryptographic attacks. A performance analysis demonstrating lower computation and communication overheads and increased efficiency is given. Moreover, a formal security verification of the proposed scheme using the widely used AVISPA simulator and BAN logic shows that the scheme is well secured.
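The elliptic-curve group operations underlying any ECC scheme can be sketched over a tiny field using the textbook curve y² = x³ + 2x + 2 over GF(17), whose point group has order 19. This toy sketch is for illustrating the arithmetic only; it is not the paper's protocol, and real deployments use standardized curves with constant-time implementations.

```python
# Toy elliptic-curve arithmetic over GF(17) for y^2 = x^3 + 2x + 2,
# a textbook example whose group of points is cyclic of order 19.
P_MOD, A = 17, 2

def ec_add(p, q):
    """Add two points on the curve; None is the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                          # p + (-p) = infinity
    if p == q:                               # tangent-line (doubling) slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                    # chord (addition) slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return x3, (s * (x1 - x3) - y1) % P_MOD

def scalar_mul(k, p):
    """Double-and-add scalar multiplication, the core ECC primitive."""
    r = None
    while k:
        if k & 1:
            r = ec_add(r, p)
        p = ec_add(p, p)
        k >>= 1
    return r

G = (5, 1)
print(ec_add(G, G))        # → (6, 3)
print(scalar_mul(19, G))   # → None: G generates a group of order 19
```

Scalar multiplication being easy while its inverse (the discrete logarithm) is hard is what gives the paper's ECC-based protocols their security with short keys and low overhead.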

18.
Energy efficiency is one of the top priorities for future cellular networks and can be improved by implementing cooperative mechanisms. In this paper, we propose three evolved node B (eNB)-centric energy-saving cooperation techniques for long-term evolution (LTE) systems. These techniques, named intra-network, inter-network, and joint cooperation, involve traffic-aware intelligent cooperation among eNBs belonging to the same or different networks. The proposed techniques dynamically reconfigure LTE access networks in real time using fewer active eNBs and thus achieve energy savings. In addition, they are distributed and self-organizing in nature. Analytical models for evaluating the switching dynamics of eNBs under these cooperation mechanisms are also formulated. We thoroughly investigate the proposed system under different numbers of cooperating networks, traffic scenarios, eNB power profiles, and switching thresholds. Optimal energy savings while maintaining quality of service are also evaluated. Results indicate a significant reduction in network energy consumption. System performance in terms of network capacity utilization, switching statistics, additional transmit power, and eNB sleeping patterns is also investigated. Finally, a comprehensive comparison with other works is provided for further validation. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Fast and accurate methods for predicting traffic properties and trends are essential for dynamic network resource management and congestion control. With the aim of performing feasible online prediction of network traffic, this paper proposes a novel time series model named adaptive autoregressive (AAR). The model is built upon an adaptive memory-shortening technique and an adaptive order selection method originally developed in this study. Compared with conventional one-step-ahead prediction using traditional Box–Jenkins time series models (e.g., AR, MA, ARMA, ARIMA, and ARFIMA), performance results obtained from actual Internet traffic traces demonstrate that the proposed AAR model supports online prediction of dynamic network traffic with reasonable accuracy and relatively low computational complexity. Copyright © 2005 John Wiley & Sons, Ltd.
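The autoregressive fitting that AAR adapts can be sketched for the simplest AR(1) case: estimate phi by least squares and predict one step ahead. The adaptive memory-shortening and order selection of the paper are not reproduced here, and the noise-free series is invented so the estimate is exact.

```python
def fit_ar1(x):
    """Least-squares estimate of phi in the zero-mean AR(1) model
        x_t = phi * x_{t-1} + e_t,
    returning (phi_hat, one-step-ahead prediction phi_hat * x_T)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    phi = num / den
    return phi, phi * x[-1]

# Noise-free AR(1) series with phi = 0.5; the estimator recovers it exactly.
series = [8.0]
for _ in range(10):
    series.append(0.5 * series[-1])
phi, pred = fit_ar1(series)
print(round(phi, 3))   # → 0.5
```

An online predictor would rerun this fit over a sliding window as new samples arrive; AAR's contribution is choosing that window length and the model order adaptively.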

20.
With the exponential increase in the size and complexity of the data to be investigated, existing network forensics methods are not very efficient in terms of accuracy and detection ratio. Existing techniques for network forensic analysis exhibit inherent limitations when processing a huge volume, variety, and velocity of data, making network forensics a time- and resource-consuming task. To balance time taken against output delivered, these techniques limit the amount of data under analysis, which results in polynomial time complexity. To mitigate these issues, this paper proposes an effective framework that overcomes these limitations in handling a large volume, variety, and velocity of data. An architectural setup consisting of the MapReduce framework on top of the Hadoop Distributed File System is proposed, demonstrating the capability to handle big data storage and processing issues using cloud computing. In the proposed framework, a supervised machine learning algorithm (random-forest-based decision trees) is implemented to achieve better sensitivity. To train and validate the model, a publicly available data set from CAIDA is used, and university network traffic samples of increasing size are used for the experiments. The results confirm the superiority of the proposed framework for network forensics, with an average accuracy of 99.34% (malicious and non-malicious traffic).

