Found 20 similar documents; search took 59 ms
1.
2.
3.
Modern wireless communication networks frequently suffer reduced application throughput due to a high number of collisions and the subsequent retransmission of data packets. Moreover, these networks have restricted computational capacity because of limited node-battery power. These challenges can be addressed by designing fast, reliable networks for resource-constrained operation through concurrent optimization of multiple performance parameters across different layers of the conventional protocol stack. This optimization can be accomplished efficiently via cross-layer design, aided by network coding and optimal allocation of limited resources to wireless links. In this paper, we evaluate and analyze intersession coding across several source–destination pairs in random access ad hoc networks with inherent power scarcity and variable-capacity links. The proposed work addresses the problem of jointly optimal coding, rate control, power control, contention, and flow control for multi-hop heterogeneous networks with correlated sources. To this end, we employ a cross-layer design for multiple unicast sessions with network coding and bandwidth constraints. The model is solved to global optimality with CVX, using disciplined convex programming, to obtain improved throughput and power allocation. Simulation results show that the proposed model effectively incorporates throughput and link power management while satisfying the flow conservation, bit error rate, data compression, power outage, and capacity constraints of challenged wireless networks. Finally, we compare our model with three previous algorithms to demonstrate its efficacy in terms of performance metrics such as transmission success probability, throughput, power efficiency, and delay.
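The abstract above solves a joint rate/power allocation problem as a disciplined convex program in CVX. As a minimal, self-contained sketch of that style of problem (not the authors' full model), the classic water-filling allocation below splits a power budget across links with different gains so as to maximize total log-throughput; the function name and example gains are illustrative only.

```python
def water_filling(gains, p_total, iters=100):
    """Split a power budget across links to maximize sum(log(1 + g_i * p_i)).

    Classic water-filling: each link gets p_i = max(0, mu - 1/g_i), where the
    water level mu is found by bisection so the budget is spent exactly.
    """
    def alloc(mu):
        return [max(0.0, mu - 1.0 / g) for g in gains]

    # Bracket the water level: at lo nothing is allocated, at hi too much is.
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if sum(alloc(mid)) > p_total:
            hi = mid
        else:
            lo = mid
    return alloc((lo + hi) / 2.0)
```

For gains `[2.0, 1.0, 0.5]` and a unit power budget, the allocation is roughly `[0.75, 0.25, 0.0]`: the best link gets the most power and the worst link is switched off, which is the qualitative behaviour a convex cross-layer optimizer also exhibits.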
4.
Pejman Goudarzi 《Wireless Communications and Mobile Computing》2013,13(7):633-649
In differentiated quality-of-experience (QoE) enforcement for video transmission over wireless networks, accurate video quality metrics play a crucial role in the design of optimal resource assignment algorithms. Many cross-layer, optimization-based rate-allocation strategies, considering different objective functions (congestion level, total packet loss, etc.), have been developed for this purpose. The contributions of the proposed work are twofold. First, an optimal resource assignment framework is developed in which, subject to network-specific constraints and incorporating appropriate video quality metrics, the total weighted QoE of competing scalable video sources is optimized using cross-layer techniques. Second, the resulting optimal rates can be used for differentiated QoE enforcement among multiple competing scalable video sources and can serve as rate feedback for online rate adaptation of a moderate scalable video encoder such as H.264/MPEG-4 advanced video coding. The weight parameters are selected according to the importance of each video sequence's quality and can be associated with prior service-level-agreement-based prices. Numerical analyses are presented to validate the theoretical results and verify the claims. Copyright © 2011 John Wiley & Sons, Ltd.
5.
Alejandro Cánovas, Miran Taha, Jaime Lloret, Jesús Tomás 《International Journal of Communication Systems》2019,32(12)
New Internet Protocol Television (IPTV) services are incorporating technologies such as stereoscopic TV and three-dimensional (3D) HDTV. Increasingly ubiquitous networking and the proliferation of smart devices have also led to high demand for IPTV over networks. Stereoscopic content requires higher data rates to support these emerging TV services, and the network layer faces stricter requirements to provide good quality of service (QoS) and quality of experience (QoE) to end users when delivering stereoscopic IPTV. In this paper, we propose a new cognitive network management algorithm and protocol based on 3D coding techniques for delivering stereoscopic IPTV services. The management algorithm observes network performance to guarantee the quality of the stereoscopic IPTV service by measuring QoS parameters (delay, jitter, and packet loss) and QoE metrics such as Peak Signal-to-Noise Ratio (PSNR), Moving Image Videography (MIV), and Mean Opinion Score (MOS). These parameters are monitored so that the IPTV service provider can make appropriate codification decisions. The codification decision uses K-means classification to select the better codification for the end users; both 3D coding formats, Stereo Video Coding (SVC) and 2D-plus-Depth (2D + Z), are used in our experiments. As a result, our proposal successfully ensures appropriate quality of service and quality of experience to the end users while the stereoscopic IPTV service is being delivered.
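The codec selection step above clusters QoS measurements with K-means and then picks a coding format per cluster. A bare-bones sketch of that idea follows; the deterministic initialization, the `(delay, jitter, loss)` feature layout, and the delay threshold in `pick_codec` are all assumptions for illustration, not details from the paper.

```python
def kmeans(points, k, iters=30):
    """Plain Lloyd's k-means over QoS sample vectors (delay, jitter, loss).
    Naive deterministic init: the first k points become the initial centroids."""
    centroids = list(points[:k])
    assignment = [0] * len(points)
    for _ in range(iters):
        for idx, p in enumerate(points):
            assignment[idx] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            if members:  # keep the old centroid if a cluster emptied out
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return assignment, centroids

def pick_codec(centroid, delay_threshold=100.0):
    """Hypothetical decision rule: fall back to the lighter 2D+Z format when a
    cluster's mean delay is high, otherwise use full stereo coding (SVC)."""
    return "2D+Z" if centroid[0] > delay_threshold else "SVC"
```

With clearly separated "good" and "bad" QoS samples, the two clusters map onto the two 3D formats named in the abstract.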
6.
With the arrival of the big data era, networks must be prepared in every respect, from the underlying infrastructure to upper-layer applications. This paper focuses on the network infrastructure side, discussing how to get ready for the big data era in terms of equipment-room construction scale, layout, and network bandwidth and quality requirements.
7.
8.
Kemal Akkaya, Murat Demirbas, R. Savas Aygun 《Wireless Communications and Mobile Computing》2008,8(2):171-193
With the increasing need for energy saving mechanisms in Wireless Sensor Networks (WSNs), data aggregation techniques, which reduce the number of data transmissions by eliminating redundant information, have been studied as a significant research problem. These studies have shown that data aggregation in WSNs may produce various trade-offs among network-related performance metrics such as energy, latency, accuracy, fault-tolerance, and security. In this paper, we investigate the impact of data aggregation on these networking metrics by surveying the existing data aggregation protocols in WSNs. Our aim is twofold: first, to provide a comprehensive summary and comparison of the existing data aggregation techniques with respect to different networking metrics; second, to point out both possible future research issues and the need for collaboration between the data management and networking research communities working on data aggregation in WSNs. Copyright © 2006 John Wiley & Sons, Ltd.
9.
This paper presents an optimal proportional bandwidth allocation and data droppage scheme to provide differentiated services (DiffServ) for downlink pre-orchestrated multimedia data in a single-hop wireless network. The proposed resource allocation scheme finds the optimal bandwidth allocation and data drop rates under minimum quality-of-service (QoS) constraints, combining the desirable attributes of the relative and absolute DiffServ approaches. In contrast to the relative DiffServ approach, the proposed scheme guarantees the minimum amount of bandwidth provided to each user, without dropping any data at the base station, when the network has sufficient resources. If the network lacks sufficient resources to provide minimum bandwidth guarantees to all users without dropping data, the proportional data dropper finds the optimal data drop rates within acceptable levels of QoS, thus avoiding the inflexibility of the absolute DiffServ approach. The optimal bandwidth allocation and data droppage problems are formulated as constrained nonlinear optimization problems and solved using efficient techniques. Simulations show that the proposed scheme exhibits the desirable features of both absolute and relative DiffServ. Copyright © 2009 John Wiley & Sons, Ltd.
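The two operating regimes described above (minimum guarantees plus proportional surplus when capacity suffices, proportional droppage when it does not) can be sketched in a few lines. This is a linear toy version of the idea, not the paper's constrained nonlinear program; the function name and the proportional-scaling drop rule are illustrative assumptions.

```python
def allocate_bandwidth(minima, weights, capacity):
    """Grant each user its minimum when capacity allows, splitting any surplus
    in proportion to the users' weights; otherwise scale the minima down
    proportionally, which models dropping data at the base station."""
    need = sum(minima)
    if capacity >= need:
        surplus = capacity - need
        total_weight = sum(weights)
        return [m + surplus * w / total_weight for m, w in zip(minima, weights)]
    scale = capacity / need  # insufficient capacity: proportional droppage
    return [m * scale for m in minima]
```

For two users with minima `[2, 2]` and weights `[1, 3]`, a capacity of 8 yields `[3, 5]` (minima honored, surplus split 1:3), while a capacity of 2 yields `[1, 1]` (uniform proportional droppage).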
10.
Rerngvit Yanggratoke, Jawwad Ahmed, John Ardelius, Christofer Flinta, Andreas Johnsson, Daniel Gillblad, Rolf Stadler 《International Journal of Network Management》2018,28(2)
We predict performance metrics of cloud services using statistical learning, whereby the behaviour of a system is learned from observations. Specifically, we collect device and network statistics from a cloud testbed and apply regression methods to predict, in real‐time, client‐side service metrics for video streaming and key‐value store services. Results from intensive evaluation on our testbed indicate that our method accurately predicts service metrics in real time (mean absolute error below 16% for video frame rate and read latency, for instance). Further, our method is service agnostic in the sense that it takes as input operating systems and network statistics instead of service‐specific metrics. We show that feature set reduction significantly improves the prediction accuracy in our case, while simultaneously reducing model computation time. We find that the prediction accuracy decreases when, instead of a single service, both services run on the same testbed simultaneously or when the network quality on the path between the server cluster and the client deteriorates. Finally, we discuss the design and implementation of a real‐time analytics engine, which processes streams of device statistics and service metrics from testbed sensors and produces model predictions through online learning.
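The core of the approach above is a regression model mapping device statistics to a client-side metric. As a minimal sketch of that mapping, the closed-form one-feature least-squares fit below learns a line from observations; the CPU-load-to-frame-rate pairing in the example is invented for illustration and is not data from the paper.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ~ slope * x + intercept (single feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Apply the fitted model to a new device-statistics sample."""
    slope, intercept = model
    return slope * x + intercept
```

The paper uses richer feature sets and online learning, but the train-then-predict loop is the same shape: fit on collected (statistics, metric) pairs, then predict the service metric in real time for new samples.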
11.
12.
13.
Mohammad Reza Abbasi, Ajay Guleria, Mandalika Syamala Devi 《International Journal of Communication Systems》2020,33(2)
Software-defined networking (SDN) facilitates network programmability through a central controller, which dynamically modifies the network configuration to adapt to changes in the network. In SDN, the controller updates the network configuration through flow updates, i.e., by installing flow rules in network devices. However, during a network update, improper scheduling of flow updates can lead to a number of problems, including overflow of the switch flow table memory and the link bandwidth. Another challenge is minimizing the completion time of large network updates triggered by events such as traffic engineering path updates. Existing centralized approaches do not search the solution space for flow update schedules with optimal completion time. We propose a hybrid genetic algorithm-based flow update scheduling method (the GA-Flow Scheduler). By searching the solution space, the GA-Flow Scheduler attempts to minimize the completion time of the network update without overflowing the flow table memory of the switches or the link bandwidth. It can be used in combination with other flow scheduling methods to improve network performance and reduce flow update completion time; in this paper, it is combined with a stand-alone method called the three-step method. Through large-scale experiments, we show that the proposed hybrid approach reduces network update time and packet loss. We conclude that the GA-Flow Scheduler provides improved performance over the stand-alone three-step method while handling the network update problems described above.
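To make the genetic-search idea concrete, here is a deliberately simplified evolutionary scheduler: it evolves an order for applying flow updates so as to minimize a weighted sum of completion times. It is a sketch of the search strategy only (elitist selection plus swap mutation, no crossover), not the GA-Flow Scheduler itself, and the cost function is a stand-in for the paper's completion-time and overflow objectives.

```python
import random

def evolve_update_order(durations, weights, pop_size=30, generations=200, seed=1):
    """Evolve a permutation of flow updates minimizing weighted completion time."""
    rng = random.Random(seed)
    n = len(durations)

    def cost(order):
        elapsed, total = 0.0, 0.0
        for i in order:
            elapsed += durations[i]
            total += weights[i] * elapsed
        return total

    # Initial population: random permutations of the update indices.
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        survivors = population[: pop_size // 2]   # elitism: keep the best half
        children = []
        for parent in survivors:
            child = list(parent)
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=cost)
```

On a small instance the evolved order pushes short, heavily weighted updates to the front, which is exactly the behavior a completion-time-minimizing scheduler should show.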
14.
15.
16.
Pinar Sarisaray Boluk, Sebnem Baydere, A. Emre Harmanci 《Wireless Communications and Mobile Computing》2014,14(1):1-18
Wireless multimedia sensor networks (WMSNs) support an increasing variety of multimedia applications, including image and video transmission. In such applications, multimedia sensor nodes should ideally maximize perceptual quality and minimize energy expended in communication. To obtain the required perceptual quality, quality-aware routing is a key research area in WMSNs. However, mapping system parameters to the end user's perceptual quality-of-service measures is challenging because identification metrics are incomplete, and the optimal routing algorithm is not tractable unless disputable assumptions and simplifications are made. In this paper, we propose a novel image transmission framework to optimize both perceptual quality and energy expenditure in WMSNs. Our framework aims to provide acceptable perceptual quality at the end user by using an analytical distortion prediction model that can predict the image distortion resulting from any given error pattern. The innovation of the proposed scheme lies in the combined use of content-aware packet prioritization with an energy- and quality-aware routing protocol, named image quality-aware routing. The framework not only proposes an energy-efficient route selection policy but also manages the network load according to the residual energy of nodes, leading to substantial energy savings. The results reveal that the framework is capable of identifying true metrics for mapping required image quality to network parameters. Copyright © 2011 John Wiley & Sons, Ltd.
17.
Offline signatures are among the most widely adopted biometric authentication techniques in banking systems and administrative and financial applications due to their simplicity and uniqueness. Several automated techniques have been developed to assess the genuineness of an offline signature; however, the existing literature on machine learning-based offline signature verification (OfSV) systems has been recapitulated in only a few review studies. The objective of this systematic review is to present the state-of-the-art machine learning-based models for OfSV systems along five aspects: datasets, preprocessing techniques, feature extraction methods, machine learning-based verification models, and performance evaluation metrics. Accordingly, five research questions were identified and analysed. This review covers articles published between January 2014 and October 2019; a systematic approach was adopted to select 56 articles. The review revealed that deep learning-based neural networks have recently attained the most promising results for OfSV systems on public datasets. It consolidates the performance of state-of-the-art OfSV systems in the selected studies on five public datasets (CEDAR, GPDS, MCYT-75, UTSig, and BHSig260). Finally, fifteen open research issues were identified for future development.
18.
Connected-mode mobility management is realized mainly through handover. Because LTE networks are deployed on the same frequency, a failed or late intra-frequency handover aggravates co-channel interference once the signal level of a co-frequency neighboring cell exceeds that of the serving cell, degrading network quality. Analysis of live-network data shows that, under the current moderate traffic load, unreasonable handover has become an important cause of LTE network quality degradation, second only to weak coverage. Based on live-network data and case studies, this paper proposes methods for improving LTE network quality through handover optimization.
19.
Chien Ting Wang, Ying-Dar Lin, Chih-Chiang Wang, Yuan-Cheng Lai 《International Journal of Communication Systems》2020,33(4)
Network function virtualization (NFV) places network functions onto the virtual machines (VMs) of physical machines (PMs) located in data centers. In practice, a data flow may pass through multiple network functions, which collectively form a service chain across multiple VMs residing on the same or different PMs. Given a set of service chains, network operators have two options for placing them: (a) minimizing the number of VMs and PMs so as to reduce the server rental cost or (b) placing VMs running network functions belonging to the same service chain on the same or nearby PMs so as to reduce the network delay. In determining the optimal service chain placement, operators face the problem of minimizing the server cost while still satisfying the end-to-end delay constraint. The present study proposes an optimization model to solve this problem using a nonlinear programming (NLP) approach. The proposed model is used to explore various operational problems in the service chain placement field. The results suggest that the optimal cost ratio for PMs with high, hybrid, and low capacity, respectively, is 4:2:1. Meanwhile, the maximum operating utilization rate should be limited to 55% in order to minimize the rental cost. Regarding quality of service (QoS) relaxation, the server cost reduces by 20%, 30%, and 32% as the end-to-end delay constraint is relaxed from 40 to 60, 80, and 100 ms, respectively. For the server location, the cost decreases by 25% when the high-capacity PMs are decentralized rather than centralized. Finally, the cost reduces by 40% as the repetition rate in the service chain increases from 0 to 2. A heuristic algorithm, designated as common sub-chain placement first (CPF), is proposed to solve the service chain placement problem for large-scale problems (e.g., 256 PMs). It is shown that the proposed algorithm reduces the solution time by up to 86% compared with the NLP optimization model, with an accuracy reduction of just 8%.
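The placement trade-off described above (fewer PMs versus keeping a chain's VMs together) can be illustrated with a plain first-fit heuristic. This is not the paper's CPF algorithm (which exploits common sub-chains) nor its NLP model; it is a minimal sketch where, by assumption, each chain's VMs are co-located on one PM so intra-chain traffic never crosses machines.

```python
def place_chains(chains, pm_capacity):
    """First-fit placement: put each service chain (a list of VM resource
    demands) whole onto the first PM with enough spare capacity, opening a
    new PM only when none fits. Returns (PM index per chain, PMs used)."""
    free = []        # remaining capacity of each opened PM
    placement = []
    for chain in chains:
        demand = sum(chain)  # total VM demand of the chain
        for i, cap in enumerate(free):
            if cap >= demand:
                free[i] -= demand
                placement.append(i)
                break
        else:
            free.append(pm_capacity - demand)
            placement.append(len(free) - 1)
    return placement, len(free)
```

For chains with VM demands `[[2, 2], [3], [1]]` and PM capacity 5, the heuristic uses two PMs and packs the third chain into the first PM's leftover capacity, illustrating how co-location (low delay) and PM count (rental cost) pull against each other.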
20.
This article presents an analysis of the flow of information in a network of online news sites. Social network theory and research on hyperlinked networks of Web pages are used to develop a model of information flow among Web sites. Kleinberg's authority‐hub model is extended by introducing sources of information in the network. Significant support was found for a Source–Authority–Hub model, which shows the source, directionality, routing, and destination of news information flow through a network of authorities and hubs. This model demonstrates the ability of key Web sites to control the flow of news and information. Applications of the model to over‐time data have the potential to predict future changes in the online news industry.
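The Source–Authority–Hub model above builds on Kleinberg's HITS algorithm, whose base iteration is short enough to show in full: authority scores accumulate from the hubs that link to a page, hub scores accumulate from the authorities a page links to, and the two updates alternate with normalization. The source extension discussed in the article is not modeled here, and the site names in the example are invented.

```python
def hits(out_links, iterations=50):
    """Kleinberg's HITS: authorities are pages pointed to by good hubs, hubs
    are pages pointing to good authorities; iterate the mutual updates with
    L2 normalization until the scores stabilize."""
    nodes = set(out_links)
    for targets in out_links.values():
        nodes.update(targets)
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        auth = {n: sum(hub[u] for u, ts in out_links.items() if n in ts)
                for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        hub = {n: sum(auth[t] for t in out_links.get(n, ())) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth
```

In a toy news graph where an aggregator links to three wire services, the aggregator emerges as the top hub and the most-linked wire service as the top authority, which is the structure the article's extended model then augments with information sources.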