Similar Literature
20 similar articles found.
1.
Machine-to-machine (M2M) communication is an enabling technology for the internet of things (IoT) that provides communication between machines and devices without human intervention. One of the main challenges in IoT is managing a large number of machine-type communications coexisting with human-to-human (H2H) communications. Long term evolution (LTE) and LTE-advanced (LTE-A) technologies, owing to inherent characteristics such as high capacity and flexibility in data access management, are appropriate choices for M2M/IoT systems. In this paper, a two-phase intelligent scheduling mechanism based on interval type-2 fuzzy logic is presented for coexisting M2M/H2H traffic in LTE-A networks, designed to (1) satisfy QoS requirements, (2) ensure fair resource allocation, and (3) control the energy level of devices. The proposed interval type-2 fuzzy logic mechanism enhances data traffic efficiency by predicting and handling network uncertainties. The performance of the proposed algorithm is evaluated in terms of metrics such as delay, throughput, and bandwidth utilization.
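As a rough illustration of how an interval type-2 fuzzy controller might map channel quality and queue delay to a scheduling priority, the minimal sketch below uses nested triangular upper/lower membership functions (the footprint of uncertainty) and a crude type reduction by averaging the interval bounds. The membership shapes, rule, and variable names are assumptions for illustration, not the authors' controller.

```python
# Minimal interval type-2 fuzzy priority sketch (illustrative, not the paper's design).
# Each linguistic term has an upper and a lower triangular membership function;
# the gap between them is the footprint of uncertainty (FOU).

def tri(x, a, b, c):
    """Triangular membership value of x for the triangle (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, upper, lower):
    """Return the membership interval [mu_lower, mu_upper] for one IT2 term."""
    return tri(x, *lower), tri(x, *upper)

def priority(cqi, delay_ms):
    # Assumed terms: "good channel" peaking at CQI 15, "urgent" peaking at 100 ms delay.
    ch_lo, ch_up = it2_membership(cqi, upper=(0, 15, 30), lower=(3, 15, 27))
    dl_lo, dl_up = it2_membership(delay_ms, upper=(0, 100, 200), lower=(20, 100, 180))
    # One illustrative rule: priority is high if channel is good AND delay is urgent (min t-norm).
    fire_lo, fire_up = min(ch_lo, dl_lo), min(ch_up, dl_up)
    # Crude type reduction: average of the interval firing strengths.
    return 0.5 * (fire_lo + fire_up)

print(priority(cqi=12, delay_ms=80))   # higher value -> schedule sooner
```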

2.
To study the performance of a deep reinforcement learning (DRL)-based 5G heterogeneous network model, and to devise a reasonable resource allocation scheme that minimizes system energy consumption while satisfying the quality-of-service requirements of different types of end users, a DRL-based proximal policy optimization (PPO) algorithm is proposed and combined with a priority-based allocation strategy, covering massive machine-type communication (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communication (URLLC) services. Compared with the Greedy and DQN algorithms, the proposed algorithm reduces network delay by 73.19% and 47.05%, respectively, and energy consumption by 9.55% and 6.93%, respectively, while maintaining a good trade-off between energy consumption and user delay.
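The PPO agent itself is not reproduced here; the short sketch below only illustrates what a priority-based split of resource blocks across the three service types named in the abstract could look like, with URLLC served before eMBB and mMTC. The strict ordering, the request format, and the numbers are assumptions for illustration.

```python
# Hedged sketch of a priority-based resource-block (RB) split among URLLC, eMBB and mMTC.
# A DRL/PPO agent of the kind described above would tune such shares online; here they are fixed.

PRIORITY = ["URLLC", "eMBB", "mMTC"]          # served strictly in this order (assumption)

def allocate_rbs(total_rbs, demands):
    """demands: dict service -> requested RBs. Returns dict service -> granted RBs."""
    granted, remaining = {}, total_rbs
    for service in PRIORITY:
        want = demands.get(service, 0)
        granted[service] = min(want, remaining)
        remaining -= granted[service]
    return granted

print(allocate_rbs(100, {"URLLC": 20, "eMBB": 70, "mMTC": 40}))
# -> {'URLLC': 20, 'eMBB': 70, 'mMTC': 10}
```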

3.
Fifth generation (5G) slicing is an emerging technology for software-defined networking/network function virtualization enabled mobile networks. Improving utilization and throughput to meet the quality of service (QoS) requirements of 5G slicing is very important for mobile network operators. With growing data traffic from diverse applications on numerous smart mobile devices, each with its own QoS requirements, networks face congestion and overload problems that prevent the effective functioning of a radio access network (RAN). This paper proposes a more effective packet-based scheduling scheme for data traffic in 5G slicing with two operation modes, aimed at improving the resource utilization of the 5G cloud RAN and providing efficient isolation of 5G slices. These two operation modes are referred to as the static sharing resource (SSR) scheme and the dynamic sharing resources (DSR) scheme. The SSR scheme is a modified version of an existing method. The goal of this approach is to reallocate the shared available resources of the 5G network fairly and maximize bandwidth utilization while preventing any one 5G slice from overwhelming the others. The throughput and delay of the system model are also discussed to show its performance limits. On the basis of the simulation outcomes, we observed that the proposed DSR scheme outperforms the SSR scheme in terms of delay and throughput. In addition, the token bucket parameters together with the assigned capacity weight for each slice can be selected and configured based on the required QoS. Finally, the derived theoretical delay bound provides a good estimate for the maximum delay bounds of the slices.
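The token bucket parameters mentioned above (fill rate and bucket depth per slice) can be pictured with the small conformance check below; this is a minimal generic sketch, and the parameter names and values are illustrative assumptions, not the paper's configuration.

```python
# Minimal per-slice token bucket sketch (illustrative): a packet is admitted
# only if enough tokens have accumulated since the last arrival.

class TokenBucket:
    def __init__(self, rate_bps, bucket_bits):
        self.rate = rate_bps          # token fill rate r (bits/s)
        self.depth = bucket_bits      # bucket depth b (bits)
        self.tokens = bucket_bits
        self.last = 0.0               # time of last update (s)

    def admit(self, size_bits, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + self.rate * (now - self.last))
        self.last = now
        if size_bits <= self.tokens:
            self.tokens -= size_bits
            return True               # conforming packet
        return False                  # excess traffic: queue or drop per slice policy

slice_embb = TokenBucket(rate_bps=10e6, bucket_bits=1e6)
print(slice_embb.admit(size_bits=8000, now=0.001))
```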

4.
In this work, we propose a resource allocation algorithm for the LTE downlink that makes use of an adaptive multifractal envelope process and a minimum service curve. The proposed scheduling algorithm aims to improve several network parameters while guaranteeing a maximum delay to the user by considering the following information: backlog, channel condition, and user traffic behavior. In order to estimate the maximum network delay, we propose an adaptive minimum service curve for the LTE network that can be used for admission control purposes in the resource allocation algorithm. The performance of the proposed scheduling algorithm is compared with several scheduling schemes known in the literature through computational simulations of the LTE downlink. In order to develop a new adaptive envelope process and to precisely describe network flows, we propose an adaptive algorithm to estimate the parameters of the Multifractal Wavelet Model (MWM). The proposed envelope process is compared with the main traffic-model-based envelope processes known in the literature. Simulations of the LTE downlink considering AMC (Adaptive Modulation and Coding) show the efficiency of the proposed resource allocation approach, which relies on adaptive estimation of network traffic parameters.
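As a hedged illustration of how an arrival envelope and a minimum service curve yield a delay bound, the snippet below evaluates the classical deterministic network-calculus result for the simplest special case: a leaky-bucket envelope A(t) = b + r·t served by a rate-latency curve β(t) = R·max(t − T, 0), giving D ≤ T + b/R when R ≥ r. The paper's adaptive multifractal envelope is more elaborate, so this is only a stand-in for the idea, and the numbers are made up.

```python
# Deterministic network-calculus delay bound for the simplest case (assumption:
# leaky-bucket arrival envelope and rate-latency service curve, not the paper's
# adaptive multifractal envelope).

def delay_bound(burst_bits, rate_bps, service_rate_bps, latency_s):
    """Max delay D <= T + b/R, valid only when the service rate R >= arrival rate r."""
    if service_rate_bps < rate_bps:
        raise ValueError("service rate must be at least the sustained arrival rate")
    return latency_s + burst_bits / service_rate_bps

# Example: 0.5 Mbit burst, 2 Mbit/s sustained rate, 5 Mbit/s guaranteed rate, 2 ms latency.
print(delay_bound(burst_bits=0.5e6, rate_bps=2e6, service_rate_bps=5e6, latency_s=0.002))
```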

5.
Machine-to-machine (M2M) communications is one of the major enabling technologies for the realization of the Internet of Things (IoT). Most machine-type communication devices (MTCDs) are battery powered, and the battery lifetime of these devices significantly affects the overall performance of the network and the quality of service (QoS) of M2M applications. This paper proposes a lifetime-aware resource allocation algorithm, formulated as a convex optimization problem, for M2M communications in the uplink of a single carrier frequency division multiple access (SC-FDMA)-based heterogeneous network. K-means clustering is introduced to reduce energy consumption in the network and mitigate interference from MTCDs in neighbouring clusters, with the maximum number of clusters determined using the elbow method. The lifetime maximization problem is formulated as a joint power and resource block allocation problem, which is then solved using the Lagrangian dual method. Finally, numerical simulations in MATLAB are performed to evaluate the performance of the proposed algorithm, and the results are compared with an existing heuristic algorithm and MATLAB's built-in optimal algorithm. The simulation results show that the proposed algorithm outperforms the heuristic algorithm and closely tracks the optimal algorithm with an acceptable level of complexity. The proposed algorithm offers significant improvements in energy efficiency and network lifetime, as well as faster convergence and lower computational complexity.
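A minimal sketch of the clustering step is given below, assuming scikit-learn's KMeans and a simple "largest drop in inertia" elbow rule applied to MTCD coordinates; the feature set, the 10% threshold, and the random positions are assumptions, not the authors' exact procedure.

```python
# K-means clustering of MTCD positions with a crude elbow rule (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
positions = rng.uniform(0, 500, size=(120, 2))     # MTCD (x, y) coordinates in metres

inertias = []
for k in range(1, 11):
    inertias.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit(positions).inertia_)

# Elbow heuristic: pick k where the relative improvement in inertia first falls below 10%.
k_best = next((k for k in range(1, 10)
               if (inertias[k - 1] - inertias[k]) / inertias[k - 1] < 0.10), 10)

clusters = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(positions)
print(k_best, np.bincount(clusters))
```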

6.
The Internet of Things (IoT) is an ecosystem that can improve the quality of human life through smart services, thereby facilitating everyday tasks. Connecting to the cloud and using its services is now commonplace, and researchers are looking for ways to complement cloud computing so that it can serve the IoT, which in the coming decades will bring everything online. Fog computing, in which cloud computing is extended to the edge of the network, is one way to achieve the objectives of delay reduction, immediate processing, and reduced network congestion. Since IoT devices produce workloads that vary over time, IoT application services experience fluctuating traffic traces, so knowledge of the distribution of future workloads is required to handle the IoT workload while meeting QoS constraints. As a result, in the context of fog computing, the main objective of resource management is dynamic resource provisioning that avoids both over- and under-provisioning. In the present work, we first propose a distributed computing framework for autonomic resource management in the context of fog computing. Then, we provide a customized version of a provisioning system for IoT services based on the MAPE-K control loop. The system makes use of a reinforcement learning technique as the decision maker in the planning phase and a support vector regression technique in the analysis phase. Finally, we conduct a family of simulation-based experiments to assess the performance of the introduced system. The average delay, cost, and delay violation are decreased by 1.95%, 11%, and 5.1%, respectively, compared with existing solutions.
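To make the analysis/planning split concrete, the sketch below pairs a scikit-learn SVR forecast of the next workload sample with a trivial threshold-based scale-up/down decision. In the paper the planner is a reinforcement-learning agent, so the threshold rule here is only a stand-in, and the workload series, window size, and thresholds are assumptions.

```python
# Sketch of the MAPE-K analysis phase (SVR workload forecast) feeding a toy planner.
import numpy as np
from sklearn.svm import SVR

history = np.array([120, 135, 150, 160, 180, 200, 210, 230, 250, 270], dtype=float)  # req/s

# Analysis: predict the next workload value from a sliding window of the last 3 samples.
window = 3
X = np.array([history[i:i + window] for i in range(len(history) - window)])
y = history[window:]
forecast = SVR(kernel="rbf", C=100.0).fit(X, y).predict(history[-window:].reshape(1, -1))[0]

# Planning (stand-in for the paper's RL agent): scale fog resources against capacity.
capacity_per_node, nodes = 100.0, 2
utilisation = forecast / (capacity_per_node * nodes)
if utilisation > 0.8:
    nodes += 1          # provision one more fog node
elif utilisation < 0.3 and nodes > 1:
    nodes -= 1          # release an under-used node
print(round(forecast, 1), nodes)
```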

7.
The Internet of Things (IoT) offers various types of application services in different domains, such as smart infrastructure, health care, critical infrastructure, and intelligent transportation systems. The name edge computing signifies a corner or edge of a network at which traffic enters or exits the network. In edge computing, the data analysis task happens very close to the IoT smart sensors and devices. Edge computing can also speed up the analysis process, which allows decision makers to take action within a short duration of time. However, the edge-based IoT environment has several security and privacy issues similar to those of the cloud-based IoT environment. Various types of attacks, such as replay, man-in-the-middle, impersonation, password guessing, routing attacks, and other denial-of-service attacks, are possible in an edge-based IoT environment. Routing attacker nodes have the capability to deviate and disrupt the normal flow of traffic: these malicious nodes do not send packets (messages) to the edge node and only send packets to their neighbouring collaborating attacker nodes. Therefore, in the presence of such a routing attack, the edge node receives no information or only partial information, which degrades the overall communication performance of the edge-based IoT environment: network throughput decreases, end-to-end delay increases, packet delivery ratio decreases, and other parameters are also affected. Consequently, it is important to provide a solution for this kind of attack. In this paper, we design an intrusion detection scheme for the detection of routing attacks in edge-based IoT environments, called RAD-EI. We simulate RAD-EI using the widely used NS2 simulator to measure different network parameters. Furthermore, we provide a security analysis of RAD-EI to prove its resilience against routing attacks. RAD-EI accomplishes around a 95.0% detection rate and a 1.23% false positive rate, which are notably better than other related existing schemes. In addition, RAD-EI is efficient in terms of computation and communication costs. As a result, RAD-EI is a good match for critical and sensitive applications, such as smart security and surveillance systems.
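For reference, the reported detection rate and false positive rate follow the usual confusion-matrix definitions; the tiny sketch below shows that computation with made-up counts chosen only to land near the quoted figures, not taken from the paper.

```python
# Detection rate (recall on attacks) and false positive rate from a confusion matrix.
# The counts below are made up purely to illustrate the formulas.
tp, fn = 190, 10      # attack flows correctly / incorrectly classified
fp, tn = 12, 963      # benign flows flagged as attacks / correctly passed

detection_rate = tp / (tp + fn)            # = TP / (TP + FN)
false_positive_rate = fp / (fp + tn)       # = FP / (FP + TN)
print(f"DR = {detection_rate:.1%}, FPR = {false_positive_rate:.2%}")
```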

8.
Mobile devices are the primary communication tool in people's day-to-day lives. Nowadays, the enhancement of mobile applications, namely IoTApps, and their exploitation in various domains such as healthcare monitoring, home automation, smart farming, smart grid, and smart city are crucial. Although mobile devices provide a seamless user experience anywhere, anytime, and in any place, their restricted resources, such as limited battery capacity, constrained processor speed, and inadequate storage and memory, hinder the development of resource-intensive mobile applications and internet of things (IoT) based mobile applications. To solve this resource constraint problem, a web service based IoT framework is proposed that exploits fuzzy logic methodologies. This framework augments the resources of mobile devices by offloading resource-intensive subtasks from mobile devices to service-providing entities such as an Arduino, a Raspberry Pi controller, an edge cloud, and a distant cloud. Based on the recommended framework, an online Repository of Instructional Talk (RIoTalk) was successfully implemented to store and analyze the classroom lectures given by faculty at our study site. Simulation results show a significant reduction in energy consumption, execution time, bandwidth utilization, and latency. The proposed research work significantly increases the resources of mobile devices by offloading resource-intensive subtasks from the mobile device to the service-provider computing entities, thereby providing Quality of Service (QoS) and Quality of Experience (QoE) to mobile users.

9.
This paper presents a novel framework for quality-of-service (QoS) multicast routing with resource allocation that represents QoS parameters (jitter delay and reliability) as functions of adjustable network resources (bandwidth and buffer) rather than as static metrics. The particular functional form of the QoS parameters depends on the rate-based service disciplines used in the routers. This allows intelligent tuning of QoS parameters as functions of allocated resources during the multicast tree search process, rather than decoupling the tree search from resource allocation. The proposed framework minimizes network resource utilization while keeping jitter delay, reliability, and bandwidth bounded. This definition makes the proposed QoS multicast routing with resource allocation problem more general than the classical minimum Steiner tree problem. As an application of our general framework, we formulate the QoS multicast routing with resource allocation problem for a network consisting of generalized processor sharing nodes as a mixed-integer quadratic program and find the optimal multicast tree with allocated resources to satisfy the QoS constraints. We then present a polynomial-time greedy heuristic for the QoS multicast routing with resource allocation problem and compare its performance with the optimal solution of the mixed-integer quadratic program. The simulation results reveal that the proposed heuristic finds near-optimal QoS multicast trees along with important insights into the interdependency of QoS parameters and resources.
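A polynomial-time greedy heuristic of the kind mentioned above often grows the multicast tree by repeatedly attaching the cheapest remaining destination via a shortest path. The sketch below implements only that generic idea with networkx; it does not include the paper's coupling to bandwidth/buffer allocation or its QoS functions, and the graph, edge costs, and node names are made up.

```python
# Generic greedy multicast-tree heuristic (shortest-path based), as an illustration only:
# it ignores the paper's joint bandwidth/buffer allocation and QoS parameter functions.
import networkx as nx

def greedy_multicast_tree(graph, source, destinations):
    tree_nodes, tree_edges = {source}, set()
    remaining = set(destinations)
    while remaining:
        # Attach the destination whose shortest path to the current tree is cheapest.
        best = min(
            ((d, n, nx.shortest_path(graph, n, d, weight="cost"))
             for d in remaining for n in tree_nodes),
            key=lambda t: nx.path_weight(graph, t[2], weight="cost"),
        )
        _, _, path = best
        tree_edges.update(zip(path, path[1:]))
        tree_nodes.update(path)
        remaining.discard(best[0])
    return tree_edges

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "b", 1), ("a", "c", 2), ("s", "c", 4)],
                          weight="cost")
print(greedy_multicast_tree(G, "s", {"b", "c"}))
```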

10.

The idea of a Smart City incorporates several dimensions: technology, economy, governance, people, management, and infrastructure. This implies that a Smart City can have distinct communication needs. Wireless technologies such as WiFi, ZigBee, Bluetooth, WiMax, 4G, and LTE have presented themselves as solutions for communication in Smart City activities. Nonetheless, as the majority of them operate in unlicensed bands, interference and coexistence issues are increasing, which motivates the use of IoT networking in smart cities. This paper addresses the issues of both resource allocation and routing and proposes an energy-efficient, congestion-aware resource allocation and routing protocol (ECRR) for IoT networks based on hybrid optimization techniques. The first contribution of the proposed ECRR technique is to employ data clustering and a metaheuristic algorithm to allocate the large-scale devices and gateways of the IoT so as to reduce the total congestion between them. The second contribution is a queue-based swarm optimization algorithm that selects better future routes based on multiple constraints, which improves the route discovery mechanism. The proposed ECRR technique is implemented in the Network Simulator (NS-2) tool, and the simulation results are compared with existing state-of-the-art techniques in terms of energy consumption, node lifetime, throughput, end-to-end delay, packet delivery ratio, and packet overhead.


11.
This paper proposes efficient resource allocation techniques for a policy-based wireless/wireline interworking architecture, where quality of service (QoS) provisioning and resource allocation are driven by the service level agreement (SLA). For end-to-end IP QoS delivery, each wireless access domain can independently choose its internal resource management policies to guarantee the customer access SLA (CASLA), while border-crossing traffic is served by a core network following policy rules to meet the transit domain SLA (TRSLA). In particular, we propose an engineered priority resource sharing scheme for a voice/data integrated wireless domain, where the policy rules allow cellular-only access or cellular/WLAN interworked access. With such a resource sharing scheme, the CASLA for each service class is met with efficient resource utilization, and the interdomain TRSLA bandwidth requirement can be easily determined. In the transit domain, traffic load fluctuation from upstream access domains is tackled by an inter-TRSLA resource sharing technique, where spare capacity from underloaded TRSLAs can be exploited by overloaded TRSLAs to improve resource utilization. The advantages of the inter-SLA resource sharing technique are that the core network service provider can freely design the policy rules that define underload and overload status, determine the bandwidth reservation, and distribute the spare resources among bandwidth borrowers, while all the policies are supported by a common set of resource allocation techniques.

12.
The most recent trend in the Information and Communication Technology world is an ever-growing demand for mobile heterogeneous services, which implies managing different quality of service requirements and priorities among different types of users. The long-term evolution (LTE)/LTE-advanced standards have been introduced to cope with this challenge. In particular, the downlink resource allocation problem needs to be carefully considered. Herein, a solution is proposed by resorting to a modified multidimensional multiple-choice knapsack problem formulation, leading to an efficient algorithm. The proposed algorithm is able to manage different traffic flows, taking into account user priority, queue delay, and channel conditions, and achieves quasi-optimal performance with lower complexity. The numerical results show the effectiveness of the proposed solution with respect to other alternatives. Copyright © 2013 John Wiley & Sons, Ltd.

13.
We propose a joint optimization network management framework for quality-of-service (QoS) routing with resource allocation. Our joint optimization framework provides a convenient way of maximizing the reliability or minimizing the jitter delay of paths. Data traffic is sensitive to drops at buffers, while it can tolerate jitter delay; multimedia traffic, on the other hand, can tolerate loss but is very sensitive to jitter delay. Depending on the type of traffic, our scheme provides a convenient way of selecting the parameters that result in either reliability maximization or jitter minimization. We solve the optimization problem for a GPS network and provide the optimal solutions. We find the values of the control parameters which determine the type of optimization performed. We use our analytical results in a multi-objective QoS routing algorithm. Finally, we provide insights into our optimization framework using simulations. Copyright © 2011 John Wiley & Sons, Ltd.

14.

Attaining high quality of service (QoS) with efficient power consumption and minimum delay over a Wireless Local Area Network (WLAN) mesh network is an important research area. Existing real-time routing systems involve multiple hops over time-varying mobile channels for fast data propagation, and their performance is greatly degraded by power utilization and congested traffic queues. Resource allocation and management through the appropriate access points therefore play a vital role, since multiple hops across interconnected data nodes increase delay. To achieve high throughput with minimum delay, the QoS of real-time data communication is addressed here by using a Viterbi decoder with convolutional codes. Since IEEE 802.11 WLAN physical layers afford multiple transmission rates by employing various modulation and channel coding schemes, a key point is to pinpoint the appropriate transmission rate to enhance performance, because each node exhibits different dynamic characteristics based on the tokens passed from the server to the end links. To balance real-time traffic against power consumption and average delay, an improved Viterbi decoder with convolutional codes is designed to obtain accurate channel estimation by learning the utilization of the current wireless channel. The proposed methodology can attain accurate channel estimation without additional implementation effort or modifications to the current 802.11 standard. Each node is then able to choose an optimized transmission rate, so that system performance improves with very low power, a high packet transmission ratio, and a low traffic rate, improving QoS. The proposed scheme also offers an appealing combination of transmission rate allocation and current link condition. Based on the basic relationship between them, the proposed decoding scheme maximizes throughput through periodic learning of channel variation and system status.
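For readers unfamiliar with Viterbi decoding of convolutional codes, the self-contained sketch below decodes a rate-1/2, constraint-length-3 code (generators 7, 5 octal) with hard decisions and corrects a single injected bit error. It is a textbook toy, not the paper's improved decoder or its channel-estimation scheme, and the example message is made up.

```python
# Hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code
# (generators 7,5 octal). Illustrative only: the paper's improved decoder and its
# channel-learning extension are not reproduced here.

G1, G2 = 0b111, 0b101            # generator polynomials

def conv_encode(bits):
    state, out = 0, []
    for u in bits:
        reg = (u << 2) | state                       # 3-bit register [u, s1, s2]
        out += [bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1]
        state = (reg >> 1) & 0b11                    # shift: new state = [u, s1]
    return out

def viterbi_decode(rx, n_bits):
    INF = float("inf")
    metric = [0] + [INF] * 3                         # start in state 0
    paths = [[] for _ in range(4)]
    for t in range(n_bits):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | state
                o0 = bin(reg & G1).count("1") & 1
                o1 = bin(reg & G2).count("1") & 1
                nxt = (reg >> 1) & 0b11
                m = metric[state] + (o0 != r0) + (o1 != r1)   # Hamming branch metric
                if m < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = m, paths[state] + [u]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
coded[5] ^= 1                                        # inject one channel bit error
print(viterbi_decode(coded, len(msg)) == msg)        # True: the error is corrected
```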


15.
Quality-driven cross-layer optimized video delivery over LTE
3GPP Long Term Evolution (LTE) is one of the major steps in mobile communication to enhance the user experience for next-generation mobile broadband networks. In LTE, orthogonal frequency-division multiple access (OFDMA) is adopted in the downlink of its E-UTRA air interface. Although cross-layer techniques have been widely adopted in the literature for dynamic resource allocation to maximize data rate in OFDMA wireless networks, application-oriented quality of service for video delivery, such as delay constraints and video distortion, has largely been ignored. However, for wireless video delivery in LTE, especially delay-bounded real-time video streaming, a higher data rate can lead to a higher packet loss rate, thus degrading the user-perceived video quality. In this article we present a new QoS-aware LTE OFDMA scheduling algorithm for wireless real-time video delivery over the downlink of LTE cellular networks to achieve the best user-perceived video quality under a given application delay constraint. In the proposed approach, system throughput, application QoS constraints, and scheduling fairness are jointly integrated into a cross-layer design framework to dynamically perform radio resource allocation for multiple users and to effectively choose optimal system parameters, such as the modulation and coding scheme and video encoding parameters, to adapt to the varying channel quality of each resource block. Experimental results show significant performance enhancement of the proposed system.

16.
A dynamic fair resource allocation scheme is proposed to efficiently support real-time and non-real-time multimedia traffic with guaranteed statistical quality of service (QoS) in the uplink of a wideband code-division multiple access (CDMA) cellular network. The scheme uses the generalized processor sharing (GPS) fair service discipline to allocate uplink channel resources, taking into account the characteristics of channel fading and intercell interference. Specifically, the resource allocated to each traffic flow is proportional to an assigned weighting factor. For real-time traffic, the assigned weighting factor is a constant in order to guarantee the traffic's statistical delay bound requirement; for non-real-time traffic, the assigned weighting factor can be adjusted dynamically according to fading channel states and the traffic's statistical fairness bound requirement. Compared with the conventional static-weight scheme, the proposed dynamic-weight scheme achieves a capacity gain. A flexible trade-off between GPS fairness and efficient resource utilization can also be achieved. Analysis and simulation results demonstrate that the proposed scheme enhances radio resource utilization and guarantees statistical QoS under different fairness bound requirements.
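In a GPS discipline, each backlogged flow i receives a service rate proportional to its weight, r_i = (phi_i / sum of phi_j over backlogged flows) * C. The few lines below simply evaluate that share; the flow names and weights are illustrative, and the paper's CDMA-specific dynamic weight adaptation is not modelled.

```python
# GPS share: each backlogged flow i receives r_i = (phi_i / sum(phi_j)) * C.
def gps_rates(capacity, weights, backlogged):
    total = sum(weights[f] for f in backlogged)
    return {f: capacity * weights[f] / total for f in backlogged}

weights = {"voice": 3.0, "video": 5.0, "data": 2.0}     # illustrative weights
print(gps_rates(capacity=10e6, weights=weights, backlogged={"voice", "data"}))
# voice gets 6 Mbit/s, data gets 4 Mbit/s while video is idle
```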

17.
余云河  孙君 《电信科学》2021,37(11):41-50
For massive machine-type communication (mMTC) scenarios, two Q-learning-based resource allocation algorithms, a centralized Q-learning algorithm (team-Q) and a distributed Q-learning algorithm (dis-Q), are proposed to maximize system throughput while guaranteeing the quality of service (QoS) requirements of a subset of machine-type communication devices (MTCDs). First, a cosine similarity (CS) based clustering algorithm builds multidimensional vectors representing the MTCDs and data aggregators (DAs), taking into account the geographical locations of the MTCDs and their multi-level QoS requirements, and completes the grouping according to the CS values between vectors. Then, the team-Q and dis-Q learning algorithms are used to allocate resource blocks (RBs) and transmit power to the MTCDs. In terms of throughput, the team-Q and dis-Q algorithms achieve average improvements of 16% and 23%, respectively, over a dynamic resource allocation algorithm and a greedy algorithm; in terms of complexity, the dis-Q algorithm requires no more than 25% of the computation of team-Q and converges nearly 40% faster.
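The two ingredients named in this abstract, cosine-similarity grouping and tabular Q-learning, can be pictured with the toy sketch below; the feature vectors, reward, state/action encoding, and hyperparameters are placeholders for illustration and are not taken from the paper.

```python
# Toy sketch of the two building blocks above: cosine-similarity grouping of MTCD
# feature vectors and a single tabular Q-learning update. All values are placeholders.
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Feature vectors: (x, y, QoS level) for one DA and two MTCDs (made-up values).
da = np.array([100.0, 200.0, 2.0])
mtcds = [np.array([110.0, 190.0, 2.0]), np.array([400.0, 50.0, 1.0])]
similarities = [cosine_similarity(da, m) for m in mtcds]   # assign MTCDs to the closest DA
print([round(s, 3) for s in similarities])

# One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
n_states, n_actions = 8, 4                                  # e.g. RB/power level combinations
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
s, a, r, s_next = 3, 1, 2.5, 5                              # observed transition (placeholder)
Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
print(Q[3, 1])
```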

18.
The next-generation packet-based wireless cellular network will provide real-time services for delay-sensitive applications. For the next-generation cellular network to succeed, it is critical that the network utilizes resources efficiently while satisfying the quality of service (QoS) requirements of real-time users. In this paper, we consider the problem of power control and dynamic channel allocation for the downlink of a multi-channel, multi-user wireless cellular network. We assume that the transmitter (the base station) has perfect knowledge of the channel gain. At each transmission slot, a scheduler allots transmission power and channel access to all users based on the instantaneous channel gains and the QoS requirements of the users. We propose three schemes for power control and dynamic channel allocation, which utilize multi-user diversity and frequency diversity. Our results show that, compared with the benchmark scheme, which does not utilize multi-user diversity and power control, our proposed schemes substantially reduce resource usage while explicitly guaranteeing the users' QoS requirements. Copyright © 2007 John Wiley & Sons, Ltd.

19.
This paper presents a prioritized resource allocation algorithm to share the limited communication channel resource among multiple wireless body area networks. The proposed algorithm is designed based on an active superframe interleaving scheme, one of the coexistence mechanisms in the IEEE 802.15.6 standard. It is the first study to consider a resource allocation method among wireless body area networks within communication range of each other. The traffic source of each wireless body area network is parameterized using its traffic specification, from which the required service rate for each wireless body area network can be derived. The prioritized resource allocation algorithm employs this information to allocate the channel resource based on the wireless body area networks' service priority. The simulation results verify that the traffic-specification- and service-priority-based resource allocation is able to increase quality of service satisfaction, particularly for health and medical services.

20.
Simulation-based optimization is a decision-making tool that helps in identifying an optimal solution or design for a system. An optimal solution and design are more meaningful if they enhance a smart system, with sensing, computing, and monitoring capabilities, with improved efficiency. In situations where testing a physical prototype is difficult, computer-based simulation and its optimization processes are helpful in providing low-cost, speedy, and less time- and resource-consuming solutions. In this work, a comparative analysis of the proposed heuristic simulation-optimization method for improving quality of service (QoS) is performed against a generalized integrated optimization approach (a simulation approach based on genetic algorithms with evolutionary simulated annealing strategies and simplex search). In the proposed approach, feature-based local (group) and global (network) formation processes are integrated with Internet of Things (IoT) based solutions for finding the optimum performance. Further, the simulated annealing method is applied for finding local and global optimum values supporting minimum traffic conditions. A small-scale network of 50 to 100 nodes shows that genetic simulation optimization with multicriteria and multidimensional features performs better than other simulation-optimization approaches. Further, a minimum of 3.4% and a maximum of 16.2% improvement is observed in faster route identification for small-scale IoT networks with the simulation-optimization constraints integrated model as compared to the traditional method. The proposed approach improves critical infrastructure monitoring performance as compared to the generalized simulation-optimization process in complex transportation scenarios with heavy traffic conditions. The communication and computational cost complexities are the lowest for the proposed approach.
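As a generic illustration of the simulated-annealing step used in the local/global search described above, the sketch below minimizes a stand-in quadratic "delay" cost with the usual exp(-delta/T) acceptance rule; the cost model, neighbourhood move, and cooling schedule are assumptions, not the paper's QoS model.

```python
# Generic simulated-annealing sketch (stand-in cost function, not the paper's QoS model).
import math
import random

def anneal(cost, initial, neighbour, t0=1.0, cooling=0.95, steps=500, seed=1):
    random.seed(seed)
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always, and worse moves with probability exp(-delta / t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling
    return best

# Toy problem: pick a route "length" x that minimises a quadratic delay cost.
cost = lambda x: (x - 7.0) ** 2 + 3.0
neighbour = lambda x: x + random.uniform(-1.0, 1.0)
print(round(anneal(cost, initial=0.0, neighbour=neighbour), 2))   # converges close to 7.0
```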
