Similar Articles
 20 similar articles found (search time: 593 ms)
1.
To reduce transmission latency and mitigate the backhaul burden of centralized cloud-based network services, mobile edge computing (MEC) has recently drawn increasing attention from both industry and academia. This paper focuses on the computation offloading problem of mobile users in wireless cellular networks with mobile edge computing, with the goal of optimizing the offloading decision-making policy. Since wireless network states and computing requests are stochastic and the environment's dynamics are unknown, we formulate and tackle the computation offloading problem in a model-free reinforcement learning (RL) framework. Each mobile user learns through interactions with the environment, estimates its performance in the form of a value function, and then chooses the overhead-aware optimal offloading action (local computing or edge computing) based on its state. Because the state space in our setting is high-dimensional, the value function is impractical to estimate directly. We therefore use a deep reinforcement learning algorithm, which combines Q-learning with a deep neural network (DNN) to approximate the value function for complicated control applications; the optimal policy is obtained once the value function converges. Simulation results demonstrate the effectiveness of the proposed method against baseline methods in terms of the total overhead of all mobile users.
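The overhead-aware offloading choice described above can be sketched as a comparison of weighted delay-energy costs. This is a minimal illustration, not the paper's DQN: the CPU energy model (kappa * f^2 per cycle), the transmit power, and all parameter values are hypothetical.

```python
def overhead(time_cost, energy_cost, w_t=0.5, w_e=0.5):
    """Weighted sum of delay and energy -- the usual 'overhead' metric."""
    return w_t * time_cost + w_e * energy_cost

def offload_decision(task_cycles, data_bits, f_local, f_edge, rate,
                     kappa=1e-27, p_tx=0.5, w_t=0.5, w_e=0.5):
    """Pick local vs. edge execution by comparing total overheads.
    kappa (effective switched capacitance) and p_tx (uplink power) are
    illustrative assumptions."""
    # Local: compute time plus CPU energy (kappa * f^2 per cycle).
    t_loc = task_cycles / f_local
    e_loc = kappa * (f_local ** 2) * task_cycles
    # Edge: upload time/energy plus remote compute time.
    t_up = data_bits / rate
    t_edge = t_up + task_cycles / f_edge
    e_edge = p_tx * t_up
    c_loc = overhead(t_loc, e_loc, w_t, w_e)
    c_edge = overhead(t_edge, e_edge, w_t, w_e)
    return ('edge' if c_edge < c_loc else 'local'), min(c_loc, c_edge)
```

A compute-heavy task with little input data favors the edge, while a data-heavy but light computation stays local.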

2.
The traditional multi-access edge computing (MEC) capacity is overwhelmed by the increasing demands of vehicles, leading to acute degradation of task offloading performance. Meanwhile, a tremendous number of resource-rich, idle connected vehicles (CVs) travel through the traffic network, and these vehicles can be organized into opportunistic ad-hoc edge clouds that alleviate the resource limitations of MEC by providing opportunistic computing services. On this basis, this paper proposes a novel scalable system framework for computation task offloading in opportunistic CV-assisted MEC. In this framework, the opportunistic ad-hoc edge cloud and the fixed edge cloud cooperate to form a novel hybrid cloud, and the offloading decisions and resource allocation of the user CVs must be determined jointly. The joint offloading-decision and resource-allocation problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) problem that optimizes the task response latency of user CVs under various constraints. The original problem is decomposed into two subproblems: first, the Lagrange dual method is used to obtain the best resource allocation under a fixed offloading decision; then, a satisfaction-driven method based on trial-and-error (TE) learning is adopted to optimize the offloading decision. Finally, a comprehensive series of experiments demonstrates that the proposed scheme is more effective than the comparison schemes.

3.
Mobile edge cloud networks can offload computationally intensive tasks from Internet of Things (IoT) devices to nearby mobile edge servers, thereby lowering energy consumption and response time for ground mobile users (MUs) or IoT devices. Integrating Unmanned Aerial Vehicles (UAVs) with mobile edge computing (MEC) servers will significantly benefit small, battery-powered, energy-constrained devices in 5G and future wireless networks. We address the problem of maximizing computation efficiency in UAV-assisted MEC (U-MEC) networks by simultaneously optimizing the user association and offloading indicator (OI), the computational capacity (CC), the power consumption, the time duration, and the UAV location planning. Heavy tasks can be assigned to the UAV for faster processing while small ones are processed locally at the MUs. This paper uses the k-means clustering algorithm, the interior-point method, and the conjugate gradient method to iteratively solve the non-convex multi-objective resource allocation problem. Simulation results show that both the local and offloading schemes achieve optimal solutions.
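The k-means step mentioned above groups ground users so that cluster centroids can serve as candidate UAV hover positions. A minimal pure-Python Lloyd's-algorithm sketch (2-D points assumed; the paper's full joint optimization is not reproduced here):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm; the returned centroids are candidate UAV
    hover positions over user clusters (illustrative sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for px, py in points:
            # assign each user to the nearest current centroid
            j = min(range(k),
                    key=lambda i: (px - centers[i][0]) ** 2
                                + (py - centers[i][1]) ** 2)
            clusters[j].append((px, py))
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(p[0] for p in cl) / len(cl),
                                    sum(p[1] for p in cl) / len(cl)))
            else:
                new_centers.append(centers[i])   # keep empty cluster's center
        if new_centers == centers:               # converged
            break
        centers = new_centers
    return centers
```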

4.
With the emergence of 5G mobile multimedia services, end users' demand for high-speed, low-latency mobile network access is increasing, and device-to-device (D2D) communication is one of the most promising enabling technologies. In D2D communication, data need not be relayed and forwarded by the base station; instead, under the base station's control, a direct local link is established between two adjacent mobile devices. This flexible communication mode relieves processing bottlenecks and coverage blind spots at the base station and can be widely used in dense communication scenarios such as heterogeneous ultra-dense wireless networks. One important factor affecting the quality of service (QoS) of D2D communication is co-channel interference. To address it, this paper proposes a graph-coloring-based algorithm whose main idea is to use the weighted priority of spectrum resources so that multiple D2D users can reuse a single cellular user's resource; the algorithm also provides simpler power control. The heterogeneous interference pattern is determined from the different interference types among user equipments (UEs), from which the color priority is derived. Simulation results show that, compared with existing algorithms, the proposed algorithm effectively reduces co-channel interference and power consumption and improves system throughput.
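The core graph-coloring idea can be sketched with a greedy coloring over an interference graph: vertices are D2D pairs, an edge means two pairs would interfere on the same channel, and colors are channels. The priority ordering here (by supplied weight, else by degree) is a simple stand-in for the paper's weighted spectrum priority.

```python
def greedy_coloring(n, edges, priority=None):
    """Assign each vertex (D2D pair) the smallest channel (color) not used
    by any interfering neighbor. `priority` optionally orders vertices;
    otherwise higher-degree vertices are colored first."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = sorted(range(n),
                   key=lambda v: -(priority[v] if priority else len(adj[v])))
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:          # smallest channel free of interference
            c += 1
        color[v] = c
    return color
```

A triangle of mutually interfering pairs needs three channels, while a chain of pairs can share two.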

5.
With growing numbers of multi-microgrids, electric vehicles, smart homes, and smart cities connected to the Power Distribution Internet of Things (PD-IoT) system, greater computing resources and communication bandwidth are required for power distribution. This can lead to severe service delays and data congestion when large volumes of data and business requests arrive at once. This paper presents a service scheduling method based on edge computing to balance the business load of the PD-IoT. The architecture, components, and functional requirements of the PD-IoT with an edge computing platform are proposed, the structure of the service scheduling system is presented, and a novel load-balancing strategy and ant colony algorithm are investigated for the service scheduling method. The validity of the method is evaluated through simulation tests. Results indicate that the mean load-balancing ratio is reduced by 99.16% and the optimized offloading links can be acquired within 1.8 iterations. The computing load of the nodes in the edge computing platform can be effectively balanced through the proposed service scheduling.

6.
Internet of Things (IoT) technology is evolving rapidly, but there is no trusted platform to protect user privacy, protect information exchanged between different IoT domains, and promote edge processing. We therefore integrate blockchain technology into the construction of trusted IoT platforms. However, the application of blockchain in IoT is hampered by the heavy computation it requires. To address this problem, we put forward a blockchain framework based on mobile edge computing, in which blockchain mining tasks can be offloaded to nearby nodes or to edge computing service providers, and the encrypted hashes of blocks can be cached at the edge computing service providers. Moreover, we model the offloading and caching processes, based on game theory and auction theory, so that both edge nodes and edge computing service providers obtain maximum profit. Finally, the proposed mechanism is compared with the centralized mode, mode A (all miners offload their tasks to the edge computing service providers), and mode B (all miners offload their tasks to a group of neighboring devices). Simulation results show that under our mechanism, mining networks obtain higher profits and consume less time on average.

7.
Today's smartphones offer various applications such as face detection, augmented reality, image and video processing, video gaming, and speech recognition. With the increasing demand for computing resources, these applications are becoming more complex. The Cloud Computing (CC) environment provides access to an effectively unlimited resource pool with features including on-demand self-service, elasticity, broad network access, resource pooling, low cost, and ease of use. Mobile Cloud Computing (MCC) aims to overcome the limitations of smartphone devices; the challenge lies in combining CC technology with mobile devices to improve battery life and deliver significant performance. For remote execution, recent studies have suggested offloading all or part of a mobile application from the mobile device. In the offloading process, existing frameworks relate mobile device energy consumption, Central Processing Unit (CPU) utilization, execution time, remaining battery life, and the amount of data transmitted in the network to one or more constraints. To address these issues, a Heuristic and Bent Key Exchange (H-BKE) method is considered, both to optimize energy consumption and to improve security during offloading. First, an energy-efficient offloading model is designed using a Reactive Heuristic Offloading algorithm in which secondary users are allocated the unused spectrum of primary users. Next, a novel AES variant is designed that uses a Bent function and a Rijndael variant; its large block size is hard to interpret and is thus said to ensure security while a secondary user accesses a primary user's unused spectrum. Simulations are conducted for efficient offloading in the mobile cloud, and performance evaluations demonstrate that the proposed technique is effective in terms of time consumption, energy consumption, and the security aspects covered during offloading in MCC.

8.
With the continuous evolution of smart grid and global energy interconnection technology, large numbers of intelligent terminals have been connected to the power grid and can serve as edge nodes that provide resource services. Traditional cloud computing can provide storage and task computing services in the power grid, but it faces challenges such as resource bottlenecks, time delays, and limited network bandwidth. Edge computing is an effective supplement to cloud computing because it can provide users with local computing services at lower latency. However, because the resources of a single edge node are limited, resource-intensive tasks must be divided into many subtasks and then assigned to different edge nodes through resource cooperation, which makes efficient task scheduling an important issue. In this paper, a two-layer resource management scheme is proposed based on the concept of edge computing. In addition, a new task scheduling algorithm named GA-EC (Genetic Algorithm for Edge Computing) is put forth, based on a genetic algorithm, that can dynamically schedule tasks according to different scheduling goals. Simulations show that the proposed algorithm has a beneficial effect on energy consumption and load balancing, and reduces time delay.
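A genetic algorithm for subtask-to-node assignment can be sketched as follows. This is a generic GA with a makespan (maximum node load) fitness, not the paper's exact GA-EC; the population size, crossover, and mutation settings are illustrative assumptions.

```python
import random

def ga_schedule(task_costs, n_nodes, pop_size=30, gens=80, seed=1):
    """GA sketch: chromosome = node index per subtask,
    fitness = makespan (load of the busiest edge node)."""
    rng = random.Random(seed)
    n = len(task_costs)

    def makespan(ch):
        load = [0.0] * n_nodes
        for cost, node in zip(task_costs, ch):
            load[node] += cost
        return max(load)

    pop = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)                 # elitist selection
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # random-reset mutation
                child[rng.randrange(n)] = rng.randrange(n_nodes)
            children.append(child)
        pop = elite + children
    best = min(pop, key=makespan)
    return best, makespan(best)
```

For six subtasks of costs [4, 3, 3, 2, 2, 2] on two nodes, a balanced split gives the optimal makespan of 8.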

9.
Healthcare is a fundamental part of every individual's life. The healthcare industry is developing rapidly with the help of advanced technologies. Many researchers are building cloud-based healthcare applications that can be accessed by healthcare professionals from their premises, as well as by patients from their mobile devices through communication interfaces. These systems promote reliable and remote interactions between patients and healthcare professionals. However, these innovative cloud-based systems face several limitations, namely network availability, latency, battery life, and resource availability. We propose a hybrid mobile cloud computing (HMCC) architecture to address these challenges. Furthermore, we evaluate the performance of heuristic and dynamic machine-learning-based task scheduling and load-balancing algorithms on the proposed architecture, comparing them to identify the strengths and weaknesses of each algorithm and providing comparative results for latency and energy consumption. Challenging issues for cloud-based healthcare systems are discussed in detail.

10.
With the rapid development of Internet technology, users' demand for data keeps increasing. The continuing popularity of traffic-intensive applications such as high-definition video, 3D visualization, and cloud computing has driven the rapid evolution of the communications industry. To cope with today's huge traffic demand, 5G networks must be fast, flexible, reliable, and sustainable. Against this background, the academic community has proposed D2D communication, whose main feature is direct communication between devices; this effectively improves resource utilization, reduces dependence on base stations, and thereby improves the throughput of multimedia data. One of the most significant factors affecting the performance of D2D communication is the co-channel interference that results when multiple D2D users reuse the same channel resource of a cellular user. To solve this problem, this paper proposes a joint time-scheduling and power-control algorithm whose main idea is to maximize the number of allocated resources in each scheduling period while satisfying quality-of-service requirements. The constrained problem is decomposed into time-scheduling and power-control subproblems. The power-control subproblem has the characteristics of NP-hard mixed-integer linear programming, so a gradual power-control method is proposed; the time-scheduling subproblem is an NP-hard problem with cardinality constraints, so a heuristic scheme is proposed to optimize resource allocation. Simulation results show that, compared with existing algorithms, the proposed algorithm effectively improves resource allocation and overcomes co-channel interference.

11.
Adapting wireless devices to communicate within grid networks empowers users and makes it possible to deploy a wide range of applications within appropriate limits, including intermittent network connectivity and non-dedicated, heterogeneous system capacity. A performance prediction model is used to improve the performance of the mobile grid job scheduling algorithm (MG-JSA). The proposed algorithm predicts the response time for processing the distributed application on each mobile node, accounting for the wireless network environment and the inherent non-dedicated, heterogeneous system capacity. Using this prediction model, the algorithm partitions and allocates distributed jobs to available mobile nodes for rapid job processing. The efficiency of MG-JSA is demonstrated through performance evaluation.

12.
One of the most effective technologies for 5G mobile communications is device-to-device (D2D) communication, also called terminal pass-through technology: devices communicate with each other directly under the control of a base station, without requiring the base station to forward traffic. Applying D2D communication to cellular networks can increase communication system capacity, improve spectrum efficiency, increase the data transmission rate, and reduce the base station load. Aiming at the co-channel interference between D2D and cellular users, this paper proposes an efficient resource allocation algorithm based on Q-learning, which treats the multiple D2D users as multi-agent learners; the system throughput is determined from the corresponding state in the learned Q-value table, and the maximum-Q action is obtained through dynamic power control for the D2D users. Exact channel state information about the mutual interference between D2D users and base stations is not required during the Q-learning process, and a symmetric data transmission mechanism is adopted. The proposed algorithm maximizes system throughput by controlling the power of D2D users while guaranteeing the quality of service of the cellular users. Simulation results show that the proposed algorithm effectively improves system performance compared with existing algorithms.
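The Q-learning loop behind such power control can be sketched in a single-agent, single-state form: the action is the D2D transmit power level, and the reward is a toy throughput-minus-interference score. This is a deliberately simplified stand-in for the paper's multi-agent formulation; the reward function and all hyperparameters are hypothetical.

```python
import random

def q_power_control(num_levels, reward, episodes=5000, alpha=0.1,
                    gamma=0.9, eps=0.2, seed=0):
    """Single-state Q-learning sketch: epsilon-greedy action selection over
    discrete power levels, standard Q update toward r + gamma * max Q."""
    rng = random.Random(seed)
    Q = [0.0] * num_levels
    for _ in range(episodes):
        if rng.random() < eps:                              # explore
            a = rng.randrange(num_levels)
        else:                                               # exploit
            a = max(range(num_levels), key=Q.__getitem__)
        r = reward(a)
        Q[a] += alpha * (r + gamma * max(Q) - Q[a])
    return max(range(num_levels), key=Q.__getitem__)

# toy reward: throughput rises with power, but the interference
# penalty dominates past level 2, so level 3 is the sweet spot
best_level = q_power_control(5, lambda p: p - 0.5 * max(0, p - 2) ** 2)
```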

13.
This paper studies service testing techniques for broadband wireless mobile communication systems and proposes a method for testing IPTV service performance over wireless channels. Based on this method, a test system was built. Through field testing, actual measurements of delay, jitter, packet loss rate, and other metrics were obtained in indoor and outdoor broadband wireless channel environments. Based on the test data, the relevant system performance was analyzed and the practicability of the test method was verified.

14.
The Internet of Things (IoT) is gaining attention because of its broad applicability, especially the integration of smart devices for massive communication during sensing tasks. IoT-assisted Wireless Sensor Networks (WSNs) suit various applications such as industrial monitoring, agriculture, and transportation. In this context, routing is challenging: an efficient path must be found for smart devices to transmit packets towards big-data repositories while ensuring efficient energy utilization. This paper presents the Robust Cluster Based Routing Protocol (RCBRP) to identify routing paths that consume less energy and thus enhance the network lifespan. The scheme is presented in six phases to explore flow and communication. We propose two algorithms: (i) an energy-efficient clustering and routing algorithm and (ii) a distance and energy-consumption calculation algorithm. The scheme consumes less energy and balances the load by clustering the smart devices. Our work is validated through extensive simulation in Matlab. Results show that the proposed scheme outperforms its counterparts in terms of energy consumption, the number of packets received at the base station (BS), and the numbers of active and dead nodes. In the future, we shall consider edge computing to analyze the performance of robust clustering.
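A distance and energy-consumption calculation of the kind listed as algorithm (ii) is commonly built on the first-order radio model. The sketch below assumes the free-space variant with typical textbook constants; the paper's exact model and parameters are not given in the abstract.

```python
E_ELEC = 50e-9      # J/bit, electronics energy (typical first-order value)
E_AMP = 100e-12     # J/bit/m^2, free-space amplifier energy (assumed)

def tx_energy(bits, dist):
    """Energy to transmit `bits` over distance `dist` (free-space model)."""
    return bits * E_ELEC + bits * E_AMP * dist ** 2

def rx_energy(bits):
    """Energy to receive `bits`."""
    return bits * E_ELEC

def route_energy(bits, hops):
    """Total radio energy along a multi-hop path; `hops` is the list of
    per-hop distances. Intermediate nodes both receive and re-transmit;
    the final receiver (e.g. the BS) is not charged."""
    total = 0.0
    for i, d in enumerate(hops):
        total += tx_energy(bits, d)
        if i < len(hops) - 1:
            total += rx_energy(bits)
    return total
```

Because transmit energy grows with the square of distance, two 50 m hops cost less than one 100 m hop, which is why clustering with short intra-cluster links saves energy.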

15.
Asynchronous federated learning (AsynFL) can effectively mitigate the impact of edge-node heterogeneity on joint training while preserving participants' privacy and data security. However, the frequent exchange of massive data leads to excessive communication overhead between edge and central nodes, regardless of whether the federated learning (FL) algorithm uses synchronous or asynchronous aggregation. There is therefore an urgent need for a method that simultaneously accounts for device heterogeneity and reduces edge-node energy consumption. This paper proposes a novel Fixed-point Asynchronous Federated Learning (FixedAsynFL) algorithm, which mitigates the resource consumption caused by frequent data communication while alleviating the effect of device heterogeneity. FixedAsynFL uses fixed-point quantization to compress the local and global models in AsynFL. To balance energy consumption and learning accuracy, a quantization-scale selection mechanism is proposed. The paper examines the mathematical relationship between the quantization scale and the energy consumed by computation and communication in FixedAsynFL and, considering the upper bound of the quantization noise, optimizes the quantization scale by minimizing communication and computation consumption. Experiments on the MNIST dataset with several edge nodes of different computing efficiency show that FixedAsynFL with 8-bit quantization can reduce the communication data size by 81.3% and save 74.9% of the computation energy in the training phase without significant loss of accuracy. These results indicate that the proposed FixedAsynFL algorithm can effectively address device heterogeneity and the energy limitations of edge nodes.
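Fixed-point quantization of model updates can be sketched as mapping floats to b-bit signed integers with a per-tensor scale. This is a generic symmetric scheme for illustration, not the paper's exact quantizer or scale-selection mechanism; the reconstruction error per weight is bounded by the scale.

```python
def quantize(weights, bits):
    """Symmetric fixed-point quantization: floats -> `bits`-bit signed ints
    plus one float scale, so only the ints and the scale travel over the
    network (sketch)."""
    qmax = 2 ** (bits - 1) - 1
    scale = (max(abs(w) for w in weights) / qmax) or 1.0  # avoid 0 scale
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights at the receiver."""
    return [v * scale for v in q]
```

With 8 bits, each 32-bit float shrinks to one byte plus a shared scale, roughly the 4x-plus saving that motivates such schemes.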

16.
In this work, we analyze the performance of state-dependent priority traffic and scheduling in device-to-device (D2D) heterogeneous networks. Wireless communication carries two priority classes of data: high-priority (HP) traffic such as video or telephony, whose transmission requirements must always be met first, and low-priority (LP) traffic. When a large amount of LP data is present, much of it cannot be sent, causing excessive LP delay and packet dropping. To solve this problem, the transmission processes of the high-priority and low-priority queues are studied. Applying a priority-jump strategy to the priority queueing model, the queueing process with two priority classes is modeled as a two-dimensional Markov chain, and a state-dependent priority-jump queueing strategy is proposed that improves the dropping performance of low-priority data. The quasi-birth-and-death (QBD) process method and a fixed-point iteration method are used to solve the model, yielding the steady-state probability distribution. Performance measures such as average queue length, average throughput, average delay, and packet dropping probability for both high- and low-priority data can then be expressed. Simulation results verify the correctness of the theoretical derivation and show that the proposed priority-jump queueing strategy significantly improves the dropping performance of low-priority data.

17.
In this paper, we investigate video quality enhancement using computation offloading to a mobile cloud computing (MCC) environment. Our objective is to reduce the computational complexity required to convert a low-resolution video into a high-resolution one while minimizing computation at the mobile client and additional communication costs. We propose an energy-efficient computation offloading framework for video streaming services in MCC over fifth-generation (5G) cellular networks. In the proposed framework, the mobile client offloads the computational burden of video enhancement to the cloud, which renders the side information (SI) needed to enhance the video without requiring much computation by the client. The cloud detects edges in the upsampled ultra-high-definition (UHD) video and then compresses and transmits them as side information along with the original low-resolution video (e.g., full HD). Finally, the mobile client decodes the received content and integrates the SI with the original content, producing a high-quality video. In our extensive simulation experiments, the amount of computation needed to construct a UHD video at the client is 50%-60% lower than that required to decode UHD video compressed by legacy video coding algorithms. Moreover, the bandwidth required to transmit a full HD video and its side information is around 70% lower than that required for a normal UHD video. The subjective quality of the enhanced UHD video is similar to that of the original UHD video even though the client pays lower communication costs with reduced computing power.
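The edge-detection step that produces the side information can be sketched with a Sobel operator over a grayscale frame. This is a pure-Python illustration only; a real pipeline would use an optimized library or GPU kernel, and the threshold here is an arbitrary assumption.

```python
def sobel_edges(img, thresh=128):
    """Binary Sobel edge map over a 2-D grayscale list-of-lists: the kind
    of sparse side information the cloud could compress and send alongside
    the low-resolution stream (sketch)."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical Sobel gradients
            gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
            if abs(gx) + abs(gy) >= thresh:
                edges[y][x] = 1
    return edges
```

Because the edge map is binary and sparse, it compresses far better than the full high-resolution frame it summarizes.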

18.
With the rapid development of artificial intelligence, face recognition systems are widely used in daily life. Face recognition applications often need to process large amounts of image data, and maintaining accuracy and low latency is critical. After analyzing two-tier "client-cloud" face recognition systems, we found that they suffer high latency and network congestion when massive numbers of recognition requests must be handled, and that deploying and managing the relevant applications at the network edge is inconvenient and inefficient. This paper proposes a flexible and efficient edge-computing-accelerated architecture. By offloading part of the computing tasks to an edge server closer to the data source, edge computing resources are used for image preprocessing to reduce the number of images to be transmitted, thus reducing the network transmission overhead. Moreover, the application code does not need to be rewritten and can be easily migrated to the edge server. We evaluate our scheme based on the open-source Azure IoT Edge, and the experimental results show that the three-tier "client-edge-cloud" face recognition system outperforms state-of-the-art face recognition systems in reducing the average response time.

19.
Vehicular cloud computing is an emerging technology that is changing vehicle communication and the underlying traffic management applications. However, cloud computing suffers from high delay, low privacy, and high communication cost, and thus cannot meet the real-time information interaction needs of the Internet of Vehicles; ensuring security and privacy is also regarded as one of its most important challenges. Therefore, to protect user information and improve the real-time performance of vehicle information interaction, this paper proposes an anonymous authentication scheme based on edge computing. In this scheme, the concept of edge computing is introduced into the Internet of Vehicles to make full use of the redundant computing power and storage capacity of idle edge equipment. Edge vehicle nodes are selected by a simple algorithm based on distance and available resources, and an improved RSA encryption algorithm is used to encrypt user information. The improved RSA algorithm encrypts user information by re-encrypting the encryption parameters; compared with the traditional RSA algorithm, it can resist more attacks and is therefore used to secure user information. The scheme not only protects vehicle privacy but also avoids anonymity abuse. Simulation results show that the proposed scheme has lower computational complexity and communication overhead than traditional anonymous schemes.
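For reference, the baseline RSA primitive that the paper improves upon can be sketched in textbook form. This is plain unpadded RSA with toy-sized primes, for illustration only: it shows neither the paper's parameter re-encryption step nor anything safe for real use.

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def rsa_keygen(p, q, e=65537):
    """Textbook RSA key pair from two primes (no padding -- sketch only)."""
    n, phi = p * q, (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime with phi(n)"
    return (n, e), (n, d % phi)          # (public key, private key)

def rsa_encrypt(m, pub):
    n, e = pub
    return pow(m, e, n)                  # modular exponentiation

def rsa_decrypt(c, priv):
    n, d = priv
    return pow(c, d, n)
```

With the classic toy parameters p=61, q=53, e=17, encrypting m=65 yields c=2790, and decryption recovers 65.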

20.
A Clustering-Based Localization Algorithm with Low Computational Complexity for Wireless Sensor Networks   (Total citations: 1; self-citations: 0; citations by others: 1)
To address the low localization accuracy of existing centralized localization algorithms and the high computational complexity and heavy communication load of distributed algorithms, a clustering-based node localization algorithm with low computational complexity is proposed for wireless sensor networks. First, a multi-boundary-node clustering algorithm satisfying maximum connectivity is proposed; using this algorithm, the network is partitioned into several clusters, and node localization is performed within each cluster. Then the clusters are merged, finally localizing all nodes in the network. Simulation results show that this clustering localization algorithm has lower computational complexity and less communication than distributed localization algorithms with comparable or slightly lower accuracy, and lower computational complexity, less communication, and higher accuracy than centralized algorithms. The algorithm can reduce the energy consumed during node localization, improve computational efficiency, and extend network lifetime.
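The intra-cluster localization step typically reduces to trilateration from anchor nodes with known positions. A minimal least-squares sketch using the standard linearization (subtracting the first circle equation from the rest), with hand-rolled 2x2 normal equations; the paper's exact intra-cluster method is not specified in the abstract.

```python
def trilaterate(anchors, dists):
    """Localize a node from >= 3 anchors and measured distances.
    Subtracting the first circle equation from the others yields a linear
    system 2(xi-x0)x + 2(yi-y0)y = b_i, solved in least squares."""
    (x0, y0), d0 = anchors[0], dists[0]
    ata = [[0.0, 0.0], [0.0, 0.0]]       # A^T A
    atb = [0.0, 0.0]                     # A^T b
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        a = [2 * (xi - x0), 2 * (yi - y0)]
        b = xi**2 - x0**2 + yi**2 - y0**2 + d0**2 - di**2
        for r in range(2):
            for c in range(2):
                ata[r][c] += a[r] * a[c]
            atb[r] += a[r] * b
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return x, y
```

Running each cluster's solve locally on a cluster head is what keeps the per-node computation and communication low compared with a fully distributed scheme.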

