Similar Documents (20 results)
1.
In this paper, we investigate a heterogeneous resource-allocation scheme for virtual machines with slicing technology in the 5G/B5G edge-computing environment. In general, different slices for different task scenarios coexist in the same edge layer. Many studies show that the virtual machines of different slices exhibit strong heterogeneity, with different reserved-resource granularity. Under these conditions, the allocation process is an NP-hard problem, and meeting the actual demand of tasks in such a strongly heterogeneous environment is difficult. Based on the slicing and container concepts, we propose a resource-allocation scheme named the Two-Dimension Allocation and Correlation Placement Scheme (TDACP). The scheme divides the resource allocation and management work into three stages. In the first stage, it designs a reasonable strategy to allocate resources to different task slices according to demand. In the second stage, it establishes an equivalence between the reserved resource capacity of a virtual machine and its Service-Level Agreement (SLA) in different slices. In the third stage, it designs a placement-optimization strategy to schedule the equivalent virtual machines on the physical servers. It is thus able to establish a virtual-machine placement strategy with high resource-utilization efficiency and low time cost. The simulation results indicate that the proposed scheme suppresses the uneven resource allocation caused by a purely preemptive scheduling strategy. It adjusts the number of equivalent virtual machines based on the SLA range of the system parameters, and effectively reduces the SLA-violation probability of physical servers based on linear fitting of resource-utilization time-sampling series. The scheme guarantees orderly and efficient resource allocation and management in edge data-center slices.
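The abstract does not specify the placement strategy itself; as a point of reference, the classic first-fit-decreasing heuristic for packing equivalent VM capacities onto physical servers can be sketched as follows (the VM names and demands are hypothetical):

```python
def first_fit_decreasing(vm_demands, server_capacity):
    """Place each VM (largest reserved-resource demand first) on the first
    server with enough remaining capacity, opening new servers as needed."""
    servers = []                     # remaining capacity per physical server
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if free >= demand:       # fits on an already-open server
                servers[i] -= demand
                placement[vm] = i
                break
        else:                        # no open server fits: open a new one
            servers.append(server_capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

# Hypothetical equivalent-VM demands as fractions of one server's capacity
placement, used = first_fit_decreasing(
    {"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3}, server_capacity=1.0)
# vm1 and vm3 share server 0; vm2 and vm4 share server 1; two servers used
```

Sorting by decreasing demand tends to use fewer servers than first-fit alone, which is the kind of utilization gain the placement stage targets.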

2.
In mobile edge computing (MEC), an important challenge is deciding how many resources of which mobile edge server (MES) should be allocated to which user equipment (UE). Existing resource-allocation schemes consider only CPU as the requested resource and model the utility of an MES either as a random variable or as dependent on the requested CPU alone. This paper presents a novel comprehensive utility function for resource allocation in MEC. The utility function accounts for the heterogeneous nature of the applications that a UE offloads to an MES and considers all the important parameters, including CPU, RAM, hard-disk space, required time, and distance, to calculate a more realistic utility value for MESs. Moreover, we improve several general algorithms used for resource allocation in MEC and cloud computing by incorporating the proposed utility function; we call the improved versions comprehensive resource-allocation schemes. UE requests are modeled to represent both the amount of resources requested and the time for which they are requested. The utility function depends on the UE requests and the distance between UEs and the MES, and serves as a realistic means of comparison between different types of UE requests. Selecting an optimal MES with the optimal amount of resources to allocate to each UE request is a challenging task. We show that MES resource allocation is sub-optimal if CPU is the only resource considered. By also taking RAM, disk space, request time, and distance into account in the utility function, we demonstrate improvement in the resource-allocation algorithms in terms of service rate, utility, and MES energy consumption.
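The multi-parameter utility idea can be sketched as a weighted scoring function. The weights, normalization, and the linear form below are illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative comprehensive utility of an MES for one UE request.
# All weights are hypothetical; resource terms raise utility, distance lowers it.
def utility(cpu, ram, disk, req_time, distance,
            w_cpu=0.3, w_ram=0.2, w_disk=0.1, w_time=0.2, w_dist=0.2):
    resource_score = w_cpu * cpu + w_ram * ram + w_disk * disk
    time_score = w_time * req_time          # longer leases earn more
    dist_penalty = w_dist * distance        # farther UEs cost more to serve
    return resource_score + time_score - dist_penalty

# A CPU-only utility would rank these two requests identically (same CPU),
# but the comprehensive utility distinguishes them:
a = utility(cpu=4, ram=8, disk=20, req_time=5, distance=2)
b = utility(cpu=4, ram=2, disk=5, req_time=5, distance=10)
```

Here `a > b` even though both requests ask for the same CPU, which is exactly the sub-optimality the paper attributes to CPU-only allocation.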

3.
Resource allocation in auctions is a challenging problem in cloud computing: the problem is NP-hard and cannot be solved in polynomial time. Existing studies mainly use approximation algorithms such as PTAS or heuristics to find a feasible solution; however, these algorithms suffer from low computational efficiency or low allocation accuracy. In this paper, we use machine-learning classification to model and analyze the multi-dimensional cloud resource-allocation problem, and we propose two resource-allocation prediction algorithms based on linear and logistic regression. By learning from a small-scale training set, the prediction model guarantees that the social welfare, allocation accuracy, and resource utilization of the feasible solution are very close to those of the optimal allocation. The experimental results show that the proposed scheme performs well for resource allocation in cloud computing.
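A minimal sketch of the logistic-regression idea: train a classifier on solved small instances to predict whether a bid wins an allocation. The features (bid price, requested capacity), the labels, and the plain gradient-descent training below are illustrative assumptions:

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (illustrative)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted win probability
            g = p - y                         # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_win(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Hypothetical training set from solved instances:
# (normalized bid price, normalized requested capacity) -> allocated?
X = [(0.9, 0.2), (0.8, 0.3), (0.2, 0.9), (0.3, 0.8)]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
```

Once trained on small solved instances, such a predictor can screen large instances in linear time, which is the source of the speed-up over PTAS-style search.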

4.
In edge computing, a reasonable edge-resource bidding mechanism can enable edge providers and users to obtain benefits in a relatively fair fashion. To maximize such benefits, this paper proposes a dynamic multi-attribute resource bidding mechanism (DMRBM). Most previous work relies on a third-party agent to exchange information and reach optimal benefits; notably, when edge providers and users trade through third-party agents that are not entirely reliable and trustworthy, their sensitive information is prone to leakage. Moreover, protecting the privacy of edge providers and users during the dynamic pricing/transaction process must also be considered, which is very challenging. Therefore, this paper first adopts a privacy-protection algorithm to prevent sensitive information from leaking. On the premise that the sensitive data of both edge providers and users are protected, the providers' prices fluctuate within a certain range. Users can then choose appropriate edge providers according to their demands by the price-performance ratio (PPR) standard and the reward of lower price (LPR) standard, each derived from its own evaluation function. Furthermore, this paper employs an approximate computing method to obtain an approximate solution of DMRBM in polynomial time. Specifically, it models the bidding process as a non-cooperative game and obtains the approximately optimal solution under the two standards according to game theory. Through extensive experiments, this paper demonstrates that DMRBM satisfies individual rationality, budget balance, and privacy protection, and that it also increases the task-offloading rate and the system benefits.

5.
Edge computing is a new technology in the Internet of Things (IoT) paradigm that allows sensitive data to be delivered to dispersed devices quickly and without delay. Edge computing is similar to fog computing, except that it is positioned much nearer to end devices and end-users, allowing it to process and respond to clients in less time. It also supports sensor networks, real-time streaming applications, and the IoT, all of which require high-speed and dependable internet access. For such IoT systems, the Resource Scheduling Process (RSP) is one of the most important tasks. This paper presents an RSP for Edge Computing (EC). The resource characteristics are first standardized and normalized. Next, a Fuzzy Control based Edge Resource Scheduling (FCERS) is suggested for task scheduling. The results demonstrate that this technique enhances resource-scheduling efficiency in EC and Quality of Service (QoS). The experimental study revealed that the suggested FCERS method converges more quickly than the other methods, reducing the total computing cost, execution time, and energy consumption on average compared with the baseline. The edge server allocates more processing resources to each user when mobile devices are limited in availability, which improves task-execution time and reduces the total task-computation cost. Additionally, the proposed FCERS can map user requests to suitable resource categories more efficiently, better meeting user requirements.

6.
A fog infrastructure is a complex system owing to the large number of heterogeneous resources that need to be shared. Embedded devices deployed with Internet of Things (IoT) technology have multiplied over the past few years, and these devices generate huge amounts of data. Devices in the IoT can be connected remotely and may be placed in different locations, which adds to the network delay. Real-time applications require high bandwidth with reduced latency to ensure Quality of Service (QoS). To achieve this, fog computing plays a vital role by processing requests locally with the nearest available resources, thereby reducing latency. One of the major issues in a fog service is managing and allocating resources, and queuing theory is one of the most popular mechanisms for task allocation. In this work, an efficient model is designed to improve QoS through effective resource allocation, based on a Queuing Theory based Cuckoo Search (QTCS) model that optimizes the overall resource-management process.
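The queuing-theory component can be illustrated with the standard M/M/1 single-server results, which give the latency a fog node adds at a given load; the arrival and service rates below are hypothetical:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 queue results: utilization, mean number in system,
    and mean response time (service + waiting)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate        # server utilization
    n = rho / (1 - rho)                      # mean number of tasks in system
    w = 1.0 / (service_rate - arrival_rate)  # mean response time
    return rho, n, w

# Hypothetical fog node: tasks arrive at 8/s, it can serve 10/s.
rho, n, w = mm1_metrics(8.0, 10.0)
# utilization 0.8, 4 tasks in system on average, 0.5 s mean response time
```

A scheduler (here, the cuckoo-search step) can use such per-node response-time estimates as the fitness it minimizes when assigning tasks to fog resources.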

7.
To reduce transmission latency and mitigate the backhaul burden of centralized cloud-based network services, mobile edge computing (MEC) has recently drawn increased attention from both industry and academia. This paper focuses on mobile users' computation-offloading problem in wireless cellular networks with mobile edge computing, with the aim of optimizing the offloading decision-making policy. Since wireless network states and computing requests are stochastic and the environment's dynamics are unknown, we use the model-free reinforcement learning (RL) framework to formulate and tackle the computation-offloading problem. Each mobile user learns through interactions with the environment, estimating its performance in the form of a value function, and then chooses the overhead-aware optimal computation-offloading action (local computing or edge computing) based on its state. Because the state space in our work is high-dimensional, the value function is impractical to estimate directly. Consequently, we use a deep reinforcement learning algorithm, which combines the RL method Q-learning with a deep neural network (DNN) to approximate the value function for complicated control applications; the optimal policy is obtained once the value function converges. Simulation results demonstrate the effectiveness of the proposed method compared with baseline methods in terms of the total overhead of all mobile users.
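The paper approximates the value function with a DNN; for illustration only, the same overhead-aware decision can be sketched with tabular Q-learning over a toy two-state environment. The channel states, overhead values, and hyperparameters below are all hypothetical:

```python
import random

random.seed(0)
ACTIONS = ("local", "edge")
STATES = ("good_channel", "bad_channel")   # hypothetical wireless states

def overhead(state, action):
    """Hypothetical per-task overhead (delay + energy cost); lower is better."""
    if action == "edge":
        return 1.0 if state == "good_channel" else 5.0
    return 3.0                              # local cost is channel-independent

# Q[(state, action)] estimates the overhead of taking `action` in `state`.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    s = random.choice(STATES)
    if random.random() < epsilon:           # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:                                   # greedily pick the cheapest action
        a = min(ACTIONS, key=lambda act: Q[(s, act)])
    # One-step update toward the observed overhead (cost-minimizing Q-learning).
    Q[(s, a)] += alpha * (overhead(s, a) - Q[(s, a)])

policy = {s: min(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The learned policy offloads when the channel is good and computes locally when it is bad; a DNN replaces the table `Q` when, as in the paper, the state space is too large to enumerate.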

8.
Traditional multi-access edge computing (MEC) capacity is overwhelmed by the increasing demand from vehicles, leading to acute degradation in task-offloading performance. Meanwhile, the traffic network contains a tremendous number of resource-rich, idle, connected vehicles (CVs), which can be organized as opportunistic ad-hoc edge clouds that alleviate the resource limitations of MEC by providing opportunistic computing services. On this basis, this paper proposes a novel scalable system framework for computation-task offloading in opportunistic CV-assisted MEC, in which the opportunistic ad-hoc edge cloud and the fixed edge cloud cooperate to form a novel hybrid cloud. Both the offloading decision and the resource allocation of the user CVs must be determined. The joint offloading-decision and resource-allocation problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) problem that optimizes the task-response latency of user CVs under various constraints. The original problem is decomposed into two subproblems: first, the Lagrange dual method is used to obtain the best resource allocation under a fixed offloading decision; then, a satisfaction-driven method based on trial-and-error (TE) learning is adopted to optimize the offloading decision. Finally, a comprehensive series of experiments demonstrates that the suggested scheme is more effective than the comparison schemes.

9.
In the present scenario, cloud computing provides on-request access to a collection of resources available in remote systems that can be shared by numerous clients. Resources are self-administered, so clients can adjust their usage according to their requirements; usage is metered, and clients pay according to their utilization. The existing methods in the literature describe the usage of various hardware assets. Quality of Service (QoS) needs to be considered when determining the schedule and the access to resources. To adhere to the security arrangement, no additional code may be injected to enforce QoS-compliant resource usage; thus, all monitoring must be done from the hypervisor. To overcome these issues, a Robust Resource Allocation and Utilization (RRAU) approach is developed to optimize the management of cloud resources. The work hosts as many virtual assets as circumstances allow while enforcing a controlled degree of QoS. The resource-assignment algorithm is heuristic; in experimental evaluations, the RRAU approach with a J48 prediction model reduces Job Completion Time (JCT) by 4.75 s, Make Span (MS) by 6.25, and Monetary Cost (MC) by 4.25 for 15, 25, 35, and 45 resources, compared with conventional methodologies in the cloud environment.

10.
Deep learning technology has been widely used in computer vision, speech recognition, natural language processing, and other related fields, and deep learning algorithms offer high precision and high reliability. However, the lack of resources in edge terminal equipment makes it difficult to run deep learning algorithms that require substantial memory and computing power. In this paper, we propose MoTransFrame, a general model-processing framework for deep learning models. Instead of designing a model-compression algorithm with a high compression ratio, MoTransFrame transplants popular convolutional neural network models to resource-starved edge devices promptly and accurately. Through its integration method, deep learning models can be converted into portable projects for Arduino, a typical edge device with limited resources. Our experiments show that MoTransFrame adapts well to edge devices with limited memory and is more flexible than other model-transplantation methods: it keeps the loss of model accuracy small even when the number of parameters is compressed by tens of times, while the computational resources needed for inference stay within what the edge node can handle.

11.
An Efficiency Model of Enterprise Resource Allocation in Multi-Project Management
Addressing resource allocation, the core problem of enterprise multi-project management in an information environment, this paper applies stochastic theory to define the concepts of equivalent resource efficiency and the efficiency conversion coefficient for enterprise resources allocated across parallel projects, and establishes a resource-allocation efficiency model. A corresponding algorithm is derived from analysis of the model's mathematical equations. The model enables rational allocation of enterprise resources and effectively supports multi-project management.

12.
The intensiveness of resource allocation in Chinese enterprises is not high, so improving the returns on operating-resource investment has become a strategic priority for enterprises. RPM is an effective analytical tool that can be used to analyze the effectiveness of resource allocation within an enterprise, as well as the competitive situation among enterprises.

13.
The number of mobile devices accessing wireless networks is skyrocketing owing to the rapid advancement of sensors and wireless communication technology, and mobile data traffic is anticipated to rise even further in the coming years. The development of a new cellular-network paradigm is being driven by the Internet of Things, smart homes, and more sophisticated applications with greater data-rate and latency requirements. Resources are being consumed quickly due to the steady growth of smartphone devices and multimedia apps. Computation offloading, whether to several distant clouds or to nearby mobile devices, has consistently improved the performance of mobile devices, and computation latency can be further decreased by offloading computing tasks to edge servers with a certain level of computing power. Device-to-device (D2D) collaboration can assist in processing small-scale, time-sensitive tasks to reduce task delays even more. However, variation in the performance capabilities of edge nodes drastically reduces task-offloading performance. This paper addresses that problem and proposes a new method for D2D communication in which time delay is reduced by enabling the edge nodes to exchange data samples. Simulation results show that the proposed algorithm outperforms the traditional algorithm.

14.
A cold-chain logistics system (CCLS) collects and manages the logistics data of frozen food. However, traditional centralized systems suffer from information loss, data tampering, and privacy leakage, which threaten frozen-food security and people's health. The centralized management form impedes the development of the cold-chain logistics industry and weakens logistics-data availability. This paper first introduces a distributed CCLS based on blockchain technology to solve the centralized-management problem. The system aggregates the production base, storage, transport, detection, processing, and consumers to form a cold-chain logistics union. The blockchain ledger guarantees that logistics data cannot be tampered with and establishes a traceability mechanism for food-safety incidents. Meanwhile, to increase the value of logistics data, a Stackelberg game-based resource-allocation model is proposed between the logistics-data resource provider and the consumer, in which competition over resource price and volume balances resource supply and consumption. The model yields an optimal resource price when the Stackelberg game reaches a Nash equilibrium, and both participants can maximize their revenues at the optimal resource price and volume via backward induction. Performance evaluations of transaction throughput and latency show that the proposed distributed CCLS is more secure and stable, and simulations of the variation trends of data price and volume, optimal benefits, and total-benefit comparisons across different forms show that the resource-allocation model is efficient and practical. Moreover, the blockchain-based CCLS and the Stackelberg game-based resource-allocation model promote the value of logistics data and improve social benefits.
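The backward-induction step of a single-leader, single-follower Stackelberg game can be worked through with simple linear-demand assumptions (the valuation function, unit cost, and constants below are hypothetical, not the paper's model):

```python
# Backward induction for a single-leader (data provider) single-follower
# (data consumer) Stackelberg pricing game.
# Hypothetical model: the consumer's value for q units of data is
# a*q - 0.5*b*q^2, and the provider's unit cost is c.

def follower_best_response(price, a=10.0, b=1.0):
    # Consumer maximizes a*q - 0.5*b*q^2 - price*q  =>  q* = (a - price) / b
    return max(0.0, (a - price) / b)

def leader_optimal_price(a=10.0, b=1.0, c=2.0):
    # Provider anticipates q*(price) and maximizes (price - c) * q*(price)
    # = (price - c)(a - price)/b, which peaks at price = (a + c) / 2.
    return (a + c) / 2

price = leader_optimal_price()            # 6.0
volume = follower_best_response(price)    # 4.0
provider_profit = (price - 2.0) * volume  # 16.0
```

Solving the follower's problem first and substituting it into the leader's objective is exactly the backward-induction recipe the abstract refers to.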

15.
Load-time-series data in mobile cloud computing for the Internet of Vehicles (IoV) usually exhibit a composite of linear and nonlinear characteristics. To accurately describe the dynamic trend of such loads, this study designs a load-prediction method using the resource-scheduling model for mobile cloud computing in the IoV. First, a chaotic-analysis algorithm processes the load-time series and constructs learning samples for load prediction. Second, a support vector machine (SVM) is used to establish the load-prediction model, and an improved artificial bee colony (IABC) function is designed to enhance the learning ability of the SVM. Finally, a CloudSim simulation platform is created to select per-minute CPU-load history data from a mobile cloud computing system composed of 50 vehicles as the data set, and a comparison experiment is conducted against a grey model, a back-propagation neural network, a radial-basis-function (RBF) neural network, and an SVM with an RBF kernel. The experimental results show that the prediction accuracy of the proposed method is significantly higher than that of the other models, with a significantly reduced real-time prediction error for resource loads in mobile cloud environments. Compared with single-prediction models, the proposed method can construct multidimensional time series that capture complex load patterns, fit and describe load-change trends, approximate load time-variability more precisely, and give load-prediction models for mobile cloud computing resources strong generalization ability.
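The construction of learning samples from a load-time series can be sketched as a sliding window; the window length and CPU-load readings below are illustrative, and a full implementation would feed these `(features, target)` pairs to an SVM regressor:

```python
def make_samples(series, window=3):
    """Turn a load-time series into supervised learning samples:
    each sample uses `window` past loads to predict the next load."""
    features, targets = [], []
    for i in range(len(series) - window):
        features.append(series[i:i + window])  # the embedding vector
        targets.append(series[i + window])     # the value to predict
    return features, targets

# Hypothetical per-minute CPU load readings (%)
loads = [42, 45, 47, 50, 48, 52, 55]
X, y = make_samples(loads)
# X[0] == [42, 45, 47] is paired with target y[0] == 50
```

The window length plays the role of the embedding dimension that the chaotic-analysis step would estimate from the series itself.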

16.
The goal of delivering high-quality service has spurred research on 6G satellite communication networks. Next-generation satellite communication networks, especially multilayer networks with multiple low-Earth-orbit (LEO) and non-low-Earth-orbit (NLEO) satellites, must address the problem of limited resource allocation. In this study, the resource-allocation problem of a multilayer satellite network consisting of one NLEO and multiple LEO satellites is solved. The NLEO satellite is the authorized user of the spectrum resources, and the LEO satellites are unauthorized users. The resource-allocation and dynamic-pricing problems are combined, and a dynamic game-based resource pricing and allocation model is proposed to maximize the market advantage of the LEO satellites and reduce interference between LEO and NLEO satellites. In the proposed model, the resource price is formulated as the dynamic state of the LEO satellites, with the resource-allocation strategy as the control variable. Based on the proposed dynamic game model, an open-loop Nash equilibrium is analyzed, and an algorithm is proposed for the resource pricing and allocation problem. Numerical simulations validate the model and algorithm.

17.
This paper studies the dynamic computation-offloading decision and resource-allocation problem for multiple types of mobile devices in a network with a single Mobile Edge Computing (MEC) server. Subject to the stability of each mobile device's task queue, latency limits, and maximum transmit-power constraints, an optimization problem is formulated that minimizes the long-term average energy consumption of the system while jointly determining the task-offloading decisions, the computing-resource allocation, and the wireless-channel and transmit-power allocation...

18.
This paper points out the limitations of applying a binary-coded genetic algorithm to project-portfolio selection and proposes replacing binary coding with real-number coding for project-portfolio scale decisions. Drawing on the idea of the double-roulette-wheel selection genetic algorithm, a real-coded genetic algorithm for project-portfolio scale decisions is established. Finally, a numerical example is used to compare the strengths and weaknesses of the real-coded and binary-coded genetic algorithms in project-portfolio decisions.

19.
In this paper, we propose a differential game model to optimally solve resource-allocation problems in edge-computing-based wireless networks. The model considers a wireless network with one cloud-computing center (CC) and many edge service providers (ESPs). To provide users with higher service quality, the ESPs lease computing resources from the CC, and the CC allocates its idle cloud-computing resources to the ESPs. We allocate the edge-computing resources between the ESPs and the CC optimally using a differential game with feedback control: each ESP chooses the amount of computing resources to lease from the CC via feedback control, which is influenced by the unit price of computing resources set by the CC. In the simulations, the optimal resource allocation for users' services is obtained from the Nash equilibrium of the proposed differential game, and the effectiveness and correctness of the proposed scheme are verified through numerical results.
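The price-mediated feedback between the CC and the ESPs can be illustrated with a simple discrete-time price-adjustment (tatonnement) loop; the linear demand curve, supply level, and step size below are illustrative assumptions, not the paper's differential game:

```python
def tatonnement(a=20.0, b=1.0, supply=8.0, gamma=0.1, steps=200, price=1.0):
    """Price-feedback loop: the CC raises the unit price while aggregate
    ESP demand exceeds its idle cloud supply, and lowers it otherwise.
    The linear demand curve D(p) = a - b*p is a hypothetical stand-in
    for the ESPs' feedback-control response."""
    for _ in range(steps):
        demand = a - b * price                # ESPs react to the current price
        price += gamma * (demand - supply)    # CC adjusts price toward balance
    return price

price = tatonnement()
# converges to the market-clearing price (a - supply) / b = 12.0
```

At the fixed point, demand equals the CC's idle supply, which is the discrete analogue of the equilibrium condition the differential game solves in continuous time.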

20.
Logistics resource planning is a resource-allocation technology that integrates materials requirement planning and distribution resource planning. Based on inventory management, it satisfies both production-material supply and resource-allocation optimization. In a remanufacturing supply chain, the recycling and rebuilding of products form a reverse materials-movement loop that challenges the traditional logistics-resource-planning system. Addressing the characteristics of reverse logistics in the remanufacturing supply chain, we propose a closed-loop supply-chain resource-allocation model based on autonomous multi-entities, focusing on an integrated materials-requirement-planning and distribution-resource-planning resource-allocation model that considers remanufacturing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号