Similar Literature
20 similar documents found (search time: 93 ms)
1.
As a newly emerging computing paradigm, edge computing shows great capability in supporting and boosting 5G and Internet-of-Things (IoT) oriented applications, e.g., scientific workflows, with low-latency, elastic, and on-demand provisioning of computational resources. However, geographically distributed IoT resources are usually interconnected through unreliable communications and ever-changing contexts, which introduces strong heterogeneity, potential vulnerability, and instability into the computing infrastructure at different levels. It thus remains a challenge to enforce high fault tolerance for edge-IoT scientific computing task flows, especially when the supporting infrastructure is deployed in a collaborative, distributed, and dynamic environment that is prone to faults and failures. This work proposes a novel fault-tolerant scheduling approach for edge-IoT collaborative workflows. The approach first conducts a dependency-based task allocation analysis, then leverages a Primary-Backup (PB) strategy to tolerate task failures at edge nodes, and finally designs a deep Q-learning algorithm to identify a near-optimal workflow task scheduling scheme. Extensive simulation case studies on multiple randomly generated workflows and real-world edge-IoT server position datasets clearly suggest that the proposed method outperforms state-of-the-art competitors in terms of task completion ratio, server active time, and resource utilization.
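The Primary-Backup idea described in this abstract can be sketched in a few lines; the failure model, task costs, and node names below are illustrative assumptions, not the paper's implementation:

```python
import random

def pb_schedule(task_costs, nodes, fail_prob=0.1, seed=0):
    """Toy Primary-Backup (PB) allocation: each task's primary copy goes to
    the least-loaded edge node and its backup to a different node; the
    backup executes only if the primary's node fails, so one node failure
    per task is tolerated. Illustrative simplification of the paper's scheme."""
    rng = random.Random(seed)
    load = {n: 0.0 for n in nodes}
    completed = 0
    for cost in task_costs:
        primary = min(load, key=load.get)                  # least-loaded node
        backup = min((n for n in nodes if n != primary), key=load.get)
        load[primary] += cost
        if rng.random() < fail_prob:                       # primary node fails
            load[backup] += cost                           # backup takes over
        completed += 1                                     # PB masks the failure
    return completed / len(task_costs), load

ratio, load = pb_schedule([3.0, 5.0, 2.0, 4.0], ["edge1", "edge2", "edge3"])
```

Because every task always has a live copy, the completion ratio stays at 1.0 under single-node failures, which is the metric the abstract reports.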

2.
Lei  Ming  Yu  Bin  Zhang  Xingjun  Fowler  Scott  Yu  Bocheng  Wang  Peng 《Telecommunication Systems》2022,81(1):41-52

In a backbone-assisted industrial wireless network (BAIWN), successive interference cancellation (SIC) based non-orthogonal multiple access (NOMA) offers a potential solution for improving delay performance. Previous work emphasizes minimizing transmission delay through user scheduling without considering power control. However, power control allows SIC-based NOMA to exploit the power domain and manage co-channel interference so that multiple user nodes can be served simultaneously with high spectral and time-resource utilization. In this paper, we consider joint power control and user scheduling to study the scheduling time minimization problem (STMP) with given traffic demands in BAIWNs. Specifically, STMP is formulated as an integer programming problem, which is NP-hard. To tackle it, we propose a conflict graph-based greedy algorithm that obtains a sub-optimal solution with low complexity. A useful feature is that the algorithm makes power-control and user-scheduling decisions using only the channel state information and traffic demands. Experimental results show that, compared with other methods, the proposed method effectively improves delay performance regardless of channel state or network scale.

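A conflict-graph greedy scheduler of the kind this abstract describes can be sketched as follows; the slot-packing decoding and the highest-degree-first tie-breaking are assumed simplifications, not the paper's exact algorithm:

```python
def greedy_slot_schedule(users, conflicts):
    """Conflict-graph greedy sketch: pack mutually non-conflicting users
    into the same time slot, opening a new slot only when necessary.
    Minimizing the slot count stands in for minimizing scheduling time."""
    conflict = {u: set() for u in users}
    for a, b in conflicts:                    # build the conflict graph
        conflict[a].add(b)
        conflict[b].add(a)
    slots = []                                # each slot: users served together
    for u in sorted(users, key=lambda u: -len(conflict[u])):  # hardest first
        for slot in slots:
            if conflict[u].isdisjoint(slot):  # no co-channel conflict
                slot.add(u)
                break
        else:
            slots.append({u})                 # open a new time slot
    return slots

slots = greedy_slot_schedule([1, 2, 3, 4], [(1, 2), (2, 3)])
```

With conflicts 1-2 and 2-3, users 1, 3, and 4 can share slots with each other but not all with user 2, so two slots suffice.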

3.
To address the dynamic and unstable nature of individual nodes during task scheduling in unmanned aerial vehicle (UAV) swarms, this paper proposes a fault-tolerant task scheduling method for multiple compute nodes that avoids task interruption where possible. The method first builds a task allocation strategy over multiple compute nodes with the optimization objective of minimizing the average task completion time. It then quantifies the execution risk of a task on a compute node as extra overhead time, based on the probability distributions of the task's completion time and the edge node's remaining lifetime. Finally, by replacing the original completion time with the sum of the completion time and this extra overhead, a risk-aware task allocation strategy is designed. In a simulation environment, the proposed scheduling method was compared with three baseline schedulers; the results show that it effectively reduces the average task response time, the average number of task executions, and the deadline miss ratio. This demonstrates that the proposed method lowers the extra overhead caused by task rescheduling and re-execution, supports the scheduling of distributed collaborative computing tasks, and provides new technical support for UAV swarm networks in complex scenarios.
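The risk-quantification step described above (completion time plus an expected re-execution overhead) can be sketched as a node-selection rule; the survival probabilities and overhead value below are illustrative parameters, not the paper's model:

```python
def risk_aware_choice(finish_times, survival_probs, reexec_overhead):
    """Sketch of the abstract's risk-aware idea: the execution risk on each
    node is quantified as extra overhead time, weighted by the chance the
    node disappears before the task finishes; the task goes to the node
    minimizing completion time plus that overhead."""
    scores = [
        t + (1.0 - p) * reexec_overhead       # expected extra cost on failure
        for t, p in zip(finish_times, survival_probs)
    ]
    return min(range(len(scores)), key=scores.__getitem__)

# Node 0 is slower (4.0 vs 3.0) but far more likely to survive (0.9 vs 0.5),
# so with a re-execution overhead of 10.0 it wins: 4 + 1.0 < 3 + 5.0.
best = risk_aware_choice([4.0, 3.0], [0.9, 0.5], 10.0)
```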

4.

The Internet of Things (IoT) is a widely adopted technology in industrial, smart-home, smart-grid, smart-city, and smart-healthcare applications. Real-world objects are connected remotely through the Internet, which provides services via user-friendly devices. The IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) standard is currently gaining attention in the IoT research community because of its effectiveness in improving communication reliability, which is orchestrated by scheduling. As TSCH is an emerging Medium Access Control (MAC) protocol, the proposed work uses it to enhance network scheduling through throughput maximization and delay minimization. The paper focuses on proper channel utilization through node scheduling. A NeuroGenetic Algorithm (NGA) is proposed for TSCH scheduling, and its performance is evaluated with respect to time delay and throughput. The system is implemented on real IoT devices, and the results are observed and analyzed. The proposed algorithm is compared with existing TSCH scheduling algorithms.


5.
Cloud computing provides solutions to many scientific and business applications. Large-scale scientific applications, structured as scientific workflows, are evaluated through cloud computing. In this paper, we propose a Quality-of-Service-aware fault-tolerant workflow management system (QFWMS) for scientific workflows in cloud computing. Two real-world scientific workflows, Montage and CyberShake, are considered for evaluating the proposed QFWMS. The scheduling results of QFWMS were evaluated in the WorkflowSim simulation environment and compared with three well-known heuristic scheduling policies: (a) minimum completion time (MCT), (b) Max-min, and (c) Min-min. For the Montage workflow, QFWMS reduces the makespan by 8.86%, 8.94%, and 5.53% compared with the three heuristic policies, and reduces the cost by 6.19%, 3.52%, and 3.60%. Likewise, for the CyberShake workflow, QFWMS reduces the makespan by 19.54%, 21.41%, and 25.71%, and the cost by 8.78%, 8.40%, and 8.61%. Moreover, QFWMS violates the SLA neither for time constraints nor for cost constraints, whereas the MCT, Max-min, and Min-min policies violate it 32, 37, and 23 times, respectively. In conclusion, the proposed QFWMS is a significant workflow management system for the execution and management of scientific workflows in cloud computing.
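For reference, the Min-min baseline that QFWMS is compared against can be sketched in a few lines; the `base_cost / speed` execution-time model is an assumed simplification for illustration:

```python
def min_min(tasks, machines):
    """Min-min heuristic sketch: repeatedly pick the (task, machine) pair
    with the smallest earliest completion time and commit it. `tasks` maps
    task -> base cost; `machines` maps machine -> speed."""
    ready = {m: 0.0 for m in machines}          # machine ready times
    schedule, remaining = {}, dict(tasks)
    while remaining:
        t, m, finish = min(
            ((t, m, ready[m] + c / machines[m])
             for t, c in remaining.items() for m in machines),
            key=lambda x: x[2],                 # earliest completion time
        )
        schedule[t] = m
        ready[m] = finish
        del remaining[t]
    return schedule, max(ready.values())        # assignment and makespan

schedule, makespan = min_min({"a": 4, "b": 2, "c": 6}, {"m1": 1.0, "m2": 2.0})
```

Min-min favors short tasks first, which is exactly why it can starve long tasks and inflate the makespan that QFWMS improves on.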

6.

Asynchronous transfer mode (ATM) is among the most mature infrastructures for serving imminent broadband media networks and is intended to eliminate network latency and improve the versatility of traffic output. In the Weighted Round Robin (WRR) scheme, the main concern is the delay caused by high workload volumes at the wireless source nodes. This paper proposes an efficient intelligent-time-slice-based round-robin scheduling algorithm for ATM using an integrated speed bit protocol (ISBP). The intelligent time slice accounts for priority, CPU burst, and transfer time. The ISBP maintains a speed bit manager containing prioritized objects, and the intelligent time slice is then used to generate the quantum time value. The purpose of this paper is to enhance network performance in ATM networks by improving throughput and average end-to-end delay. The proposed scheme also offers a significant reduction in average end-to-end delay compared with the existing WRR scheme.


7.
Software-defined networking (SDN) facilitates network programmability through a central controller, which dynamically modifies the network configuration to adapt to changes in the network. In SDN, the controller updates the network configuration through flow updates, i.e., installing flow rules in network devices. However, during a network update, improper scheduling of flow updates can cause problems including overflow of switch flow-table memory and link bandwidth. Another challenge is minimizing the update completion time during large network updates triggered by events such as traffic engineering path updates. Existing centralized approaches do not search the solution space for flow update schedules with optimal completion time. We propose a hybrid genetic algorithm-based flow update scheduling method (the GA-Flow Scheduler). By searching the solution space, the GA-Flow Scheduler attempts to minimize the completion time of the network update without overflowing switch flow-table memory or link bandwidth. It can be combined with other flow scheduling methods to improve network performance and reduce flow update completion time. In this paper, the GA-Flow Scheduler is combined with a stand-alone method called the three-step method. Large-scale experiments show that the proposed hybrid approach reduces network update time and packet loss. We conclude that the GA-Flow Scheduler improves performance over the stand-alone three-step method and handles the above network update problems in SDN.
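A minimal genetic algorithm of the flavor described above, searching orderings of flow updates under a switch-capacity bound, might look like the sketch below; the first-fit decoding and round-count fitness are assumptions for illustration, not the GA-Flow Scheduler itself:

```python
import random

def ga_flow_schedule(update_sizes, capacity, pop=20, gens=40, seed=1):
    """Toy GA sketch: a chromosome is an ordering of flow updates; decoding
    packs them first-fit into rounds bounded by switch-table capacity, and
    fitness is the round count (a stand-in for update completion time)."""
    rng = random.Random(seed)
    n = len(update_sizes)

    def rounds(order):
        used, count = 0, 1
        for i in order:
            if used + update_sizes[i] > capacity:   # would overflow: new round
                count += 1
                used = 0
            used += update_sizes[i]
        return count

    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=rounds)                 # rank by fitness
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    best = min(population, key=rounds)
    return best, rounds(best)

order, n_rounds = ga_flow_schedule([3, 3, 2, 2, 5, 5], capacity=10)
```

For these sizes the optimum is two rounds (3+3+2+2 and 5+5); the GA searches the permutation space toward that packing.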

8.
Heterogeneous networks (HetNets) improve LTE system performance by increasing the capacity and coverage of the macro cell. In this paper, the performance of several packet scheduling algorithms, namely Proportional Fair, Maximum Largest Weighted Delay First, and Exponential/Proportional Fair, is studied in detail in a HetNets environment. Key performance indicators such as throughput, packet loss ratio, delay, and fairness are used to judge the scheduling algorithms. Strategies such as increasing the number of pico cells at the cell edge were used in the simulations for the performance evaluation study. The results show that adding pico cells to the existing macro cells enhances overall system performance, in addition to the gains from the scheduling algorithms implemented in the macros. For the reader's convenience, various graphs are used to present the simulation results and clarify the performance metrics of the scheduling algorithms. Simulation results show that overall system gain increases when pico cells are added, providing better coverage in cell-edge areas and increasing network capacity to deliver better quality of service.

9.
Wang  Ke  Yu  XiaoYi  Lin  WenLiang  Deng  ZhongLiang  Liu  Xin 《Wireless Networks》2021,27(6):4229-4245

Mobile edge computing (MEC) is an emerging technology recognized as an effective solution to guarantee Quality of Service for computation-intensive and latency-critical traffic. In an MEC system, mobile computing, network control, and storage functions are deployed by servers at the network edge (e.g., base stations and access points). One of the key issues in designing an MEC system is how to allocate finite computational resources among multiple users. In contrast with previous works, this paper addresses the issue by combining real-time traffic classification with CPU scheduling. Specifically, a support vector machine based multi-class classifier is adopted, with parameter tuning and cross-validation designed first. Since traffic of the same class has a similar delay budget and similar characteristics (e.g., inter-arrival time and packet length), the CPU scheduler can efficiently schedule traffic based on the classification results. Second, considering both the traffic delay budget and the signal baseband processing cost, a preemptive earliest deadline first (EDF) algorithm is deployed for CPU scheduling. Furthermore, an admission control algorithm that avoids the domino effect of EDF is also given. Simulation results show that, with the proposed scheduling algorithm, classification accuracy for a specific traffic class can exceed 82 percent, while the throughput is much higher than that of existing scheduling algorithms.

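The preemptive EDF policy mentioned in this abstract can be sketched as a small single-CPU simulation; the event model below is an illustrative simplification, not the paper's scheduler:

```python
import heapq

def edf_simulate(jobs):
    """Minimal preemptive EDF sketch on one CPU: always run the released
    job with the earliest absolute deadline. `jobs` is a list of
    (arrival, exec_time, deadline) tuples; returns finish times and the
    jobs that missed their deadlines."""
    events = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    ready = []                                  # heap of [deadline, id, remaining]
    t, k, finish = 0.0, 0, {}
    while k < len(events) or ready:
        if not ready:                           # idle until next arrival
            t = max(t, jobs[events[k]][0])
        while k < len(events) and jobs[events[k]][0] <= t:
            i = events[k]                       # release newly arrived jobs
            heapq.heappush(ready, [jobs[i][2], i, jobs[i][1]])
            k += 1
        d, i, rem = heapq.heappop(ready)        # earliest-deadline job
        next_arrival = jobs[events[k]][0] if k < len(events) else float("inf")
        run = min(rem, next_arrival - t)        # run until done or preempted
        t += run
        if rem - run > 1e-12:
            heapq.heappush(ready, [d, i, rem - run])
        else:
            finish[i] = t
    misses = [i for i, (a, e, d) in enumerate(jobs) if finish[i] > d]
    return finish, misses

# Job 1 arrives later but has the tighter deadline, so it preempts job 0.
finish, misses = edf_simulate([(0, 4, 10), (1, 2, 4)])
```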

10.
Cloud computing emerges as a new computing paradigm that can provide elastic services to users around the world, offering a good opportunity to solve large-scale scientific problems with less effort. Application deployment remains an important issue in clouds. Appropriate scheduling mechanisms can shorten the total completion time of an application and therefore improve the quality of service (QoS) for cloud users. Unlike current scheduling algorithms, which mostly focus on single-task allocation, we propose a deadline-based scheduling approach for data-intensive applications in clouds. It does not simply treat the total completion time of an application as the sum of all its subtasks' completion times. Not only the computation capacity of the virtual machine (VM) is considered, but also the communication delay and data access latencies. Simulations show that our proposed approach has a decided advantage over two other algorithms.

11.

The mobile edge cloud has developed into the main platform for offering low-latency network services from the network edge to meet the stringent delay requirements of mobile applications. In mobile edge cloud networks, network functions virtualization (NFV) provides the framework for a new dynamic resource management structure that uses network resources effectively. This paper explores delay-tolerant NFV-enabled multicast request admission in a mobile edge-cloud network, aiming to limit request admission delays or maximize system performance for a group of requests arriving one by one. First, for the cost-reduction problem of a single NFV-enabled multicast request admission, the admission cost of each multicast request is assessed and a support-based graph is constructed. The multicast requests are then prioritized by their admission cost, and trust- and delay-based local gradients are assessed for the prioritized requests. Finally, delay-tolerant NFV multicasting is accomplished by First Come First Served (FCFS) queuing based on the assessed local gradients of the requests. Experimental results show that the proposed methodology is superior to existing approaches in terms of throughput, admission cost, and running time.


12.
李波  牛力  黄鑫  丁洪伟 《电子与信息学报》2020,42(11):2664-2670
Computation offloading in vehicular cloud computing suffers from high backhaul latency and heavy load on the remote cloud. Vehicular edge computing mitigates these problems to some extent by placing edge servers close to vehicle terminals and providing cloud services nearby. However, vehicle movement causes the communication environment to change dynamically, which increases task completion times. This paper therefore proposes MPOHS, a computation-offloading handover strategy for scenarios where the vehicle's mobility path is predictable: given a predictable path, a handover strategy based on minimum completion time is introduced to reduce the impact of vehicle mobility on computation offloading. Experimental results show that, compared with existing work, the proposed algorithm reduces the average task completion time while also reducing the number of handovers and the handover time overhead, effectively mitigating the effect of vehicle movement on computation offloading.

13.
To address the massive volume of power-grid data and its stringent processing-latency requirements in smart-grid environments, 5G edge computing is introduced into the smart grid system. The task scheduling problem for smart grids based on 5G edge computing is studied, minimizing cost while meeting grid task completion requirements. On this basis, a greedy heuristic task scheduling algorithm is proposed; comparisons with two other algorithms under parameters including the number of input tasks, transferred data size, and latency requirements validate the proposed algorithm.

14.
For the online dynamic radio-resource optimization problem based on network slicing in heterogeneous cloud radio access networks (H-CRAN), this paper builds a stochastic optimization model that jointly considers service admission control, congestion control, and resource allocation and reuse. The objective is to maximize the time-averaged network throughput, subject to constraints including base station (BS) transmit power, system stability, the quality-of-service (QoS) requirements of different slices, and resource allocation. A dynamic resource scheduling algorithm for network slices with joint congestion control and resource allocation is then proposed, which allocates resources in each scheduling slot to users in slices with differing performance requirements. Simulation results show that the algorithm improves overall network throughput while satisfying the QoS requirements of each slice's users and keeping the network stable, and that a dynamic trade-off between delay and throughput can be achieved by adjusting the value of a control parameter.

15.
李斌  徐天成 《电讯技术》2023,63(12):1894-1901
To tackle the offloading-decision problem for computation-intensive applications with task dependencies, a priority-based depth-first search scheduling strategy is proposed. Considering user energy constraints and mobility, a network model that combines downlink energy harvesting with uplink computation-task offloading is constructed, on top of which an end-to-end optimization objective is formulated. Combining task priorities and delay constraints, and exploiting the self-learning capability of deep reinforcement learning, the offloading decision problem is modeled as a Markov decision process, and a task-dependency-aware Dueling Double DQN (D3QN) algorithm is designed to solve it. Simulation results show that the proposed algorithm satisfies the delay requirements of more users than the other algorithms and reduces task execution delay by 9% to 10%.

16.
Cloud computing is a key frontier field of computer technology, and workflow task scheduling, a policy that maps tasks to appropriate resources for execution, plays an important role in it. Effective task scheduling is essential for obtaining high performance in a cloud environment. In this paper, we present a workflow task scheduling algorithm based on fuzzy clustering of resources, named FCBWTS. The major scheduling objective is to minimize the makespan of precedence-constrained applications, which can be modeled as directed acyclic graphs. In FCBWTS, the resource characteristics of cloud computing are considered: a group of characteristics describing the synthetic performance of processing units in the resource system is defined. Using these characteristics and the influence of the ready task on the critical-path execution time, the processing-unit network is pretreated by fuzzy clustering to achieve a reasonable partition of the processor network, which largely reduces the cost of deciding which processor executes the current task. Performance evaluation using both case data from the recent literature and randomly generated directed acyclic graphs shows that this algorithm outperforms the HEFT and DLS algorithms in both makespan and scheduling time. Copyright © 2014 John Wiley & Sons, Ltd.
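A generic DAG list-scheduling baseline in the spirit of HEFT (topological order, earliest-finish placement) can be sketched as follows; communication costs are ignored for brevity, and all names are illustrative:

```python
def list_schedule(costs, deps, n_proc=2):
    """Simplified list-scheduling sketch for a DAG: walk tasks in
    topological order and place each on the processor that yields the
    earliest finish time. `costs` maps task -> execution time; `deps`
    maps task -> list of predecessor tasks."""
    indeg = {t: 0 for t in costs}
    for u, ps in deps.items():
        indeg[u] = len(ps)
    frontier = [t for t in costs if indeg[t] == 0]
    order = []
    while frontier:                             # Kahn's topological sort
        t = frontier.pop(0)
        order.append(t)
        for u, ps in deps.items():
            if t in ps:
                indeg[u] -= 1
                if indeg[u] == 0:
                    frontier.append(u)
    proc_free = [0.0] * n_proc
    finish = {}
    for t in order:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        best = min(range(n_proc), key=lambda p: max(proc_free[p], ready))
        start = max(proc_free[best], ready)     # wait for CPU and predecessors
        finish[t] = start + costs[t]
        proc_free[best] = finish[t]
    return finish, max(finish.values())         # finish times and makespan

# Diamond DAG: a -> (b, c) -> d; b and c run in parallel on two processors.
finish, makespan = list_schedule(
    {"a": 2, "b": 3, "c": 2, "d": 1},
    {"b": ["a"], "c": ["a"], "d": ["b", "c"]},
)
```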

17.
Li  Weiwen  Mao  Jia  Chen  Qiang 《Wireless Personal Communications》2021,119(4):3053-3062

Based on a position-independent and computationally simple node scheduling algorithm, a scheduling algorithm based on energy balance is proposed. Analysis and simulation results show that the algorithm extends the lifespan of the entire network while ensuring energy balance. Data aggregation is a relatively time-consuming operation in sensor networks, especially high-density ones, so minimizing data-aggregation delay has become a hot research topic. The algorithm adopts a clustering scheme with low power within clusters and high power between clusters, combined with channel allocation to reduce data-aggregation delay, so that inter-cluster data aggregation can be performed without collisions. The number of channels used in different network topologies tends to be constant.


18.
Load imbalance among edge computing servers severely degrades service capability. This paper proposes a task scheduling strategy (RQ-AIP) for edge computing scenarios. First, the load-balance degree of the whole network is measured from the load distribution across servers, and a reinforcement learning method matches each task to a suitable edge server so as to satisfy the differentiated resource demands of sensor-node tasks. Then, a mapping between task delay and terminal transmit power is constructed to satisfy physical-domain constraints; combined with the social attributes of end users, suitable relay terminals are continually selected for tasks, achieving network load balancing through terminal-assisted scheduling. Simulation results show that, compared with other load-balancing strategies, the proposed strategy effectively relieves the load among edge servers and the traffic on the core network, and reduces task processing delay.
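The abstract does not specify how the load-balance degree is computed; as one assumed concrete choice, the coefficient of variation of server loads can serve, with a greedy placement standing in for the reinforcement-learning matcher:

```python
import statistics

def balance_degree(loads):
    """Illustrative load-balance metric: lower dispersion across edge
    servers means better balance. The coefficient of variation is an
    assumed concrete choice, not the paper's definition."""
    mean = statistics.fmean(loads)
    return statistics.pstdev(loads) / mean if mean else 0.0

def assign(loads, task_cost):
    """Place a task on the server that minimizes the resulting imbalance
    (a greedy simplification of the reinforcement-learning matching)."""
    best = min(
        range(len(loads)),
        key=lambda i: balance_degree(
            loads[:i] + [loads[i] + task_cost] + loads[i + 1:]
        ),
    )
    loads[best] += task_cost
    return best

loads = [5.0, 1.0, 3.0]
server = assign(loads, 2.0)   # the lightest server evens out the distribution
```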

19.
The unused slot remainder (USR) problem in Ethernet Passive Optical Networks and long-reach Passive Optical Networks (LR-PONs) results in both lower bandwidth utilization and greater packet delay. In a previous study by the current group, an Active intra-ONU Scheduling with predictive queue report scheme was proposed to resolve the USR problem by predicting the granted bandwidth in advance from the arrival-traffic estimates of the optical network units (ONUs). However, the high bandwidth-prediction error of that scheme prevented the network performance from improving. Accordingly, the present study proposes a non-predictive ONU scheduling method, designated Active intra-ONU Scheduling with proportional guaranteed bandwidth (ASPGB), to improve the performance of LR-PONs. In the proposed method, the maximum guaranteed bandwidth of each ONU is adapted dynamically according to the ratio of the ONU traffic load to the overall system load. Importantly, this dynamic bandwidth allocation approach reduces the dependence of network performance on granted-bandwidth prediction, since the maximum guaranteed bandwidth determined by the Optical Line Terminal more closely approaches the actual bandwidth demand of each ONU. To solve the idle-time problem arising from excess bandwidth reallocation, ASPGB is integrated with an improved early allocation (IEA) algorithm (a kind of Just-In-Time scheduling). The simulation results show that the IEA-ASPGB scheme outperforms previously published methods in terms of bandwidth utilization and average packet delay under both balanced and unbalanced traffic load conditions.
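The proportional guaranteed-bandwidth rule at the heart of ASPGB follows directly from its description; the sketch below uses illustrative function and variable names:

```python
def proportional_guarantee(total_bw, onu_loads):
    """Sketch of the proportional guaranteed-bandwidth idea in ASPGB: each
    ONU's maximum guaranteed bandwidth scales with its share of the overall
    offered load, so the guarantee tracks actual demand."""
    total_load = sum(onu_loads)
    if total_load == 0:
        return [total_bw / len(onu_loads)] * len(onu_loads)  # even split when idle
    return [total_bw * load / total_load for load in onu_loads]

# Three ONUs offering 10%, 30%, and 60% of the load split 1000 units of
# bandwidth in the same proportions.
grants = proportional_guarantee(1000.0, [10.0, 30.0, 60.0])
```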

20.
For the energy-saving mechanism with random delay in broadband wireless networks, an adaptive algorithm based on a dual-threshold and dynamic scheduling model is presented. First, to solve the demand-assignment problem of bandwidth allocation and improve overall system performance, best-effort and non-real-time polling service traffic are analyzed using a dynamic scheduling method. Then, an adaptive dual-threshold PSM for WiMAX is proposed, which not only tunes the trade-off to satisfy various QoS requirements but also adapts to the traffic. Finally, simulation results show that the mechanism performs better than the ideal assumption of Poisson arrivals.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号