Similar Literature
20 similar documents found
1.
Vehicular Cloud Computing (VCC) facilitates real-time execution of many emerging user and intelligent transportation system (ITS) applications by exploiting under-utilized on-board computing resources available in nearby vehicles. These applications have heterogeneous time criticality, i.e., they demand different Quality-of-Service levels. In addition, the mobility of the vehicles makes scheduling application tasks on vehicular computing resources a challenging problem. In this article, we formulate the task scheduling problem as a mixed integer linear programming (MILP) optimization that increases computation reliability while reducing job execution delay. Vehicular on-board units (OBUs), manufactured by different vendors, have different architectures and computing capabilities. We exploit the MapReduce computation model to address resource heterogeneity and to support computation parallelization. The performance of the proposed solution is evaluated in network simulator version 3 (ns-3) by running MapReduce applications in an urban road environment, and the results are compared with state-of-the-art works. The results show that the proposed task scheduling model achieves significant performance improvements in terms of reliability and job execution time.
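Since the abstract does not give the MILP itself, the following is a minimal sketch, under a toy setting, of what a reliability-maximizing, delay-bounded task-to-OBU assignment could look like; the vehicle pool, reliability figures, and delay budget are all illustrative, and PuLP is used only as a convenient MILP front end.

```python
# Sketch only: toy MILP assigning MapReduce map tasks to vehicle OBUs.
import pulp

tasks = ["m1", "m2", "m3"]                    # map tasks of one job (assumed)
vehicles = {                                  # hypothetical OBU pool:
    "v1": {"rel": 0.95, "delay": 2.0},        # reliability, per-task delay (s)
    "v2": {"rel": 0.80, "delay": 1.2},
    "v3": {"rel": 0.90, "delay": 1.5},
}
DELAY_BUDGET = 4.0                            # per-OBU delay bound (assumed)

prob = pulp.LpProblem("vcc_task_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (tasks, vehicles), cat="Binary")

# Maximize the summed reliability of the chosen task placements.
prob += pulp.lpSum(vehicles[v]["rel"] * x[t][v] for t in tasks for v in vehicles)
for t in tasks:                               # each task runs on one vehicle
    prob += pulp.lpSum(x[t][v] for v in vehicles) == 1
for v in vehicles:                            # serial execution per OBU
    prob += pulp.lpSum(vehicles[v]["delay"] * x[t][v] for t in tasks) <= DELAY_BUDGET

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    print(t, "->", next(v for v in vehicles if pulp.value(x[t][v]) > 0.5))
```

The paper's actual model also reduces job execution delay; a weighted delay term could be added to the same objective without changing the structure of the sketch.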

2.
李焱  郑亚松  李婧  朱春鸽  刘欣然 《电子学报》2017,45(10):2416-2424
In cloud environments, due to data locality or tasks' particular resource preferences, the tasks of a single job often need to run at different data-center sites; such jobs are called cross-domain jobs. The completion time of a cross-domain job is determined by its slowest task, i.e., a bucket effect exists. To address the overly long makespans of cross-domain jobs caused by unreasonable scheduling policies under heterogeneous per-domain resource capabilities, this paper proposes a heuristic scheduling method for cross-domain jobs, MIN-Max-Min, which preferentially executes the job with the shortest expected completion time. Experiments show that, compared with a first-come-first-served policy, the method reduces the average makespan of cross-domain jobs by more than 40%.
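As a rough illustration of the MIN-Max-Min idea (not the paper's implementation), the sketch below models a cross-domain job's expected makespan as the maximum over domains of remaining work divided by domain capacity, and dispatches the job with the smallest such value first; all job and capacity figures are invented.

```python
# Sketch: the "bucket effect" makes a job as slow as its slowest domain task.
def expected_makespan(job, capacity):
    # max over domains of (work in that domain) / (that domain's throughput)
    return max(load / capacity[dom] for dom, load in job["domain_load"].items())

def min_max_min(jobs, capacity):
    pending, order = list(jobs), []
    while pending:
        # pick the job whose worst-domain completion time is smallest
        best = min(pending, key=lambda j: expected_makespan(j, capacity))
        order.append(best["name"])
        pending.remove(best)
    return order

jobs = [
    {"name": "jobA", "domain_load": {"dc1": 40, "dc2": 10}},
    {"name": "jobB", "domain_load": {"dc1": 5,  "dc2": 8}},
    {"name": "jobC", "domain_load": {"dc1": 20, "dc2": 30}},
]
capacity = {"dc1": 10.0, "dc2": 5.0}   # hypothetical per-domain throughput
print(min_max_min(jobs, capacity))     # shortest expected jobs dispatched first
```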

3.
Mobile cloud computing combines wireless access service and cloud computing to improve the performance of mobile applications. It can balance application distribution between the mobile device and the cloud to achieve faster interactions, battery savings, and better resource utilization. To support mobile cloud computing, this paper proposes a phased scheduling model of the mobile cloud such that mobile device users experience lower interaction times and extended battery life. The phased scheduling optimization is decomposed into two subproblems: batch application optimization and job-level optimization for the mobile device. At the first stage, global mobile cloud scheduling allocates cloud resources to the mobile devices' batch applications. At the second stage, job-level optimization adjusts cloud resource usage to optimize the utility of a single mobile device application. In simulations, compared with other algorithms, the proposed phased scheduling algorithms achieve better performance with acceptable overhead.

4.
Cloud computing environments host enormous numbers of tasks, and effective task scheduling is the key to on-demand resource allocation for serving task requests more efficiently. This paper proposes an ant colony optimization based task scheduling algorithm that adapts to the dynamic characteristics of cloud environments and inherits the strengths of ant colony algorithms on NP-hard problems. The algorithm aims to reduce task completion time. Simulation experiments on the CloudSim platform show that the improved algorithm reduces average task completion time and effectively raises scheduling efficiency in cloud environments.
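The abstract does not detail the ant colony variant, so the following is a generic, toy ant-colony sketch of task-to-VM assignment minimizing makespan: ants sample assignments in proportion to pheromone times a speed heuristic, and the best-so-far assignment reinforces its trails. Task lengths, VM speeds, and all constants are assumptions.

```python
import random

TASK_LEN = [4, 7, 2, 9, 5]             # task lengths (illustrative)
VM_SPEED = [2.0, 1.0, 1.5]             # VM processing speeds (illustrative)
ANTS, ITERS, RHO, Q = 10, 50, 0.1, 1.0 # colony size, rounds, evaporation, deposit

def makespan(assign):
    load = [0.0] * len(VM_SPEED)
    for t, vm in enumerate(assign):
        load[vm] += TASK_LEN[t] / VM_SPEED[vm]
    return max(load)

pher = [[1.0] * len(VM_SPEED) for _ in TASK_LEN]   # pheromone[task][vm]
best, best_ms = None, float("inf")
for _ in range(ITERS):
    for _ in range(ANTS):
        assign = []
        for t in range(len(TASK_LEN)):
            # sampling weight ~ pheromone * heuristic (prefer faster VMs)
            w = [pher[t][v] * VM_SPEED[v] for v in range(len(VM_SPEED))]
            assign.append(random.choices(range(len(VM_SPEED)), weights=w)[0])
        ms = makespan(assign)
        if ms < best_ms:
            best, best_ms = assign, ms
    # evaporate all trails, then reinforce the best-so-far assignment
    pher = [[(1 - RHO) * p for p in row] for row in pher]
    for t, vm in enumerate(best):
        pher[t][vm] += Q / best_ms

print(best, round(best_ms, 2))
```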

5.
To address the slow resource scheduling that often arises in smart city management, a smart city management system built on a cloud computing platform is designed. The system monitors hardware devices through a cloud management module, constructs an objective function for cloud resource scheduling, and solves it with a cultural particle swarm algorithm to obtain a cloud resource scheduling plan. Test results show that the system enables effective smart city management and real-time monitoring of city conditions; during resource scheduling, task completion times are short and system utilization is high, maximizing resource use.

6.
童钊  肖正  李肯立 《电子学报》2016,44(7):1679-1688
Multi-user network applications are among the dominant forms of distributed computing, and task scheduling is the key to fully exploiting the computing resources of a distributed system. However, the uncertainty inherent in multi-user network applications leaves current scheduling methods lacking in dynamism, real-time behavior, and adaptability. Considering users' real-time requirements, this paper proposes a probabilistic scheduling approach. It treats task allocation as a probabilistic event, targets the shortest response time from the user's perspective, gives a queueing model for multi-user network applications, and further formulates scheduling as a nonlinear programming problem. Analysis shows that this method is limited with respect to the task arrival process and service rates, so an adaptive scheduling algorithm based on reinforcement learning is then proposed: it describes the scheduling problem as a Markov decision process (MDP) and learns the task arrival process and service rates online. Once task allocation probabilities are obtained, following them enables fast task scheduling. Experiments show that both algorithms achieve shorter average response times than four classical scheduling algorithms: Min-Min, Max-Min, Suffrage, and ECT. Beyond this, the stability of the probabilistic scheduling method is analyzed experimentally.
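A condensed sketch of the probabilistic-dispatch idea (not the paper's MDP/reinforcement-learning algorithm): per-server service rates are learned online from observed completions, and each arriving task is routed by sampling the resulting probabilities, so dispatch itself is fast once the probabilities are known. The learning rate and rate estimates are illustrative.

```python
import random

class ProbScheduler:
    def __init__(self, n_servers, alpha=0.2):
        self.mu = [1.0] * n_servers   # estimated service rates (tasks/s)
        self.alpha = alpha            # learning rate for online updates

    def observe(self, server, service_time):
        # online learning step: blend a new rate sample into the estimate
        self.mu[server] = (1 - self.alpha) * self.mu[server] + \
                          self.alpha * (1.0 / service_time)

    def dispatch(self):
        # allocation as a probabilistic event: faster servers drawn more often
        total = sum(self.mu)
        return random.choices(range(len(self.mu)),
                              weights=[m / total for m in self.mu])[0]

sched = ProbScheduler(3)
sched.observe(0, 0.5)   # server 0 looks fast (2 tasks/s)
sched.observe(2, 4.0)   # server 2 looks slow (0.25 tasks/s)
print([sched.dispatch() for _ in range(10)])
```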

7.
With the expansion of cloud-based applications, business service providers must decide how to provision resources to serve customers. In cloud download, the service provider faces a great challenge in determining when to process a job and when to notify the user, given the conflicting objectives of service profit maximization and user satisfaction maximization. An economic model is proposed to find job and notification scheduling strategies that trade off user satisfaction against system profit. In this model, two optimization problems are formulated: profit maximization under a satisfaction target, and satisfaction optimization under a profit bound. By solving these optimization problems, optimal scheduling strategies are obtained under various quality-of-service requirements. It is demonstrated that the proposed strategy obtains better performance than a system without the intelligent scheduling strategy.
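To make the first formulation concrete, here is a toy rendering of profit maximization under a satisfaction target; the profit and satisfaction curves as functions of notification delay are invented stand-ins, not the paper's economic model.

```python
# Sketch: longer batching before notification raises profit but hurts users.
def profit(delay):          # assumed: larger batches amortize processing cost
    return 10 - 8 / (1 + delay)

def satisfaction(delay):    # assumed: users prefer prompt notification
    return 1 / (1 + 0.5 * delay)

TARGET = 0.4                # satisfaction floor (assumed)
candidates = [d / 10 for d in range(0, 101)]   # delays 0..10 in 0.1 steps
feasible = [d for d in candidates if satisfaction(d) >= TARGET]
best = max(feasible, key=profit)
print(best, round(profit(best), 3), round(satisfaction(best), 3))
```

The dual problem (satisfaction optimization under a profit bound) is the same search with the roles of objective and constraint swapped.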

8.
In this paper, we consider the problem of scheduling lightpaths and computing resources for sliding grid demands in WDM networks. Each sliding grid demand is represented by a tuple (v, R, c, p, q, l), where v is the client node, R is the resource group comprising a set of predefined resource nodes, c is the required amount of computing resources, [p, q] is the time window, and l is the demand duration. For each demand, the scheduling algorithm must decide the start time t (p ≤ t ≤ q − l), reserve an amount c of computing resources at a resource node v′ ∈ R, and provision a primary lightpath as well as a backup lightpath from v′ to v. The reserved computing resources and lightpaths are used during [t, t + l]. Unlike the sequential approach, wherein the start time, the network resources (lightpaths), and the computing resources are considered one after another, in our work we use a joint scheduling approach wherein the resources and the start time are examined jointly. We consider sliding demands with static and dynamic arrival patterns. We develop an integer linear programming (ILP) formulation to obtain optimal results. For scalability, we propose heuristic algorithms based on joint resource scheduling and study their effectiveness through simulation experiments.
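The joint-scheduling idea can be sketched directly from the demand tuple: candidate start times and resource nodes are examined together rather than sequentially, with lightpath provisioning reduced to a boolean predicate for brevity. The capacities and the toy instance below are assumptions.

```python
# Sketch: jointly pick start time t and resource node v' for one demand.
def schedule(demand, free_units, paths_ok):
    v, R, c, p, q, l = demand
    for t in range(p, q - l + 1):               # candidate start times
        for v_prime in R:                       # candidate resource nodes
            capacity_ok = all(free_units[v_prime][s] >= c
                              for s in range(t, t + l))
            if capacity_ok and paths_ok(v_prime, v, t, l):
                return t, v_prime               # joint decision found
    return None                                 # demand blocked

# Toy instance: 2 resource nodes, 10 time slots, 5 free units everywhere.
free = {"r1": [5] * 10, "r2": [5] * 10}
demand = ("client", ["r1", "r2"], 3, 2, 8, 3)   # (v, R, c, p, q, l)
print(schedule(demand, free, lambda src, dst, t, l: True))
```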

9.
Cloud computing emerges as a new computing pattern that can provide elastic services to users around the world. It provides a good opportunity to solve large-scale scientific problems with less effort. Application deployment remains an important issue in clouds. Appropriate scheduling mechanisms can shorten the total completion time of an application and therefore improve the quality of service (QoS) for cloud users. Unlike current scheduling algorithms, which mostly focus on single task allocation, we propose a deadline-based scheduling approach for data-intensive applications in clouds. It does not simply treat the total completion time of an application as the sum of its subtasks' completion times. Not only the computation capacity of the virtual machines (VMs) but also the communication delays and data access latencies are taken into account. Simulations show that our proposed approach has a decided advantage over two other algorithms.
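A minimal sketch of the completion-time model implied above (all figures hypothetical): a subtask's finish time on a VM combines network delay, data transfer, and computation, and the scheduler picks the earliest-finishing VM that still meets the deadline.

```python
# Sketch: finish time = network delay + data transfer + compute time.
def finish_time(task, vm):
    transfer = task["input_mb"] / vm["bw_mbps"]   # data access and transfer
    compute = task["mi"] / vm["mips"]             # computation on the VM
    return vm["net_delay"] + transfer + compute

def pick_vm(task, vms, deadline):
    feasible = [vm for vm in vms if finish_time(task, vm) <= deadline]
    return min(feasible, key=lambda vm: finish_time(task, vm), default=None)

vms = [   # hypothetical VM pool
    {"name": "vm1", "mips": 1000, "bw_mbps": 100, "net_delay": 0.05},
    {"name": "vm2", "mips": 2500, "bw_mbps": 40,  "net_delay": 0.20},
]
task = {"input_mb": 200, "mi": 5000}
best = pick_vm(task, vms, deadline=8.0)
print(best["name"] if best else "deadline infeasible")
```

Note how vm2 is faster computationally but loses to vm1 here because its lower bandwidth dominates the data-intensive transfer, which is exactly why the sum-of-subtasks view is insufficient.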

10.
This research work considers a cloud computing job-shop scheduling scenario. We consider m real-time jobs of various lengths and n machines with different computational speeds and costs. Each job has a deadline to be met, and the profit of processing a packet differs from job to job. Moreover, deadlines are either hard or soft, and a penalty is applied if a deadline is missed, where the penalty is an exponential function of time. The scheduling problem is formulated as a mixed integer non-linear programming problem whose objective is to maximize net profit. The formulated problem is computationally hard and not solvable in deterministic polynomial time. This work proposes the Tube-tap algorithm as a solution to this scheduling optimization problem. Extensive simulation shows that the proposed algorithm outperforms existing solutions in terms of maximizing net profit and preserving deadlines.
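An illustrative net-profit function matching this description follows; the penalty shape, constants, and job data are assumptions, not the paper's exact MINLP.

```python
import math

# Sketch: soft misses pay an exponentially growing tardiness penalty;
# hard misses forfeit the job's profit entirely.
def net_profit(jobs, finish):
    total = 0.0
    for j in jobs:
        late = max(0.0, finish[j["id"]] - j["deadline"])
        if j["hard"] and late > 0:
            continue                                       # hard miss: no profit
        penalty = j["k"] * (math.exp(j["a"] * late) - 1)   # soft-miss penalty
        total += j["profit"] - j["cost"] - penalty
    return total

jobs = [
    {"id": "j1", "profit": 10, "cost": 2, "deadline": 5, "hard": True,
     "k": 1.0, "a": 0.5},
    {"id": "j2", "profit": 8,  "cost": 1, "deadline": 4, "hard": False,
     "k": 1.0, "a": 0.5},
]
print(net_profit(jobs, {"j1": 4.0, "j2": 6.0}))   # j2 is 2 units late
```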

11.
Cloud computing is a key frontier of current computer technology worldwide, and workflow task scheduling, a policy that maps tasks to appropriate resources for execution, plays an important part in it. Effective task scheduling is essential for obtaining high performance in cloud environments. In this paper, we present a workflow task scheduling algorithm based on fuzzy clustering of resources, named FCBWTS. The major objective is to minimize the makespan of precedence-constrained applications, which can be modeled as directed acyclic graphs. In FCBWTS, the resource characteristics of cloud computing are considered: a group of characteristics describing the synthetic performance of processing units in the resource system is defined. Using these characteristics and the execution-time influence of the ready task on the critical path, the processing unit network is pretreated by fuzzy clustering to achieve a reasonable partition of the processor network, which largely reduces the cost of deciding which processor executes the current task. Performance evaluation using both case data from the recent literature and randomly generated directed acyclic graphs shows that this algorithm outperforms the HEFT and DLS algorithms in both makespan and scheduling time consumed. Copyright © 2014 John Wiley & Sons, Ltd.
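Since the paper's clustering features are not given here, the sketch below runs a standard fuzzy c-means over a single invented "synthetic performance" score per processing unit, mirroring the idea of pre-partitioning the processor network before deciding task placement.

```python
# Sketch: two-cluster fuzzy c-means over 1-D processor performance scores.
def fcm(scores, m=2.0, iters=50):
    centers = [min(scores), max(scores)]        # two clusters: slow vs. fast
    for _ in range(iters):
        u = []                                  # memberships (standard FCM update)
        for x in scores:
            dists = [abs(x - ck) or 1e-9 for ck in centers]
            u.append([1.0 / sum((dj / dk) ** (2 / (m - 1)) for dk in dists)
                      for dj in dists])
        # recompute centers as membership-weighted means
        centers = [sum(u[i][j] ** m * scores[i] for i in range(len(scores)))
                   / sum(u[i][j] ** m for i in range(len(scores)))
                   for j in range(len(centers))]
    return centers, u

scores = [9.5, 8.7, 2.1, 9.1, 1.8, 2.5]   # per-PU synthetic performance (assumed)
centers, u = fcm(scores)
labels = [max((0, 1), key=lambda j: u[i][j]) for i in range(len(scores))]
print([round(c, 2) for c in centers], labels)   # fast vs. slow partitions
```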

12.
The static provisioning problem in wavelength-routed optical networks has been studied for many years. However, service providers still face challenges arising from the special requirements of provisioning services at the optical layer. In this paper, we incorporate some realistic constraints into the static provisioning problem and formulate it under different network resource availability conditions. We consider three classes of shared risk link group (SRLG)-diverse path protection schemes: dedicated, shared, and unprotected. We associate with each connection request a lightpath length constraint and a revenue value. When the network resources are not sufficient to accommodate all connection requests, the static provisioning problem is formulated as a revenue maximization problem, whose objective is to maximize the total revenue value. When the network has sufficient resources, the problem becomes a capacity minimization problem with the objective of minimizing the number of used wavelength-links. We provide integer linear programming (ILP) formulations for these problems. Because solving these ILP problems is extremely time-consuming, we propose a tabu search heuristic to solve them within a reasonable amount of time. We also develop a rerouting optimization heuristic based on previous work. Experimental results compare the solutions obtained by the two heuristics; for both problems, the tabu search heuristic outperforms the rerouting optimization heuristic.

13.
We consider the problem of collecting a large amount of data from several different hosts to a single destination in a wide-area network. This problem is important because data collection time is crucial to the performance of many applications, such as wide-area uploads, high-performance computing, and data mining. Often, due to congestion, the paths chosen by the network may have poor throughput; by choosing an alternate route at the application level, we may be able to obtain substantially faster completion times. This data collection problem is nontrivial because the issue is not only to avoid congested links but also to devise a coordinated transfer schedule that affords maximum possible utilization of available network resources. Our approach for computing coordinated data collection schedules makes no assumptions about knowledge of the network topology or the capacity available on individual links. It provides significant performance improvements under various degrees and types of network congestion. To show this, we give a comprehensive comparison study of approaches to the data collection problem, considering the performance, robustness, and adaptation characteristics of the different methods. Adaptation to network conditions is important because the above applications are long-lasting, i.e., network conditions are likely to change during the data transfer process. In general, our approach can be used to solve arbitrary data movement problems over the Internet. We use the Bistro platform to illustrate one application of our techniques.

14.
In grid computing, a key issue is how limited network resources can be shared more effectively among the communications of various applications in order to improve application-level performance, e.g., by reducing the completion time of an individual application and/or a set of applications. Communication by one application changes the condition of the network resources, which may in turn affect communications by other applications and thus degrade their performance. In this paper, we examine the characteristics of traffic generated by typical grid applications and the effect of round-trip time and bottleneck bandwidth on the application-level performance (i.e., completion time) of these applications. Our experiments showed that the impact of network conditions on application performance, and the impact of application traffic on network conditions, differed considerably depending on the application. These results suggest that effective allocation of network resources must take into account the network-related properties of individual applications.

15.
Interactive multimedia applications such as peer-to-peer (P2P) video services over the Internet have gained increasing popularity during the past few years. However, the adopted Internet-based P2P overlay network architecture hides the underlying network topology and assumes that channel quality is always perfect. Because of the time-varying nature of wireless channels, this hardly meets user-perceived video quality requirements in wireless environments. Considering the tightly coupled relationship between P2P overlay networks and the underlying networks, we propose a distributed utility-based scheduling algorithm, built on a quality-driven cross-layer design framework, that jointly optimizes parameters across network layers to achieve highly improved video quality for P2P video streaming services in wireless networks. In this paper, the quality-driven P2P scheduling algorithm is formulated as a distributed utility-based distortion-delay optimization problem, where the expected video distortion is minimized under the constraint of a given packet playback deadline, selecting the optimal combination of system parameters residing in different network layers. Specifically, encoding behavior, network congestion, Automatic Repeat reQuest (ARQ), and modulation and coding are jointly considered. We then provide an algorithmic solution to the formulated problem. The distributed optimization running on each peer node greatly reduces the computational intensity. Extensive experimental results also demonstrate 4–14 dB quality enhancement in terms of peak signal-to-noise ratio when using the proposed scheduling algorithm. Copyright © 2015 John Wiley & Sons, Ltd.

16.
In this paper, we propose a nonlocal low-rank matrix completion method using edge detection and neural networks to effectively exploit nonlocal inter-pixel correlation for image interpolation and other possible applications. We first interpolate the images using basic techniques, such as bilinear and edge-directed methods. Then, each image patch is categorized as a smooth, edge, or texture region, and an adaptive interpolating mechanism is applied to each specific type of region. Finally, for each region type, neural networks and low-rank matrix completion are employed to accurately refine the results. An iteratively re-weighted minimization algorithm is used to solve the low-rank energy minimization problem. Our experiments on benchmark images clearly indicate that the proposed method produces much better results than some existing algorithms across a variety of image quality metrics, in terms of both objective and subjective image quality assessment.

17.
In recent years, Docker container technology has been adopted in cloud computing at an explosive pace, and the scheduling of Docker container resources has gradually become a research hotspot. Existing big data computing and storage platforms rely on traditional virtual machine technology, which often results in low resource utilization and long times for elastic scaling and cluster expansion. In this paper, we propose an improved container scheduling algorithm for big data applications named Kubernetes-based particle swarm optimization (K-PSO). Experimental results show that the proposed K-PSO algorithm converges faster than the basic PSO algorithm, and its running time is cut roughly in half. The K-PSO container scheduling algorithm is implemented and evaluated in the Kubernetes container cloud system. Our experimental results show that the node resource utilization rate of the improved scheduling strategy based on the K-PSO algorithm is about 20% higher than that of the Kube-scheduler default strategy, the balanced QoS priority strategy, the ESS strategy, and the PSO strategy, while the average I/O performance and average computing performance of the Hadoop cluster are not degraded.
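A compact particle-swarm sketch of container-to-node placement in the spirit of the approach above (not the K-PSO implementation): each particle encodes one placement, and the swarm minimizes the spread of node utilization. Container requests, node capacities, and PSO constants are assumptions.

```python
import random, statistics

CPU_REQ = [2, 4, 1, 3, 2, 5]          # container CPU requests (assumed)
NODE_CAP = [8.0, 8.0, 16.0]           # node capacities (assumed)
N, P, ITERS, W, C1, C2 = len(CPU_REQ), 20, 100, 0.7, 1.4, 1.4

def fitness(pos):
    # decode continuous position to node indices, measure utilization spread
    util = [0.0] * len(NODE_CAP)
    for c, x in enumerate(pos):
        util[int(x) % len(NODE_CAP)] += CPU_REQ[c]
    return statistics.pstdev([u / cap for u, cap in zip(util, NODE_CAP)])

swarm = [[random.uniform(0, len(NODE_CAP)) for _ in range(N)] for _ in range(P)]
vel = [[0.0] * N for _ in range(P)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=fitness)[:]
for _ in range(ITERS):
    for i, pos in enumerate(swarm):
        for d in range(N):            # classic velocity/position update
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[d])
                         + C2 * random.random() * (gbest[d] - pos[d]))
            pos[d] = (pos[d] + vel[i][d]) % len(NODE_CAP)
        if fitness(pos) < fitness(pbest[i]):
            pbest[i] = pos[:]
        if fitness(pos) < fitness(gbest):
            gbest = pos[:]
print([int(x) % len(NODE_CAP) for x in gbest], round(fitness(gbest), 4))
```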

18.
Cloud-native orchestration systems, typified by Kubernetes, are widely used by cloud tenants in multi-cloud environments, and the accompanying network observability problems have become increasingly prominent, especially the cost of cross-cloud, cross-region network traffic. This work introduces extended Berkeley Packet Filter (eBPF) technology into Kubernetes to collect kernel-level network traffic features and address the observability problem, then models these features as a quadratic assignment problem (QAP) and combines heuristic search with random search to obtain near-optimal solutions under real-time computation constraints. In network resource cost optimization, this model outperforms the compute-only scheduling policy of the native Kubernetes scheduler; with a controlled increase in scheduling pipeline complexity, it effectively reduces network resource costs in multi-cloud, multi-region deployments.
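A schematic sketch of the QAP view described above: placement cost sums, over every service pair, traffic volume times the network price between their zones, and a random initial placement is refined by random reassignments as a simplified stand-in for the heuristic-plus-random-search combination. The traffic matrix and inter-zone prices are invented.

```python
import random

TRAFFIC = {(0, 1): 50, (1, 2): 30, (0, 2): 5}   # MB/s between services (assumed)
ZONE_COST = [[0.0, 0.02, 0.08],                 # $ per MB between zones (assumed)
             [0.02, 0.0, 0.05],
             [0.08, 0.05, 0.0]]

def cost(placement):
    # QAP objective: flow volume weighted by inter-zone distance/price
    return sum(vol * ZONE_COST[placement[a]][placement[b]]
               for (a, b), vol in TRAFFIC.items())

def search(n_services, n_zones, iters=1000):
    best = [random.randrange(n_zones) for _ in range(n_services)]
    for _ in range(iters):                      # random search: mutate a copy
        cand = best[:]
        cand[random.randrange(n_services)] = random.randrange(n_zones)
        if cost(cand) < cost(best):
            best = cand
    return best

placement = search(3, 3)
print(placement, round(cost(placement), 2))     # tends to co-locate chatty services
```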

19.
Software-defined networking (SDN) facilitates network programmability through a central controller, which dynamically modifies the network configuration to adapt to changes in the network. In SDN, the controller updates the network configuration through flow updates, i.e., installing flow rules in network devices. However, during a network update, improper scheduling of flow updates can lead to a number of problems, including overflowing the switch flow table memory and the link bandwidth. Another challenge is minimizing the update completion time during large network updates triggered by events such as traffic engineering path updates. Existing centralized approaches do not search the solution space for flow update schedules with optimal completion time. We propose a hybrid genetic algorithm-based flow update scheduling method, the GA-Flow Scheduler. By searching the solution space, the GA-Flow Scheduler attempts to minimize the completion time of the network update without overflowing the flow table memory of the switches or the link bandwidth. It can be used in combination with other flow scheduling methods to improve network performance and reduce flow update completion time. In this paper, the GA-Flow Scheduler is combined with a stand-alone method called the three-step method. Through large-scale experiments, we show that the proposed hybrid approach reduces network update time and packet loss, providing improved performance over the stand-alone three-step method while handling the above-mentioned network update problems in SDN.
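A toy genetic-algorithm sketch in the spirit of the GA-Flow Scheduler (encoding, penalty, and GA constants are assumptions; the link-bandwidth constraint is omitted for brevity): a chromosome assigns each flow update to an update round, and fitness counts completion rounds plus a large penalty whenever the rules installed at a switch in one round exceed its free table slots.

```python
import random

UPDATES = [("s1", 1), ("s1", 1), ("s2", 1), ("s1", 1), ("s2", 1)]  # (switch, rules)
FREE_SLOTS = {"s1": 2, "s2": 2}     # free table entries per switch per round
ROUNDS, POP, GENS = 4, 30, 60

def fitness(chrom):                 # lower is better
    used = {}                       # (round, switch) -> rules installed
    for (sw, n), r in zip(UPDATES, chrom):
        used[(r, sw)] = used.get((r, sw), 0) + n
    overflow = sum(max(0, n - FREE_SLOTS[sw]) for (r, sw), n in used.items())
    return max(chrom) + 1 + 100 * overflow   # completion rounds + penalty

def evolve():
    pop = [[random.randrange(ROUNDS) for _ in UPDATES] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        elite = pop[:POP // 2]                # keep the better half
        children = []
        for _ in range(POP - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(len(UPDATES))          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                     # mutation
                child[random.randrange(len(UPDATES))] = random.randrange(ROUNDS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best, "rounds used:", max(best) + 1)
```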

20.
Cloud data centers have become overwhelmed with data-intensive applications due to the limited computational capabilities of mobile terminals. Mobile edge computing is emerging as a potential paradigm to host application execution at the edge of networks, reducing transmission delays. Compute nodes are usually distributed in edge environments, making efficient task scheduling among them crucial to reducing processing time. Moreover, it is imperative to conserve edge server energy, extending their lifetimes. To this end, this paper proposes a novel task scheduling algorithm named Energy-aware Double-fitness Particle Swarm Optimization (EA-DFPSO), based on an improved particle swarm optimization algorithm, for achieving energy efficiency in an edge computing environment along with minimal task execution time. The proposed EA-DFPSO algorithm applies a dual fitness function to search for an optimal task-scheduling scheme that saves edge server energy while maintaining service quality for tasks. Extensive experimentation demonstrates that EA-DFPSO outperforms existing traditional scheduling algorithms, reducing task completion time and conserving energy in an edge computing environment.
