Similar Literature
20 similar documents found.
1.
To address the inefficiency caused by limited security resources and uneven security resource allocation when multiple tenants use security services in an Infrastructure as a Service (IaaS) environment, a tenant security resource scheduling framework is proposed. First, based on the max-min fairness algorithm and the scheduling ideas of Fair Scheduler, each tenant is given a minimum share and a resource demand attribute. Then, a security service resource allocation algorithm satisfies tenants' resource demands as fairly as possible while guaranteeing that each tenant's minimum share is met. Finally, combined with intra-tenant task scheduling and an inter-tenant resource preemption algorithm, the tenant security service scheduling framework is implemented. Experimental results show that, under random resource allocation conditions, the security service resource allocation algorithm clearly improves resource utilization and job efficiency compared with traditional allocation algorithms, and the framework effectively handles allocation and preemption of multi-tenant security resources.
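The combination of a guaranteed minimum share with max-min fair distribution of the remainder can be illustrated with a short sketch. This is not the paper's implementation; the tenant names, capacity figure, and two-phase structure below are illustrative assumptions.

```python
# Hypothetical sketch: max-min fair allocation with a guaranteed minimum share
# per tenant (in the spirit of Fair Scheduler), not the paper's actual algorithm.

def allocate(capacity, tenants):
    """tenants: dict name -> {'min_share': x, 'demand': y}; returns name -> grant."""
    grant = {}
    # Phase 1: satisfy each tenant's minimum share (capped by its demand).
    for name, t in tenants.items():
        grant[name] = min(t['min_share'], t['demand'])
    remaining = capacity - sum(grant.values())
    # Phase 2: distribute what is left max-min fairly among unsatisfied tenants.
    pending = {n for n, t in tenants.items() if grant[n] < t['demand']}
    while remaining > 1e-9 and pending:
        share = remaining / len(pending)
        for name in list(pending):
            need = tenants[name]['demand'] - grant[name]
            give = min(share, need)
            grant[name] += give
            remaining -= give
            if need <= share:       # demand fully met, drop from pending
                pending.remove(name)
    return grant

if __name__ == "__main__":
    demo = {'tenantA': {'min_share': 20, 'demand': 50},
            'tenantB': {'min_share': 10, 'demand': 15},
            'tenantC': {'min_share': 30, 'demand': 80}}
    print(allocate(100, demo))   # {'tenantA': 37.5, 'tenantB': 15, 'tenantC': 47.5}
```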

2.
Research on Service Scheduling and Resource Scheduling in Cloud Computing Environments
Service scheduling and resource scheduling have a significant impact on the performance of cloud computing. Based on an analysis of existing cloud scheduling models, and targeting the data-intensive and compute-intensive characteristics of cloud computing, a hierarchical scheduling strategy is proposed to implement service and resource scheduling in the cloud. The strategy partitions tasks to determine job priorities, and allocates resources according to data locality and the overall task completion rate. In the numerical evaluation, hierarchical scheduling is compared with existing scheduling approaches. Experimental results show that the adopted scheduling effectively improves resource utilization and provides ideas for further research on cloud services.
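As a rough illustration of allocating by priority and data locality, the sketch below prefers nodes that already hold a task's input data and otherwise falls back to the least-loaded node; the node/task fields and the placement rule are assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: priority ordering plus locality-aware placement.
# Tasks carry a priority and the set of nodes holding their input data.

def schedule(tasks, nodes):
    """tasks: list of dicts with 'id', 'priority', 'data_nodes';
    nodes: dict node -> current load. Returns list of (task_id, node)."""
    placements = []
    for task in sorted(tasks, key=lambda t: t['priority'], reverse=True):
        local = [n for n in task['data_nodes'] if n in nodes]
        # Prefer a data-local node; otherwise pick the least-loaded node.
        candidates = local if local else list(nodes)
        chosen = min(candidates, key=lambda n: nodes[n])
        nodes[chosen] += 1
        placements.append((task['id'], chosen))
    return placements

if __name__ == "__main__":
    tasks = [{'id': 't1', 'priority': 2, 'data_nodes': {'n1'}},
             {'id': 't2', 'priority': 5, 'data_nodes': {'n2', 'n3'}},
             {'id': 't3', 'priority': 1, 'data_nodes': set()}]
    print(schedule(tasks, {'n1': 0, 'n2': 1, 'n3': 0}))
```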

3.
王留洋  俞扬信  周淮 《计算机应用》2012,32(12):3291-3294
As network data transmission speeds and complexity keep increasing, network management is becoming more difficult. To address this, an intelligent multi-agent model for virtual resources is proposed. The processing flow of the intelligent multi-agent model is described, and the processing mechanisms of the different agents are discussed. By analyzing user context and system state, social media resources can be analyzed in real time. According to the usage type of the virtual resources, the demand implied by the user context information is analyzed and inferred, and resources are allocated to users automatically. The model is evaluated with a dynamic virtual resource scheduling method for cloud computing and the MovieLens system. The results show that the proposed model performs well: it realizes dynamic scheduling of virtual resources, achieves load balancing dynamically, and makes efficient use of virtual resources in cloud computing.

4.
A grid resource allocation strategy oriented toward task runtime prediction and fault awareness (Fault-Aware) is proposed and described. It takes a proactive fault-tolerance approach, trying to avoid errors or anomalies before a resource actually fails. The strategy combines prediction of task runtime in the grid with prediction of resource uptime, and achieves higher resource utilization than ordinary scheduling strategies. The fault-aware scheduling is implemented in the CoBRA grid middleware, and the functionality of the modules implementing the strategy is described. In the tests, a sleep-task technique is used and experiments are run under four different scenarios, comparing the fault-aware resource allocation with an ordinary FCFS scheduling strategy. The results show that, under variable resource availability, the system can speed up the overall execution time of applications with very small deviation.
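The core idea, matching a task's predicted runtime against each resource's predicted remaining uptime, can be sketched as follows; the prediction inputs and the safety margin are assumed for illustration and are not taken from the CoBRA implementation.

```python
# Hypothetical sketch: fault-aware placement that only uses resources whose
# predicted remaining uptime comfortably exceeds the task's predicted runtime.

SAFETY_MARGIN = 1.2  # assumed slack factor on top of the runtime prediction

def pick_resource(predicted_runtime, resources):
    """resources: dict name -> {'uptime_left': s, 'load': n}.
    Returns the least-loaded resource expected to stay up long enough."""
    safe = {name: r for name, r in resources.items()
            if r['uptime_left'] >= SAFETY_MARGIN * predicted_runtime}
    if not safe:
        return None  # no resource is expected to survive the task; defer it
    return min(safe, key=lambda name: safe[name]['load'])

if __name__ == "__main__":
    pool = {'r1': {'uptime_left': 500, 'load': 3},
            'r2': {'uptime_left': 90,  'load': 0},
            'r3': {'uptime_left': 400, 'load': 1}}
    print(pick_resource(120, pool))  # 'r3': r2 is likely to fail mid-run
```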

5.
With the maturation of grid computing facilities and the recent explosion of cloud computing data centers, midscale computational science has more options than ever before to satisfy its computational needs. But heterogeneity brings complexity. We propose a simple abstraction for interacting with heterogeneous resource managers spanning grid and cloud computing, and focus on features that make the tool useful for the midscale physical or natural scientist. Key strengths of the abstraction are its support for multiple standard job specification languages, its preservation of direct user interaction with the service (removing the delay that can come from layers of services), and its predictable behavior under heavy loads. Copyright © 2012 John Wiley & Sons, Ltd.
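A uniform resource-manager abstraction of the kind described might look like the sketch below; the class and method names are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch: one interface over heterogeneous resource managers,
# so a grid back end and a cloud back end are driven the same way.

from abc import ABC, abstractmethod

class ResourceManager(ABC):
    @abstractmethod
    def submit(self, job_spec: dict) -> str:
        """Submit a job described in a standard specification; return a job id."""

    @abstractmethod
    def status(self, job_id: str) -> str:
        """Return 'queued', 'running', or 'done'."""

class LocalBackend(ResourceManager):
    """Toy back end standing in for a real grid or cloud resource manager."""
    def __init__(self):
        self._jobs = {}

    def submit(self, job_spec):
        job_id = f"job-{len(self._jobs) + 1}"
        self._jobs[job_id] = 'queued'
        return job_id

    def status(self, job_id):
        return self._jobs.get(job_id, 'unknown')

if __name__ == "__main__":
    rm: ResourceManager = LocalBackend()
    jid = rm.submit({'executable': '/bin/hostname', 'cores': 1})
    print(jid, rm.status(jid))
```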

6.
Existing research on multi-DAG scheduling has mainly proposed solutions for minimizing completion time, maximizing fairness, and maximizing throughput when multiple DAGs share resources; however, existing methods still cannot adequately solve the resource allocation optimization problem of multi-DAG shared-resource scheduling in cloud computing environments. To this end, the relationship between the number of DAGs, the distribution of their structural properties, and their resource requirements is first analyzed for a set of DAGs sharing cloud resources. On this basis, an evolutionary algorithm, EFRD, based on a mutation method guided by predicted resource demand intensity, is proposed. It effectively solves the resource allocation optimization problem of multi-DAG shared-resource scheduling in the cloud, both minimizing the scheduling and execution time of the DAGs and avoiding resource waste. Experiments show that the EFRD algorithm converges effectively to the optimal solution.

7.
Today, in an energy-aware society, job scheduling is becoming an important task for computer engineers and system analysts, one that may lead to a performance-per-Watt trade-off in computing infrastructures. Thus, new algorithms, and a simulator of computing environments, may help information and communications technology (ICT) and data center managers to make decisions with a solid experimental basis. There are several simulators that try to address performance and, to some extent, estimate energy consumption, but there are none in which the energy model is based on benchmark data that have been countersigned by independent bodies such as the Standard Performance Evaluation Corporation. This is the reason why we have implemented a performance and energy-aware scheduling (PEAS) simulator for high-performance computing. Furthermore, to evaluate the simulator, we propose an implementation of the non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multiobjective genetic algorithm, for resource selection. With the help of the PEAS simulator, we have studied whether it is possible to provide an intelligent job allocation policy able to save energy and time without compromising performance. The results of our simulations show a great improvement in response time and power consumption. In most cases, NSGA-II performs better than other 'intelligent' algorithms such as multiobjective heterogeneous earliest finish time and clearly outperforms the first-fit algorithm. We demonstrate the usefulness of the simulator for this type of study and conclude that the superior behavior of multiobjective algorithms makes them recommended for use in modern scheduling systems. Copyright © 2015 John Wiley & Sons, Ltd.
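NSGA-II rests on non-dominated sorting of candidate schedules; a minimal Pareto-front extraction for two objectives to be minimized (say, completion time and energy) is sketched below as an assumed illustration, not the PEAS or NSGA-II code itself.

```python
# Hypothetical sketch: extract the Pareto front of candidate schedules scored
# on two objectives to be minimized (e.g., completion time and energy).

def dominates(a, b):
    """True if schedule a is no worse than b on both objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: list of (time, energy) tuples; returns the non-dominated ones."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

if __name__ == "__main__":
    scores = [(120, 300), (100, 350), (150, 250), (110, 340), (160, 400)]
    print(pareto_front(scores))  # (160, 400) is dominated; the rest trade off
```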

8.
Loosely coordinated (implicit/dynamic) coscheduling is a time-sharing approach that originates from network-of-workstations environments with mixed parallel/serial workloads and limited software support. It is meant to be an easy-to-implement and scalable approach. Considering that the percentage of clusters in parallel computing is increasing and easily portable software is needed, loosely coordinated coscheduling becomes an attractive approach for dedicated machines. Loose coordination offers attractive features as a dynamic approach. Static approaches for local job scheduling assign resources exclusively and non-preemptively; such approaches still fall short of desirable resource utilization and average response times. Conversely, approaches for dynamic scheduling of jobs can preempt resources and/or adapt their allocation, and they typically provide better resource utilization and response times. Existing dynamic approaches are full preemption with checkpointing, dynamic adaptation of node/CPU allocation, and time sharing via gang or loosely coordinated coscheduling. This survey presents and compares the different approaches, focusing in particular on the less well-explored loosely coordinated time sharing. The discussion concentrates on the implementation problems, in terms of modification of standard operating systems, the runtime system, and the communication libraries. Copyright © 2005 John Wiley & Sons, Ltd.

9.
10.
In the last few years, remarkable efforts have been made to extend the Grid paradigm to commercial solutions. Business-oriented grids call for effective Quality of Service strategies able to adapt to different user requirements and to address Service Level Agreements. Performance analysis and prediction with respect to different load conditions or management policies are required to define such strategies. However, the highly distributed nature of Grid systems and the presence of distinct administrative domains make it difficult to carry out performance estimations. In fact, several parameters are involved, and the autonomy of each site can make it complex to set them properly. In this paper, we present a non-Markovian Stochastic Petri Net methodology that allows performance analysis of Grid systems to be conducted, focusing on aspects related to the Virtual Organization as a whole. In particular, different job allocation techniques can be evaluated from both the user and the provider points of view. The influence of different information update policies on the accuracy of the allocation schemes can also be investigated, highlighting the costs/benefits in terms of job waiting time, service availability, and system utilization. The proposed methodology is designed to be as general as possible, and it can be applied to analyze a gLite Grid infrastructure, taken as a case study. Copyright © 2011 John Wiley & Sons, Ltd.

11.
At present, most resource allocation methods in mobile edge computing allocate computing resources according to the order in which task requests are offloaded for computation, without considering task priority in practical applications. To meet the computing requirements in such cases, a priority-oriented task resource allocation method is proposed. Each task is assigned a priority according to its average processing time, and computing resources are allocated to tasks with weights derived from their priorities. This not only ensures that high-priority tasks obtain sufficient computing resources, but also reduces the total time and energy consumption needed to complete all tasks, thus improving the quality of service. The experimental results show that the proposed method achieves better performance.
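A priority-weighted split of a fixed compute budget, as described above, can be sketched like this; the task names, capacity figure, and the proportional-weighting rule are illustrative assumptions rather than the paper's exact scheme.

```python
# Hypothetical sketch: split an edge server's compute capacity among tasks in
# proportion to their priority weights (higher priority -> larger share).

def weighted_allocation(capacity, tasks):
    """tasks: dict task_id -> priority (positive number); returns task_id -> share."""
    total = sum(tasks.values())
    return {tid: capacity * prio / total for tid, prio in tasks.items()}

if __name__ == "__main__":
    # Priorities are assumed to have been derived from average processing times.
    tasks = {'nav_update': 4, 'photo_upload': 1, 'video_analytics': 5}
    print(weighted_allocation(10.0, tasks))  # shares of 4.0, 1.0 and 5.0 units
```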

12.
Resource allocation and task scheduling are key problems in grid computing. A new algorithm combining discrete particle swarm optimization and the ant colony algorithm is proposed to solve the grid resource allocation problem. By introducing the ant colony algorithm into particle swarm optimization, the algorithm effectively overcomes the weak local search ability of particle swarm optimization in its later stages and the blind early-stage search of the ant colony algorithm. Theoretical analysis and simulation experiments show that the algorithm performs well.
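The discrete-PSO half of such a hybrid can be sketched as below: each particle encodes a task-to-machine assignment, and positions are updated by probabilistically copying entries from the personal or global best. This is an assumed, simplified illustration (makespan objective, fixed copy probabilities), not the paper's hybrid algorithm.

```python
# Hypothetical sketch: discrete PSO for assigning tasks to machines, minimizing
# makespan. Each particle is an assignment vector; updates copy entries from the
# personal best or global best with fixed probabilities, else mutate randomly.
import random

TASK_COST = [4, 2, 7, 3, 5, 1]   # processing time of each task
MACHINES = 3
C1, C2 = 0.4, 0.4                # copy probabilities (assumed values)

def makespan(assign):
    loads = [0] * MACHINES
    for task, m in enumerate(assign):
        loads[m] += TASK_COST[task]
    return max(loads)

def evolve(particles=10, iterations=50):
    swarm = [[random.randrange(MACHINES) for _ in TASK_COST] for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=makespan)
    for _ in range(iterations):
        for i, pos in enumerate(swarm):
            for t in range(len(TASK_COST)):
                r = random.random()
                if r < C1:
                    pos[t] = pbest[i][t]                 # move toward personal best
                elif r < C1 + C2:
                    pos[t] = gbest[t]                    # move toward global best
                else:
                    pos[t] = random.randrange(MACHINES)  # random exploration
            if makespan(pos) < makespan(pbest[i]):
                pbest[i] = pos[:]
        gbest = min(pbest + [gbest], key=makespan)
    return gbest, makespan(gbest)

if __name__ == "__main__":
    print(evolve())   # e.g. an assignment whose makespan is close to 8
```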

13.
14.
In the cloud age, heterogeneous application modes on large-scale infrastructures bring challenges in resource utilization and manageability to data centers. Many resource and runtime management systems have been developed, or have evolved, to address these challenges and related problems from different perspectives. This paper tries to identify the main motivations, key concerns, common features, and representative solutions of such systems through a survey and analysis. A typical kind of these systems is generalized as the consolidated cluster system, whose design goal is identified as reducing overall costs under a quality-of-service premise. A survey of this kind of system is given, and the critical issues such systems face are summarized as resource consolidation and runtime coordination. These two issues are analyzed and classified according to the design styles and external characteristics abstracted from the surveyed work. Five representative consolidated cluster systems from both academia and industry are illustrated and compared in detail based on this analysis and classification. We hope this survey and analysis will be conducive to both the design and implementation of such systems and to technology selection, in response to the constantly emerging challenges of infrastructure and application management in data centers.

15.
Resource allocation is one of the core concerns of cloud computing, and evaluating the performance of cloud resource allocation algorithms can guide the design of cloud platforms. Two cloud resource allocation algorithms are discussed, and a PEPA-based performance evaluation model for resource allocation algorithms is proposed. The model performs formal analysis and reasoning by modeling the interactions among the components of a cloud system, yielding performance metrics for the system. Experiments analyze how varying different parameters during resource allocation affects system performance. The results show that the PEPA modeling approach can directly assess how well a resource allocation algorithm performs and can identify the key factors for improving algorithm performance, thereby shortening the design cycle of cloud platforms.

16.
Resource allocation has been a critical issue in manufacturing. This paper presents an intelligent data-management-induced resource allocation system (RAS) which aims at providing effective and timely decision making for resource allocation. The system comprises product materials, people, information, control, and supporting functions for effectiveness in production. It incorporates a Database Management System (DBMS) and fuzzy logic to analyze data for intelligent decision making, and Radio Frequency Identification (RFID) for result verification. Numerical data from diverse sources are managed in the DBMS and used to determine resource allocation by means of fuzzy logic. The output, representing the essential resource level for production, is then verified against the resource utilization status captured by RFID. The effectiveness of the developed system is verified with a case study carried out in a Hong Kong-based garment manufacturing company. Results show that data gathering before resource allocation determination is more efficient with the developed system, where the resource allocation decision parameters in the centralized database are effectively determined using fuzzy logic. Decision makers such as production managers can determine resource allocation in a standardized and more efficient way. The system also combines RFID with Artificial Intelligence techniques for result verification and knowledge refinement; the fuzzy-logic resource allocation results can therefore be made more responsive and adaptive to the actual production situation by refining the fuzzy rules with reference to the RFID-captured data.
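A tiny fuzzy-inference step of the kind used here might look like the sketch below, which maps a demand reading onto low/medium/high triangular membership functions and defuzzifies to a resource level; the membership breakpoints and output levels are assumptions, not the rules from the paper's system.

```python
# Hypothetical sketch: one-input fuzzy inference mapping production demand
# (0-100) to a resource level, using triangular membership functions and a
# weighted-average defuzzification.

def triangular(x, a, b, c):
    """Membership of x in a triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed fuzzy sets for demand and the crisp resource level each rule outputs.
RULES = [
    (lambda d: triangular(d, -1, 0, 50), 10),    # low demand    -> 10 workers
    (lambda d: triangular(d, 20, 50, 80), 25),   # medium demand -> 25 workers
    (lambda d: triangular(d, 50, 100, 101), 40), # high demand   -> 40 workers
]

def resource_level(demand):
    weights = [(member(demand), level) for member, level in RULES]
    total = sum(w for w, _ in weights)
    return sum(w * level for w, level in weights) / total if total else 0.0

if __name__ == "__main__":
    for d in (10, 50, 85):
        print(d, round(resource_level(d), 1))
```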

17.
Within the context of cloud computing, efficient resource management is of great importance as it can result in higher scalability and significant energy and cost reductions over time. Because of the high complexity and costs of cloud environments, however, newly developed resource allocation strategies are often only validated by means of simulations, for example, by using CloudSim or custom-developed simulation tools. This article describes a general approach for the validation of cloud resource allocation strategies, illustrating the importance of experimental validation on physical testbeds. Furthermore, the design and implementation of Raspberry Pi as a Service (RPiaaS), a low-cost embedded testbed built using Raspberry Pi nodes, is presented. RPiaaS aims to facilitate the step from simulations toward experimental evaluations on larger cloud testbeds and is designed using a microservice architecture, where experiments and all required management services run inside containers. The performance of the RPiaaS testbed is evaluated using several benchmark experiments. The obtained results not only illustrate that the overhead of both using containers and running the required RPiaaS services is minimal, but also provide useful insights for scaling up experiments from the Raspberry Pi testbed to a larger, more traditional cloud testbed. The introduced validation approach is then illustrated using a case study focusing on the allocation of hierarchically structured tenant data, and the results obtained through simulations are compared to the experimental results. The RPiaaS testbed proved to be a very useful tool for initial experimental validation before moving the experiments to a large-scale testbed.

18.
The resource management system is the central component of distributed network computing systems. There have been many projects focused on network computing that have designed and implemented resource management systems with a variety of architectures and services. In this paper, an abstract model and a comprehensive taxonomy for describing resource management architectures are developed. The taxonomy is used to identify the approaches followed in the implementation of existing resource management systems for very large-scale network computing systems known as Grids. The taxonomy and the survey results are used to identify architectural approaches and issues that have not been fully explored in the research. Copyright © 2001 John Wiley & Sons, Ltd.

19.
Design and Implementation of an Optimized Job Scheduling Model for High-Performance Computing Environments
A high-performance computing (HPC) environment aggregates multiple HPC resources distributed across different regions and organizations, provides users with a unified access entry and usage model, and relies on system middleware to match user job requests with suitable HPC resources. With the opening of the environment's application programming interfaces and a sharp increase in the number of job requests, the immediate-dispatch scheduling model currently used can fail to process a certain number of requests under highly concurrent job submission, for example because of network issues, and it also lacks flexibility. To address this problem, the environment's job scheduling model is optimized: an environment-level job queue is introduced, system-level job states are refined, the configurability of job scheduling policies is increased, and a system prototype is implemented on top of the SCE environment middleware. In tests, under a workload of nearly 200 job submission requests per minute handled by a single-core service, no job submission errors were caused by system or network problems; out of 1 000 jobs in total, nearly 500 job submission command requests completed within 0.3 s, and more than 800 completed within 0.5 s.
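Replacing immediate dispatch with a buffered job queue whose ordering policy is configurable, as described above, can be sketched as follows; the policy names and job fields are illustrative and unrelated to the actual SCE middleware.

```python
# Hypothetical sketch: a buffered submission queue with a pluggable ordering
# policy, instead of dispatching each job request immediately on arrival.
import heapq
import itertools

POLICIES = {
    'fifo':     lambda job: job['arrival'],
    'smallest': lambda job: job['cores'],          # favor small jobs
    'priority': lambda job: -job['priority'],      # favor high priority
}

class JobQueue:
    def __init__(self, policy='fifo'):
        self._key = POLICIES[policy]
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps ordering stable

    def submit(self, job):
        heapq.heappush(self._heap, (self._key(job), next(self._seq), job))

    def next_job(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

if __name__ == "__main__":
    q = JobQueue(policy='priority')
    q.submit({'id': 'j1', 'arrival': 0, 'cores': 64, 'priority': 1})
    q.submit({'id': 'j2', 'arrival': 1, 'cores': 8,  'priority': 5})
    print(q.next_job()['id'])   # 'j2' is dispatched first under this policy
```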

20.
Understanding the behavior of large scale distributed systems is generally extremely difficult, as it requires observing a very large number of components over long periods of time. Most analysis tools for distributed systems gather basic information such as individual processor or network utilization. Although scalable because of the data reduction techniques applied before the analysis, these tools are often insufficient to detect or fully understand anomalies in the dynamic behavior of resource utilization and their influence on application performance. In this paper, we propose a methodology for detecting resource usage anomalies in large scale distributed systems. The methodology relies on four functionalities: characterized trace collection, multi-scale data aggregation, specifically tailored user interaction techniques, and visualization techniques. We show the efficiency of this approach through the analysis of simulations of the volunteer computing Berkeley Open Infrastructure for Network Computing (BOINC) architecture. Three scenarios are analyzed in this paper: analysis of the resource sharing mechanism, resource usage considering response time instead of throughput, and the evaluation of input file size on the BOINC architecture. The results show that our methodology makes it easy to identify resource usage anomalies such as unfair resource sharing, contention, moving network bottlenecks, and harmful short-term resource sharing. Copyright © 2011 John Wiley & Sons, Ltd.
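A crude form of the multi-scale aggregation and anomaly flagging described above is sketched below: per-host utilization samples are averaged over coarser windows and windows far from the overall mean are flagged. The window size and threshold are assumptions, not those of the paper's tool.

```python
# Hypothetical sketch: aggregate fine-grained utilization samples into coarser
# windows, then flag windows whose mean deviates strongly from the global mean.
from statistics import mean, pstdev

def aggregate(samples, window):
    """Average consecutive samples into non-overlapping windows of `window` points."""
    return [mean(samples[i:i + window]) for i in range(0, len(samples), window)]

def anomalous_windows(samples, window=4, threshold=2.0):
    agg = aggregate(samples, window)
    mu, sigma = mean(agg), pstdev(agg)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(agg) if abs(v - mu) > threshold * sigma]

if __name__ == "__main__":
    # Steady utilization around 0.5 with one contended stretch near 1.0.
    trace = [0.5, 0.52, 0.48, 0.5] * 5 + [0.98, 1.0, 0.97, 0.99] + [0.5, 0.51, 0.49, 0.5] * 5
    print(anomalous_windows(trace))   # flags the window covering the spike
```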
