Similar Documents
 17 similar documents found (search time: 171 ms)
1.
This paper presents the design and implementation of an I/O scheduling algorithm for a RAID controller. The main goal is to place the per-disk read and write requests issued by the RAID module into the read/write I/O queue of the corresponding disk according to the appropriate policy, and then, based on each request's priority and read/write characteristics, to reorder requests within the queue or merge adjacent requests, thereby realizing the I/O request scheduling policy.
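A minimal sketch of the kind of per-disk queue manipulation this abstract describes, assuming a hypothetical request model with a priority field and a (start, length) extent; the merge rule, field names, and priority convention are illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Request:
    start: int        # first logical block address of the request
    length: int       # number of blocks
    is_write: bool
    priority: int     # larger value = more urgent (assumed convention)

@dataclass
class DiskQueue:
    pending: List[Request] = field(default_factory=list)

    def enqueue(self, req: Request) -> None:
        """Insert by priority, then try to merge with an adjacent request."""
        # Keep the queue sorted by priority (stable, so FIFO within a level).
        idx = len(self.pending)
        for i, r in enumerate(self.pending):
            if req.priority > r.priority:
                idx = i
                break
        self.pending.insert(idx, req)
        self._merge_adjacent(idx)

    def _merge_adjacent(self, idx: int) -> None:
        """Merge the new request into a neighbour that is contiguous on disk
        and of the same type (read/read or write/write)."""
        req = self.pending[idx]
        for j in (idx - 1, idx + 1):
            if 0 <= j < len(self.pending):
                other = self.pending[j]
                if (other.is_write == req.is_write and
                        other.start + other.length == req.start):
                    other.length += req.length   # extend the earlier request
                    self.pending.pop(idx)
                    return

queue = DiskQueue()
queue.enqueue(Request(start=100, length=8, is_write=False, priority=1))
queue.enqueue(Request(start=108, length=8, is_write=False, priority=1))  # contiguous read, merged
print(len(queue.pending), queue.pending[0].length)   # -> 1 16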

2.
In high-performance computing environments, data and applications are often spread across different nodes, so the efficiency of remote I/O becomes an important factor affecting overall performance. To improve system I/O efficiency, a load-balancing scheduling algorithm for remote I/O is introduced. The algorithm uses a reservation mechanism to dynamically adjust the I/O load on each node and make better use of network bandwidth. The implementation of the algorithm is described in detail, and its effectiveness is evaluated in a simulated environment.
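The abstract does not give the reservation mechanism's details; the toy sketch below only illustrates the general idea of reserving I/O capacity on the least-loaded node before issuing a remote request. Node names, capacity units, and the selection rule are invented for the example.

class IONode:
    """A remote I/O node with a fixed capacity of outstanding work (arbitrary units)."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.reserved = 0          # currently reserved load

    def try_reserve(self, amount):
        if self.reserved + amount <= self.capacity:
            self.reserved += amount
            return True
        return False

    def release(self, amount):
        self.reserved = max(0, self.reserved - amount)

def schedule_remote_io(nodes, amount):
    """Reserve `amount` of I/O load on the least-loaded node that can accept it."""
    for node in sorted(nodes, key=lambda n: n.reserved / n.capacity):
        if node.try_reserve(amount):
            return node
    return None   # all nodes saturated; caller should queue or retry

nodes = [IONode("io0", 100), IONode("io1", 100)]
target = schedule_remote_io(nodes, 30)
print(target.name, target.reserved)   # -> io0 30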

3.
Applying machine learning to hard problems in storage is currently one of the research hotspots in the storage field. Reinforcement learning is a special kind of machine learning that takes environmental feedback as input and adapts to its environment: by observing changes in the environment state and evaluating how control decisions affect system performance, it selects the optimal control policy. Intelligent RAID control based on reinforcement learning therefore has significant research value. Targeting the characteristics of high-performance computing applications, this paper introduces reinforcement learning into the RAID controller and proposes RL-scheduler, an intelligent I/O scheduling algorithm that uses Q-learning to realize an autonomous scheduling policy for parallel applications. RL-scheduler jointly considers scheduling fairness, disk seek time, and the I/O access efficiency of MPI applications, and proposes a multi-Q-table cross-organization method to improve the efficiency of Q-table updates. Experimental results show that RL-scheduler shortens the average I/O service time of parallel applications and increases the I/O throughput of large-scale parallel computing systems.
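The abstract does not specify RL-scheduler's state, action, or reward definitions; the snippet below is only the standard tabular Q-learning update that such a scheduler would build on, with illustrative states (queue-depth buckets), an invented action set, and a reward assumed to be the negative of the observed I/O service time.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = ["serve_read", "serve_write", "serve_mpi_collective"]   # illustrative action set

Q = defaultdict(float)                   # Q[(state, action)] -> expected return

def choose_action(state):
    """Epsilon-greedy policy over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-learning: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step: reward is the negative of the I/O service time observed after acting.
state, next_state = ("queue_short",), ("queue_long",)
action = choose_action(state)
q_update(state, action, reward=-2.5, next_state=next_state)
print(action, Q[(state, action)])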

4.
朱勇  张江陵 《计算机工程》2002,28(2):39-40,277
This paper describes the implementation of a RAID (Redundant Array of Independent Disks) I/O scheduling policy on the pSOS real-time operating system, presents several high-performance RAID structures, and concludes with the related performance analysis and test methods.

5.
A mass-storage RAID50 model spanning multiple array channels is proposed. Data are organized and partitioned concurrently at two levels: RAID0 striping across multiple array cards, and RAID5 striping with parity across the disks on each card. Using an extended block (equal in size to one RAID5 parity group on an array card) as the unit of data exchange between the cache and the arrays, the model aggregates the capacity of all disks in the array matrix and allows fully concurrent access to them. A best-fit algorithm for logical volume management and a two-level address mapping algorithm are designed for this model. Theoretical analysis and experimental results show that the strategy minimizes I/O response time and yields logical volume capacity and I/O performance that scale linearly with the number of array channels.
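A simplified sketch of the two-level mapping idea (RAID0 striping across array cards, RAID5 striping inside each card). The geometry parameters are illustrative, and the left-symmetric parity rotation used here is just one common RAID5 layout, not necessarily the one used in the paper.

CARDS = 4            # array cards striped with RAID0 (illustrative)
DISKS_PER_CARD = 5   # disks per RAID5 group on each card (illustrative)
UNIT_BLOCKS = 128    # blocks per stripe unit (illustrative)

def map_logical_block(lba):
    """Map a logical-volume block to (card, disk, block offset on that disk).

    Level 1: RAID0 distributes stripe units round-robin across the cards.
    Level 2: inside a card, RAID5 rotates parity across the disks
             (left-symmetric layout used here for illustration)."""
    unit = lba // UNIT_BLOCKS            # which stripe unit the block falls in
    offset = lba % UNIT_BLOCKS           # offset inside that unit

    card = unit % CARDS                  # level 1: RAID0 across cards
    unit_on_card = unit // CARDS         # data-unit index within the card

    data_per_stripe = DISKS_PER_CARD - 1       # one unit per stripe is parity
    stripe = unit_on_card // data_per_stripe   # RAID5 stripe number on the card
    pos = unit_on_card % data_per_stripe       # data position within the stripe

    parity_disk = (DISKS_PER_CARD - 1 - stripe) % DISKS_PER_CARD
    disk = pos if pos < parity_disk else pos + 1   # skip the parity disk
    disk_block = stripe * UNIT_BLOCKS + offset
    return card, disk, disk_block

print(map_logical_block(0))        # first unit lands on card 0, disk 0
print(map_logical_block(5 * 128))  # a later unit lands on another card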

6.
At CeBIT 2012 in Hanover, Germany, LSI recently launched new MegaRAID SATA+SAS controller cards with higher I/O transaction performance, designed to accelerate high-end database applications and data-center workloads. The latest controllers use LSI's dual-core 6Gb/s SAS RAID-on-Chip (ROC) and deliver the industry's highest server RAID performance (RAID 5 per second …

7.
Analysis of parallel I/O scheduling algorithms for RAID   (Cited by: 6; self-citations: 1; cited by others: 6)
As more and more applications become I/O-bound, storage systems play an increasingly important role, and the RAID disk array is the most common storage device providing high-performance I/O. This paper analyzes the I/O execution time and disk utilization of parallel I/O scheduling algorithms for RAID, providing a basis for the sensible configuration of high-performance arrays.
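The abstract does not reproduce the paper's formulas; as a hedged illustration only, a first-order model of the service time of one striped request and of the resulting per-disk utilization might look like the following, with N the number of disks, S the request size, B the per-disk transfer rate, and seek/rotation terms taken as per-disk averages.

% Approximate service time of an S-byte request striped over N disks,
% assuming the disks work in parallel and the slowest disk dominates:
T_{\mathrm{RAID}}(S, N) \;\approx\; \max_{1 \le i \le N}
   \left( t_{\mathrm{seek},i} + t_{\mathrm{rot},i} \right) + \frac{S}{N \cdot B}

% Utilization of disk i over an observation window T_{\mathrm{win}},
% summing the busy time of the requests k that it served:
U_i \;=\; \frac{\sum_{k} t_{\mathrm{busy},i,k}}{T_{\mathrm{win}}}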

8.
A RAID50 model striped across multiple controller cards is designed, together with a two-level address mapping scheme from host virtual-volume addresses to the owning high-level array and the stripes of the low-level arrays, in order to increase the concurrency of I/O access. Several consecutive I/O blocks are defined as an extended block, equal in size to a cache page and used as the RAID50 transfer granularity, which further improves transfer efficiency between memory and the storage devices. A hash-chained lookup algorithm based on cache descriptor control blocks and a second-chance replacement algorithm based on cache-page access-frequency counters are designed, implementing a strategy in which host data reception and RAID50 device read-ahead proceed concurrently. Results show that the design effectively improves the I/O performance of the storage system.
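A toy sketch of the two cache mechanisms named in this abstract: a hash-chained lookup of cache-page descriptors and second-chance victim selection driven by an access-frequency counter. The bucket count, counter handling, and field names are assumptions made for illustration.

BUCKETS = 64   # number of hash chains (illustrative)

class PageDescriptor:
    def __init__(self, block_no):
        self.block_no = block_no   # which extended block the page caches
        self.freq = 1              # access-frequency counter for second chance
        self.next = None           # next descriptor in the same hash chain

class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.chains = [None] * BUCKETS
        self.pages = []            # resident descriptors, scanned like a clock
        self.hand = 0              # clock hand for second-chance scans

    def lookup(self, block_no):
        """Hash-chained search; bump the frequency counter on a hit."""
        d = self.chains[block_no % BUCKETS]
        while d is not None:
            if d.block_no == block_no:
                d.freq += 1
                return d
            d = d.next
        return None

    def insert(self, block_no):
        if len(self.pages) >= self.capacity:
            self._evict()
        d = PageDescriptor(block_no)
        b = block_no % BUCKETS
        d.next, self.chains[b] = self.chains[b], d
        self.pages.append(d)
        return d

    def _evict(self):
        """Second chance: a page with a positive counter is spared (counter
        decremented); the first page found at zero is evicted."""
        while True:
            d = self.pages[self.hand % len(self.pages)]
            if d.freq > 0:
                d.freq -= 1
                self.hand += 1
            else:
                self._unlink(d)
                self.pages.remove(d)
                return

    def _unlink(self, d):
        b = d.block_no % BUCKETS
        cur, prev = self.chains[b], None
        while cur is not d:
            prev, cur = cur, cur.next
        if prev is None:
            self.chains[b] = d.next
        else:
            prev.next = d.next

cache = PageCache(capacity=2)
cache.insert(10); cache.insert(11)
cache.lookup(10)           # raises page 10's frequency, so page 11 goes first
cache.insert(12)
print([p.block_no for p in cache.pages])   # -> [10, 12]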

9.
A tape library scheduling algorithm based on benefit-cost balancing   (Cited by: 3; self-citations: 0; cited by others: 3)
石晶  邢春晓  周立柱 《软件学报》2002,13(2):239-244
Large databases at the terabyte (10^12-byte) scale and above, such as digital libraries, need online access to massive data stored in high-capacity tape libraries. Because these requests are random accesses to massive data, and the random-access performance of tape libraries is poor, studying effective random I/O scheduling policies and algorithms for tape libraries is an important topic for improving tape library system performance. A scheduling algorithm based on benefit-cost balancing is proposed and studied, together with an effective method for estimating the benefit-cost weighting ratio. According to the characteristics of the system workload, the algorithm dynamically adjusts the weighting ratio between scheduling benefit and cost, thereby improving the performance of the tape library system under a variety of workloads. This work addresses the workload sensitivity of existing tape library scheduling algorithms and greatly improves their effectiveness under heavy loads.
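The paper's actual benefit and cost estimators are not given in the abstract; the fragment below only shows the general shape of a benefit-to-weighted-cost selection in which the weighting factor grows with load. The scoring terms, penalty values, and field names are illustrative assumptions.

def pick_next_request(queue, mounted_tape, load_weight):
    """Choose the pending request with the best benefit minus weighted cost.

    benefit     : here, how many queued requests target the same tape (batching gain)
    cost        : here, 1 if the tape is already mounted, otherwise a large
                  penalty standing in for the tape switch and positioning time
    load_weight : grows with system load, making the scheduler more cost-averse
    """
    def score(req):
        benefit = sum(1 for r in queue if r["tape"] == req["tape"])
        cost = 1.0 if req["tape"] == mounted_tape else 20.0
        return benefit - load_weight * cost

    return max(queue, key=score)

queue = [
    {"id": 1, "tape": "T07"},
    {"id": 2, "tape": "T03"},
    {"id": 3, "tape": "T03"},
]
# Light load: batching the two T03 requests outweighs the switch cost.
print(pick_next_request(queue, mounted_tape="T07", load_weight=0.05)["id"])   # -> 2
# Heavy load: the weighting shifts toward the cheap, already-mounted tape.
print(pick_next_request(queue, mounted_tape="T07", load_weight=0.5)["id"])    # -> 1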

10.
Random I/O scheduling algorithms for tape library systems   (Cited by: 1; self-citations: 0; cited by others: 1)
石晶  周立柱 《软件学报》2002,13(8):1612-1620
Because the random-access performance of tape libraries is poor, effective random I/O scheduling policies and algorithms are needed to improve the efficiency of their online access. Existing scheduling algorithms are classified, distilled, and summarized, and static scheduling, dynamic scheduling, and replication-based scheduling algorithms are studied in depth through simulation experiments; the factors affecting the effectiveness of each class of algorithms are discussed. To address the problem that existing algorithms cause system performance to deteriorate sharply under heavier loads, a scheduling algorithm based on benefit-cost balancing is also proposed and studied. The algorithm introduces the concept of benefit-cost weighting, and by adjusting the benefit-cost weighting ratio under different loads it greatly improves the effectiveness of existing algorithms under heavy loads. This work provides an important basis for designing adaptive scheduling algorithms for mass storage systems.

11.
Providing differentiated service in a consolidated storage environment is a challenging task. To address this problem, we introduce FAIRIO, a cycle-based I/O scheduling algorithm that provides differentiated service to workloads concurrently accessing a consolidated RAID storage system. FAIRIO enforces proportional sharing of I/O service through fair scheduling of disk time. During each cycle of the algorithm, I/O requests are scheduled according to workload weights and disk-time utilization history. Experiments, which were driven by the I/O request streams of real and synthetic I/O benchmarks and run on a modified version of DiskSim, provide evidence of FAIRIO’s effectiveness and demonstrate that fair scheduling of disk time is key to achieving differentiated service in a RAID storage system. In particular, the experimental results show that, for a broad range of workload request types, sizes, and access characteristics, the algorithm provides differentiated storage throughput that is within 10% of being perfectly proportional to workload weights; and, it achieves this with little or no degradation of aggregate throughput. The core design concepts of FAIRIO, including service-time allocation and history-driven compensation, potentially can be used to design I/O scheduling algorithms that provide workloads with differentiated service in storage systems comprised of RAIDs, multiple RAIDs, SANs, and hypervisors for Clouds.
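FAIRIO's full algorithm is more involved; the fragment below only sketches the core idea the abstract highlights: each cycle, disk time is allotted in proportion to workload weights, with a simple history-driven correction for workloads that received less than their share last cycle. The cycle length, gain factor, and compensation rule are assumptions made for illustration.

CYCLE_MS = 100.0   # disk time available per scheduling cycle (illustrative)

def allocate_disk_time(weights, used_history, gain=0.5):
    """Return per-workload disk-time budgets for the next cycle.

    weights      : {workload: weight}, proportional-share targets
    used_history : {workload: disk time actually consumed last cycle}
    gain         : fraction of last cycle's deficit paid back this cycle
    """
    total_w = sum(weights.values())
    budgets = {}
    for w, weight in weights.items():
        target = CYCLE_MS * weight / total_w
        deficit = target - used_history.get(w, target)   # > 0 if under-served
        budgets[w] = max(0.0, target + gain * deficit)
    # Renormalize so the budgets still sum to one cycle of disk time.
    scale = CYCLE_MS / sum(budgets.values())
    return {w: b * scale for w, b in budgets.items()}

weights = {"A": 3, "B": 1}
used_last_cycle = {"A": 60.0, "B": 30.0}   # A was under-served, B slightly over-served
print(allocate_disk_time(weights, used_last_cycle))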

12.
Performance of disk I/O schedulers is affected by many factors, such as workloads, file systems, and disk systems. Disk scheduling performance can be improved by tuning scheduler parameters, such as the length of read timers. Scheduler performance tuning is mostly done manually. To automate this process, we propose four self-learning disk scheduling schemes: Change-sensing Round-Robin, Feedback Learning, Per-request Learning, and Two-layer Learning. Experiments show that the novel Two-layer Learning Scheme performs best. It integrates the workload-level and request-level learning algorithms. It employs feedback learning techniques to analyze workloads, change scheduling policy, and tune scheduling parameters automatically. We discuss schemes to choose features for workload learning, divide and recognize workloads, generate training data, and integrate machine learning algorithms into the Two-layer Learning Scheme. We conducted experiments to compare the accuracy, performance, and overhead of five machine learning algorithms: Decision Tree, Logistic Regression, Naïve Bayes, Neural Network, and Support Vector Machine Algorithms. Experiments with real-world and synthetic workloads show that self-learning disk scheduling can adapt to a wide variety of workloads, file systems, disk systems, and user preferences. It outperforms existing disk schedulers by as much as 15.8% while consuming less than 3%-5% of CPU time.

13.
The selection of the right I/O scheduler for a given workload can significantly improve the I/O performance. However, this is not an easy task because several factors should be considered, and even the “best” scheduler can change over time, especially if the workload’s characteristics change too. To address this problem, we present a Dynamic and Automatic Disk Scheduling framework (DADS) that simultaneously compares two different Linux I/O schedulers, and dynamically selects the one that achieves the best I/O performance for any workload at any time. The comparison is made by running two instances of a disk simulator inside the Linux kernel. Results show that, by using DADS, the performance achieved is always close to that obtained by the best scheduler. Thus, system administrators are exempted from selecting a suboptimal scheduler which can provide good performance for some workloads, but may degrade the system throughput when the workloads change.
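DADS runs two disk-simulator instances inside the kernel; the user-space toy below only mirrors the decision logic: replay the recent request stream under two candidate schedulers (modeled here as plain Python functions) and switch to whichever finishes the virtual workload sooner. The cost model and the scheduler stand-ins are invented for the example and are not DiskSim.

def fcfs(requests):
    """Serve requests in arrival order (stand-in for one candidate scheduler)."""
    return list(requests)

def elevator(requests):
    """Serve requests in ascending block order (stand-in for the other candidate)."""
    return sorted(requests)

def simulated_time(order, seek_cost=0.01, transfer_cost=1.0):
    """Toy cost model: time grows with seek distance plus a fixed transfer cost."""
    pos, total = 0, 0.0
    for block in order:
        total += seek_cost * abs(block - pos) + transfer_cost
        pos = block
    return total

def pick_scheduler(recent_requests):
    """Compare both schedulers on the recent window and return the faster one."""
    candidates = {"fcfs": fcfs, "elevator": elevator}
    times = {name: simulated_time(f(recent_requests)) for name, f in candidates.items()}
    return min(times, key=times.get), times

window = [900, 10, 870, 40, 835, 75]      # a seek-heavy request window
print(pick_scheduler(window))             # the elevator order wins here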

14.
Virtualization technology is an effective approach to improving the energy-efficiency of cloud platforms; however, it also introduces many energy-efficiency losses, especially when I/O virtualization is involved. In this paper, we present an energy-efficiency enhanced virtual machine (VM) scheduling policy, namely Share-Reclaiming with Collective I/O (SRC-I/O), aiming to reduce the energy-efficiency losses caused by I/O virtualization. The proposed SRC-I/O scheduler allows VMs to reclaim extra CPU shares in certain conditions so as to increase CPU utilization. Meanwhile, it separates I/O-intensive VMs from CPU-intensive ones and schedules them in a collective manner, so as to reduce the context-switching cost when scheduling mixed workloads. Extensive experiments are conducted on various platforms to investigate the performance of the proposed scheduler. The results indicate that when the system is in the presence of mixed workloads, the SRC-I/O scheduler outperforms many existing VM schedulers in terms of energy-efficiency and I/O responsiveness.
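A rough sketch of the grouping step implied by the abstract: classify VMs as I/O-bound or CPU-bound from simple per-VM counters, then dispatch the I/O-bound group back-to-back so their pending events are handled collectively. The threshold, counters, and slice labels are illustrative, and the share-reclaiming part of SRC-I/O is not modeled here.

def classify_vms(stats, io_threshold=50):
    """Split VMs into I/O-bound and CPU-bound using a per-period event count.

    stats: {vm_name: io_events_last_period}
    """
    io_bound = [vm for vm, events in stats.items() if events >= io_threshold]
    cpu_bound = [vm for vm in stats if vm not in io_bound]
    return io_bound, cpu_bound

def build_run_order(stats):
    """Run the I/O-bound VMs consecutively (collective handling of their
    pending events), then give the CPU-bound VMs their longer slices."""
    io_bound, cpu_bound = classify_vms(stats)
    return [(vm, "short_slice") for vm in io_bound] + \
           [(vm, "long_slice") for vm in cpu_bound]

stats = {"web": 220, "db": 180, "batch": 3, "render": 1}
print(build_run_order(stats))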

15.
The typical workload in a database system consists of a mix of multiple queries of different types that run concurrently. Interactions among the different queries in a query mix can have a significant impact on database performance. Hence, optimizing database performance requires reasoning about query mixes rather than considering queries individually. Current database systems lack the ability to do such reasoning. We propose a new approach based on planning experiments and statistical modeling to capture the impact of query interactions. Our approach requires no prior assumptions about the internal workings of the database system or the nature and cause of query interactions, making it portable across systems. To demonstrate the potential of modeling and exploiting query interactions, we have developed a novel interaction-aware query scheduler for report-generation workloads. Our scheduler, called QShuffler, uses two query scheduling algorithms that leverage models of query interactions. The first algorithm is optimized for workloads where queries are submitted in large batches. The second algorithm targets workloads where queries arrive continuously, and scheduling decisions have to be made online. We report an experimental evaluation of QShuffler using TPC-H workloads running on IBM DB2. The evaluation shows that QShuffler, by modeling and exploiting query interactions, can consistently outperform (up to 4x) query schedulers in current database systems.
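QShuffler's interaction models are learned from planned experiments; the fragment below only illustrates the scheduling side: given some callable that predicts the completion time of a candidate query mix, greedily admit the waiting query that keeps the predicted mix time lowest. The predictor here is a fabricated stand-in, not the paper's statistical model.

def toy_mix_model(mix):
    """Stand-in interaction model: pretend two 'heavy' query types interfere badly."""
    heavy = sum(1 for q in mix if q.startswith("heavy"))
    return 10.0 * len(mix) + 25.0 * heavy * max(0, heavy - 1)

def admit_next(running_mix, waiting, predict=toy_mix_model):
    """Pick the waiting query whose admission yields the lowest predicted mix time."""
    best, best_time = None, float("inf")
    for q in waiting:
        t = predict(running_mix + [q])
        if t < best_time:
            best, best_time = q, t
    return best, best_time

running = ["heavy_join_1", "light_scan_3"]
waiting = ["heavy_join_2", "light_scan_4"]
print(admit_next(running, waiting))   # the light query avoids a heavy-heavy interaction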

16.
In virtualized environments, the VMM (virtual machine monitor) scheduler is critical to overall performance, as it allocates the physical resources. However, traditional schedulers deliver poor I/O performance for mixed workloads. Although recent research significantly improves I/O performance, it degrades the performance of computational tasks by shortening time slices and reducing cache efficiency. In order to eliminate these problems while guaranteeing I/O performance, this paper presents a multicore periodical preemption scheduling scheme with three optimization techniques: (1) periodically coalescing and handling I/O events to reduce the preemption rate and scheduling latency, which guarantees I/O performance; (2) taking advantage of multicore environments and centrally handling I/O events on different cores in a Round-Robin manner to lengthen time slices, which improves the performance of computational tasks; (3) using a dedicated priority for I/O event handling to keep CPU fairness. We implement a Xen-based prototype and evaluate the performance of I/O workloads and computation-intensive workloads. The experimental results demonstrate that our scheduling scheme efficiently lengthens time slices and improves the performance of computational tasks, achieving the same I/O performance as the existing approaches optimized for I/O.
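A schematic of the periodical-coalescing idea in the abstract: I/O events are buffered and flushed once per period, and successive flushes are handled on different cores in round-robin order so no single core's time slices are repeatedly cut short. The period, core list, and event format are illustrative; the real scheme lives inside the Xen VMM.

from collections import deque

class PeriodicIODispatcher:
    def __init__(self, cores, period_ms=3.0):
        self.cores = cores            # CPUs that take turns handling I/O events
        self.period_ms = period_ms    # coalescing interval (illustrative value)
        self.buffer = deque()         # events accumulated during the current period
        self.turn = 0                 # round-robin index over the cores

    def post_event(self, event):
        """Called when a guest raises an I/O event; no preemption happens here."""
        self.buffer.append(event)

    def on_period_tick(self):
        """Called once per period: hand the whole batch to the next core."""
        if not self.buffer:
            return None
        core = self.cores[self.turn % len(self.cores)]
        self.turn += 1
        batch, self.buffer = list(self.buffer), deque()
        return core, batch            # the chosen core processes the batch at once

d = PeriodicIODispatcher(cores=["cpu0", "cpu1", "cpu2"])
for ev in ("net_rx", "blk_done", "net_tx"):
    d.post_event(ev)
print(d.on_period_tick())   # -> ('cpu0', ['net_rx', 'blk_done', 'net_tx'])
d.post_event("net_rx")
print(d.on_period_tick())   # -> ('cpu1', ['net_rx'])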

17.
With the rapid growth of processor speeds and network transfer rates, I/O has become a major bottleneck limiting computer performance, and parallel I/O offers an effective way to address this problem. In the implementation of parallel I/O, the data-access scheduling policy has a significant impact on system performance. Based on a study of existing I/O data-access scheduling policies in cluster computing, and targeting the scheduler bottleneck problem, this paper proposes an autonomously assisted dynamic I/O scheduling policy; experimental results demonstrate the effectiveness of the proposed policy.
