20 similar documents found (search time: 0 ms)
2.
《Parallel Computing》2014,40(10):710-721
In this paper, we investigate the problem of fair storage-cache allocation among multiple competing applications with diversified access rates. Commonly used cache replacement policies such as LRU and most LRU variants are inherently unfair in cache allocation for heterogeneous applications: they implicitly give more cache to applications with high access rates and less cache to applications with slow access rates. However, applications with fast access rates do not always gain higher performance from the additional cache blocks, while slow applications suffer poor performance with a reduced cache size. It is beneficial, in terms of both performance and fairness, to allocate cache blocks by their utility. In this paper, we propose a partition-based cache management algorithm for a shared cache. The goal of our algorithm is to find an allocation such that all heterogeneous applications achieve a specified fairness degree with as little performance degradation as possible. To achieve this goal, we present an adaptive partition framework, which partitions the shared cache among competing applications and dynamically adjusts the partition sizes based on the predicted utility for both fairness and performance. We implement our algorithm in a storage simulator and evaluate fairness and performance with various workloads. Experimental results show that, compared with LRU, our algorithm achieves a large improvement in fairness and a slight improvement in performance.
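The core idea of the abstract above — giving each application its own partition so a fast application cannot evict a slow application's blocks — can be sketched as follows. This is a minimal illustration with hypothetical names, not the paper's actual algorithm (which additionally adjusts partition sizes by predicted utility):

```python
from collections import OrderedDict

class PartitionedCache:
    """Per-application LRU partitions over one shared block pool.

    Unlike a single global LRU, a fast application can only evict its
    own blocks, so a slow application keeps its allocated share.
    """
    def __init__(self, partition_sizes):
        self.sizes = dict(partition_sizes)           # app -> block quota
        self.parts = {app: OrderedDict() for app in partition_sizes}

    def access(self, app, block):
        part = self.parts[app]
        hit = block in part
        if hit:
            part.move_to_end(block)                  # refresh recency on a hit
        else:
            if len(part) >= self.sizes[app]:
                part.popitem(last=False)             # evict this app's own LRU block
            part[block] = True
        return hit

cache = PartitionedCache({"fast": 2, "slow": 2})
for b in range(100):                                 # fast app streams many blocks
    cache.access("fast", b)
cache.access("slow", "x")
print(len(cache.parts["slow"]))                      # slow app's share is untouched: 1
```

Under a global LRU the streaming "fast" application would have pushed every "slow" block out; here each partition enforces its quota independently.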
3.
Scalable cache invalidation algorithms for mobile data access (Cited by: 2; self-citations: 0, other citations: 2)
A. Elmagarmid, J. Jing, A. Helal, C. Lee 《IEEE Transactions on Knowledge and Data Engineering》2003,15(6):1498-1511
In this paper, we address the problem of cache invalidation in mobile and wireless client/server environments. We present cache invalidation techniques that can scale not only to a large number of mobile clients, but also to a large number of data items that can be cached in the mobile clients. We propose two scalable algorithms: the Multidimensional Bit-Sequence (MD-BS) algorithm and the Multilevel Bit-Sequence (ML-BS) algorithm. Both algorithms are based on our prior work on the Basic Bit-Sequences (BS) algorithm. Our study shows that the proposed algorithms are effective for a large number of cached data items with low update rates. The study also illustrates that the algorithms can be used with other complementary techniques to address the problem of cache invalidation for data items with varied update and access rates.
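The bit-sequence family of algorithms refines the basic idea of broadcasting a bitmap that marks updated items. The toy version below illustrates only that core idea; the function names are mine, and the paper's MD-BS/ML-BS structures are considerably more elaborate:

```python
# One bit per data item: bit i is set if item i was updated since the
# last invalidation report was broadcast.
def build_report(num_items, updated_ids):
    """Server side: pack the set of updated item ids into a bitmap."""
    bits = 0
    for i in updated_ids:
        bits |= 1 << i
    return bits

def invalidate(cache, report):
    """Client side: drop every cached item whose bit is set in the report."""
    return {i: v for i, v in cache.items() if not (report >> i) & 1}

cache = {0: "a", 3: "b", 7: "c"}
report = build_report(8, updated_ids={3, 5})
print(sorted(invalidate(cache, report)))  # item 3 is dropped: [0, 7]
```

A flat bitmap like this grows linearly with the number of data items; the multidimensional and multilevel variants in the paper exist precisely to keep report size manageable at scale.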
4.
Heterogeneous storage architectures combine the strengths of different storage devices in a synergistically useful fashion, and are increasingly being used in mobile storage systems. In this paper, we propose ARC-H, an adaptive cache replacement algorithm for heterogeneous storage systems consisting of a hard disk and a NAND flash memory. ARC-H employs a dynamically adaptive management policy based on ghost buffers and takes account of recency, I/O cost per device, and workload patterns in making cache replacement decisions. Realistic trace-driven simulations show that ARC-H reduces service time by up to 88% compared with existing caching algorithms with a 20 Mb cache. ARC-H also reduces energy consumption by up to 81%.
5.
6.
7.
Ma Jie 《计算机工程与设计》 (Computer Engineering and Design) 2010,31(20)
To reduce the impact that changes in a mobile terminal's location have on the effectiveness of streaming-media caching in mobile network environments, this paper draws on the idea of segment-based caching and proposes a dispersed-storage transformation method for streaming-media cache applications. The method uses a dispersion function to modify two aspects of streaming-media caching algorithms: the selection of content to cache and the selection of content to release. Simulation tests on typical streaming-media caching algorithms show that the dispersed-storage transformation improves the effectiveness of streaming-media caching in mobile network environments.
8.
This paper presents findings of a study of high school students participating in a tablet PC (TPC) programme. Primary areas of interest were students' experiences with and attitudes about the TPCs, physical discomfort associated with use of TPCs and temporal and task-driven patterns of TPC use. Data were collected via questionnaire and computer use-monitoring software. Results showed students' attitudes were generally quite positive towards the TPCs, although they did not tend to think TPCs had improved their grades, few disagreed that TPCs were a distraction in class, and visual and musculoskeletal discomfort was prevalent. Understanding how to use the TPC and recognizing its organizational capacity were associated with several positive attitudes towards the TPC, including making school more enjoyable. Children's exposure to computers will only increase, so study of the many dimensions of their impact is critical in order to understand what is effective, constructive and healthful for children.
9.
《Ergonomics》2012,55(5):706-727
10.
Many network applications require access to the most up-to-date information. An update event makes the corresponding cached data item obsolete, and cache hits on obsolete data items are simply useless to those applications. Frequently accessed but infrequently updated data items should get higher preference when caching, while infrequently accessed but frequently updated items should get lower preference. Such items may not be cached at all, or should be evicted from the cache to accommodate items with higher preference. In wireless networks, remote data access is typically more expensive than in wired networks. Hence, an efficient caching scheme that considers both data access and update patterns can better reduce data transmissions in wireless networks. In this paper, we propose a step-wise optimal update-based replacement policy, called the Update-based Step-wise Optimal (USO) policy, for wireless data networks to optimize transmission cost by increasing the effective hit ratio. Our cache replacement policy is based on the idea of giving preference to frequently accessed but infrequently updated data, and is supported by an analytical model with quantitative analysis. We also present results from our extensive simulations. We demonstrate that (1) the analytical model is validated by the simulation results and (2) the proposed scheme outperforms the Least Frequently Used (LFU) scheme in terms of effective hit ratio and communication cost.
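The preference rule described above — keep items that are accessed often but updated rarely — can be sketched with a simple scoring function. The scoring formula and names here are illustrative assumptions, not the paper's step-wise optimal derivation:

```python
# Hedged sketch of the "frequently accessed, infrequently updated" preference
# behind update-based replacement. An update invalidates the cached copy, so
# an item updated more often than it is read contributes few useful hits.
def cache_preference(access_rate, update_rate, eps=1e-9):
    """Higher score -> more worth keeping in the cache."""
    return access_rate / (update_rate + eps)

def evict_victim(items):
    """items: name -> (access_rate, update_rate); return the lowest-preference item."""
    return min(items, key=lambda k: cache_preference(*items[k]))

workload = {
    "hot_stable":   (100.0, 1.0),   # read often, rarely updated: keep
    "hot_volatile": (100.0, 90.0),  # updated almost every read: low value
    "cold_stable":  (2.0,   0.1),
}
print(evict_victim(workload))  # hot_volatile
```

Note that a pure LFU policy would rank `hot_volatile` as highly as `hot_stable`, which is exactly the weakness the update-aware preference is meant to fix.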
11.
12.
13.
Multidimensional data are accessed linearly in storage systems: nodes that are adjacent in a two- or higher-dimensional space are mapped by different mapping algorithms to non-adjacent positions in one-dimensional space. When adjacent nodes in a high-dimensional space are accessed, their one-dimensional storage locations therefore exhibit different access distances and access latencies. This paper proposes a storage mapping method based on the space-filling curve Z-Ordering, together with a metric for its access distance, and compares it with conventional (row-/column-major) order mappings. The Z-Ordering mapping clusters high-dimensionally adjacent data nodes into nearby one-dimensional storage locations, strengthening locality. By adjusting the portion of the cache devoted to prefetching, this enhanced locality can be exploited to raise the cache hit ratio. Experimental results show improved access speed for multidimensional data and better overall system performance.
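The Z-Ordering mapping referred to above is the classic Morton encoding: interleave the bits of the coordinates so that points adjacent in 2-D tend to land near each other in 1-D. A minimal 2-D sketch:

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of x and y to form a Z-order (Morton) index.

    Neighbouring 2-D points tend to map to nearby 1-D positions, which is
    the locality property the storage mapping above relies on.
    """
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x contributes the even bits
        z |= ((y >> i) & 1) << (2 * i + 1)   # y contributes the odd bits
    return z

# The four cells of each 2x2 block map to four consecutive 1-D positions:
print([morton2d(x, y) for y in range(2) for x in range(2)])  # [0, 1, 2, 3]
```

A row-major mapping of the same 2×2 block on a wide array would place `(0,0)` and `(0,1)` a full row apart; the Z-order curve keeps such neighbours within the same small 1-D window, which is what makes prefetching more effective.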
14.
Xiaodong MENG Chentao WU Minyi GUO Long ZHENG Jingyu ZHANG 《Frontiers of Computer Science》2019,13(4):850
Energy consumption is one of the most significant aspects of large-scale storage systems, in which multilevel caches are widely used. In a typical hierarchical storage structure, upper-level storage serves as a cache for the lower level, forming a distributed multilevel cache system. In the past two decades, several classic LRU-based multilevel cache policies have been proposed to improve the overall I/O performance of storage systems. However, few power-aware multilevel cache policies focus on the storage devices in the bottom level, which consume more than 27% of the energy of the whole system [1]. To address this problem, we propose a novel power-aware multilevel cache (PAM) policy that can reduce the energy consumption of high-performance, high-I/O-bandwidth storage devices. In our PAM policy, an appropriate number of cold dirty blocks in the upper-level cache are identified and selected to flush directly to the storage devices, providing a high probability of extending the time disks spend in standby mode. To demonstrate the effectiveness of our proposed policy, we conduct several simulations with real-world traces. Compared to existing popular cache schemes such as PALRU, PB-LRU, and Demote, PAM reduces power consumption by up to 15% under different I/O workloads, and improves energy efficiency by up to 50.5%.
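The selection step described above — pick dirty blocks that have gone cold and flush them so the disk is not woken up again for them later — can be sketched as a simple filter. The threshold, field names, and selection rule here are assumptions for illustration, not the paper's actual PAM policy:

```python
# Illustrative sketch: choose "cold dirty" upper-level blocks to flush
# directly to the lower-level device, so the disk can stay in standby
# longer between wake-ups.
def select_cold_dirty(blocks, age_threshold):
    """blocks: dicts with a 'dirty' flag and an 'age' (time since last access).

    Return the ids of blocks worth flushing now: dirty, and not touched
    recently enough to expect further write coalescing.
    """
    return [b["id"] for b in blocks if b["dirty"] and b["age"] >= age_threshold]

blocks = [
    {"id": "a", "dirty": True,  "age": 120},  # cold and dirty -> flush
    {"id": "b", "dirty": True,  "age": 3},    # hot dirty -> keep buffering writes
    {"id": "c", "dirty": False, "age": 500},  # clean -> nothing to write back
]
print(select_cold_dirty(blocks, age_threshold=60))  # ['a']
```

Flushing hot dirty blocks early would waste the write-coalescing benefit of the cache, which is why only blocks past the age threshold are selected.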
15.
Mark McElroy 《Information Systems Management》1987,4(3):47-53
The MIS department has coordinated the acquisition and distribution of PCs and software for many business purposes but has often overlooked word processing, a major office application. MIS may prefer to leave it where it is — on rapidly aging dedicated word processors. Consolidating word processing with other applications, however, not only results in cost savings but can also increase the flexibility of the organization's PC investment and enhance the productivity of both WP and non-WP users. This article presents a case for transferring word processing to PCs and highlights its advantages.
16.
Mark McElroy 《Information Systems Management》2013,30(3):47-53
17.
Adaptive per-user per-object cache consistency management for mobile data access in wireless mesh networks (Cited by: 1; self-citations: 0, other citations: 1)
Yinan Li, Ing-Ray Chen 《Journal of Parallel and Distributed Computing》2011,71(7):1034-1046
We propose and analyze an adaptive per-user per-object cache consistency management (APPCCM) scheme for mobile data access in wireless mesh networks. APPCCM supports strong data consistency semantics through integrated cache consistency and mobility management. The objective of APPCCM is to minimize the overall network cost incurred due to data query/update processing, cache consistency management, and mobility management. In APPCCM, data objects can be adaptively cached at the mesh clients directly or at mesh routers dynamically selected by APPCCM. APPCCM is adaptive, per-user and per-object as the decision regarding where to cache a data object accessed by a mesh client is made dynamically, depending on the mesh client’s mobility and data query/update characteristics, and the network’s conditions. We develop analytical models for evaluating the performance of APPCCM and devise a computational procedure for dynamically calculating the overall network cost incurred. We demonstrate via both model-based analysis and simulation validation that APPCCM outperforms non-adaptive cache consistency management schemes that always cache data objects at the mesh client, or at the mesh client’s current serving mesh router for mobile data access in wireless mesh networks.
18.
NVM storage devices have the potential to deliver high throughput, offering near-memory read/write speeds, byte addressability, and support for multiple channels. However, the existing system software stack was not designed for NVM, and many factors in it degrade access performance. Analysis shows that the file system's locking mechanism incurs substantial overhead, making concurrent data access difficult in multi-core environments. To mitigate these problems, this paper designs a lock-free file read/write mechanism and a byte-granular read/write interface. Abandoning file-level locking replaces coarse-grained access control, and self-managed requests improve process concurrency. The new byte-addressable file access interface takes into account not only the read/write asymmetry of NVM storage devices but also the distinct characteristics of their read and write operations. These designs reduce software-stack overhead and exploit NVM characteristics to provide a highly concurrent, high-throughput, and durable storage system. Finally, a prototype system, FPMRW, was implemented on the open-source NVM simulator PMEM and evaluated with the general-purpose Filebench test tool; the results show that FPMRW improves system throughput by 3% to 40% over EXT+PMEM and XFS+PMEM.
19.
Two graph models are developed to determine the minimum required buffer size for achieving the theoretical lower bound on the number of disk accesses for performing relational joins. Here, the lower bound implies only one disk access per joining block or page. The first graph model is based on the block connectivity of the joining relations. Using this model, the problem of determining an ordered list of joining blocks that requires the smallest buffer is considered. It is shown that this problem, as well as the problem of computing the least upper bound on the buffer size, is NP-hard. The second graph model represents the page connectivity of the joining relations. It is shown that the problem of computing the least upper bound on the buffer size for the page connectivity model is also NP-hard. Heuristic procedures are presented for the page connectivity model, and it is shown that the sequence obtained using the heuristics requires a near-optimal buffer size. The authors also show the performance improvement of the proposed heuristics over the hybrid-hash join algorithm for a wide range of join factors.