18 similar documents found (search time: 80 ms)
1.
2.
A Memory-Sharing Grid System Based on Memory Services  (Total citations: 1; self-citations: 0; citations by others: 1)
Memory-intensive applications place strict demands on the physical memory of their runtime environment; when physical memory runs short, heavy disk I/O results and system performance drops. Traditional network memory addresses this by sharing the physical memory of idle nodes within a cluster, but it is strongly affected by cluster load and the internal network. By combining network memory with service computing and grid computing techniques, this paper proposes a memory-sharing grid system based on memory services, the memory grid, and analyzes and discusses the key techniques and algorithms needed to implement memory services. The memory grid remedies the shortcomings of network memory and extends the range of grid computing applications. Simulations driven by the run-time states of real applications show that the memory grid improves performance over network memory.
3.
Because data grid systems are deployed in wide-area network environments, network transfer latency becomes one of the main factors affecting their quality of service (QoS). After an in-depth examination of prefetching techniques, this paper applies the idea of trading space for time to propose a metadata prefetching and caching strategy. Taking into account the characteristics of metadata in the Grid-DaEn system, it also proposes a novel metadata prefetching algorithm, DHMP.
4.
5.
Memory has a major impact on computer system performance. A memory grid can share memory resources across domains in an open network environment, using them as a disk cache to improve system performance. To make this cache transparent to applications, this paper proposes dynamically patching the operating system kernel's binary code to intercept and redirect the file system's read and write paths, and proposes an asynchronous cache-write method based on kernel threads to improve write-cache efficiency. A prototype system and experiments show that these methods require modifying neither the applications nor the operating system source code, and that they make full use of the shared memory resources to improve the system's I/O performance.
6.
7.
Sequential pattern mining can discover the access regularities of users hidden in Web logs, which can be used in a Web prefetching model to predict the Web objects about to be accessed. Most current sequential pattern mining relies on Apriori-style breadth-first algorithms. This paper proposes a bitmap-based depth-first mining algorithm: it adopts a depth-first strategy over a trie (dictionary tree) data structure and uses bitmaps to store and compute the support of each sequence, so frequent sequences can be mined quickly. The algorithm is applied in a Web prefetching model with integrated prefetching and caching, and experiments show good performance.
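As a rough illustration of the bitmap idea in this abstract, the sketch below counts the support of a candidate sequence purely with bitwise operations on per-sequence position bitmaps. It is a minimal sketch under assumptions, not the paper's algorithm: the trie-based depth-first candidate generation is omitted, and the function names and toy log are invented.

```python
def build_bitmaps(sequences):
    """Map each item to {sequence_id: integer bitmap of the positions
    where the item occurs in that sequence}."""
    bitmaps = {}
    for sid, seq in enumerate(sequences):
        for pos, item in enumerate(seq):
            per_seq = bitmaps.setdefault(item, {})
            per_seq[sid] = per_seq.get(sid, 0) | (1 << pos)
    return bitmaps

def support(pattern, bitmaps, n_sequences):
    """Count sequences containing `pattern` as a subsequence,
    using only bitwise operations on the position bitmaps."""
    count = 0
    for sid in range(n_sequences):
        allowed = -1          # all positions allowed (two's complement: all bits set)
        matched = True
        for item in pattern:
            bm = bitmaps.get(item, {}).get(sid, 0) & allowed
            if bm == 0:
                matched = False
                break
            first = bm & -bm                 # lowest set bit = earliest usable match
            allowed = ~((first << 1) - 1)    # only positions strictly after it
        if matched:
            count += 1
    return count

logs = [["a", "b", "c"], ["b", "a", "c"], ["a", "c"]]
bitmaps = build_bitmaps(logs)
print(support(["a", "c"], bitmaps, len(logs)))  # 3
print(support(["a", "b"], bitmaps, len(logs)))  # 1
```

Storing positions as bits lets the "find the next occurrence after position p" step of subsequence matching become two machine-word operations, which is where the speed of bitmap-based miners comes from.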
8.
Targeting the frequent, fixed-pattern queries over large data that are common in domain-specific applications, this paper proposes a template-based data prefetching and caching algorithm to speed up query response and relieve server-side load. It builds data query templates and, when a trigger fires, invokes a template to construct the data to prefetch, yielding a template-based prefetching method and a trigger-based prefetching algorithm. Since keeping some large data objects in the cache particularly benefits query response time, it also builds a cache object model and proposes an improved Hybrid algorithm. Experiments on the Dongfanghong wetland environment monitoring platform show that, across different cache-size percentages, the improved Hybrid algorithm lowers the access latency ratio compared with typical caching algorithms, and performs especially well on large-volume queries.
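A minimal sketch of the trigger-plus-template idea, assuming a template is simply a query string with placeholders and a trigger event names the template to instantiate; the class, method names, and fake backend below are all invented for illustration and are not the paper's design:

```python
class TemplatePrefetcher:
    """Cache keyed by instantiated query templates: a trigger event
    instantiates a registered template and warms the cache, so the
    later user query is served without hitting the server again."""

    def __init__(self, query_fn):
        self.query_fn = query_fn   # executes a query against the backing store
        self.templates = {}        # template name -> query string with placeholders
        self.cache = {}            # instantiated query -> result

    def register(self, name, template):
        self.templates[name] = template

    def on_trigger(self, name, **params):
        """Called when a trigger fires: prefetch the templated query's result."""
        query = self.templates[name].format(**params)
        if query not in self.cache:
            self.cache[query] = self.query_fn(query)

    def get(self, name, **params):
        """The actual user query: hits the cache when prefetching succeeded."""
        query = self.templates[name].format(**params)
        if query not in self.cache:
            self.cache[query] = self.query_fn(query)
        return self.cache[query]

calls = []
def fake_query(q):
    calls.append(q)                # record every real backend hit
    return f"result of {q}"

p = TemplatePrefetcher(fake_query)
p.register("daily", "SELECT * FROM readings WHERE day = {day}")
p.on_trigger("daily", day=7)       # trigger fires: one backend call
print(p.get("daily", day=7))       # served from cache, no second call
print(len(calls))                  # 1
```

The fixed query patterns are what make templating pay off: because the query shape is known in advance, a trigger can fully instantiate it before the user asks.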
9.
Linux, a multitasking, time-sharing, general-purpose open-source operating system, is used ever more widely on commercial and enterprise servers. To improve system performance, Linux uses prefetching to load the data an application will need into the cache ahead of time, reducing the application's I/O wait. However, the diversity of server workloads poses growing challenges to the prefetching algorithm. Starting from the Linux 2.6.29-rc2 kernel source code, this paper analyzes the architecture and internal mechanisms of the Linux prefetching algorithm in depth and proposes several ways to improve it, which is significant both for further improving the performance of Linux servers and for the wider adoption of Linux.
10.
Data prefetching is commonly used to improve system performance and throughput, but rarely with disk energy consumption in mind. To address this, on top of a traditional algorithm, asynchronous disk prefetches are delayed so that disk I/O operations can be merged, reducing the disk's power-state switches and lengthening its continuous sleep periods to save energy. The prefetching algorithm is evaluated and validated through simulation based on real run-time states; the improved prefetcher saves 17% of the energy of standard prefetching without affecting performance.
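The delay-and-merge idea can be sketched as follows: asynchronous prefetch requests are queued and issued in one batch, so several requests share a single disk wake-up. This is a toy model with invented names and a simple wake-up counter, not the paper's kernel-level implementation:

```python
class EnergyAwarePrefetcher:
    """Defer asynchronous prefetch requests and flush them in one batch,
    so the disk wakes up less often and can sleep longer in between."""

    def __init__(self, batch_size, read_fn):
        self.batch_size = batch_size
        self.read_fn = read_fn       # the call that actually touches the disk
        self.pending = []            # deferred prefetch requests
        self.disk_wakeups = 0        # proxy for power-state switches

    def prefetch_async(self, block):
        """Queue a prefetch; only a full batch forces disk activity."""
        self.pending.append(block)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Issue all deferred reads in one burst: one wake-up per batch."""
        if not self.pending:
            return
        self.disk_wakeups += 1
        for block in self.pending:
            self.read_fn(block)
        self.pending = []

pf = EnergyAwarePrefetcher(batch_size=3, read_fn=lambda b: None)
for block in range(6):
    pf.prefetch_async(block)
print(pf.disk_wakeups)  # 2: six requests served by two batched wake-ups
```

An eager prefetcher issuing each request immediately would have woken the disk six times here; batching trades a bounded delay for fewer power-state transitions.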
11.
A Data Cube Model for Prediction-Based Web Prefetching  (Total citations: 7; self-citations: 0; citations by others: 7)
Qiang Yang, Joshua Zhexue Huang, Michael Ng, 《Journal of Intelligent Information Systems》 2003, 20(1):11-30
Reducing web latency is one of the primary concerns of Internet research. Web caching and web prefetching are two effective techniques for latency reduction. A primary method for intelligent prefetching is to rank potential web documents based on prediction models that are trained on past web server and proxy server log data, and to prefetch the highly ranked objects. For this method to work well, the prediction model must be updated constantly, and different queries must be answered efficiently. In this paper we present a data-cube model that represents Web access sessions for data mining, supporting the construction of the prediction model. The cube model organizes session data into three dimensions. With the data cube in place, we apply efficient data mining algorithms for clustering and correlation analysis; the resulting web page clusters can then be used to guide the prefetching system. We also propose an integrated web-caching and web-prefetching model, in which prefetching aggressiveness, the replacement policy and the increase in network traffic are addressed together in one framework. The core of our integrated solution is a prediction model based on statistical correlation between web objects, which can be updated frequently by querying the data cube of web server logs. To the best of our knowledge, this integrated data-cube and prediction-based prefetching framework is the first such effort.
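As a small illustration of prediction based on statistical correlation between web objects, the sketch below estimates, from session logs, how often each page follows the current one and ranks prefetch candidates accordingly. The paper's data cube, clustering, and cache integration are not modeled, and all names and the toy sessions are invented:

```python
from collections import Counter, defaultdict

def build_correlations(sessions):
    """Count, over all sessions, how often page b occurs after page a."""
    pair_counts = defaultdict(Counter)
    page_counts = Counter()
    for session in sessions:
        for i, a in enumerate(session):
            page_counts[a] += 1
            for b in session[i + 1:]:
                pair_counts[a][b] += 1
    return pair_counts, page_counts

def rank_prefetch_candidates(current_page, pair_counts, page_counts, top_k=2):
    """Rank pages by the estimated probability of following current_page."""
    followers = pair_counts.get(current_page)
    if not followers:
        return []
    total = page_counts[current_page]
    ranked = sorted(followers.items(), key=lambda kv: kv[1] / total, reverse=True)
    return [page for page, _ in ranked[:top_k]]

sessions = [["a", "b"], ["a", "b"], ["a", "c"]]
pairs, counts = build_correlations(sessions)
print(rank_prefetch_candidates("a", pairs, counts))  # ['b', 'c']
```

A real deployment would re-aggregate these counts from the server-log cube as new sessions arrive, which is exactly the "frequently updated by querying the data cube" property the abstract emphasizes.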
12.
This paper proposes a time-window-based data preprocessing algorithm. For a concrete application, drawing on existing knowledge, the algorithm can intelligently filter out certain "noise" data. Unlike the usual definition, "noise" here means data that is deterministically governed by known rules; our study shows that such data severely interferes with further data mining. Practical tests show that the algorithm improves the results of several existing data mining algorithms.
13.
The future storage systems are expected to contain a wide variety of storage media and layers due to the rapid development of NVM (non-volatile memory) techniques. For NVM-based read caches, many kinds of NVM devices cannot stand frequent data updates due to limited write endurance or high energy consumption of writing. However, traditional cache algorithms have to update cached blocks frequently because it is difficult for them to predict long-term popularity according to such limited information about data blocks, such as only a single value or a queue that reflects frequency or recency. In this paper, we propose a new MacroTrend (macroscopic trend) prediction method to discover long-term hot blocks through blocks' macro trends illustrated by their access count histograms. Then a new cache replacement algorithm is designed based on the MacroTrend prediction to greatly reduce the write amount while improving the hit ratio. We conduct extensive experiments driven by a series of real-world traces and find that, compared with LRU, MacroTrend can reduce the write amounts of NVM cache devices significantly with similar hit ratios, leading to longer NVM lifetime or less energy consumption.
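One simple way to read a "macro trend" out of an access-count histogram is the average change between consecutive per-window counts, admitting into the write-limited NVM cache only blocks whose popularity is rising. The sketch below is a guess at the flavor of the method under that assumption, not the paper's MacroTrend algorithm; all names are invented:

```python
def macro_trend(history):
    """Long-term popularity trend of a block from its per-window access
    counts: the mean change between consecutive windows (positive means
    the block is heating up, negative means it is cooling down)."""
    if len(history) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(history, history[1:])]
    return sum(deltas) / len(deltas)

def should_cache(history, threshold=0.0):
    """Admit a block into the NVM cache only if its macro trend is rising,
    avoiding writes for blocks that were only briefly popular."""
    return macro_trend(history) > threshold

print(should_cache([1, 2, 4, 7]))   # True: steadily heating up
print(should_cache([9, 5, 2, 1]))   # False: a burst that is fading
```

The point of looking at the whole histogram rather than a single recency or frequency value is that a fading burst and a steadily warming block can have identical totals but opposite trends.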
14.
A Novel Memory Structure for Embedded Systems: Flexible Sequential and Random Access Memory
Ying Chen, Karthik Ranganathan, Vasudev V. Pai, David J. Lilja, and Kia Bazargan, 《Journal of Computer Science and Technology》 2005, 20(5):596-606
The on-chip memory performance of embedded systems directly affects the system designers' decision about how to allocate expensive silicon area. A novel memory architecture, flexible sequential and random access memory (FSRAM), is investigated for embedded systems. To realize sequential accesses, small "links" are added to each row in the RAM array to point to the next row to be prefetched. The potential cache pollution is ameliorated by a small sequential access buffer (SAB). To evaluate the architecture-level performance of the FSRAM, we ran the Mediabench benchmark programs on a modified version of the SimpleScalar simulator. Our results show that the FSRAM improves the performance of a baseline processor with a 16KB data cache by up to 55%, with an average of 9%; furthermore, the FSRAM reduces the data cache miss count by 53.1% on average due to its prefetching effect. We also designed RTL and SPICE models of the FSRAM, which show that the FSRAM significantly improves memory access time while reducing power consumption, with negligible area overhead.
15.
16.
17.
Sequential pattern mining discovers patterns that occur frequently with respect to relative time or other orderings in a sequence database. It is one of the most important data mining tasks and has broad application prospects. For static databases, sequential pattern mining has been studied in depth. In recent years a new form of data has appeared: the data stream. Sequential pattern mining over data streams has not yet been studied nearly as thoroughly. This paper proposes an efficient algorithm, SSPM, for mining frequent sequential patterns over data streams; it uses two data structures (F-list and Tatree) to cope with the complexity of stream-based sequential pattern mining. SSPM's strength is that it minimizes the generation of false positives, and experiments show that it achieves high accuracy.
18.
Most processors adopt a multi-level inclusive cache hierarchy, and existing last-level cache (LLC) block replacement algorithms incur considerable performance overhead. To address this problem, an optimized LLC block replacement algorithm, PLI, is proposed: when choosing a victim block, it takes into account the block's access frequency in the upper-level cache, selecting the best LLC victim at small cost. Evaluation on a cycle-accurate simulator shows that the algorithm improves performance by 7% on average over the original algorithm.
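The core victim-selection idea can be sketched in a few lines: among the LLC's replacement candidates, evict the block that is accessed least often in the upper-level cache, since in an inclusive hierarchy evicting an LLC block also invalidates its copy in the upper level. The function name and frequency table below are invented for illustration and are not the PLI algorithm as published:

```python
def choose_llc_victim(candidates, upper_access_freq):
    """Pick an eviction victim from the LLC's candidate blocks.

    In a multi-level inclusive hierarchy, evicting an LLC block forces
    the upper-level cache to drop its copy too, so we prefer the block
    the upper level uses least (missing blocks count as never accessed)."""
    return min(candidates, key=lambda blk: upper_access_freq.get(blk, 0))

freq = {"x": 5, "y": 1, "z": 3}        # upper-level access counts per block
print(choose_llc_victim(["x", "y", "z"], freq))  # y
```

A plain LRU LLC would ignore `freq` entirely and might evict a block the L1/L2 still hits frequently, which is exactly the inclusion-induced overhead the abstract describes.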