Similar Documents
 20 similar documents found (search time: 109 ms)
1.
In persistent object frameworks built on relational databases and object-relational mapping, objects are typically linked to one another through object references and various collection attributes, composing them into more complex composite objects. Applications access these composite objects by navigating through those attributes to member objects one at a time. Such navigation across many member objects sharply increases the number of fetch operations between the client and the backend database system, causing serious performance problems. Object prefetching loads objects the application is likely to access from the database to the client in groups or batches according to some policy, reducing the number of queries the application issues against the backend database. This paper surveys and classifies existing object prefetching techniques and, on that basis, proposes an object prefetching technique based on multi-level access patterns. Finally, it describes the implementation of the algorithm in the persistent object framework of the StarCCM software component platform.
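The navigation problem this abstract describes is the classic N+1 fetch pattern. A minimal sketch (our own illustration, not the paper's algorithm; `fetch_many` is a hypothetical single-round-trip database call) shows how batching member-object loads cuts the query count:

```python
# Illustrative sketch of batch prefetching vs. one-at-a-time navigation.
# `Database.fetch_many` stands in for a hypothetical backend call that
# returns any number of records in a single round trip.

class Database:
    """Toy backend; counts how many queries the client issues."""
    def __init__(self, rows):
        self.rows = rows          # id -> record
        self.queries = 0

    def fetch_many(self, ids):
        self.queries += 1         # one round trip, however many ids
        return {i: self.rows[i] for i in ids}

def navigate_one_by_one(db, ids):
    # naive navigation: one query per member object
    return [db.fetch_many([i])[i] for i in ids]

def navigate_with_prefetch(db, ids):
    # prefetching: load the whole group of likely-accessed objects at once
    cache = db.fetch_many(ids)
    return [cache[i] for i in ids]

rows = {i: f"member-{i}" for i in range(10)}
db1, db2 = Database(rows), Database(rows)
navigate_one_by_one(db1, list(rows))
navigate_with_prefetch(db2, list(rows))
print(db1.queries, db2.queries)   # 10 vs 1
```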

2.
As an active caching technique, prefetching can raise the cache hit ratio, but its effectiveness depends largely on the prefetching policy adopted. This paper proposes an object-based bounded prefetching policy for proxy servers: by tuning the size of the prefetch area in the proxy cache, it keeps useless pages from occupying too much cache space, improving cache utilization and thereby achieving a higher hit ratio. Experiments show that the hit ratio of the object-based bounded prefetching policy is far higher than that of plain LRU, and clearly better than object-based LRU as well.
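The core idea above, bounding the space that speculative pages may occupy, can be sketched as follows (our simplified illustration, not the paper's policy; class and method names are ours):

```python
from collections import OrderedDict

# Sketch: a proxy cache whose prefetched pages live in a size-bounded
# area, so speculative pages cannot crowd out demand-fetched pages.

class BoundedPrefetchCache:
    def __init__(self, main_size, prefetch_size):
        self.main = OrderedDict()      # demand-fetched pages, LRU order
        self.prefetch = OrderedDict()  # speculative pages, bounded
        self.main_size, self.prefetch_size = main_size, prefetch_size
        self.hits = self.misses = 0

    def insert_prefetched(self, key, page):
        if len(self.prefetch) >= self.prefetch_size:
            self.prefetch.popitem(last=False)   # evict oldest speculation
        self.prefetch[key] = page

    def get(self, key):
        if key in self.main:
            self.main.move_to_end(key)          # refresh LRU position
            self.hits += 1
            return self.main[key]
        if key in self.prefetch:
            # a correct prediction: promote into the main LRU area
            page = self.prefetch.pop(key)
            self._put_main(key, page)
            self.hits += 1
            return page
        self.misses += 1
        return None

    def _put_main(self, key, page):
        if len(self.main) >= self.main_size:
            self.main.popitem(last=False)       # plain LRU eviction
        self.main[key] = page

cache = BoundedPrefetchCache(main_size=4, prefetch_size=2)
cache.insert_prefetched("/a", "A")
cache.insert_prefetched("/b", "B")
cache.insert_prefetched("/c", "C")   # "/a" evicted: prefetch area is full
print(cache.get("/b"), cache.get("/a"))  # B None
```

However large the prefetch workload, mispredicted pages can displace at most `prefetch_size` entries, which is the point of the bound.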

3.
In object-based storage systems, the object storage device (OSD) has considerable intelligence and processing power: it organizes and manages the data of objects and their attributes, and exposes an object-based access interface. Every object carries attributes that reflect some of its characteristics. By defining a prefetch attribute page for each object, the device can learn the patterns in which users access it, and thereby implement an efficient, adaptive dynamic prefetching algorithm that raises the OSD's prefetch hit ratio and hence its performance.

4.
Research on an LOM-Based Web Teaching Resource Management System   (total citations: 3, self-citations: 1, citations by others: 3)
Although web-based education now offers abundant teaching resources, the complexity and diversity of resource formats and the lack of standardization in resource construction keep users from retrieving and reusing these rich online resources. To address this, a resource management model based on Learning Object Metadata (LOM) is designed: by managing a resource metadata repository, it manages all kinds of teaching resources and gives users an effective way to query and obtain them.

5.
Navigation across many member objects sharply increases the read operations between the client and the backend database system, causing serious performance problems. To address this, a solution is proposed: a client-side cache is added to reduce object fetch operations, and a multi-level object query prefetching pattern is established, greatly improving the access efficiency and availability of the object persistence system.

6.
In design-pattern terms, the Role Object pattern is a new pattern obtained by composing the Composite and Decorator patterns. It divides the users at each level of a system into distinct objects, describes each object's permissions, and provides management functions, thereby lowering program coupling, clarifying the system structure, and easing maintenance. This paper describes the application of the Role Object pattern in a power monitoring system.

7.
Role-Based Access Control in CORBA Security   (total citations: 2, self-citations: 1, citations by others: 2)
王传标, 吴敏. 《计算机工程》 2002, 28(12): 201-202
This paper introduces the role-based access control model (RBAC) into the CORBA security reference model: permissions to invoke object methods are associated with roles, and users acquire those permissions by becoming members of roles, resolving the problems of the original access control approach.
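The role indirection the abstract describes is simple to illustrate. A minimal sketch (ours, not the CORBA security API; the method names and role tables are invented for illustration):

```python
# Minimal RBAC sketch: permissions attach to roles, not users;
# a user holds a permission only through role membership.

roles = {
    "operator": {"Sensor.read"},
    "admin":    {"Sensor.read", "Sensor.configure"},
}
user_roles = {"alice": {"operator"}, "bob": {"admin"}}

def can_invoke(user, method):
    """True iff any of the user's roles grants the method permission."""
    return any(method in roles[r] for r in user_roles.get(user, ()))

print(can_invoke("alice", "Sensor.configure"))  # False
print(can_invoke("bob", "Sensor.configure"))    # True
```

Granting or revoking access then means editing role membership, never touching per-user permission lists.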

8.
Sequential pattern mining can discover the regularities in user accesses hidden in Web logs, which can be used in a Web prefetching model to predict the Web objects about to be accessed. Most current sequential pattern mining uses Apriori-style breadth-first algorithms. This paper proposes a bitmap-based depth-first mining algorithm: it adopts a depth-first strategy over a trie (dictionary-tree) data structure and uses bitmaps to store and compute the support of each sequence, so that frequent sequences can be mined quickly. The algorithm is applied to a Web prefetching model; experiments under integrated prefetching and caching show good performance.
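The bitmap idea can be sketched in miniature (our simplification, not the paper's exact algorithm): per-item bitmaps over sessions prune candidates with a cheap AND, and an ordered-subsequence check confirms support.

```python
# Each bit i of item_bits[x] is set iff session i contains item x.
sessions = [
    ["A", "B", "C"],
    ["A", "C"],
    ["B", "A", "C"],
    ["A", "B"],
]

item_bits = {}
for i, s in enumerate(sessions):
    for x in s:
        item_bits[x] = item_bits.get(x, 0) | (1 << i)

def is_subseq(seq, session):
    # items of seq must appear in session in order (not necessarily adjacent)
    it = iter(session)
    return all(x in it for x in seq)

def support(seq):
    # bitmap AND: sessions missing any item of seq cannot support it
    mask = item_bits.get(seq[0], 0)
    for x in seq[1:]:
        mask &= item_bits.get(x, 0)
    return sum(is_subseq(seq, sessions[i])
               for i in range(len(sessions)) if mask >> i & 1)

print(support(["A", "C"]))  # "A" before "C" in sessions 0, 1, 2 -> 3
```

The real algorithm keeps position-aware bitmaps so the ordered check is also bitwise; here the bitmap only prunes, which already avoids scanning most sessions.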

9.
First, based on the semantic relationships between objects, we define object dependency and discuss the related definitions and their properties. We then define object normal forms, the normalized object schema tree and the normalized object schema forest, and show that the normalized object schema forest is the structure required of a complex object schema.

10.
Research on the Properties of Object Dependency Sets   (total citations: 2, self-citations: 1, citations by others: 2)
In the normalized design of complex object schemas, the semantic relationships between objects play an important role, yet they have received little study so far. This paper discusses some properties of object dependency (OD) sets. It first reviews the basic concepts and properties; it then presents and proves the JINGSI algorithm for obtaining all keys of an OD set; finally, it proves the properties required of an OD set used as input to the MIMI algorithm for normalized complex object schema design, and improves that design on this basis.

11.
Multimedia applications require the support of large-capacity, high-speed disk arrays. To analyze the performance of a disk array under a read prefetching policy, this paper builds a Petri net model of read prefetching [1]. The model can be used to judge how each stage of the system affects the read rate and to obtain the average read transfer rate. Experiments show that with a sequential read prefetching policy the actual average read transfer rate rises by 60%, which satisfies the high I/O performance demands of digitization systems.

12.
In current web geographic information systems (WebGIS), transferring large data sets such as spatial data from the remote server to the client on user request often incurs transmission delays, lengthening response times and lowering efficiency. Prefetching was originally a technique for improving cache performance in operating systems, but in a WebGIS an appropriate prefetching mechanism can reduce the time users wait at the browser and improve client-side efficiency. This paper analyzes several prefetching mechanisms and, drawing on the characteristics of GIS spatial objects, discusses how prefetching can be applied to a WebGIS to improve its performance.
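One natural WebGIS-flavored prefetching mechanism (a hedged sketch of our own, not taken from the paper) exploits spatial adjacency: when the client requests a map tile, the tiles around it are likely pan targets and can be fetched speculatively.

```python
# Sketch: compute the neighbor tiles to prefetch around a requested tile
# (x, y) at a given zoom level, clipping tiles that fall off the grid edge.

def neighbor_tiles(x, y, zoom, radius=1):
    """Tiles in a ring around (x, y) at the same zoom, excluding (x, y)."""
    return [(nx, ny, zoom)
            for nx in range(x - radius, x + radius + 1)
            for ny in range(y - radius, y + radius + 1)
            if (nx, ny) != (x, y) and nx >= 0 and ny >= 0]

print(len(neighbor_tiles(5, 5, 10)))  # 8 neighbors in a 3x3 ring
print(neighbor_tiles(0, 0, 10))       # corner tile: negative indices clipped
```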

13.
In this paper we propose and evaluate a new data-prefetching technique for cache coherent multiprocessors. Prefetches are issued by a functional unit called a prefetch engine which is controlled by the compiler. We let second-level cache misses generate cache miss traps and start the prefetch engine in a trap handler. The trap handler is fast (40–50 cycles) and does not normally delay the program beyond the memory latency of the miss. Once started, the prefetch engine executes on its own and causes no instruction overhead. The only instruction overhead in our approach is when a trap handler completes after data arrives. The advantages of this technique are (1) it exploits static compiler analysis to determine what to prefetch, which is hard to do in hardware, (2) it uses prefetching with very little instruction overhead, which is a limitation for traditional software-controlled prefetching, and (3) it is accurate in the sense that it generates very little useless traffic while maintaining a high prefetching coverage. We also study whether one could emulate the prefetch engine in software, which would not require any additional hardware beyond support for generating cache miss traps and ordinary prefetch instructions. In this paper we present the functionality of the prefetch engine and a compiler algorithm to control it. We evaluate our technique on six parallel scientific and engineering applications using an optimizing compiler with our algorithm and a simulated multiprocessor. We find that the prefetch engine removes up to 67% of the memory access stall time at an instruction overhead less than 0.42%. The emulated prefetch engine removes in general less stall time at a higher instruction overhead.

14.
马明理, 陈刚, 董金祥. 《计算机测量与控制》 2006, 14(11): 1551-1553, 1556
This paper presents the design and implementation of a new multithreaded memory allocation technique, NIXMalloc, proposing two efficient allocation strategies and an adaptive tuning method for them, which effectively improve the memory management performance of multithreaded applications. The Local allocation strategy makes the superblock object, Span, thread-private; garbage collection and memory layout adjustment at superblock granularity give better multithreaded performance. The Global allocation strategy uses adaptive tuning: based on dynamic monitoring of the application's memory usage, it performs memory prefetching and dynamically adjusts thread cache limits. Experiments show that NIXMalloc improves memory management performance, raises throughput, and lowers memory usage, achieving good time and space efficiency in multithreaded applications.

15.
Prefetching is one of the main means of improving storage system performance. However, the device layer of existing storage systems knows nothing about the semantics of I/O accesses, so it cannot exploit those semantics to prefetch the data that will be accessed next; it can only make simple predictions from properties such as locality, sequential access, and looping access. This paper therefore proposes a practical and efficient continuity-based clustering algorithm, suited to the characteristics of storage systems, to discover the regions accessed by dense read requests, and uses an ARMA time-series model to predict which regions those requests will touch and when, providing accurate information for correct prefetching. To improve prefetch accuracy, a dynamic parameter estimation strategy is adopted. Extensive experiments verify the correctness of the two algorithms and the accuracy of their predictions, which can considerably improve the prefetch efficiency of a storage system.
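The clustering step can be sketched very simply (our own reading of "continuity-based": start a new cluster whenever the gap between consecutive sorted offsets exceeds a threshold; the function name and threshold are ours):

```python
# Group read offsets into dense regions: consecutive offsets whose gaps
# stay within max_gap belong to the same cluster.

def cluster_by_continuity(offsets, max_gap):
    clusters, current = [], []
    for off in sorted(offsets):
        if current and off - current[-1] > max_gap:
            clusters.append(current)   # continuity broken: close the region
            current = []
        current.append(off)
    if current:
        clusters.append(current)
    return clusters

reads = [100, 104, 108, 512, 516, 2048]
print(cluster_by_continuity(reads, max_gap=16))
# three dense regions: [100, 104, 108], [512, 516], [2048]
```

Each resulting region, together with its access timestamps, is what a time-series model such as ARMA would then be fitted to.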

16.
Client cache is an important technology for the optimization of distributed and centralized storage systems. CacheFiles is a representative client cache system, but its performance is limited by transition faults; furthermore, CacheFiles supports only a simple LRU policy with a tightly-coupled design. To overcome these limitations, we propose to employ the Stable Set Model (SSM) to improve CacheFiles and design an enhanced CacheFiles, SAC. SSM assumes that data access can be decomposed into accesses on some stable sets, in which elements are always repeatedly accessed or not accessed together. Using SSM methods can improve cache management and reduce the effect of transition faults. We also adopt loosely-coupled methods to design prefetch and replacement policies. We implement our scheme on Linux 2.6.32 and measure the execution time of the scheme with various file I/O benchmarks. Experiments show that SAC can significantly improve I/O performance and reduce execution time by up to 84%, compared with the existing CacheFiles.

17.
Stride prefetching is recognized as an important technique to improve memory access performance. Prior work usually profiles and/or analyzes the program behavior offline, and uses the identified stride patterns to guide the compilation process by injecting prefetch instructions at appropriate places. Some research has tried to enable stride prefetching in runtime systems with online profiling, but it either cannot discover cross-procedural prefetch opportunities, or requires special support in hardware or garbage collection. In this paper, we present a prefetch engine for the JVM (Java Virtual Machine). It first identifies the candidate load operations during just-in-time (JIT) compilation, and then instruments the compiled code to profile the addresses of those loads. The runtime profile is collected in a trace buffer, which triggers a prefetch controller upon a protection fault. The prefetch controller analyzes the trace to discover any stride patterns, then modifies the compiled code to inject prefetch instructions in place of the instrumentation. One major advantage of this engine is that it can detect striding loads at any virtual code place for both regular and irregular code, rather than being limited to plain loop or procedure scopes. In fact, we found that cross-procedural patterns account for about 30% of all prefetches in representative Java benchmarks. Another major advantage is that its runtime overhead (at most 4.0%) is much smaller than the benefit it brings. Our evaluation with Apache Harmony JVM shows that the engine achieves an average 6.2% speed-up with SPECJVM98 and DaCapo on the Intel Pentium 4 platform, in spite of the runtime overhead.
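The stride-detection step at the heart of such an engine is easy to sketch (our own simplified model, not Harmony's implementation; function names and the window size are ours): if the last few address deltas from one load site agree, that delta is the stride, and the next few addresses can be prefetched.

```python
# Detect a constant stride in the recent address trace of a load site,
# then compute the addresses to prefetch ahead of the access stream.

def detect_stride(trace, window=4):
    """Return the stride if the last `window` deltas are equal, else None."""
    if len(trace) < window + 1:
        return None
    deltas = [b - a for a, b in zip(trace[-window-1:], trace[-window:])]
    return deltas[0] if all(d == deltas[0] for d in deltas) else None

def prefetch_addrs(trace, distance=3):
    """Addresses to prefetch, `distance` strides ahead of the last access."""
    stride = detect_stride(trace)
    if stride is None:
        return []                       # irregular: do not prefetch
    return [trace[-1] + stride * k for k in range(1, distance + 1)]

trace = [0x1000, 0x1040, 0x1080, 0x10c0, 0x1100]
print(detect_stride(trace))     # 64
print(prefetch_addrs(trace))    # [4416, 4480, 4544] == [0x1140, 0x1180, 0x11c0]
```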

18.
This paper describes the key technologies of LionFS, a cluster file system based on object storage, including a direct-delivery data transfer mechanism and a prefetching technique driven by front-end load access information. Performance tests show that with prefetching the data path can run as a "concurrent pipeline", increasing read bandwidth by 70%, while the direct-delivery operation raises read and write performance by 24% and 28% respectively.

19.
Helper threaded prefetching based on chip multiprocessors has been shown to reduce memory latency and improve overall system performance, and has been explored for linked data structure accesses. In our earlier work, we proposed an effective threaded prefetching technique that balances delinquent loads between the main thread and a helper thread to improve the effectiveness of prefetching. In this paper, we analyze the memory access characteristics of specific applications to estimate the effective prefetch distance range for our proposed threaded prefetching technique. The effect of hardware prefetchers on the estimation is also exploited. We discuss key design issues of our proposed method and present preliminary experimental results. Our experimental evaluations indicate that the bounded range of effective prefetch distance can be determined using our method, and that the optimal prefetch distance can be determined from the estimated range with a few trial runs.

20.
Data deduplication has been widely utilized in large-scale storage systems, particularly backup systems. Data deduplication systems typically divide data streams into chunks and identify redundant chunks by comparing chunk fingerprints. Maintaining all fingerprints in memory is not cost-effective because fingerprint indexes are typically very large. Many data deduplication systems maintain a fingerprint cache in memory and exploit fingerprint prefetching to accelerate the deduplication process. Although fingerprint prefetching can improve the performance of data deduplication systems by leveraging the locality of workloads, inaccurately prefetched fingerprints may pollute the cache by evicting useful fingerprints. We observed that most of the prefetched fingerprints in a wide variety of applications are never used or used only once, which severely limits the performance of data deduplication systems. We introduce a prefetch-aware fingerprint cache management scheme for data deduplication systems (PreCache) to alleviate prefetch-related cache pollution. We propose three prefetch-aware fingerprint cache replacement policies (PreCache-UNU, PreCache-UOO, and PreCache-MIX) to handle different types of cache pollution. Additionally, we propose an adaptive policy selector to select suitable policies for prefetch requests. We implement PreCache on two representative data deduplication systems (Block Locality Caching and SiLo) and evaluate its performance utilizing three real-world workloads (Kernel, MacOS, and Homes). The experimental results reveal that PreCache improves deduplication throughput by up to 32.22% based on a reduction of on-disk fingerprint index lookups and improvement of the deduplication ratio by mitigating prefetch-related fingerprint cache pollution.
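The chunk-and-fingerprint pipeline that all such systems share can be sketched in a few lines (a toy illustration of the general technique, not PreCache itself; we use fixed-size chunking for brevity where real systems typically use content-defined chunking):

```python
import hashlib

# Toy dedup: split the stream into chunks, identify each chunk by its
# SHA-256 fingerprint, store each unique chunk once, and keep a recipe
# of fingerprints from which the original stream can be rebuilt.

def deduplicate(data, chunk_size=8):
    store = {}                      # fingerprint -> chunk (unique chunks)
    recipe = []                     # fingerprint sequence for the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk) # redundant chunks stored only once
        recipe.append(fp)
    return store, recipe

data = b"ABCDEFGH" * 3 + b"12345678"
store, recipe = deduplicate(data)
restored = b"".join(store[fp] for fp in recipe)
print(len(recipe), len(store), restored == data)  # 4 chunks, 2 unique, True
```

The fingerprint cache the abstract discusses sits in front of `store`'s on-disk counterpart: deduplication throughput hinges on how often a fingerprint lookup hits in memory.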


Copyright©北京勤云科技发展有限公司  京ICP备09084417号