Found 20 similar records (search time: 156 ms)
1.
To improve the memory access performance of networked memory, this work proposes a variable stride stream (VSS) detection algorithm with banded strides and a clock-stride-based stream prefetch optimization algorithm, built on a page-level stream cache and prefetch structure. The banded stream detection algorithm solves the virtual-page-address jump problem that fixed-stride detection suffers on loop accesses, eliminating stream breaks and effectively raising stream-detection coverage. The clock-stride-based prefetch optimization dynamically adjusts the prefetch length, solving the problem that some prefetches cannot return in time, and further improves prefetch performance. Comparison with sequential prefetch algorithms shows that VSS achieves high-accuracy, low-communication-overhead prefetching. Simulation analysis of this stream cache and prefetch mechanism in a networked memory system verifies the feasibility of trading a small performance loss for a flexible remote memory extension.
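The banded detection idea can be sketched as follows: a stream stays alive as long as each new page stride falls within a fixed band around the stream's first observed stride, so the small stride jumps at loop boundaries no longer break the stream. This is an illustrative Python sketch; the band width, minimum stream length, and all names are assumptions, not the paper's exact VSS algorithm.

```python
def detect_streams(page_trace, band=2, min_len=3):
    """Group a virtual-page access trace into 'banded' streams.

    A stream continues as long as each new stride stays within
    +/- `band` of the stream's first observed stride, so small
    stride jumps (e.g. at loop boundaries) do not end the stream.
    """
    streams = []
    cur = [page_trace[0]]
    base_stride = None
    for page in page_trace[1:]:
        stride = page - cur[-1]
        if base_stride is None:
            # First stride of a new candidate stream.
            base_stride = stride
            cur.append(page)
        elif stride != 0 and abs(stride - base_stride) <= band:
            cur.append(page)
        else:
            # Stride left the band: close this stream if long enough.
            if len(cur) >= min_len:
                streams.append(cur)
            cur = [page]
            base_stride = None
    if len(cur) >= min_len:
        streams.append(cur)
    return streams
```

With `band=2`, the trace `[10, 12, 14, 17, 19, 50, 51, 52]` yields two streams even though the stride varies between 2 and 3 inside the first one; a fixed-stride detector would split it.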
2.
A Prefetch Policy Combining the State of the Memory-Access Miss Queue (total citations: 1; self: 0, others: 1)
As the gap between memory access speed and processor speed grows ever wider, memory access performance has become the bottleneck of overall computer system performance. Based on an analysis of instruction-cache and data-cache miss behavior, this paper proposes a prefetch policy that takes the state of the memory-access miss queue into account. The policy preserves the ordering of instruction and data accesses, which helps extract prefetch streams, and separates instruction-stream prefetching from data-stream prefetching to avoid mutual replacement. When choosing the moment to issue a prefetch, it considers not only whether the bus is currently idle but also the state of the miss queue, reducing interference with the processor's normal memory requests. A stream-filtering mechanism improves prefetch accuracy and lowers the memory bandwidth demanded by prefetching. Results show that with this policy the processor's average memory access latency drops by 30% and the IPC of the SPEC CPU2000 programs improves by 8.3% on average.
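The issue-timing rule described above (bus idle plus miss-queue state) can be sketched as a simple predicate. The occupancy threshold is an assumed tuning parameter for illustration, not a value from the paper:

```python
def should_issue_prefetch(bus_idle, miss_queue_depth, queue_capacity,
                          occupancy_limit=0.5):
    """Decide whether to launch a prefetch this cycle.

    Combines the two conditions from the abstract: the bus must be
    idle, and the miss queue must be lightly loaded so prefetches do
    not delay demand misses. The 0.5 occupancy limit is an assumption.
    """
    if not bus_idle:
        return False
    return miss_queue_depth < occupancy_limit * queue_capacity
```

A demand miss always proceeds; only speculative prefetches are gated, which is what keeps prefetching from stealing bandwidth from the processor's normal requests.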
3.
4.
The gene expression profile is partitioned into blocks that match the thread-structure characteristics of GPU parallel computing; a two-level parallel scheme is designed around that thread structure, and the texture cache is used for efficient memory access. Basic blocks are further subdivided into sub-blocks sized to the CPU's L2 cache to raise the cache hit rate; data prefetching reduces the number of memory accesses, and thread binding reduces thread migration between cores. The gene mutual-information computation is divided between the multi-core CPU and the GPU according to their respective compute capabilities to balance the load. On top of a newly designed threshold-computation algorithm, a memory-efficient CPU/GPU parallel algorithm for constructing global gene regulatory networks is designed and implemented. Experimental results show that, compared with existing algorithms, the proposed algorithm achieves a more pronounced speedup and can construct larger-scale global gene regulatory networks.
5.
6.
Coarse-grained reconfigurable array architectures combine flexibility with efficiency, but their high compute throughput also puts pressure on memory access. With off-chip DRAM bandwidth relatively fixed, this work designs a memory-access structure that decouples memory access from computation. Control logic is integrated into a lightweight storage space, and a configurable buffer isolates the loop iterations of memory access from those of computation, hiding memory latency; the same structure performs concatenation and alignment operations to adapt to different compute-to-memory-access frequency ratios and to optimize indirect accesses. Experimental results show that this memory-access structure achieves a 1.84x performance improvement on the target architecture, with the out-of-order operations yielding an average 22% performance gain for indirect accesses.
7.
Improving overall computer system performance depends not only on raising processor compute capability but also on strong support from a high-performance chipset. The chipset carries the communication between the CPU and peripherals, and since most current systems integrate the memory controller into the north bridge, the north bridge plays a key role in memory access performance and in the system as a whole. Targeting high performance, the design and optimization of NB2005, the companion north bridge of the Godson-2C processor, adopts several new methods and techniques, including a memory-control circuit that performs dynamic page management based on program behavior, a prefetch policy coupled with the state of that memory-control circuit, and a high-throughput, low-latency PCI channel design. Performance tests and analysis show that a Godson-2C system with NB2005 delivers over 40% more memory bandwidth than the same system with a Marvell GT64240 north bridge, improves SPEC CPU2000 floating-point and integer performance by 12.2% and 2.5% respectively, and raises disk I/O performance by 30%.
8.
Motion compensation in H.264/AVC consumes a large share of memory access bandwidth, which has become the key factor limiting its performance. Analysis shows this enormous bandwidth consumption comes from five sources: repeated reads of pixel data, address alignment, burst accesses, SDRAM page switching, and memory contention. This work proposes a motion-compensation bandwidth optimization based on a 2D cache structure that fully exploits pixel reuse to reduce repeated reads. Combined with an optimized mapping of data in SDRAM, it merges many short, random accesses into address-aligned burst accesses and reduces the number of page switches during access. A grouped-burst access mode is also proposed to remove the overhead introduced by SDRAM contention. Experimental results show that with these optimizations the memory bandwidth of motion compensation drops by 82.9%-87.6%, a further 64%-87% reduction compared with the most effective existing methods. At the same bandwidth reduction, the proposed method uses 91% less circuit area than a traditional cache structure. The method has already been applied in the design of a multimedia SoC chip.
9.
In the multi-core era, the memory-access interference among co-running programs across the shared memory hierarchy is a major factor limiting overall system performance and quality of service. Even though memory resources are now relatively abundant, how to optimize the memory hierarchy, reduce access interference, and manage memory efficiently remains an active research topic in computer architecture. To study this problem in depth, this survey details a series of advanced methods that apply page-coloring-based memory partitioning throughout the memory hierarchy (cache, memory channels, and DRAM banks) to eliminate the access interference among co-running parallel programs. Organized by the media in the hierarchy (DRAM banks, channels and cache, and non-volatile memory (NVM)), it first details applying page coloring to partition DRAM banks and channels among programs, removing inter-program access conflicts; then the "vertical" cooperative partitioning of cache and DRAM, which eliminates access interference on multiple levels of the hierarchy simultaneously; and finally applying page coloring to hybrid memory systems containing NVM, improving program efficiency and overall system effectiveness. Experimental results show that the described memory-partitioning methods improve overall system performance (by 5%-15% on average) and quality of service (QoS), and effectively reduce system energy consumption. By surveying...
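The core mechanism behind page coloring can be sketched in a few lines: the OS derives a "color" from the bits of the physical page number that overlap the cache set index (or DRAM bank index), then serves each program only pages of its assigned colors, so two programs with disjoint colors can never evict each other's lines. A minimal sketch under assumed geometry; real kernels keep per-color free lists rather than scanning:

```python
def color_of(ppn, num_colors):
    # The low bits of the physical page number above the page offset
    # select the cache region (analogously, the DRAM bank).
    return ppn % num_colors

def allocate_page(free_pages, allowed_colors, num_colors):
    """Return the first free physical page whose color is allowed
    for this program, or None if no such page exists."""
    for ppn in free_pages:
        if color_of(ppn, num_colors) in allowed_colors:
            free_pages.remove(ppn)
            return ppn
    return None
```

Giving one program colors {0, 1} and another {2, 3} partitions a physically indexed cache in half without any hardware support, which is exactly what makes the technique attractive across cache, channel, and bank levels.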
10.
SMP clusters, with their good price-performance ratio and excellent scalability and availability, have gradually become the mainstream architecture in high-performance computing. This two-level hybrid structure, shared memory within a node and message passing between nodes, is a current focus of parallel computing research. Within a single SMP node, whether the bus and memory bandwidth satisfy the demands of the CPUs and I/O strongly affects the performance of memory-intensive applications. This paper tests and analyzes the impact of memory-access contention on system performance in an SMP cluster under memory-intensive workloads. The results show that our SMP nodes have a performance bottleneck; this quantitative analysis offers valuable guidance for designing large-scale SMP-based cluster systems.
11.
We consider a network of workstations (NOW) organization consisting of bus-based multiprocessors interconnected by a high latency and high bandwidth interconnect, such as ATM, on which a shared-memory programming model using a multiple-writer distributed virtual shared-memory system is imposed. The latencies associated with bringing data into the local memory are a severe performance limitation of such systems. To make the access latencies tolerable, we propose a novel prefetch approach and show how it can be integrated into the software-based coherence layer of a multiple-writer protocol. This approach uses the access history of each page to guide which pages to prefetch. Based on detailed architectural simulations and seven scientific applications we find that our prefetch algorithm can remove a vast majority of the remote operations, which improves the performance of all applications. We also find that the bandwidth provided by ATM switches available today is sufficient to accommodate prefetching. However, the protocol processing overhead of available ATM interfaces limits the gain of the prefetching algorithms.
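The per-page access-history idea can be sketched as a small Markov-style predictor: record which pages tend to follow each page, and when a page is fetched from a remote node, also prefetch its most frequent successors. The table-based realization below and its `degree` parameter are illustrative assumptions, not the paper's exact algorithm:

```python
from collections import defaultdict, Counter

class HistoryPrefetcher:
    """Per-page access history guiding prefetch: after fetching
    page p, also prefetch the pages that most often followed p."""

    def __init__(self, degree=2):
        self.successors = defaultdict(Counter)  # page -> Counter of next pages
        self.last_page = None
        self.degree = degree  # max pages prefetched per fault

    def record(self, page):
        # Update the history with one observed page access.
        if self.last_page is not None:
            self.successors[self.last_page][page] += 1
        self.last_page = page

    def predict(self, page):
        # Pages worth prefetching alongside `page`, most frequent first.
        return [p for p, _ in self.successors[page].most_common(self.degree)]
```

Because the predictor keys on observed sequences rather than address strides, it can follow the irregular page orders typical of software-DSM applications, at the cost of a warm-up pass before predictions become useful.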
12.
Edward H. Gornish, Alexander Veidenbaum. International Journal of Parallel Programming, 1999, 27(1): 35-70
Both hardware and software prefetching have been shown to be effective in tolerating the large memory latencies inherent in shared-memory multiprocessors; however, both types of prefetching have their shortcomings. While software schemes require less hardware support than hardware schemes, they must generate address calculation instructions and a prefetch instruction for each datum that needs to be prefetched. Hardware schemes, however, must become progressively more complex to be able to compute data access strides and to increase the prefetching lookahead. In this paper, we propose an integrated hardware/software prefetching method that uses simple hardware that can handle most data accesses and software prefetching for the few remaining accesses. A compile time algorithm analyzes the access streams formed by array references and determines sequences of consecutive memory accesses to an access stream that can be prefetched by the hardware mechanism. This analysis is based on the relative memory locations of consecutive accesses to an access stream and the number of intervening data references between consecutive accesses to an access stream. In addition, the prefetching lookahead can be set separately for each access stream. Our approach yields an effective scheme that minimizes both CPU overhead and hardware costs. Execution-driven simulations show our method to be very effective.
13.
Integrated multi-core CPU-GPU architectures have become the direction of processor chip development, and exploiting their parallel compute capability for data processing has become a research focus in the database field. To improve the query performance of column-store systems, this work first improves the load-distribution strategy of an existing co-processing scheme: by monitoring the database system's CPU utilization, it dynamically provides a reasonable data partition between the processors. It then proposes a model for determining the prefetch data size in the data-prefetch mechanism on integrated multi-core CPU-GPU architectures, and optimizes GPU memory accesses according to the GPU's access characteristics. Finally, using OpenCL as the programming language, it implements a sort-merge join algorithm for column stores on an integrated multi-core CPU-GPU architecture and applies the proposed methods to optimize join processing. Experiments show the proposed optimization strategies improve the sort-merge join performance of the column-store system by 33%.
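The CPU-utilization-driven partitioning can be sketched as a simple rule: the busier the CPU, the larger the share of each batch handed to the GPU. The linear rule and the idle-time CPU share below are assumptions for illustration; the abstract does not specify the paper's actual model:

```python
def split_batch(total_rows, cpu_util, cpu_share_idle=0.5):
    """Split a batch of rows between CPU and GPU.

    cpu_util is the measured CPU utilization in [0, 1]. When the CPU
    is idle it takes cpu_share_idle of the batch; as utilization
    rises, its share shrinks linearly toward zero and the GPU
    absorbs the rest.
    """
    cpu_share = cpu_share_idle * (1.0 - cpu_util)
    cpu_rows = int(total_rows * cpu_share)
    return cpu_rows, total_rows - cpu_rows
```

Re-evaluating the split per batch lets the system adapt when other database work loads the CPU mid-query, which is the point of monitoring utilization rather than fixing the ratio offline.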
14.
Won W. Ro, Stephen P. Crago, Alvin M. Despain, Jean-Luc Gaudiot. The Journal of Supercomputing, 2006, 38(3): 237-259
The speed gap between processor and main memory is the major performance bottleneck of modern computer systems. As a result, today's microprocessors suffer from frequent cache misses and lose many CPU cycles to pipeline stalls. Although traditional data prefetching methods considerably reduce the number of cache misses, most of them rely strongly on the predictability of future accesses and often fail when memory accesses do not contain much locality.

To solve the long-latency problem of current memory systems, this paper presents the design and evaluation of our high-performance decoupled architecture, the HiDISC (Hierarchical Decoupled Instruction Stream Computer). The motivation for the design originated from the traditional decoupled architecture concept and its limits. The HiDISC approach implements an additional prefetching processor on top of a traditional access/execute architecture. Our design aims at providing low memory access latency by separating and decoupling otherwise sequential pieces of code into three streams and executing each stream on three dedicated processors. The three streams act in concert to mask the long access latencies by providing the necessary data to the upper level on time. This is achieved by separating the access-related instructions from the main computation and running them early enough on the two dedicated processors.

Detailed hardware design and performance evaluation are performed with the development of an architectural simulator and compiling tools. Our performance results show that the proposed HiDISC model reduces cache misses by 19.7% and improves overall IPC (Instructions Per Cycle) by 15.8%. With a slower memory model assuming 200 CPU cycles of memory access latency, HiDISC improves performance by 17.2%.
15.
Shiqing ZHANG, Zheng QIN, Yaohua YANG, Li SHEN, Zhiying WANG. Frontiers of Computer Science, 2020, 14(3): 143101-13
Despite the increasing investment in integrated-GPU and next-generation interconnect research, discrete GPUs connected by PCIe still hold the dominant position in the market, and the management of data communication between CPU and GPU continues to evolve. Initially, the programmer explicitly controls the data transfer between CPU and GPU. To simplify programming and enable system-wide atomic memory operations, GPU vendors have developed a programming model that provides a single virtual address space for accessing all CPU and GPU memories in the system. The page migration engine in this model automatically migrates pages between CPU and GPU on demand. To meet the needs of high-performance workloads, page sizes tend to grow larger. Limited by interconnects with low bandwidth and high latency compared to GDDR, migrating a larger page takes longer, which may reduce the overlap of computation and transmission, waste time migrating unrequested data, block subsequent requests, and cause serious performance decline. In this paper, we propose partial page migration, which migrates only the requested part of a page, to reduce the migration unit, shorten migration latency, and avoid the performance degradation of full page migration as pages grow larger. We show that partial page migration can largely hide the performance overheads of full page migration. Compared with programmer-controlled data transmission, when the page size is 2 MB and the PCIe bandwidth is 16 GB/sec, full page migration is 72.72x slower, while our partial page migration achieves a 1.29x speedup. When the PCIe bandwidth is raised to 96 GB/sec, full page migration is 18.85x slower, while partial page migration provides a 1.37x speedup. Additionally, we examine the performance impact that PCIe bandwidth and migration unit size have on execution time, enabling designers to make informed decisions.
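The trade-off the paper exploits can be illustrated with a back-of-the-envelope transfer-time model: moving a full 2 MB page over a 16 GB/sec link costs roughly 131 us of pure transfer time, while moving only a touched 64 KB slice costs about 4 us, so the fixed per-migration overhead starts to dominate instead. The base latency value and the 64 KB migration unit below are assumptions for illustration, not numbers from the paper:

```python
def migration_time_us(bytes_moved, bandwidth_gb_s, base_latency_us=10.0):
    """Rough model: fixed per-migration latency plus bytes / bandwidth.

    base_latency_us stands in for fault handling, TLB shootdown and
    command setup; the real costs vary by platform.
    """
    return base_latency_us + bytes_moved / (bandwidth_gb_s * 1e9) * 1e6

# Full 2 MB page versus a 64 KB partial migration at 16 GB/sec.
full_us = migration_time_us(2 * 1024 * 1024, 16)
partial_us = migration_time_us(64 * 1024, 16)
```

Under these assumptions the partial migration is about 10x faster, and raising the link to 96 GB/sec shrinks the transfer term much more than the fixed term, which matches the paper's observation that the benefit persists at higher bandwidths.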
16.
17.
Graph computation problems that exhibit irregular memory access patterns are known to show poor performance on multiprocessor architectures. Although recent studies use FPGA technology to tackle the memory wall problem of graph computation by adopting a massively multi-threaded architecture, the performance is still far from optimal due to the long memory access latency. In this paper, we propose a comprehensive reconfigurable computing approach to address the memory wall problem. First, we present an extended edge-streaming model with massive partitions to provide better load balance while taking advantage of the streaming bandwidth of external memory in processing large graphs. Second, we propose a two-level shuffle network architecture to significantly reduce the on-chip memory requirement while providing high processing throughput that matches the bandwidth of the external memory. Third, we introduce a compact storage design based on graph compression schemes and propose the corresponding encoding and decoding hardware to reduce the data volume transferred between the processing engines and external memory. We validate the effectiveness of the proposed architecture by implementing three frequently-used graph algorithms on an ML605 board, showing an up to 3.85x improvement in performance-to-bandwidth ratio over previously published FPGA-based implementations.
18.
Hsiao-Hsi Wang, Kuan-Ching Li, Ssu-Hsuan Lu, Chun-Chieh Yang. The Journal of Supercomputing, 2009, 47(2): 111-126
High-speed networks and rapidly improving microprocessor performance make the network of workstations an extremely important tool for parallel computing, speeding up the execution of scientific applications. Shared memory is an attractive programming model for designing parallel and distributed applications, where the programmer can focus on algorithmic development rather than data partitioning and communication. Based on this important characteristic, systems that provide the shared-memory abstraction on physically distributed memory machines have been developed, known as Distributed Shared Memory (DSM). DSM is built using specific software to combine a number of computer hardware resources into one computing environment. Such an environment not only provides an easy way to execute parallel applications, but also combines available computational resources with the purpose of speeding up their execution. DSM systems need to maintain data consistency in memory, which usually leads to communication overhead, so a number of strategies exist to overcome this overhead and improve overall performance. Strategies such as prefetching have been shown to perform well in DSM systems, since they can reduce the communication latency of data accesses to remote nodes; on the other hand, these strategies also transfer unnecessary prefetched pages to remote nodes. In this research paper, we focus on the access pattern during the execution of a parallel application, and then analyze the data type and behavior of parallel applications. We propose an adaptive data classification scheme to improve the prefetching strategy, with the goal of improving overall performance. The adaptive data classification scheme classifies data according to the accessing sequence of pages, so that the home node uses the past access patterns of remote nodes to decide whether it needs to transfer related pages to them. From experimental results, we observe that the proposed method increases the accuracy of data access in an effective prefetch strategy by reducing the number of page faults and mis-prefetches. Experimental results using the proposed classification scheme show a performance improvement of about 9-25% over the same benchmark applications running on top of an original JIAJIA DSM system.
19.
Building on existing proxy caching policies and delivery schemes, and targeting current network conditions, this work proposes an adaptive segmentation method that remedies existing methods' lack of self-adjustment to changes in the popularity of streaming-media objects and to uncertain user access patterns, together with an optimized delivery scheme that combines unicast with multicast and active prefetching with patch delivery. The approach achieves clear benefits in shortening startup delay, raising the byte hit ratio, and saving backbone network bandwidth.
20.
A major overhead in software DSM (Distributed Shared Memory) is the cost of remote memory accesses necessitated by the protocol as well as induced by false sharing. This paper introduces a dynamic prefetching method, implemented in the JIAJIA software DSM, to reduce the system overhead caused by remote accesses. The prefetching method records the interleaving string of INV (invalidation) and GETP (getting a remote page) operations for each cached page and analyzes the periodicity of the string when a page is invalidated on a lock or barrier. A prefetching request is issued after the lock or barrier if the periodicity analysis indicates that GETP will be the next operation in the string. Multiple prefetching requests are merged into the same message if they are destined for the same host. Performance evaluation with eight well-accepted benchmarks on a cluster of sixteen PowerPC workstations shows that the prefetching scheme can significantly reduce page fault overhead, achieving a performance increase of 15%-20% in three benchmarks and around 8%-10% in another three. The average extra traffic caused by useless prefetches is only 7%-13% in the evaluation.
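The periodicity test described above can be sketched as follows: scan the recorded INV/GETP string for the shortest period under which it repeats, and issue a prefetch only if that period predicts GETP as the next operation. This is a sketch under assumed details (a maximum period, whole-string rather than suffix matching), not JIAJIA's exact implementation:

```python
def predict_next(history, max_period=4):
    """Predict the next operation in a page's INV/GETP interleaving
    string by finding the shortest period the string repeats with."""
    n = len(history)
    for period in range(1, max_period + 1):
        if period * 2 > n:
            break  # need at least two full repetitions as evidence
        if all(history[i] == history[i - period] for i in range(period, n)):
            # The string is periodic: the next element continues the cycle.
            return history[n - period]
    return None

def should_prefetch(history):
    # Prefetch after the lock/barrier only if a remote get comes next.
    return predict_next(history) == "GETP"
```

For a page that alternates `INV, GETP, INV, GETP, INV`, the period-2 pattern predicts `GETP` next, so a prefetch is issued; a non-periodic string yields no prediction and no speculative traffic.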