Similar Documents
20 similar documents found
1.
To improve the memory performance of network memory, a variable-stride stream detection algorithm, VSS (variable stride stream), and a clock-stride-based stream prefetch optimization are proposed on top of a page-level stream cache and prefetch structure. The band-style stream detection algorithm resolves the virtual-page-address jumps that loop accesses produce under fixed-stride detection, eliminating broken streams and effectively raising detection coverage. The clock-stride-based prefetch optimization adjusts the prefetch depth dynamically, so that prefetches that would otherwise return too late arrive in time, further improving prefetch performance. Comparison with sequential prefetching shows that VSS delivers high accuracy with low communication overhead. Simulation of this stream cache and prefetch mechanism in a network memory system confirms the feasibility of trading a small performance loss for flexible remote memory expansion.
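Below is a minimal sketch of the band-style detection idea as this abstract describes it; the stream table layout, the STRIDE_BAND tolerance, and the training threshold are my assumptions, not the paper's parameters.

    # Sketch only: stride detection at page granularity that tolerates a
    # band of strides, so page-address jitter in loops does not break a stream.
    STRIDE_BAND = 2        # assumed tolerance, in pages
    TRAIN_THRESHOLD = 2    # assumed hits before prefetching starts

    class StreamEntry:
        def __init__(self, page):
            self.last_page = page
            self.stride = 0
            self.confidence = 0

    streams = {}           # stream id (e.g., faulting PC or region) -> entry

    def observe(stream_id, page):
        """Feed one page-level access; return a page to prefetch, or None."""
        e = streams.get(stream_id)
        if e is None:
            streams[stream_id] = StreamEntry(page)
            return None
        stride = page - e.last_page
        if e.stride and abs(stride - e.stride) <= STRIDE_BAND:
            e.confidence += 1                   # within the band: same stream
        else:
            e.stride, e.confidence = stride, 0  # retrain on a new stride
        e.last_page = page
        if e.confidence >= TRAIN_THRESHOLD:
            return page + e.stride              # predicted next page
        return None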

2.
A Prefetch Policy Combined with the State of the Miss Queue
As the gap between memory speed and processor speed grows ever wider, memory performance has become the bottleneck of overall computer system performance. Based on an analysis of instruction-cache and data-cache miss behavior, a prefetch policy is proposed that takes the state of the miss queue into account. The policy preserves the order of instruction and data accesses, which helps extract prefetch streams, and it separates instruction-stream from data-stream prefetching to avoid mutual replacement. When choosing the moment to issue a prefetch, it considers not only whether the bus is currently idle but also the state of the miss queue, reducing interference with the processor's normal memory requests. A stream-filtering mechanism raises prefetch accuracy and lowers the bandwidth that prefetching demands. Results show that with this policy the processor's average memory latency falls by 30% and the IPC of the SPEC CPU2000 programs rises by 8.3% on average.
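A toy sketch of the issue-time decision described above; the function name and the occupancy threshold are mine, not the paper's:

    # Sketch only: gate prefetch issue on bus state and miss-queue pressure,
    # so speculative traffic never delays demand misses.
    MISS_QUEUE_LIMIT = 4   # assumed occupancy threshold

    def should_issue_prefetch(bus_idle: bool, miss_queue_len: int) -> bool:
        return bus_idle and miss_queue_len < MISS_QUEUE_LIMIT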

3.
A new memory access protocol based on asynchronous request and reply messages is proposed. Compared with the traditional synchronous bus-based memory architecture, variable-granularity accesses raise the effective utilization of memory bandwidth, while asynchronous message-based access eases memory capacity expansion. By analyzing the memory access behavior of typical applications, this paper evaluates the performance gains that message-based memory could bring and discusses the challenges of implementing it.

4.
The gene expression profile is partitioned into blocks that fit the GPU's thread-structure characteristics; a two-level parallel scheme is designed around that thread structure, with texture caching used for efficient memory access. The basic blocks are further subdivided into sub-blocks sized to the CPU's L2 cache to raise the hit rate; data prefetching reduces the number of memory accesses, and thread binding reduces thread migration between cores. The mutual-information computation is divided between the multicore CPU and the GPU in proportion to their compute capabilities to balance the load. On top of a newly designed threshold-computation algorithm, a memory-efficient CPU/GPU parallel algorithm for constructing global gene regulatory networks is designed and implemented. Experiments show that the algorithm achieves a clearer speedup than existing algorithms and can construct larger-scale global gene regulatory networks.

5.
Hardware data prefetching effectively improves processor memory performance, but the traditional stream prefetch policy suffers from prefetches that arrive too late. A double-stride stream prefetch policy is therefore proposed, together with the corresponding prefetch unit. The unit automatically detects the fixed stride of a data stream and doubles it when computing prefetch addresses. Experimental results show that with this unit, processor performance on the SPEC2006 integer and floating-point workloads improves by up to 45% and 57% respectively, and for applications with high cache miss rates the unit effectively hides memory latency.
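The core address computation is simple enough to state in two lines; this sketch illustrates the doubled-stride idea only, not the hardware design itself:

    # Sketch only: once a fixed stride is trained, prefetch two strides
    # ahead instead of one, extending the lookahead so data arrives in time.
    def double_stride_prefetch(addr: int, stride: int) -> int:
        return addr + 2 * stride

Issuing at addr + 2*stride rather than addr + stride buys the memory system one extra iteration of headroom, which is how the policy addresses the timeliness problem of traditional stream prefetching.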

6.
洪途  景乃锋 《计算机工程》2021,47(2):239-245
Coarse-grained reconfigurable array architectures combine flexibility and efficiency, but their high compute throughput also puts pressure on memory access. With off-chip DRAM bandwidth relatively fixed, a decoupled access/compute memory structure is designed. Control logic is integrated into a lightweight memory block, and configurable buffers isolate the loop iterations of memory access from those of computation, hiding memory latency; the same structure performs chaining and alignment to match different compute-to-access frequency ratios and to optimize indirect accesses. Experimental results show a 1.84× performance gain on the target architecture, with out-of-order operation bringing an average 22% improvement to indirect accesses.

7.
Improving overall computer system performance depends not only on processor compute capability but also on strong support from a high-performance chipset. The chipset carries the communication between the CPU and peripherals, and since most current systems integrate the memory controller into the north bridge, the north bridge plays a key role in memory performance and hence in the whole system. Targeting high performance, the design and optimization of NB2005, the north bridge companion chip of the Godson-2C (龙芯2C) processor, adopts many new methods and techniques, including a memory controller with dynamic page management driven by program behavior, a prefetch policy coupled with the memory controller's state, and a high-throughput, low-latency PCI channel design. Performance tests show that a Godson-2C system with NB2005 achieves over 40% higher memory bandwidth than the same system with a Marvell GT64240 north bridge, improves SPEC CPU2000 floating-point and integer performance by 12.2% and 2.5% respectively, and improves disk I/O performance by 30%.

8.
Motion compensation in H.264/AVC consumes a large share of memory access bandwidth, which makes it a key performance constraint. Analysis shows this bandwidth cost comes from five sources: repeated reads of pixel data, address alignment, burst access, SDRAM page switching, and memory contention. A motion-compensation bandwidth optimization based on a 2D cache structure is proposed, which exploits pixel reuse to cut repeated reads. Combined with an optimized mapping of data into SDRAM, it merges many short, random accesses into address-aligned bursts and reduces the number of page switches; a grouped-burst access mode is also proposed to remove the overhead introduced by SDRAM contention. Experiments show the optimized design reduces motion-compensation memory bandwidth by 82.9%-87.6%, a further 64%-87% reduction compared with the most efficient existing methods, and at the same bandwidth reduction the new method uses 91% less circuit area than a traditional cache structure. The method has already been applied in a multimedia SoC chip design.

9.
邱杰凡  华宗汉  范菁  刘磊 《软件学报》2022,33(2):751-769
In the multicore era, memory access interference among co-running programs across the shared memory hierarchy is a major factor limiting overall system performance and quality of service. Even though memory resources are now relatively abundant, optimizing the memory hierarchy, reducing access interference, and managing memory efficiently remain research hotspots in computer architecture. This survey details a line of advanced work that applies the "page coloring" memory partitioning technique to the whole memory hierarchy (including the cache, memory channels, and DRAM banks) to eliminate access interference among co-running parallel programs. Organized by the media in the hierarchy (DRAM banks, channels and cache, and non-volatile memory (NVM)), it first covers applying page coloring to partition DRAM banks and channels among programs, removing their access conflicts; then the "vertical" cooperative partitioning of cache and DRAM, which removes interference on multiple levels of the hierarchy at once; and finally page coloring in hybrid memory systems containing NVM, which improves program efficiency and overall system effectiveness. Experimental results show that these partitioning methods improve overall system performance (5%-15% on average) and quality of service (QoS) while effectively reducing system energy consumption. By systematically reviewing...
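For readers unfamiliar with the underlying technique, here is a generic page-coloring sketch (textbook form, not taken from the surveyed systems; PAGE_SHIFT and COLOR_BITS are assumptions): the OS restricts each program to physical pages whose color bits index a disjoint slice of the cache or DRAM banks, so co-runners cannot conflict there.

    # Sketch only: a page's "color" is a field of its physical frame number
    # that selects the cache set group (or DRAM bank) the page maps to.
    PAGE_SHIFT = 12        # assumed 4 KiB pages
    COLOR_BITS = 4         # assumed: 16 colors

    def page_color(phys_addr: int) -> int:
        return (phys_addr >> PAGE_SHIFT) & ((1 << COLOR_BITS) - 1)

    def pick_page(free_pages, allowed_colors):
        """Allocate only pages whose color belongs to this program's set."""
        for p in free_pages:
            if page_color(p) in allowed_colors:
                return p
        return None   # caller must fall back (e.g., relax the color set)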

10.
SMP clusters, with their good price/performance and excellent scalability and availability, have become the mainstream architecture in high-performance computing. This two-level hybrid structure, with shared memory within a node and message passing between nodes, is a hot topic in parallel computing research. Within a single SMP node, whether bus and memory bandwidth satisfy the demands of the CPUs and I/O strongly affects the performance of memory-intensive applications. This paper measures and analyzes the impact of memory access contention on system performance in an SMP cluster under memory-intensive workloads. The results show that our SMP nodes have a performance bottleneck, and this quantitative analysis offers useful guidance for designing large-scale SMP-based cluster systems.

11.
We consider a network of workstations (NOW) organization consisting of bus-based multiprocessors interconnected by a high latency and high bandwidth interconnect, such as ATM, on which a shared-memory programming model using a multiple-writer distributed virtual shared-memory system is imposed. The latencies associated with bringing data into the local memory are a severe performance limitation of such systems. To make the access latencies tolerable, we propose a novel prefetch approach and show how it can be integrated into the software-based coherence layer of a multiple-writer protocol. This approach uses the access history of each page to guide which pages to prefetch. Based on detailed architectural simulations and seven scientific applications, we find that our prefetch algorithm can remove a vast majority of the remote operations, which improves the performance of all applications. We also find that the bandwidth provided by ATM switches available today is sufficient to accommodate prefetching. However, the protocol processing overhead of available ATM interfaces limits the gain of the prefetching algorithms.

12.
Both hardware and software prefetching have been shown to be effective in tolerating the large memory latencies inherent in shared-memory multiprocessors; however, both types of prefetching have their shortcomings. While software schemes require less hardware support than hardware schemes, they must generate address calculation instructions and a prefetch instruction for each datum that needs to be prefetched. Hardware schemes, however, must become progressively more complex to be able to compute data access strides and to increase the prefetching lookahead. In this paper, we propose an integrated hardware/software prefetching method that uses simple hardware that can handle most data accesses and software prefetching for the few remaining accesses. A compile time algorithm analyzes the access streams formed by array references and determines sequences of consecutive memory accesses to an access stream that can be prefetched by the hardware mechanism. This analysis is based on the relative memory locations of consecutive accesses to an access stream and the number of intervening data references between consecutive accesses to an access stream. In addition, the prefetching lookahead can be set separately for each access stream. Our approach yields an effective scheme that minimizes both CPU overhead and hardware costs. Execution-driven simulations show our method to be very effective.

13.
丁祥武  李子通 《计算机科学》2016,43(11):265-271, 308
Integrated multicore CPU-GPU architectures have become the direction of processor chip development, and using their parallel compute capability for data processing has become a research hotspot in the database field. To improve query performance in column stores, this work first improves the load-distribution policy of an existing co-processing scheme, monitoring the database system's CPU utilization to partition data between the processors dynamically. Second, for data prefetching on the integrated architecture, it proposes a model for determining the prefetch size and optimizes GPU memory access according to GPU access characteristics. Finally, using OpenCL as the programming language, it implements a sort-merge join for column stores on the integrated multicore CPU-GPU architecture and applies the proposed methods to optimize join processing. Experiments show that the proposed optimizations improve sort-merge join performance in the column store by 33%.
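A sketch of the utilization-driven split; the abstract does not give the actual policy, so the bounds and weighting below are invented for illustration:

    # Sketch only: shift rows toward the GPU as measured CPU utilization rises.
    def split_batch(rows: int, cpu_util: float):
        """Return (cpu_rows, gpu_rows) for the next batch; cpu_util in [0, 1]."""
        gpu_share = min(0.9, 0.5 + 0.4 * cpu_util)   # assumed policy bounds
        gpu_rows = int(rows * gpu_share)
        return rows - gpu_rows, gpu_rows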

14.
The speed gap between processor and main memory is the major performance bottleneck of modern computer systems. As a result, today's microprocessors suffer from frequent cache misses and lose many CPU cycles due to pipeline stalling. Although traditional data prefetching methods considerably reduce the number of cache misses, most of them strongly rely on the predictability of future accesses and often fail when memory accesses do not contain much locality. To solve the long latency problem of current memory systems, this paper presents the design and evaluation of our high-performance decoupled architecture, the HiDISC (Hierarchical Decoupled Instruction Stream Computer). The motivation for the design originated from the traditional decoupled architecture concept and its limits. The HiDISC approach implements an additional prefetching processor on top of a traditional access/execute architecture. Our design aims at providing low memory access latency by separating and decoupling otherwise sequential pieces of code into three streams and executing each stream on three dedicated processors. The three streams act in concert to mask the long access latencies by providing the necessary data to the upper level on time. This is achieved by separating the access-related instructions from the main computation and running them early enough on the two dedicated processors. Detailed hardware design and performance evaluation are performed with the development of an architectural simulator and compilation tools. Our performance results show that the proposed HiDISC model reduces cache misses by 19.7% and improves the overall IPC (Instructions Per Cycle) by 15.8%. With a slower memory model assuming 200 CPU cycles as memory access latency, our HiDISC improves the performance by 17.2%.

15.
Despite the increasing investment in integrated GPU and next-generation interconnect research, discrete GPUs connected by PCIe still hold the dominant position in the market, and the management of data communication between CPU and GPU continues to evolve. Initially, the programmer explicitly controlled the data transfer between CPU and GPU. To simplify programming and enable system-wide atomic memory operations, GPU vendors have developed a programming model that provides a single, virtual address space for accessing all CPU and GPU memories in the system. The page migration engine in this model automatically migrates pages between CPU and GPU on demand. To meet the needs of high-performance workloads, page sizes tend to grow larger. Limited by the low bandwidth and high latency of the interconnect compared to GDDR, migrating a larger page takes longer, which may reduce the overlap of computation and transmission, waste time migrating unrequested data, block subsequent requests, and cause serious performance decline. In this paper, we propose partial page migration, which migrates only the requested part of a page, reducing the migration unit, shortening migration latency, and avoiding the performance degradation of full page migration as pages become larger. We show that partial page migration can largely hide the performance overheads of full page migration. Compared with programmer-controlled data transmission, when the page size is 2MB and the PCIe bandwidth is 16GB/sec, full page migration is 72.72× slower, while our partial page migration achieves a 1.29× speedup. When the PCIe bandwidth is raised to 96GB/sec, full page migration is 18.85× slower, while our partial page migration provides a 1.37× speedup. Additionally, we examine the performance impact that PCIe bandwidth and migration unit size have on execution time, enabling designers to make informed decisions.
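A sketch of the fault path under partial migration as the abstract describes it; copy_to_gpu, the 64 KB unit size, and the bookkeeping set are my assumptions:

    # Sketch only: on a GPU page fault, migrate just the faulting sub-unit
    # of a large page instead of the whole page.
    PAGE_SIZE = 2 * 1024 * 1024    # 2MB pages, as in the paper's evaluation
    UNIT = 64 * 1024               # assumed migration unit

    def copy_to_gpu(page, offset, length):
        pass                       # stand-in for the real DMA over PCIe

    def on_gpu_fault(page, fault_offset, migrated_units):
        """migrated_units: set of unit indices of this page already on the GPU."""
        unit = fault_offset // UNIT
        if unit not in migrated_units:
            copy_to_gpu(page, unit * UNIT, UNIT)
            migrated_units.add(unit)   # the rest of the page stays put
        # untouched units migrate later, each on its own fault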

16.
To address network latency in data sharing on smart terminals, this paper proposes a two-phase data prefetch caching method that combines active and passive prefetching to reduce network latency and improve user experience. The method prefetches data during network idle time to cut user wait time; the two-phase prefetch policy reduces network bandwidth consumption; the coordinated active/passive prefetch algorithm improves prefetch accuracy and efficiency; and a weight-update function refreshes the client cache to limit storage consumption on the terminal. Experiments show that the method reduces user wait time by 58.2%, achieves a 92% prefetch hit rate, and adds less than 5% bandwidth overhead.
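A sketch of a weight-updated client cache; the abstract does not give the actual weight function, so the hit bonus and age decay below are invented for illustration:

    # Sketch only: age-decayed weights decide which cached item to evict.
    import time

    cache = {}   # key -> [value, weight, last_access_time]

    def touch(key):
        entry = cache[key]
        entry[1] += 1.0              # assumed: reward each hit
        entry[2] = time.time()

    def evict_one(decay=0.01):
        """Evict the entry with the lowest aged weight (cache must be non-empty)."""
        now = time.time()
        score = lambda k: cache[k][1] - decay * (now - cache[k][2])
        del cache[min(cache, key=score)]   # oldest, least-used entry goes first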

17.
Graph computation problems that exhibit irregular memory access patterns are known to show poor performance on multiprocessor architectures. Although recent studies use FPGA technology to tackle the memory wall problem of graph computation by adopting a massively multi-threaded architecture, the performance is still far from optimal memory performance due to the long memory access latency. In this paper, we propose a comprehensive reconfigurable computing approach to address the memory wall problem. First, we present an extended edge-streaming model with massive partitions to provide better load balance while taking advantage of the streaming bandwidth of external memory in processing large graphs. Second, we propose a two-level shuffle network architecture to significantly reduce the on-chip memory requirement while providing high processing throughput that matches the bandwidth of the external memory. Third, we introduce a compact storage design based on graph compression schemes and propose the corresponding encoding and decoding hardware to reduce the data volume transferred between the processing engines and external memory. We validate the effectiveness of the proposed architecture by implementing three frequently-used graph algorithms on an ML605 board, showing an up to 3.85× improvement in terms of performance-to-bandwidth ratio over previously published FPGA-based implementations.

18.
High speed networks and rapidly improving microprocessor performance make the network of workstations an extremely important tool for parallel computing, speeding up the execution of scientific applications. Shared memory is an attractive programming model for designing parallel and distributed applications, where the programmer can focus on algorithmic development rather than data partition and communication. Based on this important characteristic, systems have been developed to provide the shared memory abstraction on physically distributed memory machines, known as Distributed Shared Memory (DSM). DSM is built using specific software to combine a number of computer hardware resources into one computing environment. Such an environment not only provides an easy way to execute parallel applications, but also combines available computational resources with the purpose of speeding up their execution. DSM systems need to maintain data consistency in memory, which usually leads to communication overhead. Therefore, a number of strategies exist to overcome this overhead and improve overall performance. Strategies such as prefetching have been shown to perform well in DSM systems, since they can reduce the communication latency of data accesses to remote nodes. On the other hand, these strategies also transfer unnecessary pages to remote nodes. In this research paper, we focus on the access pattern during execution of a parallel application, and then analyze the data type and behavior of parallel applications. We propose an adaptive data classification scheme to improve the prefetching strategy, with the goal of improving overall performance. The adaptive data classification scheme classifies data according to the accessing sequence of pages, so that the home node uses the past access patterns of remote nodes to decide whether it needs to transfer related pages to them. From experimental results, we observe that our proposed method can increase the accuracy of data access in an effective prefetch strategy by reducing the number of page faults and mis-prefetches. Experimental results using our proposed classification scheme show a performance improvement of about 9-25% over the same benchmark applications running on top of the original JIAJIA DSM system.

19.
Combining existing proxy caching policies and delivery schemes, and targeting current network conditions, this paper proposes an adaptive segmentation method that overcomes the inability of existing methods to adjust themselves to changes in the popularity of streaming media objects and to uncertain user access patterns, together with an optimized delivery scheme that combines unicast with multicast and active prefetching with patch delivery. The approach yields clear gains in reducing startup latency, improving byte hit ratio, and saving backbone network bandwidth.

20.
A major overhead in software DSM (Distributed Shared Memory) is the cost of remote memory accesses necessitated by the protocol as well as induced by false sharing. This paper introduces a dynamic prefetching method implemented in the JIAJIA software DSM to reduce the system overhead caused by remote accesses. The prefetching method records the interleaving string of INV (invalidation) and GETP (getting a remote page) operations for each cached page and analyzes the periodicity of the string when a page is invalidated on a lock or barrier. A prefetching request is issued after the lock or barrier if the periodicity analysis indicates that GETP will be the next operation in the string. Multiple prefetching requests are merged into the same message if they are destined for the same host. Performance evaluation with eight well-accepted benchmarks on a cluster of sixteen PowerPC workstations shows that the prefetching scheme can significantly reduce the page fault overhead and, as a result, achieves a performance increase of 15%-20% in three benchmarks and around 8%-10% in another three. The average extra traffic caused by useless prefetches is only 7%-13% in the evaluation.
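A sketch of the periodicity test on a page's operation string, following my reading of the abstract ('I' encodes INV, 'G' encodes GETP; the string-matching details are assumptions):

    # Sketch only: if the recorded INV/GETP string is periodic with period p,
    # predict the next symbol; prefetch when that symbol would be GETP.
    def next_is_getp(history: str) -> bool:
        """history: per-page operation string such as 'IGIGI'."""
        n = len(history)
        for p in range(1, n // 2 + 1):
            if all(history[i] == history[i % p] for i in range(n)):
                return history[n % p] == 'G'   # predicted next symbol
        return False

    # Example: next_is_getp('IGIGI') is True -- period 2, next op is GETP,
    # so a prefetch request can be issued after the lock or barrier.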
