Similar Literature
18 similar documents found
1.
Constrained by power consumption, general-purpose microprocessors stopped pursuing ever higher clock frequencies more than a decade ago and turned instead to integrating more processor cores. Meanwhile, as transistor density keeps rising according to Moore's law, the number of cores that can be integrated on a single chip doubles again and again, and on-chip multi-core and many-core processors have become the mainstream of high-performance microprocessor development. Supporting a shared-memory programming model on future thousand-core general-purpose many-core processors is an inevitable trend, but the traditional directory structure for cache coherence suffers from high lookup latency, frequent directory-entry replacement, and limited scalability in hardware cost and power consumption. The sparse directory trades off the hardware overhead of the traditional directory structure against coherence-maintenance efficiency and is regarded as an energy-efficient, scalable structure for maintaining cache coherence in many-core processors. This paper surveys recent research on improving sparse-directory performance, analyzes the techniques in terms of area, access latency, power consumption and implementation complexity, and summarizes their respective strengths and weaknesses, providing a reference for the innovative design of shared-memory architectures for future high-performance many-core processors.
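To make the area/precision trade-off concrete, the sketch below shows one common way a sparse-directory entry can be organized: a few exact sharer pointers for the frequent few-sharer case, with a coarse bit-vector fallback once they overflow. The field widths are illustrative assumptions and are not taken from any specific design covered by the survey.

```c
/* Minimal sketch of a sparse-directory entry: exact pointers for the
 * common few-sharer case, a coarse vector once the pointers overflow.
 * Field widths (4 pointers, 64 cores, 8 cores per coarse bit) are
 * illustrative assumptions, not taken from the surveyed designs. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_CORES        64
#define PTRS_PER_ENTRY    4
#define COARSE_GRANULE    8   /* cores represented by one coarse bit */

typedef struct {
    uint64_t tag;                      /* cache-block tag           */
    bool     coarse_mode;              /* pointers overflowed?      */
    uint8_t  nptr;                     /* number of valid pointers  */
    uint8_t  ptr[PTRS_PER_ENTRY];      /* exact sharer core IDs     */
    uint8_t  coarse[NUM_CORES / COARSE_GRANULE / 8]; /* coarse bitmap */
} sparse_dir_entry_t;

/* Record that core `c` now shares the block tracked by `e`. */
static void dir_add_sharer(sparse_dir_entry_t *e, unsigned c)
{
    if (!e->coarse_mode) {
        for (unsigned i = 0; i < e->nptr; i++)
            if (e->ptr[i] == c) return;        /* already recorded   */
        if (e->nptr < PTRS_PER_ENTRY) {        /* still room: exact  */
            e->ptr[e->nptr++] = c;
            return;
        }
        /* Overflow: fall back to coarse tracking of existing sharers. */
        e->coarse_mode = true;
        for (unsigned i = 0; i < e->nptr; i++) {
            unsigned g = e->ptr[i] / COARSE_GRANULE;
            e->coarse[g / 8] |= 1u << (g % 8);
        }
    }
    unsigned g = c / COARSE_GRANULE;
    e->coarse[g / 8] |= 1u << (g % 8);         /* over-approximate   */
}
```

In coarse mode an invalidation must be sent to every core in a marked group, which is the precision-versus-area trade-off the surveyed techniques try to improve.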

2.
Heterogeneous many-core architectures offer an extremely high performance-per-watt ratio and have become an important direction for supercomputer architecture. However, the more complex parallel and memory hierarchies of many-core systems pose great challenges for programming and optimization, so research on parallel programming techniques for many-core systems is of great significance for lowering the programming difficulty of parallel applications on domestic many-core systems and improving the performance of parallel programs. This paper proposes a multi-mode parallel programming model under a unified architecture, comprising a heterogeneous-fusion acceleration computing model and an autonomous computing model programmed in a homogeneous fashion. The Parallel C language is designed according to this model and can effectively describe the heterogeneous parallelism of domestic many-core systems. Compared with the MPI+X usage pattern on other many-core systems, both programming and system optimization take a global view, with distinctive features in multi-level locality description, one-sided messaging, and compatibility with existing multi-core applications. A Parallel C compilation system is built on Open64, fully supporting both the acceleration computing model and the autonomous computing model, and optimizations such as data layout with automatic DMA, compiler-directed thread proxying, and topology-aware collective communication are proposed and implemented. Test results from micro-benchmarks and real applications on the Sunway TaihuLight computer system show that the Parallel C language and its compilation system deliver good performance and scalability and can effectively support large-scale applications.

3.
The I/O consistency problem is one of the key technical challenges that many-core processor design must face and solve. Traditional consistency models usually take the processor's point of view, studying consistency between memory and the CPU, and a complete I/O consistency model to guide the design of I/O consistency handling has been lacking. This work brings the I/O consistency problem into the unified framework of data consistency for many-core processors, describing data consistency from the perspective of the relationships among processor cores, I/O and memory, and thereby avoids analyzing I/O consistency in an isolated and one-sided manner. On the basis of the SPARC TSO and PSO models, generalized consistency models extended to I/O, the GIOTSO and GIOPSO models, are proposed and formally described. Finally, the models are analyzed theoretically and evaluated experimentally; the results show that the proposed consistency models can greatly improve I/O processing performance.

4.
How to implement cache coherence efficiently in shared-memory systems is a key and difficult problem in architecture design. Existing directory-based protocols are hard to implement, complex to verify, and incur large storage overhead. Targeting on-chip many-core processors, this paper proposes a synchronization-based cache coherence protocol supported by hardware structures. Instead of a directory, the scheme represents coherence information with a Bloom filter and maintains cache coherence at the synchronization points of parallel programs. Compared with existing directory-based cache coherence protocols, the scheme reduces the implementation and verification complexity of directory protocols. Evaluation with the SPLASH-2 benchmark suite shows that the synchronization-based protocol achieves performance comparable to directory-based protocols.
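As a rough software analogy of the hardware scheme (the hash functions, epoch handling and self-invalidation policy below are illustrative assumptions, not the protocol from the paper), a Bloom filter can record the blocks written since the last synchronization point, and each core can self-invalidate possibly stale blocks when it reaches that point:

```c
/* Rough software sketch of Bloom-filter-tracked writes reconciled at a
 * synchronization point (e.g. a barrier). The two hash functions and the
 * self-invalidation policy are illustrative assumptions, not the
 * hardware protocol described in the paper. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define BLOOM_BITS 1024

typedef struct { uint8_t bits[BLOOM_BITS / 8]; } bloom_t;

static void bloom_set(bloom_t *b, unsigned i) { b->bits[i / 8] |= 1u << (i % 8); }
static bool bloom_get(const bloom_t *b, unsigned i) { return b->bits[i / 8] & (1u << (i % 8)); }

static void bloom_add(bloom_t *b, uint64_t block_addr)
{
    bloom_set(b, (unsigned)(block_addr % BLOOM_BITS));
    bloom_set(b, (unsigned)((block_addr * 2654435761u) % BLOOM_BITS));
}

static bool bloom_maybe_contains(const bloom_t *b, uint64_t block_addr)
{
    return bloom_get(b, (unsigned)(block_addr % BLOOM_BITS)) &&
           bloom_get(b, (unsigned)((block_addr * 2654435761u) % BLOOM_BITS));
}

/* Each core records the blocks it wrote since the last sync point ... */
void record_local_write(bloom_t *my_writes, uint64_t block_addr)
{
    bloom_add(my_writes, block_addr);
}

/* ... and at the sync point every core self-invalidates any cached block
 * that may have been written by another core. A false positive only
 * costs an unnecessary invalidation, never a correctness violation. */
void sync_point_invalidate(const bloom_t *others_writes,
                           const uint64_t *cached_blocks, size_t n,
                           void (*invalidate)(uint64_t))
{
    for (size_t i = 0; i < n; i++)
        if (bloom_maybe_contains(others_writes, cached_blocks[i]))
            invalidate(cached_blocks[i]);
    /* the filters would then be cleared for the next epoch */
}
```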

5.
This work studies the architectural characteristics of heterogeneous platforms combining multi-core CPUs with many-core accelerators or coprocessors, parallelizes the Barnes-Hut (BH) algorithm for the N-body problem with a hybrid MPI and OpenMP programming model, uses orthogonal recursive bisection (ORB) to balance the load among processes, and further applies parallel optimization and MIC acceleration to the program. After optimization and acceleration the program performs more than 3.4 times better than the original version, with the MIC acceleration alone contributing a 1.7x improvement over the pre-acceleration version. The program scales well: with the particle count reaching the hundreds of millions, it scales to 32 nodes with a total of 4480 cores (640 CPU cores and 3840 MIC cores).
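Orthogonal recursive bisection itself is easy to state: split the particle set at the median along one axis, recurse on each half with the axis alternated, and stop when each partition corresponds to one process. A minimal serial sketch of that idea follows (illustrative only; the paper's parallel MPI implementation and its BH tree handling are not shown):

```c
/* Minimal serial sketch of orthogonal recursive bisection (ORB):
 * particles are split at the median along alternating axes until each
 * partition maps to one process. Illustrative only; the paper's MPI
 * version partitions in parallel and builds the BH tree per process. */
#include <stdlib.h>

typedef struct { double x[3]; int proc; } particle_t;

static int g_axis;
static int cmp_axis(const void *a, const void *b)
{
    double d = ((const particle_t *)a)->x[g_axis] -
               ((const particle_t *)b)->x[g_axis];
    return (d > 0) - (d < 0);
}

/* Assign process ranks [proc_lo, proc_lo + nprocs) to particles p[0..n). */
void orb_partition(particle_t *p, size_t n, int proc_lo, int nprocs, int axis)
{
    if (nprocs == 1) {
        for (size_t i = 0; i < n; i++) p[i].proc = proc_lo;
        return;
    }
    g_axis = axis;
    qsort(p, n, sizeof *p, cmp_axis);        /* median split along axis  */
    int    left_procs = nprocs / 2;
    size_t cut = n * left_procs / nprocs;    /* equal particles per proc */
    orb_partition(p,       cut,     proc_lo,              left_procs,
                  (axis + 1) % 3);
    orb_partition(p + cut, n - cut, proc_lo + left_procs, nprocs - left_procs,
                  (axis + 1) % 3);
}
```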

6.
Implementing a Hybrid Write-Back/Write-Through Policy with Write Masks in Many-Core Processors
A write-back cache policy greatly reduces the consumption of on-chip network and memory bandwidth, which is especially important for on-chip many-core (more than 16 cores) architectures. Unlike the directory/bus-based write-invalidate or write-update protocols common in multi-core systems, this paper presents an on-chip implementation of a region consistency memory model and a hardware-lock-based cache coherence protocol, and proposes keeping write masks in the L1 cache to record the byte positions of locally updated cache blocks, which solves the coherence problem that false sharing causes under a write-back policy. Two new methods for reducing the storage overhead of the masks are further proposed: by treating the relatively infrequent 1-3-byte store instructions as write-through and keeping one write-mask bit per 4 bytes in L1, the chip-area overhead of the write masks is compressed to 27.9% of that of byte granularity; and by designing a multi-way write-mask cache whose number of entries is 12.5% of the total number of L1 cache blocks, the area overhead is compressed to 17.7% of byte granularity without performance loss. The Godson-T many-core platform built in this work adopts the region consistency memory model and uses write masks to implement a hybrid write-back/write-through cache policy (write-through inside critical sections, write-back outside). Evaluation with three SPLASH-2 programs and two bio-computing programs shows that, relative to full write-through, the hybrid write-back policy generally achieves over 24% performance improvement in 32- and 64-thread configurations, performs slightly better than full write-back, and loses no performance when the two space-optimization methods are applied.
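The role of the per-byte write mask can be seen in a small sketch: on write-back, only the bytes the local core actually stored are merged into memory, so two cores that falsely share a block cannot overwrite each other's bytes. Block size and layout below are illustrative; the paper's hardware additionally compresses the mask to 4-byte granularity or to a small mask cache.

```c
/* Sketch of a write-back that merges only locally written bytes, the
 * false-sharing fix that the per-byte write mask enables. Sizes are
 * illustrative; the paper keeps the mask in L1 hardware and compresses
 * it to 4-byte granularity or a small mask cache. */
#include <stdint.h>

#define BLOCK_BYTES 64

typedef struct {
    uint8_t data[BLOCK_BYTES];
    uint8_t mask[BLOCK_BYTES / 8];   /* one bit per locally written byte */
} l1_block_t;

/* Local store: update the data and remember which byte was written. */
static void l1_store_byte(l1_block_t *b, unsigned off, uint8_t v)
{
    b->data[off] = v;
    b->mask[off / 8] |= 1u << (off % 8);
}

/* Write-back: merge only the masked bytes into memory, leaving bytes
 * written by other cores (false sharers of the same block) untouched. */
static void l1_writeback(const l1_block_t *b, uint8_t *mem_block)
{
    for (unsigned off = 0; off < BLOCK_BYTES; off++)
        if (b->mask[off / 8] & (1u << (off % 8)))
            mem_block[off] = b->data[off];
}
```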

7.
With today's massive data volumes, the computational efficiency of traditional GIS raster spatial analysis can no longer meet the demand for fast computation. Taking terrain-factor computation as an example, this work analyzes and benchmarks the performance of a multi-core CPU parallel mode based on the shared-memory model and a many-core GPU parallel mode based on the stream-processor model, and on this basis implements load-balanced task partitioning between the devices to study hybrid CPU/GPU heterogeneous parallelization. Experimental results show that, on the same single-machine hardware, the hybrid CPU/GPU parallel approach accelerates raster data analysis better than either the multi-core shared-memory scheme or the many-core stream-processor scheme alone.

8.
CC$: A Parallel Programming Language for Distributed Many-Core Platforms
CC$ is a parallel programming language intended to address the programming difficulty of distributed many-core parallel computers. The CC$ programming model is based on the Multi-BSP model and abstracts the hardware architecture of distributed many-core parallel computers into three levels. Data are classified into five categories according to storage level and sharing scope, so that sharing can be provided at different levels. CC$ also introduces a class of virtual instructions to handle data exchange between levels, realizing a logical description of data accesses. Parallel programs execute as three nested levels of Multi-BSP supersteps. CC$ has four main features: a unified programming style, built-in multi-level shared address spaces, expression-based description of data-access requests, and compiler optimization of data transfers. Tests show that CC$ programs run efficiently, that the language is easy to learn and use, and that it substantially shortens the development cycle.

9.
A Hybrid Parallel Programming Model for Hierarchical NoC
曹祥  易伟  潘红兵  高明伦  李丽 《计算机工程》2010,36(13):278-280
To better exploit the hardware performance of multi-core processors, a hybrid MPI/OpenMP parallel programming model is proposed for a hierarchical network-on-chip architecture. An MPI-based task-level parallel model provides efficient communication between clusters on the chip, while OpenMP handles communication, synchronization and data exchange among the four cores within a cluster. Experimental results show that, compared with a single parallel programming model, the hybrid model improves speedup by 20% to 50%.
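A generic MPI/OpenMP hybrid skeleton of the kind described, with MPI ranks standing in for on-chip clusters and OpenMP threads for the four cores inside a cluster, might look as follows; the work function and the cluster size are placeholders, not code from the paper.

```c
/* Generic MPI + OpenMP hybrid skeleton: MPI ranks map to on-chip
 * clusters, OpenMP threads to the cores inside a cluster. The work
 * function and the 4-thread cluster size are illustrative placeholders. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

static double compute_chunk(int rank, int tid) { return rank + 0.25 * tid; }

int main(int argc, char **argv)
{
    int provided, rank, nranks;
    /* Threads only call MPI outside parallel regions here, so the
     * FUNNELED threading level is sufficient. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;
    #pragma omp parallel num_threads(4) reduction(+:local)
    {
        /* intra-cluster parallelism: each core works on its share */
        local += compute_chunk(rank, omp_get_thread_num());
    }

    double total = 0.0;          /* inter-cluster communication via MPI */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total = %f over %d clusters\n", total, nranks);

    MPI_Finalize();
    return 0;
}
```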

10.
To improve the timeliness of detecting abrupt signals and the efficiency of various fault-detection systems, an improved permutation entropy (PE) algorithm based on a two-level parallel scheme is proposed for the Sunway TaihuLight. Across nodes, the MPI (Message Passing Interface) parallel programming model is used, and a peer-mode round-robin scheduling scheme resolves the load imbalance across multiple files. Within a core group, the Athread (accelerated thread library) parallel programming model is used; the reconstruction matrix is built by phase-space embedding, realizing data partitioning at the slave-core (CPE) level. Double buffering overlaps slave-core computation with memory access to reduce master-slave communication time, and DMA communication together with reorganized data transfers reduces the number of master-slave communications. Tests on full-life-cycle experimental data from 15 LDK UER204 rolling bearings show that a single core group achieves up to 11.86x speedup over the management-core version, and 128 core groups achieve up to a 123.73x performance improvement.
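For reference, the serial kernel being parallelized, permutation entropy, can be sketched compactly: embed the series with dimension m and delay tau, count the frequency of each ordinal pattern, and take the normalized Shannon entropy of those frequencies. The sketch below is a plain serial version under those standard definitions; the MPI/Athread two-level decomposition from the paper is not shown.

```c
/* Plain serial sketch of permutation entropy (PE): embed with dimension
 * m and delay tau, rank each window's ordinal pattern, and take the
 * normalized Shannon entropy of the pattern frequencies. */
#include <stdlib.h>
#include <math.h>

/* Lehmer-code rank of the ordinal pattern of w[0..m), in [0, m!). */
static unsigned pattern_rank(const double *w, int m)
{
    unsigned rank = 0;
    for (int i = 0; i < m; i++) {
        unsigned smaller = 0;
        for (int j = i + 1; j < m; j++)
            if (w[j] < w[i]) smaller++;
        rank = rank * (m - i) + smaller;
    }
    return rank;
}

double permutation_entropy(const double *x, size_t n, int m, int tau)
{
    unsigned nfact = 1;
    for (int i = 2; i <= m; i++) nfact *= i;

    size_t span = (size_t)(m - 1) * (size_t)tau;
    size_t nwin = (n > span) ? n - span : 0;
    if (nwin == 0) return 0.0;

    size_t *count = calloc(nfact, sizeof *count);
    double *w = malloc((size_t)m * sizeof *w);
    for (size_t s = 0; s < nwin; s++) {
        for (int i = 0; i < m; i++) w[i] = x[s + (size_t)i * tau];
        count[pattern_rank(w, m)]++;
    }

    double h = 0.0;
    for (unsigned k = 0; k < nfact; k++)
        if (count[k]) {
            double p = (double)count[k] / (double)nwin;
            h -= p * log(p);
        }
    free(w);
    free(count);
    return h / log((double)nfact);   /* normalized to [0, 1] */
}
```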

11.
Advances at an unprecedented rate in computer hardware and networking technologies have made many-core computing affordable and readily available in a matter of a few years. Nonetheless, it poses challenges for programmers to build scalable parallel software. Optimizing parallel programs for a many-core platform is a multifaceted problem in which system and architectural factors must be taken into account. In this paper, we tackle this problem by implementing parallel programs with the different available programming paradigms and evaluating application behaviors on the TILE64 many-core platform. Specifically, we investigate a hybrid producer-write plus consumer-read shared-memory programming paradigm for the implementation of a master-worker video decoder and encoder on this many-core platform. Experimental results show that the proposed implementation achieves competitive performance speedup, scaling well with the number of available cores and delivering up to a fourfold performance improvement over other implementations on the decoding of a sample 1080P video.

12.
Moore’s law will grant computer architects ever more transistors for the foreseeable future, and the challenge is how to use them to deliver efficient performance and flexible programmability. We propose a many-core architecture, Godson-T, to meet this challenge. On the one hand, Godson-T features a region-based cache coherence protocol, asynchronous data-transfer agents and hardware-supported synchronization mechanisms, providing the full potential for highly efficient utilization of on-chip resources. On the other hand, Godson-T features a highly efficient runtime system, a Pthreads-like programming model, and versatile parallel libraries, which make this many-core design flexibly programmable. This hardware/software cooperative design methodology bridges high-end computing and mainstream programmers. Experimental evaluations are conducted on a cycle-accurate simulator of Godson-T. The results show that the proposed architecture has good scalability, fast synchronization, high computational efficiency, and flexible programmability.

13.
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for the implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system in which the different stages and the synchronization operations between them alternate between a distributed-memory address space and a shared-memory address space to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data-processing bandwidth, where HyMR demonstrates improvements by factors of 3.1× and 3.2× respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and find its performance to be superior to that of message passing.

14.
With advances in integrated-circuit technology, many-core architectures have become a computing platform of growing interest. LU decomposition is one of the core algorithms widely used in scientific and engineering computing; although there is already a large body of parallelization work on traditional parallel architectures, relatively little of it takes the characteristics of the new many-core architectures into account. This paper systematically studies the parallelization of LU decomposition on many-core architectures from three aspects: load balancing, latency tolerance and a performance analysis model. The contributions are as follows. First, to remedy the poor load balance of the two-dimensional round-robin (cyclic) distribution scheme, a new zigzag distribution scheme is proposed; experiments show that, without any optimization, it outperforms the former by 20%, rising to 40% after optimization. Second, an analytical model of the performance speedup is proposed, and experiments quantify the gap between measured and theoretical speedup, finding that when on-chip storage is used properly to hide memory latency and the matrix blocking parameters are chosen appropriately, the measured speedup comes close to the theoretical value. The experiments also identify the two main reasons why measured performance falls short of the prediction: limited memory bandwidth and resource contention in the on-chip network.
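The contrast between the two-dimensional round-robin (cyclic) distribution and a zigzag distribution comes down to the block-to-core owner function. The following is a hypothetical illustration of a serpentine mapping that reverses direction on alternate groups of block rows; it conveys the idea but is not necessarily the exact mapping used in the paper.

```c
/* Hypothetical illustration of block-to-core ownership: the standard 2D
 * round-robin (cyclic) mapping versus a serpentine ("zigzag") variant
 * that reverses direction on alternate groups of block rows. One
 * plausible reading of the idea; the paper's exact mapping may differ. */

/* Cores arranged as a pr x pc grid; (bi, bj) are block coordinates. */
static int owner_cyclic(int bi, int bj, int pr, int pc)
{
    return (bi % pr) * pc + (bj % pc);
}

static int owner_zigzag(int bi, int bj, int pr, int pc)
{
    int col = bj % pc;
    if ((bi % (2 * pr)) >= pr)        /* reverse direction every pr rows */
        col = pc - 1 - col;
    return (bi % pr) * pc + col;
}
```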

15.
Rapid changes in platform hardware resources with the evolution of many-core architectures will require a fundamental reexamination of mainstream system-software design decisions to support multiple cores and to efficiently manage on-chip hardware resources shared among the multiple cores. In turn, the evolution of many-core processor architectures will be successfully sustained by the new capabilities and features added to the system software, perhaps while requiring substantial support from hardware. The guest editors introduce five articles on the interaction of computer architecture and operating systems for this special issue of IEEE Micro.

16.
Nowadays, shared-memory parallel architectures have evolved and new programming frameworks have appeared that exploit these architectures: OpenMP, TBB, Cilk Plus, ArBB and OpenCL. This article focuses on the most widely adopted of these frameworks in commercial and scientific settings, presenting a comparative study and an evaluation of them. The study covers several capabilities, such as task deployment, scheduling techniques, and programming-language abstractions. The evaluation measures three dimensions: code development complexity, performance, and efficiency, measured as speedup per watt. For this evaluation, several parallel benchmarks have been implemented with each framework. These benchmarks are designed to cover certain scenarios, such as regular memory access or irregular computation. The conclusions highlight several points: some frameworks (OpenMP, Cilk Plus) are better for quickly transforming a sequential code, others (TBB) have a small footprint that is ideal for small problems, and others (OpenCL) are suited to heterogeneous architectures but require a very complex development process. The conclusions also show that vectorization support is more critical than multitasking for achieving efficiency on those problems where this approach fits.
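The point about vectorization mattering more than multitasking is easy to relate to OpenMP, where the two are expressed separately: parallel for spreads iterations across threads, while simd asks the compiler to vectorize each thread's chunk. A trivial example of combining both (not taken from the paper's benchmarks):

```c
/* Trivial OpenMP example separating the two sources of speedup the
 * comparison discusses: multithreading (parallel for) and vectorization
 * (simd). Not taken from the paper's benchmark suite. */
#include <stddef.h>

void saxpy(float a, const float *x, float *y, size_t n)
{
    /* threads across cores, SIMD lanes within each thread */
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```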

17.
A number of highly-threaded, many-core architectures hide memory-access latency by low-overhead context switching among a large number of threads. The speedup of a program on these machines depends on how well the latency is hidden. If the number of threads were infinite, theoretically, these machines could provide the performance predicted by the PRAM analysis of these programs. However, the number of threads per processor is not infinite, and is constrained by both hardware and algorithmic limits. In this paper, we introduce the Threaded Many-core Memory (TMM) model which is meant to capture the important characteristics of these highly-threaded, many-core machines. Since we model some important machine parameters of these machines, we expect analysis under this model to provide a more fine-grained and accurate performance prediction than the PRAM analysis. We analyze 4 algorithms for the classic all pairs shortest paths problem under this model. We find that even when two algorithms have the same PRAM performance, our model predicts different performance for some settings of machine parameters. For example, for dense graphs, the dynamic programming algorithm and Johnson’s algorithm have the same performance in the PRAM model. However, our model predicts different performance for large enough memory-access latency and validates the intuition that the dynamic programming algorithm performs better on these machines. We validate several predictions made by our model using empirical measurements on an instantiation of a highly-threaded, many-core machine, namely the NVIDIA GTX 480.
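The dynamic programming algorithm referred to for all-pairs shortest paths is, in the usual formulation, the Floyd-Warshall recurrence; its regular, memory-bound triple loop is what makes latency hiding the dominant concern on such machines. A plain serial version for reference (an illustration only; the threaded GPU implementations analyzed under the TMM model are not shown):

```c
/* Plain serial Floyd-Warshall, the dynamic-programming APSP algorithm
 * in its usual formulation. dist is an n x n row-major distance matrix,
 * initialized with edge weights (and a large value for missing edges). */
#include <stddef.h>

void floyd_warshall(double *dist, size_t n)
{
    for (size_t k = 0; k < n; k++)
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++) {
                double via_k = dist[i * n + k] + dist[k * n + j];
                if (via_k < dist[i * n + j])
                    dist[i * n + j] = via_k;
            }
}
```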

18.
Future many-core processor systems require scalable solutions that conventional architectures currently do not provide. This paper presents a novel architecture that demonstrates the required scalability. It is based on a model of computation developed in the AETHER project to provide a safe and composable approach to concurrent programming. The model supports a dynamic approach to concurrency that enables self-adaptivity in any environment, so the model is quite general. It is implemented here in the instruction set of a dynamically scheduled RISC processor, and many such processors form a microgrid. Binary compatibility over arbitrary clusters of such processors, together with inherent scalability in both area and performance as concurrency is exploited, makes this a very promising development for the era of many-core chips. This paper introduces the model, the processor and chip architecture, and its emulation on a range of computational kernels. It also estimates the area of the structures required to support this model in silicon.

