Loop-Nest Auto-Vectorization Based on SLP
Citation: WEI Shuai, ZHAO Rong-Cai, YAO Yuan. Loop-Nest Auto-Vectorization Based on SLP [J]. Journal of Software, 2012, 23(7): 1717-1728.
Authors: WEI Shuai, ZHAO Rong-Cai, YAO Yuan
Affiliation: Information Engineering College, PLA Information Engineering University
Funding: National High-Tech R&D Program of China (863 Program) (2009AA012201); National Science and Technology Major Project of China ("HGJ") (2009ZX01036)
Abstract: More and more processors now integrate SIMD (single instruction multiple data) extensions, and most existing compilers implement automatic vectorization, but they generally target only the innermost loop; a general, practical vectorization method for loop nests has been lacking. This paper therefore proposes an SLP (superword level parallelism) oriented vectorization method for loop nests. The method analyzes each loop level from the outermost inward, collecting for each loop a set of attributes that affect vectorization quality, chiefly whether the loop admits direct unroll-and-jam, how many array references are contiguous with respect to the loop index, and the region covered by the loop. Based on these attributes it decides at which loop levels to apply direct unroll-and-jam, and finally vectorizes the statements in the loop with SLP. Experimental results show that, compared with innermost-loop vectorization and simple outer-loop vectorization, the algorithm improves the average speedup by factors of 2.13 and 1.41 respectively, and achieves speedups of up to 5.3 on some common kernel loops.
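As a minimal illustration (not taken from the paper), direct unroll-and-jam on an outer loop turns one statement into several adjacent, isomorphic statements, which is exactly the input pattern SLP can later pack into SIMD operations. The sketch below assumes a row-sum kernel where the store target `s[i]` is contiguous in the outer index `i`, so unroll-and-jamming the `i` loop pays off:

```c
#include <assert.h>

#define N 8

/* Baseline loop nest: one statement per inner iteration. */
static void row_sums_scalar(float a[N][N], float s[N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s[i] += a[i][j];
}

/* Outer loop unrolled by 4 and the inner bodies jammed together.
 * The four adjacent, isomorphic statements below are the pattern SLP
 * packs into one 4-wide SIMD add; note that s[i..i+3] is contiguous
 * in i while a[i..i+3][j] is strided, which is why the paper weighs
 * per-loop contiguity of each array reference when choosing where
 * to unroll-and-jam. */
static void row_sums_unroll_and_jam(float a[N][N], float s[N]) {
    for (int i = 0; i < N; i += 4)
        for (int j = 0; j < N; j++) {
            s[i + 0] += a[i + 0][j];
            s[i + 1] += a[i + 1][j];
            s[i + 2] += a[i + 2][j];
            s[i + 3] += a[i + 3][j];
        }
}
```

Both versions perform the same additions in the same per-element order, so their results match exactly.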

Keywords: SIMD, vectorization, dependence analysis, nested loop, superword level parallelism
Received: 2011-04-19
Revised: 2011-07-21

Loop-Nest Auto-Vectorization Based on SLP
WEI Shuai,ZHAO Rong-Cai and YAO Yuan.Loop-Nest Auto-Vectorization Based on SLP[J].Journal of Software,2012,23(7):1717-1728.
Authors: WEI Shuai, ZHAO Rong-Cai, YAO Yuan
Affiliation: Information Engineering College, PLA Information Engineering University, Zhengzhou 450002, China
Abstract: Nowadays, more and more processors integrate SIMD (single instruction multiple data) extensions, and most compilers implement automatic vectorization, but vectorization usually targets only the innermost loop; no easy, general approach has been available for vectorizing a loop nest. This paper presents an automatic approach that vectorizes nested loops from the outermost level inward. It first determines, through dependence analysis, whether each loop admits direct unroll-and-jam. It then collects, per loop, the attributes that influence vectorization performance, including whether direct unroll-and-jam is legal, the number of array references that are contiguous with respect to the loop index, and the loop region. An aggressive algorithm uses these attributes to decide which loops to unroll-and-jam, and SIMD code is finally generated with the SLP (superword level parallelism) algorithm. Test results on an Intel platform show that this approach improves the average speedup of several numerical/video/communication kernels by factors of 2.13 and 1.41 over innermost-loop vectorization and simple outer-loop vectorization respectively, and the speedup of some common kernels reaches 5.3.
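To make the final SLP step concrete, here is a hedged sketch (not the paper's implementation) of what packing isomorphic statements into a superword looks like, assuming an SSE-capable x86 target; the scalar function is the unrolled pattern SLP detects, and the packed function is the single 4-wide operation it emits:

```c
#include <assert.h>
#include <xmmintrin.h>  /* SSE intrinsics; assumes an x86 target */

/* Four adjacent, isomorphic statements over contiguous memory:
 * the superword pattern that SLP looks for. */
static void add4_scalar(const float *a, const float *b, float *c) {
    c[0] = a[0] + b[0];
    c[1] = a[1] + b[1];
    c[2] = a[2] + b[2];
    c[3] = a[3] + b[3];
}

/* The same four statements packed into one 4-wide SIMD add. */
static void add4_packed(const float *a, const float *b, float *c) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(c, _mm_add_ps(va, vb));
}
```

The profitability question the paper addresses is upstream of this step: unroll-and-jam must first be applied at the loop level whose index makes the most array references contiguous, so that the resulting adjacent statements touch unit-stride memory like `a`, `b`, and `c` above.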
Keywords: SIMD (single instruction multiple data), vectorization, data dependence analysis, nested loop, SLP (superword level parallelism)
This article is indexed by CNKI and other databases.