Similar Documents (20 results)
1.
An efficient convolution optimization method based on matrix transformation, MCFA, is proposed. The input matrix is partitioned into blocks according to the width of the output matrix and the convolution kernel size, the input sub-blocks and the kernel matrix are transformed with the im2col method, and the matrix-matrix multiplication acceleration library packaged in the Compute Unified Device Architecture (CUDA) is used to speed up the convolution. The output sub-blocks are then arranged in order to obtain the complete output matrix. Experimental results show that, compared with im2col, the method saves 61.25% of the computation space and, compared with MEC, improves computation speed by 20.57%; blocking also relieves the cache pressure caused by large input matrices and improves cache utilization.
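The abstract builds on the standard im2col transformation, so a minimal Python sketch of that baseline may help; this is not the paper's blocked MCFA implementation, and the shapes, stride handling and function name are illustrative assumptions.

import numpy as np

def im2col_conv2d(x, w, stride=1):
    """x: (C, H, W) input, w: (K, C, R, S) filters; returns (K, Ho, Wo)."""
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho, Wo = (H - R) // stride + 1, (W - S) // stride + 1
    # Unfold every receptive field into a column: (C*R*S, Ho*Wo)
    cols = np.empty((C * R * S, Ho * Wo))
    idx = 0
    for i in range(0, H - R + 1, stride):
        for j in range(0, W - S + 1, stride):
            cols[:, idx] = x[:, i:i + R, j:j + S].ravel()
            idx += 1
    # The convolution is now a single GEMM: (K, C*R*S) x (C*R*S, Ho*Wo)
    out = w.reshape(K, -1) @ cols
    return out.reshape(K, Ho, Wo)

x = np.random.rand(3, 8, 8)
w = np.random.rand(4, 3, 3, 3)
print(im2col_conv2d(x, w).shape)  # (4, 6, 6)

A blocked variant along the lines described above would slice the column matrix into groups of output columns so that each sub-block and its GEMM fit in cache before the results are stitched back together.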

2.
To address the low recommendation accuracy on sparse data, a sparse-data recommendation algorithm based on a multiple-kernel-learning convolutional neural network is proposed. Item side information is fed into a convolutional neural network to learn features, the resulting vectors are combined in a reproducing kernel Hilbert space, and multiple kernel learning is used to strengthen the network's feature-learning ability. A non-negative matrix factorization model is initialized with the learned convolutional feature set and used to predict the missing ratings. Experimental results show that the algorithm effectively improves recommendation performance on sparse datasets and validates the effectiveness of the multiple-kernel-learning convolutional neural network.

3.
狄新凯  杨海钢 《计算机工程》2021,47(7):189-195,204
To eliminate the useless operations that arise in the forward pass of a convolutional neural network because of the sparsity of the model parameters, a dataflow and a parallel accelerator for sparsified neural network models are designed on a field-programmable gate array (FPGA). A dedicated logic module screens out the non-zero entries of the feature-map matrix and the convolution-filter matrix along the input-channel dimension and passes only the valid data to an array of digital signal processors for multiply-accumulate operations. All related intermediate results are then combined by an adder tree to produce the final output feature-map points, while coarse-grained parallelism is exploited along the feature-map width, height and output-channel dimensions and the best design parameters are searched for. Experiments on Xilinx devices show that the design reaches 678.2 GOPS overall on the VGG16 convolutional layers with a power efficiency of 69.45 GOPS/W, a considerable improvement over FPGA-based dense-network and sparse-network accelerators.

4.
The storage and computational resources required by convolutional neural network models far exceed what mobile and embedded devices can provide, so a lightweight convolutional neural network architecture, SFNet, is proposed. SFNet introduces the concept of a split module: the network's output feature maps are "split" into segments, each segment is convolved with kernels of a different size, and the resulting feature maps are concatenated and fused across channels by 1×1 convolutions. Experiments show that, with the same number of kernels and input feature-map channels, SFNet has fewer parameters and less computation and achieves higher classification accuracy than current general-purpose lightweight networks. Compared with standard convolution, the split module matches or even exceeds its classification accuracy while greatly reducing network complexity.
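A hedged PyTorch sketch of the split-module idea as described (channel split, per-segment kernels of different sizes, concatenation, 1×1 fusion); the class name, channel counts and kernel sizes are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class SplitModule(nn.Module):
    """Illustrative split module: slice the channels, run each slice through a
    convolution of a different kernel size, then fuse the concatenated result
    with a 1x1 convolution."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.splits = len(kernel_sizes)
        per = channels // self.splits
        self.branches = nn.ModuleList(
            [nn.Conv2d(per, per, k, padding=k // 2) for k in kernel_sizes]
        )
        self.fuse = nn.Conv2d(per * self.splits, channels, kernel_size=1)

    def forward(self, x):
        chunks = torch.chunk(x, self.splits, dim=1)
        out = torch.cat([conv(c) for conv, c in zip(self.branches, chunks)], dim=1)
        return self.fuse(out)

y = SplitModule(48)(torch.randn(1, 48, 32, 32))
print(y.shape)  # torch.Size([1, 48, 32, 32])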

5.
Word embeddings and convolutional neural networks (CNNs) are combined to tackle sentiment classification. First, a Skip-Gram model is trained to obtain a word embedding for every word in the dataset, and the embeddings of the words in each sample are assembled into a two-dimensional feature matrix that serves as the CNN input; during training, the input features are also updated as parameters at every iteration. Second, a network structure with convolution kernels of three different sizes is designed to automatically extract multiple kinds of local abstract features. Compared with traditional machine learning methods, the proposed word-embedding-and-CNN sentiment classification model improves classification accuracy by 5.04%.

6.
A compact event-driven convolution processor for feature extraction from dynamic vision sensor data is designed. It contains a 32×32 accumulation array, a RAM array for storing convolution kernels, left/right shift modules, a control module and an asynchronous event readout module. To reduce area, a 2-bit 32×32 RAM array stores the required kernels, and in the accumulation array 7-bit binary counters replace conventional adders to accumulate the kernel values. In a 0.18 μm CMOS process each convolution cell occupies 37.5 μm × 40 μm, the minimum input-to-output latency per event is 17 ns, and the maximum event rate that can be processed is 12.5 Meps. A recognition system is built on top of the processor, with 16 convolution processors extracting features and a spiking neural network performing classification. Experimental results show that the compact processor with 2-bit kernels performs the convolution of input events accurately, and the recognition system built on it reaches 90.57% accuracy on the MNIST database.

7.
Existing classification methods for functional magnetic resonance imaging (fMRI) data cannot effectively extract its local features, which hurts classification accuracy, so a convolutional neural network based fMRI classification method is proposed. A CNN structure is first designed, and a restricted Boltzmann machine (RBM) model is built according to the CNN's kernel size. Data constructed from the voxels in the fMRI regions of interest are then used to pre-train the RBM, and the learned weight matrices are transformed to initialize the CNN's kernel parameters. Finally, the initialized model is trained as a whole to obtain the classification model. Experiments on the Haxby and LPD datasets show that the method effectively improves the classification accuracy of fMRI data.

8.
Traditional support-vector-machine based algorithms for imbalanced data classification involve matrix operations and cannot be applied to large-scale imbalanced datasets. To address this, a large-scale imbalanced data classification algorithm based on a differential Siamese convolutional neural network is proposed. A differential convolution mechanism strengthens the deep structural representation ability of the network, improving its discriminative power without changing the number of filters. The differential Siamese network optimizes the feature maps of each class separately; each class is associated with several hyperplanes, and the label of an input sample is decided by its distances to these hyperplanes. Experiments on several imbalanced datasets show that the algorithm achieves good classification performance.

9.
Because the number of convolution kernels in a convolutional neural network is usually chosen by experience, a method that determines it from image edge statistics is proposed. An edge detection operator is first applied to the training images, and edge blocks are extracted from the edge images according to the kernel size of the convolutional layer; the extracted edge blocks are then aggregated into an edge feature matrix; finally, the variance of each column of this matrix is computed, sorted and normalized, and the number of edge types with the larger variances is taken as the kernel count. Experiments on the MNIST and Chars74K datasets show that the method adapts the kernel count to the characteristics of the dataset, yields network models whose size fits the specific dataset, and achieves high classification accuracy.

10.
李南星  盛益强  倪宏 《计算机工程》2020,46(4):85-90,96
In recommender systems, traditional matrix factorization cannot extract user and item features, while neural collaborative filtering (NCF) adds a multilayer perceptron to the factorization model but cannot effectively exploit side information beyond user and item IDs. A new conditional convolution method is therefore proposed: item features are used as the input and user features as the convolution kernels, so the weights are not shared, which gives conditional convolution stronger feature extraction and combination ability without increasing the number of parameters. On this basis, conditional convolution can incorporate various kinds of side information for personalized recommendation. Experiments show that, compared with the NCF model, the method improves the recommendation hit rate by 3.11% on implicit feedback data and reduces the rating prediction error by 2.47% on explicit feedback data.
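A rough NumPy sketch of the stated idea of using the user vector as convolution kernels over the item vector, so the weights differ per user instead of being shared; the embedding sizes, kernel width and ReLU are assumptions, and the actual model would also fold side information into these features.

import numpy as np

def conditional_conv(item_feat, user_feat, kernel_width=4):
    """Reshape the user embedding into 1-D kernels and slide them over the
    item embedding, producing a per-user feature map (illustrative only)."""
    kernels = user_feat.reshape(-1, kernel_width)                 # (num_kernels, kw)
    n = item_feat.size - kernel_width + 1
    windows = np.stack([item_feat[i:i + kernel_width] for i in range(n)])  # (n, kw)
    return np.maximum(windows @ kernels.T, 0.0)                   # ReLU feature map (n, num_kernels)

item = np.random.rand(32)   # item embedding
user = np.random.rand(16)   # user embedding -> 4 kernels of width 4
print(conditional_conv(item, user).shape)  # (29, 4)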

11.
Fine-grained task-parallel general matrix multiplication on GPUs
Dense linear algebra is essential for many practical applications such as pattern recognition and bioinformatics, and general matrix multiplication (GEMM) is its fundamental building block. In cuBLAS and MAGMA, GEMM is implemented as a set of kernel functions that reach very high performance on large GEMMs. However, existing implementations deliver limited performance on batches of small GEMMs, and they cannot automatically scale with load balance across multiple GPUs of different capabilities. Task-parallel GEMM (TPGEMM) is proposed, which realizes batched and multi-GPU matrix multiplication with fine-grained task parallelism: the computation of one or more GEMMs is split into many tasks that are dynamically scheduled onto one or more GPUs. TPGEMM avoids the overhead of launching multiple kernels for batched multiplication and achieves significantly higher performance than cuBLAS and MAGMA on batched GEMMs. Built on low-overhead fine-grained task scheduling, TPGEMM also supports automatic parallelization of a single GEMM across multiple GPUs, reaching nearly 100% scaling efficiency on a workstation with four GPUs of different performance.
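A CPU-thread sketch of the task-decomposition idea only (the paper schedules this kind of tile task onto one or more GPUs with a dedicated runtime); the tile size, worker count and the NumPy/thread-pool substitution are assumptions.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def task_parallel_gemm(A, B, tile=256, workers=4):
    """Split one GEMM into independent output-tile tasks and let a pool of
    workers pick them up; batched GEMMs would simply add more tasks to the
    same pool instead of launching one kernel per matrix."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    def compute_tile(i, j):
        C[i:i + tile, j:j + tile] = A[i:i + tile, :] @ B[:, j:j + tile]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(compute_tile, i, j)
                   for i in range(0, m, tile) for j in range(0, n, tile)]
        for f in futures:
            f.result()
    return C

A = np.random.rand(1000, 800)
B = np.random.rand(800, 600)
print(np.allclose(task_parallel_gemm(A, B), A @ B))  # True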

12.
Although matrix multiplication plays an essential role in a wide range of applications, previous works only focus on optimizing dense or sparse matrix multiplications. The Sparse Approximate Matrix Multiply (SpAMM) is an algorithm to accelerate the multiplication of decay matrices, whose sparsity lies between that of dense and sparse matrices. In addition, large-scale decay matrix multiplication is performed in scientific applications to solve cutting-edge problems. To optimize large-scale decay matrix multiplication using SpAMM on supercomputers such as Sunway Taihulight, we present swSpAMM, an optimized SpAMM algorithm that adapts the computation characteristics to the architecture features of Sunway Taihulight. Specifically, we propose both intra-node and inter-node optimizations to accelerate swSpAMM for large-scale execution. For intra-node optimizations, we explore algorithm parallelization and a block-major data layout tailored to better utilize the architectural advantages of the Sunway processor. For inter-node optimizations, we propose a matrix organization strategy for better distributing sub-matrices across nodes and a dynamic scheduling strategy for improving load balance across nodes. We compare swSpAMM with the existing GEMM library on a single node as well as with large-scale matrix multiplication methods on multiple nodes. The experiment results show that swSpAMM achieves a speedup of up to 14.5× over the xMath library on a single node and 2.2× over the 2D GEMM method on multiple nodes.
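A minimal serial Python sketch of the SpAMM recursion itself, not of swSpAMM's Sunway-specific optimizations: recurse on 2×2 block quadrants and skip a block product whenever the product of the blocks' Frobenius norms falls below a tolerance, which is what makes decay matrices cheap to multiply approximately. Square power-of-two sizes, the leaf size and the tolerance value are assumptions.

import numpy as np

def spamm(A, B, tau=1e-3, leaf=64):
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    C = np.zeros((n, n))
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                Aik = A[i*h:(i+1)*h, k*h:(k+1)*h]
                Bkj = B[k*h:(k+1)*h, j*h:(j+1)*h]
                # Skip negligible block products (the approximation step)
                if np.linalg.norm(Aik) * np.linalg.norm(Bkj) > tau:
                    C[i*h:(i+1)*h, j*h:(j+1)*h] += spamm(Aik, Bkj, tau, leaf)
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(spamm(A, B, tau=0.0), A @ B))  # exact when nothing is dropped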

13.
In this work, we address the efficient realization of block-Jacobi preconditioning on graphics processing units (GPUs). This task requires the solution of a collection of small and independent linear systems. To fully realize this implementation, we develop a variable-size batched matrix inversion kernel that uses Gauss-Jordan elimination (GJE) along with a variable-size batched matrix–vector multiplication kernel that transforms the linear systems' right-hand sides into the solution vectors. Our kernels make heavy use of the increased register count and the warp-local communication associated with newer GPU architectures. Moreover, in the matrix inversion, we employ an implicit pivoting strategy that migrates the workload (i.e., operations) to the place where the data resides instead of moving the data to the executing cores. We complement the matrix inversion with extraction and insertion strategies that allow the block-Jacobi preconditioner to be set up rapidly. The experiments on NVIDIA's K40 and P100 architectures reveal that our variable-size batched matrix inversion routine outperforms the CUDA basic linear algebra subroutine (cuBLAS) library functions that provide the same (or even less) functionality. We also show that the preconditioner setup and preconditioner application cost can be somewhat offset by the faster convergence of the iterative solver.
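A serial NumPy sketch of Gauss-Jordan elimination for matrix inversion, the kernel that the batched GPU routine applies to many small diagonal blocks at once; note this sketch uses explicit partial pivoting rather than the implicit pivoting strategy described in the abstract.

import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        p = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
        M[[col, p]] = M[[p, col]]                   # swap pivot row into place
        M[col] /= M[col, col]                       # normalize the pivot row
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]          # eliminate the column everywhere else
    return M[:, n:]

A = np.random.rand(8, 8) + 8 * np.eye(8)
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(8)))  # True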

14.
Recently, several experimental studies have been conducted on block data layout in conjunction with tiling as a data transformation technique to improve cache performance. In this paper, we analyze cache and translation look-aside buffer (TLB) performance of such alternate layouts (including block data layout and Morton layout) when used in conjunction with tiling. We derive a tight lower bound on TLB performance for standard matrix access patterns, and show that block data layout and Morton layout achieve this bound. To improve cache performance, block data layout is used in concert with tiling. Based on the cache and TLB performance analysis, we propose a data block size selection algorithm that finds a tight range for the optimal block size. To validate our analysis, we conducted simulations and experiments using tiled matrix multiplication, LU decomposition, and Cholesky factorization. For matrix multiplication, simulation results using UltraSparc II parameters show that tiling and block data layout, with a block size given by our block size selection algorithm, reduce TLB misses by up to 93 percent compared with other techniques. The total miss cost is reduced considerably. Experiments on several platforms show that tiling with block data layout achieves up to 50 percent performance improvement over other techniques that use conventional layouts. Morton layout is also analyzed and compared with block data layout. Experimental results show that matrix multiplication using block data layout is up to 15 percent faster than that using Morton data layout.
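A short sketch of the tiling loop structure the analysis is about; a real block-data-layout implementation would additionally store each T×T tile contiguously in memory, which is not shown here, and the tile size is an arbitrary assumption rather than one produced by the paper's selection algorithm.

import numpy as np

def tiled_matmul(A, B, T=64):
    """Blocked matrix multiplication: each TxT tile of A and B is reused
    across a whole tile row/column of C, improving cache and TLB reuse.
    Assumes square matrices whose size is a multiple of T."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, T):
        for k in range(0, n, T):
            for j in range(0, n, T):
                C[i:i+T, j:j+T] += A[i:i+T, k:k+T] @ B[k:k+T, j:j+T]
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(tiled_matmul(A, B), A @ B))  # True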

15.
In this paper, we address the matrix chain multiplication problem, i.e., the multiplication of several matrices. Although several studies have investigated the problem, our approach has some different points. First, we propose MapReduce algorithms that allow us to provide scalable computation for large matrices. Second, we transform the matrix chain multiplication problem from sequential multiplications of two matrices into a single multiplication of several matrices. Since matrix multiplication is associative, this approach helps to improve the performance of the algorithms. To implement the idea, we adopt multi-way join algorithms in MapReduce that have been studied in recent years. In our experiments, we show that the proposed algorithms are fast and scalable, compared to several baseline algorithms.
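A small worked example (with made-up sizes, not taken from the paper) of why associativity matters for a matrix chain: the two parenthesisations of the same three-matrix product differ by two orders of magnitude in scalar multiplications, which is the cost structure any chain-multiplication strategy is trying to exploit.

# Multiplying a (p x q) matrix by a (q x r) matrix costs p*q*r scalar multiplications.
def cost(p, q, r):
    return p * q * r

# A is 10 x 1000, B is 1000 x 5, C is 5 x 500
left_first  = cost(10, 1000, 5) + cost(10, 5, 500)       # (AB)C
right_first = cost(1000, 5, 500) + cost(10, 1000, 500)   # A(BC)
print(left_first, right_first)  # 75000 7500000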

16.
Recursive array layouts and fast matrix multiplication
The performance of both serial and parallel implementations of matrix multiplication is highly sensitive to memory system behavior. False sharing and cache conflicts cause traditional column-major or row-major array layouts to incur high variability in memory system performance as matrix size varies. This paper investigates the use of recursive array layouts to improve performance and reduce variability. Previous work on recursive matrix multiplication is extended to examine several recursive array layouts and three recursive algorithms: standard matrix multiplication and the more complex algorithms of Strassen (1969) and Winograd. While recursive layouts significantly outperform traditional layouts (reducing execution times by a factor of 1.2-2.5) for the standard algorithm, they offer little improvement for Strassen's and Winograd's algorithms. For a purely sequential implementation, it is possible to reorder computation to conserve memory space and improve performance by between 10 and 20 percent. Carrying the recursive layout down to the level of individual matrix elements is shown to be counterproductive; a combination of recursive layouts down to canonically ordered matrix tiles instead yields higher performance. Five recursive layouts with successively increasing complexity of address computation are evaluated, and it is shown that addressing overheads can be kept under control even for the most computationally demanding of these layouts.
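A sketch of the Morton (Z-order) address computation that recursive layouts rely on: interleaving the bits of the row and column indices keeps nearby blocks nearby in memory, which is what reduces conflict misses relative to row- or column-major layouts. The bit width is an assumption.

def morton_index(row, col, bits=16):
    """Interleave the low 'bits' bits of row and col into a Z-order address."""
    idx = 0
    for b in range(bits):
        idx |= ((row >> b) & 1) << (2 * b + 1)
        idx |= ((col >> b) & 1) << (2 * b)
    return idx

# The 4 elements of a 2x2 block map to consecutive addresses 0..3
print([morton_index(r, c) for r in range(2) for c in range(2)])  # [0, 1, 2, 3]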

17.
Expressing scientific computations in terms of BLAS, and in particular the general dense matrix-matrix multiplication (GEMM), is of fundamental importance for obtaining high performance portability across architectures. However, GEMMs for small matrices of sizes smaller than 32 are not sufficiently optimized in existing libraries. We consider the computation of many small GEMMs and its performance portability for a wide range of computer architectures, including Intel CPUs, ARM, IBM, Intel Xeon Phi, and GPUs. These computations often occur in applications like big data analytics, machine learning, high-order finite element methods (FEM), and others. The GEMMs are grouped together in a single batched routine. For these cases, we present algorithms and their optimization techniques that are specialized for the matrix sizes and architectures of interest. We derive a performance model and show that the new developments can be tuned to obtain performance that is within 90% of the optimal for any of the architectures of interest. For example, on a V100 GPU for square matrices of size 32, we achieve an execution rate of about 1600 gigaFLOP/s in double-precision arithmetic, which is 95% of the theoretically derived peak for this computation on a V100 GPU. We also show that these results outperform currently available state-of-the-art implementations such as vendor-tuned math libraries, including Intel MKL and NVIDIA CUBLAS, as well as open-source libraries like OpenBLAS and Eigen.
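A minimal illustration of the batched-GEMM interface shape using NumPy broadcasting: many independent small multiplications grouped into one call. A tuned batched routine such as the one described would fuse the whole batch into a single device kernel rather than relying on a generic library loop; the batch and matrix sizes here are arbitrary assumptions.

import numpy as np

batch, m, k, n = 10000, 32, 32, 32
A = np.random.rand(batch, m, k)
B = np.random.rand(batch, k, n)
C = np.matmul(A, B)          # one call, 10000 independent 32x32 GEMMs
print(C.shape)               # (10000, 32, 32)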

18.
We describe an efficient and scalable symmetric iterative eigensolver developed for distributed memory multi-core platforms. We achieve over 80% parallel efficiency by major reductions in communication overheads for the sparse matrix-vector multiplication and basis orthogonalization tasks. We show that the scalability of the solver is significantly improved compared to an earlier version, after we carefully reorganize the computational tasks and map them to processing units in a way that exploits the network topology. We discuss the advantage of using a hybrid OpenMP/MPI programming model to implement such a solver. We also present strategies for hiding communication on a multi-core platform. We demonstrate the effectiveness of these techniques by reporting the performance improvements achieved when we apply our solver to large-scale eigenvalue problems arising in nuclear structure calculations. Because sparse matrix-vector multiplication and inner product computation constitute the main kernels in most iterative methods, our ideas are applicable in general to the solution of problems involving large-scale symmetric sparse matrices with irregular sparsity patterns.

19.
In many application domains, large-scale floating-point matrix multiplication is one of the most time-consuming computational kernels. Emerging applications often involve large matrices in which at least one dimension is small; we call matrices with this property non-uniform matrices. Because the on-chip memory available on an FPGA for storing intermediate results is very limited, large-scale matrix multiplication usually has to be partitioned into fine-grained sub-block tasks. When accelerating non-uniform matrix multiplication, most existing linear-array hardware matrix multipliers suffer a large performance drop because they support only a fixed block size. To solve this problem, an effective optimized blocking strategy is proposed, and on top of it a matrix multiplier supporting variable block sizes is implemented on a Xilinx Zynq XC7Z045 FPGA. With 224 processing elements integrated, the multiplier achieves a measured 48 GFLOPS at a 150 MHz clock on non-uniform matrix products from real applications while requiring only 4.8 GB/s of bandwidth. Experimental results show that the proposed blocking strategy delivers up to a 12% performance improvement over traditional blocking algorithms.
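A back-of-the-envelope illustration (with assumed, not measured, sizes) of why a fixed block size hurts non-uniform matrices: padding the small dimension up to a fixed block multiplies the arithmetic, while shrinking the block along that dimension avoids the waste. The numbers and block shapes are illustrative, not the paper's.

def padded_flops(M, N, K, bm, bn, bk):
    # Round each dimension up to a multiple of its block size, then count FLOPs.
    pad = lambda x, b: -(-x // b) * b
    return 2 * pad(M, bm) * pad(N, bn) * pad(K, bk)

useful   = 2 * 4096 * 4096 * 16                        # a 4096 x 4096 x 16 product
fixed    = padded_flops(4096, 4096, 16, 64, 64, 64)    # fixed 64x64x64 blocks
adaptive = padded_flops(4096, 4096, 16, 64, 64, 16)    # block shrunk along the small K
print(fixed / useful, adaptive / useful)  # 4.0 1.0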

20.
We present block algorithms and their implementation for the parallelization of sub-cubic Gaussian elimination on shared memory architectures. Contrary to the classical cubic algorithms in parallel numerical linear algebra, we focus here on recursive algorithms and coarse grain parallelization. Indeed, sub-cubic matrix arithmetic can only be achieved through recursive algorithms, making coarse grain block algorithms perform more efficiently than fine grain ones. This work is motivated by the design and implementation of dense linear algebra over a finite field, where fast matrix multiplication is used extensively and where costly modular reductions also advocate for coarse grain block decomposition. We incrementally build efficient kernels, for matrix multiplication first, then triangular system solving, on top of which a recursive PLUQ decomposition algorithm is built. We study the parallelization of these kernels using several algorithmic variants: either iterative or recursive and using different splitting strategies. Experiments show that recursive adaptive methods for matrix multiplication, hybrid recursive-iterative methods for triangular system solve and tile recursive versions of the PLUQ decomposition, together with various data mapping policies, provide the best performance on a 32-core NUMA architecture. Overall, we show that the overhead of modular reductions is more than compensated by the fast linear algebra algorithms and that exact dense linear algebra matches the performance of full rank reference numerical software even in the presence of rank deficiencies.
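A compact NumPy sketch of Strassen's seven-product recursion, the kind of sub-cubic multiplication kernel the recursive block algorithms above build on; the leaf size is an assumption and square power-of-two dimensions are assumed (a finite-field implementation would further interleave modular reductions).

import numpy as np

def strassen(A, B, leaf=64):
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))  # True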
