A context switching streaming memory architecture to accelerate a neocortex model
Authors: Christopher N. Vutsinas, Tarek M. Taha, Kenneth L. Rice
Abstract: A novel architecture to accelerate a neocortex-inspired cognitive model is presented. The architecture utilizes a collection of context-switchable processing elements (PEs), enabling nodes in the model to be time-multiplexed onto the available PEs. A streaming memory system is designed to enable high-throughput computation and efficient use of memory resources. Several scheduling algorithms were examined to assign network nodes to the PEs efficiently. Multiple parallel FPGA-accelerated implementations were evaluated on a Cray XD1. Tests on networks of varying complexity indicate that hardware acceleration can provide an average throughput gain of 184 times over equivalent parallel software implementations.
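
To make the scheduling idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes a simple greedy longest-processing-time heuristic for assigning more network nodes than physical PEs, so that each PE time-multiplexes (context-switches between) the nodes mapped to it. The node-cost dictionary, PE count, and function names are hypothetical illustrations.

    # Sketch only: greedy LPT-style mapping of model nodes onto a fixed pool of PEs.
    # Assumes each node has an estimated per-iteration cost; the real scheduler in
    # the paper may use different criteria.
    import heapq

    def schedule_nodes(node_costs, num_pes):
        """Assign each node (id -> estimated cost) to one of num_pes PEs,
        placing the most expensive unassigned node on the least-loaded PE."""
        # Min-heap of (accumulated_load, pe_id): the least-loaded PE pops first.
        pe_heap = [(0, pe) for pe in range(num_pes)]
        heapq.heapify(pe_heap)
        assignment = {pe: [] for pe in range(num_pes)}

        # Sort nodes by descending cost (LPT rule) to balance final PE loads.
        for node, cost in sorted(node_costs.items(), key=lambda kv: -kv[1]):
            load, pe = heapq.heappop(pe_heap)
            assignment[pe].append(node)      # this node context-switches onto PE `pe`
            heapq.heappush(pe_heap, (load + cost, pe))
        return assignment

    # Example: 8 model nodes time-multiplexed onto 3 PEs.
    if __name__ == "__main__":
        costs = {f"node{i}": c for i, c in enumerate([9, 7, 6, 5, 4, 3, 2, 1])}
        print(schedule_nodes(costs, num_pes=3))

Under this heuristic, each PE ends up with a batch of nodes whose total cost is roughly balanced, which is one plausible way to keep all PEs busy when the model has far more nodes than the hardware has processing elements.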
Keywords:
This article has been indexed in ScienceDirect and other databases.