Similar Articles
20 similar articles found.
1.
Analog VLSI on-chip learning neural networks represent a mature technology for a large number of applications involving industrial as well as consumer appliances. This is particularly the case when low power consumption, small size and/or very high speed are required. This approach exploits the computational features of neural networks, the implementation efficiency of analog VLSI circuits and the adaptation capabilities of the on-chip learning feedback scheme. Many experimental chips and microelectronic implementations have been reported in the literature, based on the research carried out over the last few years by several research groups. The author presents and discusses the motivations, the system and circuit issues, the design methodology, as well as the limitations of this kind of approach. Attention is focused on supervised learning algorithms because of their reliability and popularity within the neural network research community. In particular, the Back Propagation and Weight Perturbation learning algorithms are introduced and reviewed with respect to their analog VLSI implementation. Finally, the author also reviews and compares the main results reported in the literature, highlighting the efficiency and the reliability of the on-chip implementation of these algorithms.
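To make the contrast between the two algorithm families concrete, here is a minimal sketch in plain Python (our illustration, not any chip's circuitry): back-propagation computes the gradient analytically via the chain rule, while weight perturbation estimates it from forward passes alone, which is why it tolerates the imprecision of analog hardware.

```python
import math

def forward(w, x):
    """Single sigmoid neuron: y = sigma(w * x)."""
    return 1.0 / (1.0 + math.exp(-w * x))

def loss(w, x, target):
    return 0.5 * (forward(w, x) - target) ** 2

def bp_gradient(w, x, target):
    """Back-propagation: analytic gradient via the chain rule."""
    y = forward(w, x)
    return (y - target) * y * (1.0 - y) * x   # dL/dy * dy/dnet * dnet/dw

def wp_gradient(w, x, target, delta=1e-3):
    """Weight perturbation: finite-difference estimate using forward
    passes only (perturb, measure, update), a natural fit for analog
    hardware where exact derivatives are hard to obtain."""
    return (loss(w + delta, x, target) - loss(w, x, target)) / delta

w, x, target = 0.5, 1.0, 1.0
print(bp_gradient(w, x, target), wp_gradient(w, x, target))  # nearly equal
```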

2.
A large-scale, dual-network architecture using wafer-scale integration (WSI) technology is proposed. By using 0.8 μm CMOS technology, up to 144 self-learning digital neurons were integrated on each of eight 5 in silicon wafers. Neural functions and the back-propagation (BP) algorithm were mapped to digital circuits. The complete hardware system packaged more than 1000 neurons within a 30 cm cube. The dual-network architecture allowed high-speed learning at more than 2 giga connection updates per second (GCUPS). The high fault tolerance of the neural network and the proposed defect-handling techniques overcame the yield problem of WSI. This hardware can be connected to a host workstation and used to simulate a wide range of artificial neural networks. Signature verification and stock price prediction have already been demonstrated with this hardware.

3.
Time-critical neural network applications that require fully parallel hardware implementations for maximal throughput are considered. The rich array of technologies being pursued is surveyed, and the analog CMOS VLSI medium is examined in depth. This medium is messy in that limited dynamic range, offset voltages, and noise sources all reduce precision. The authors examine how neural networks can be directly implemented in analog VLSI, giving examples of approaches that have been pursued to date. Two important application areas are highlighted: optimization, because neural hardware may offer a speed advantage of orders of magnitude over other methods; and supervised learning, because of the widespread use and generality of gradient-descent learning algorithms as applied to feedforward networks.

4.
Analog VLSI implementations of artificial neural networks are usually considered efficient for the small area and the low power consumption they require, but very poor in terms of programmability. In this paper, we present an approach to the design of analog VLSI neural information-processing systems with on-chip learning capabilities. We describe a set of analog circuits for implementing the neural computational primitives of a Multi-Layer Perceptron, including the ones supporting a gradient-based learning algorithm (Back Propagation). Only supervision tasks are managed off chip. An experimental chip has been designed and fabricated using a standard digital 1.5 μm CMOS N-well technology. The chip contains 4 neurons and 32 synapses organized into a single-layer architecture with 8 inputs and 4 outputs. Measurements illustrating the chip behavior during learning are reported.

5.
An analog continuous-time neural network is described. Building blocks which include the capability for on-chip learning are described, along with an example network, and test results are presented. We use analog non-volatile CMOS floating-gate memories for storage of the neural weights. The floating-gate memories are programmed by illuminating the entire chip with ultraviolet light. The subthreshold operation of the CMOS transistor in analog VLSI has very low power dissipation, which can be exploited to build larger computational systems, e.g., neural networks. The experimental results show that the floating-gate memories are promising and that the building blocks operate correctly as separate units; however, the time constants involved in the computations of the continuous-time analog neural network in particular require further study.

6.
The current art of digital electronic implementation of neural networks is reviewed. Most of this work has taken place as digital simulations on general-purpose serial or parallel digital computers. Specialized neural network emulation systems have also been developed for more efficient learning and use. Dedicated digital VLSI integrated circuits offer the highest near-term potential for this technology.

7.
An expandable BP on-chip learning neural network chip
卢纯, 石秉学, 陈卢. 《电子学报》 (Acta Electronica Sinica), 2002, 30(9): 1270-1273
Based on a 0.6 μm standard CMOS process, an expandable BP on-chip learning neural network chip was designed and implemented. The chip contains 8 neurons and 64 synapses. A novel expandable topology is proposed: when a complete neural network system is built from these chips, no additional neuron-error-computation chip is required, and stacking L chips yields an L-layer neural network. The chip uses analog circuits with capacitors for charge-based weight storage; on-chip learning itself serves to refresh the weights and keep them correct. Parity-check experiments demonstrate the chip's on-chip learning capability.
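A hypothetical software analogue of the expandable topology described in this abstract (all names and sizes below are illustrative, not from the paper): each "chip" object both propagates activations forward and computes the error signal for the chip below it, so stacking L such objects yields an L-layer network with no separate error-computation device.

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

class LayerChip:
    """One 'unit chip': a fully connected sigmoid layer that also
    computes the error signal it hands back, so no extra
    error-computation chip is needed when chips are stacked."""
    def __init__(self, n_in, n_out, lr=0.5):
        self.w = [[random.uniform(-1, 1) for _ in range(n_in)]
                  for _ in range(n_out)]
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.y = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)))
                  for row in self.w]
        return self.y

    def backward(self, err_out):
        """err_out uses the (target - output) error convention."""
        # Error to hand to the chip below (previous layer).
        err_in = [sum(self.w[j][i] * err_out[j] * self.y[j] * (1 - self.y[j])
                      for j in range(len(self.w)))
                  for i in range(len(self.x))]
        # Local weight update; learning also refreshes the stored weights.
        for j, ej in enumerate(err_out):
            g = ej * self.y[j] * (1 - self.y[j])
            for i, xi in enumerate(self.x):
                self.w[j][i] += self.lr * g * xi
        return err_in

# Stacking two chips gives a two-layer network.
net = [LayerChip(2, 4), LayerChip(4, 1)]
x = [0.0, 1.0]
for chip in net:
    x = chip.forward(x)
err = [1.0 - x[0]]               # one training step on one pattern
for chip in reversed(net):
    err = chip.backward(err)
```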

8.
Conventional interconnect and switching technology is rapidly becoming a critical issue in the realization of systems using high-speed silicon- and GaAs-based technologies. In recent years, clock speeds and on-chip density for VLSI/VHSIC technology have made packaging these high-speed chips extremely difficult. A strong case can be made for using optical interconnects for on-chip/on-wafer, chip-to-chip and board-to-board high-speed communications. GaAs Integrated Optoelectronic Circuits (IOCs) are being developed in a number of laboratories for performing input/output functions at all levels. In this paper, integrated optoelectronic materials, electronics and optoelectronic devices are presented. IOCs are examined from the standpoint of what it takes to fabricate the devices and what performance can be expected.

9.
Artificial neural network chips can achieve high-speed performance in solving complex computational problems for signal and information processing applications. These chips contain regular circuit units such as synapse matrices that interconnect linear arrays of input and output neurons. The neurons and synapses may be implemented in an analog or digital design style. Although neural processing has some degree of fault tolerance, a significant percentage of processing defects can result in catastrophic failure of the neural network processors. Systematic testing of these arrays of circuitry is of great importance in order to assure the quality and reliability of VLSI neural network processor chips. The proposed testing method consists of a parametric test and a behavioral test. Two programmable analog neural chips have been designed and fabricated. The systematic approach used to test the chips is described, and measurement results from the parametric test are presented. This research was partially supported by DARPA under Contract MDA 972-90-C-0037 and by the National Science Foundation under Grant MIP-8904172.

10.
Typical analog VLSI architectures for on-chip learning are limited in functionality and scale poorly with variable problem size. We present a scalable hybrid analog-digital architecture for backpropagation learning in multilayer feedforward neural networks, which integrates the flexible functionality and programmability of digital control functions with the efficiency of analog parallel neural computation. The architecture is fully scalable, both in the parallel analog functions of forward and backward signal propagation through synaptic and neural functional units (SynMod and NeuMod), and in the global and local digital functions controlling recall, learning, initialization, monitoring and built-in test. The architecture includes local provisions for long-term weight storage using refresh, which is transparent to the functional operation both during recall and learning. Refresh While Learning (RWL) provides a means to compensate for the finite precision of the quantized analog weights during learning. We include simulation results for a network of 32×32 neurons, mapped in parallel onto a MasPar computational engine, which validate the functionality of the architecture on simple character recognition tasks and demonstrate robust operation of the trained network under 4-bit quantization of the weights, owing to the RWL technique.
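A toy numerical sketch of the Refresh While Learning idea, under our own simplifying assumptions (scalar weight, uniform 4-bit grid, invented target): the update is applied to the analog value, storage snaps it to the quantized grid, and the next update sees and corrects the quantization error.

```python
def quantize(w, bits=4, w_max=1.0):
    """Snap an analog weight onto a signed uniform grid (e.g. 4 bits)."""
    levels = 2 ** (bits - 1)
    step = w_max / levels
    q = round(w / step) * step
    return max(-w_max, min(w_max - step, q))

# Toy gradient descent on L(w) = (w - 0.37)^2 with refresh in the loop.
w, lr, target = 0.0, 0.2, 0.37
for epoch in range(50):
    grad = 2 * (w - target)
    w = w - lr * grad   # analog update
    w = quantize(w)     # Refresh While Learning: storage is quantized,
                        # and the next update sees (and compensates for)
                        # the quantization error.
print(w, quantize(target))   # converges to the nearest 4-bit level
```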

11.
An analog computing-based systolic architecture which employs multiple neuroprocessors for high-speed early vision processing is presented. For a two-dimensional image, parallel processing is performed in the row direction and pipelined processing is performed in the column direction. The mixed analog/digital design approach is suitable for implementation of electronic neural systems. Local data computation is executed by analog circuitry to achieve full parallelism and to minimize power dissipation. Inter-processor communication is carried out in digital format to maintain strong signal strength across the chip boundary and to achieve direct scalability in neural network size. For demonstration purposes, a compact and efficient VLSI neural chip that includes multiple neuroprocessors for high-speed digital image restoration is designed. Measured results of the programmable synapse and the statistical distribution of measured synapse conductances are presented. Based on these results, system-level analyses at 8-bit resolution are conducted. An 8.0×6.0 mm² chip in a 1.2 µm CMOS technology can accommodate 5 neuroprocessors, and the speed-up factor over the Sun-4/75 SPARC workstation is around 450. This chip achieves 18 giga connections per second. This research was partially supported by DARPA under Contract MDA 972-90-C-0037 and by TRW Inc., Samsung Electronics Co., Ltd., and NKK Corp.

12.
Implementations of artificial neural networks as analog VLSI circuits differ in their method of synaptic weight storage (digital weights, analog EEPROMs, or capacitive weights) and in whether learning is performed locally at the synapses or off-chip. In this paper, we explain the principles of analog networks with in situ or local synaptic learning of capacitive weights, with test results of CMOS implementations from our laboratory. Synapses for both simple Hebbian and mean field networks are investigated. Synaptic weights may be refreshed by periodic rehearsal on the training data, which compensates for temperature drift or other nonstationarity. Compact high-performance layouts have been obtained in which learning adjusts for component variability.
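A crude behavioral model of the capacitive-weight scheme described above (leak rate, learning rate and data are invented for illustration): the stored weight decays between accesses, and periodic rehearsal on the training data re-applies the local Hebbian rule, so rehearsal gains balance the leakage.

```python
LEAK = 0.99   # per-step retention of the capacitor charge (assumed)
ETA = 0.1     # Hebbian learning rate (assumed)

def hebb_step(w, pre, post):
    """Local Hebbian rule: dw proportional to pre * post activity."""
    return w + ETA * pre * post

data = [(1.0, 1.0), (0.5, 1.0), (1.0, 0.5)]   # (pre, post) pairs

w = 0.0
for t in range(200):
    w *= LEAK                 # capacitor leakage / temperature drift
    if t % 10 == 0:           # periodic rehearsal on the training data
        for pre, post in data:
            w = hebb_step(w, pre, post)
print(w)   # settles where rehearsal gains balance the leakage
```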

13.
An expandable on-chip back-propagation (BP) learning neural network chip is designed. The chip has four neurons and 16 synapses. Large-scale neural networks with an arbitrary number of layers and an arbitrary number of neurons per layer can be constructed by combining a number of such unit chips. A novel neuron circuit with programmable parameters, which generates not only the sigmoid function but also its derivative, is proposed. The neuron has a push-pull output stage to provide strong driving ability in both the charge and discharge processes, which is very important under heavy loads. An improved version of the Gilbert multiplier is also proposed; it has a large linear range and an accurate zero point. The chip is fabricated in a standard 0.5 μm CMOS, double-poly, double-metal technology. The results of parity experiments demonstrate its on-chip BP learning capability.
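The dual sigmoid/derivative output has a tidy software counterpart: since σ'(v) = σ(v)(1 − σ(v)), the derivative can be formed from the activation itself, so a single "neuron" yields both signals needed by BP. A small sketch (the gain parameter is illustrative):

```python
import math

def neuron(net, gain=1.0):
    """Programmable sigmoid neuron: returns the activation and its
    derivative, both derived from the same output value, mirroring a
    circuit that generates the sigmoid and its derivative together."""
    y = 1.0 / (1.0 + math.exp(-gain * net))
    dy = gain * y * (1.0 - y)   # derivative formed from the output itself
    return y, dy

y, dy = neuron(0.8, gain=2.0)
print(y, dy)
```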

14.
A systematic method for testing large arrays of analog, digital, or mixed-signal circuit components that constitute VLSI neural networks is described. This detailed testing procedure consists of a parametric test and a behavioral test. Characteristics of the input neuron, synapse, and output neuron circuits are used to distinguish between faulty and useful chips. Stochastic analysis of the parametric test results can be used to predict chip yield information. Several measurement results from two analog neural network processor designs fabricated in 2 μm double-polysilicon CMOS technologies are presented to demonstrate the testing procedure.
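One plausible reading of "stochastic analysis of the parametric test results can be used to predict chip yield", sketched with invented numbers: estimate a per-cell pass probability from the parametric measurements, then, if the network tolerates up to k faulty cells, a binomial model gives the expected yield.

```python
from math import comb

def chip_yield(n_cells, p_good, k_tolerated):
    """P(at most k_tolerated of n_cells cells are faulty), assuming
    independent cell failures with per-cell pass probability p_good."""
    p_bad = 1.0 - p_good
    return sum(comb(n_cells, f) * p_bad**f * p_good**(n_cells - f)
               for f in range(k_tolerated + 1))

# Illustrative numbers only: 64 synapses, 99% per-synapse pass rate.
print(chip_yield(64, 0.99, 0))   # no fault tolerance
print(chip_yield(64, 0.99, 2))   # network tolerates two faulty synapses
```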

15.
This paper presents a simplified hardware-module implementation of the BP (Back Propagation) neural network. The method uses fully current-mode circuits to build neuron modules, and a number of such modules are combined into a simplified BP network. The proposed modular network system supports on-line learning and on-line weight storage, and can be applied to encoding/decoding and two-dimensional image recognition. PSPICE and high-level-language computer simulation results are provided.

16.
The design and analog VLSI implementation of a recurrent neural network with integrated temporal learning is presented. The learning algorithm runs forward in time and is implemented strictly as instantaneous, local weight updates. PSpice simulations of networks with 4 to 6 neurons demonstrate robust learning of trajectory-generation and classification tasks. A scalable 2-D VLSI architecture is described, and a prototype 4-neuron recurrent neural network with learning has subsequently been fabricated in MOSIS TinyChip 2 μm technology. Experimental results from the chip validate the learning performance, with convergence in the millisecond range. Specific experimental results of learning circular and figure-8 dynamic trajectories are included.
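A minimal software sketch, in the spirit of real-time recurrent learning, of what "forward in time, instantaneous weight updates" can mean (the paper's exact rule and circuit are not reproduced here; the network size and target trajectory are invented): sensitivities of the state with respect to every weight are carried forward alongside the state, so each time step yields an immediate gradient update.

```python
import numpy as np

n = 3                       # neurons (illustrative size)
rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, (n, n))
x = np.zeros(n)
P = np.zeros((n, n, n))     # P[k, i, j] = d x_k / d W_ij
lr = 0.05

def target(t):              # teach neuron 0 to trace a sinusoid
    return 0.8 * np.sin(0.2 * t)

for t in range(2000):
    net = W @ x
    x_new = np.tanh(net)
    d = 1.0 - x_new**2                        # tanh'
    # Forward sensitivity update:
    # P'[k,i,j] = d_k * (delta_ki * x_j + sum_l W[k,l] * P[l,i,j])
    P_new = np.einsum('kl,lij->kij', W, P)
    for i in range(n):
        P_new[i, i, :] += x
    P = d[:, None, None] * P_new
    # Instantaneous, local update from the error on the visible neuron.
    err = target(t) - x_new[0]
    W += lr * err * P[0]
    x = x_new
print(abs(err))   # tracking error after training
```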

17.
We report the implementation of a prototype three-dimensional (3D) optoelectronic neural network that combines free-space optical interconnects with silicon-VLSI-based optoelectronic circuits. The prototype system consists of a 16-node input layer, a 4-neuron hidden layer, and a single-neuron output layer, where the denser input-to-hidden-layer connections are optical. The input layer uses PLZT light modulators to generate optical outputs which are distributed over an optoelectronic neural network chip through space-invariant holographic optical interconnects. Optical interconnections provide negligible fan-out delay and allow a compact, purely on-chip electronic H-tree type fan-in structure. The small prototype system achieves a measured 8-bit electronic fan-in precision and a calculated maximum speed of 640 million interconnections per second. The system was tested using synaptic weights learned off system and was shown to distinguish any vertical line from any horizontal one in an image of 4×4 pixels. New, more efficient light-detector and small-area analog synapse circuits and denser optoelectronic neuron layouts are proposed to scale up the system. A high-speed, feed-forward optoelectronic synapse implementation density of up to 10⁴/cm² seems feasible using the new synapse design. A scaling analysis of the system shows that the optically interconnected neural network implementation can provide higher fan-in speed and lower power consumption than a purely electronic, crossbar-based neural network implementation.

18.
Parallel algorithms/architectures for neural networks
This paper advocates digital VLSI architectures for implementing a wide variety of artificial neural networks (ANNs). A programmable systolic array is proposed, which maximizes the strength of VLSI in terms of intensive and pipelined computing and yet circumvents the limitation on communication. The array is meant to be more general-purpose than most other ANN architectures proposed. It may be used for a variety of algorithms in both the retrieving and learning phases of ANNs: e.g., single-layer feedback networks, competitive learning networks, and multilayer feed-forward networks. A unified approach to modeling existing neural networks is proposed. This unified formulation leads to a basic structure for a universal simulation tool and neurocomputer architecture. A fault-tolerance approach and a partitioning scheme for large or non-homogeneous networks are also proposed. Finally, implementations based on commercially available VLSI chips (e.g., the Inmos T800) and custom VLSI technology are discussed in detail.
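A toy cycle-by-cycle simulation of the systolic principle (our construction, not the paper's array): each processing element holds one row of weights, inputs stream through the array one hop per cycle, and partial sums stay local, so all communication is nearest-neighbour.

```python
import random

m, n = 3, 4                      # output neurons x inputs (toy sizes)
W = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
x = [1.0, -0.5, 0.25, 0.8]

# Linear systolic array for y = W x: inputs enter at PE 0 and shift one
# PE per cycle, so PE k sees x[j] at cycle j + k and accumulates locally.
pipe = [None] * m                # value currently held at each PE
acc = [0.0] * m
for cycle in range(n + m - 1):
    # Shift: each PE passes its current input to the next PE.
    pipe = [x[cycle] if cycle < n else None] + pipe[:-1]
    for k, v in enumerate(pipe):
        if v is not None:
            j = cycle - k        # index of the input this PE is seeing
            acc[k] += W[k][j] * v

reference = [sum(W[k][j] * x[j] for j in range(n)) for k in range(m)]
print(acc, reference)            # the two results should match
```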

19.
20.
An analog feed-forward neural network with on-chip learning
An analog continuous-time neural network with on-chip learning is presented. The 4-3-2 feed-forward network with a modified back-propagation learning scheme was built using micropower building blocks in a double-poly, double-metal 2 μm CMOS process. The weights are stored in non-volatile, UV-light-programmable analog floating-gate memories. A differential signal representation is used to design simple building blocks which may be utilized to build very large neural networks. Measured results from on-chip learning are shown, and an example of generalization is demonstrated. The use of micropower building blocks allows very large networks to be implemented without significant power consumption.
