Similar References
 20 similar references found; search time: 46 ms
1.
 Hardware implementation of artificial neural networks (ANNs) based on MOS transistors with a floating gate (neuron MOS, or νMOS) is discussed. Choosing an analog rather than a digital approach for weight storage improves learning accuracy and minimizes chip area and power dissipation. However, since a weight value can be represented by any voltage in the supply range (e.g., from 0 to 3.3 V), the minimum difference between two values is very small, especially for a neuron with a large sum of weights. This implies that an ANN using the analog hardware approach is vulnerable to Vdd deviation. The purpose of this paper is to investigate the main parts of analog ANN circuits (the synapse and the neuron) that can compensate for all kinds of deviation, and to develop their design methodologies.
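As a rough numerical illustration of the supply-sensitivity argument above (the supply values, level counts, and weights here are hypothetical, not taken from the paper):

```python
def min_weight_step(vdd, n_levels):
    """Smallest voltage gap between two distinguishable analog weight values."""
    return vdd / (n_levels - 1)

def vdd_shift_error(weights, vdd_nominal, vdd_actual):
    """Weights stored as fixed fractions of Vdd all scale with a supply
    shift, so the error in the weighted sum grows with the sum of weights."""
    scale = vdd_actual / vdd_nominal
    return sum(w * scale for w in weights) - sum(weights)
```

The second function shows why a neuron with a large weight sum is hit hardest by the same small Vdd deviation.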

2.
There are several neural network implementations using software, hardware, or hardware/software co-design. This work proposes a hardware architecture to implement an artificial neural network (ANN) whose topology is the multilayer perceptron (MLP). We exploit the parallelism of neural networks and allow on-the-fly changes of the number of inputs, the number of layers, and the number of neurons per layer of the net. This reconfigurability permits any ANN application to be implemented with the proposed hardware. In order to reduce the time spent on arithmetic computation, a real number is represented as a fraction of integers. In this way, the arithmetic is limited to integer operations, performed by fast combinational circuits. A simple state machine suffices to control sums and products of fractions. The sigmoid is used as the activation function in the proposed implementation. It is approximated by polynomials whose underlying computation requires only sums and products. A theorem is introduced and proven to cover the arithmetic strategy of the activation-function computation. Thus, the arithmetic circuitry used to implement the neuron's weighted sum is reused for computing the sigmoid. This resource sharing drastically decreases the total area of the system. After modeling and simulation for functional validation, the proposed architecture was synthesized on reconfigurable hardware. The results are promising.
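A minimal software sketch of the fraction-of-integers arithmetic and a polynomial sigmoid approximation (the paper's exact polynomial and theorem are not reproduced; the cubic Taylor approximation below is an assumed stand-in):

```python
def frac_add(a, b):          # a, b are (numerator, denominator) pairs
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

def frac_mul(a, b):
    return (a[0] * b[0], a[1] * b[1])

def frac_to_float(a):
    return a[0] / a[1]

def poly_sigmoid(x_num, x_den):
    """Toy polynomial approximation of the sigmoid near 0:
    sigmoid(x) ~ 1/2 + x/4 - x^3/48, using only fraction sums/products."""
    x = (x_num, x_den)
    x3 = frac_mul(frac_mul(x, x), x)
    acc = frac_add((1, 2), frac_mul(x, (1, 4)))
    acc = frac_add(acc, frac_mul(x3, (-1, 48)))
    return frac_to_float(acc)
```

All intermediate operations are integer additions and multiplications, which is what makes the approach attractive for fast combinational circuits.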

3.
In this paper, we present the design of a deterministic bit-stream neuron, which makes use of the memory-rich architecture of fine-grained field-programmable gate arrays (FPGAs). It is shown that deterministic bit streams provide the same accuracy as much longer stochastic bit streams. As these bit streams are processed serially, this allows neurons to be implemented that are much faster than those that use stochastic logic. Furthermore, due to the memory-rich architecture of fine-grained FPGAs, these neurons still require only a small amount of logic to implement. The design presented here has been implemented on a Virtex FPGA, which allows a very regular layout that facilitates efficient use of space. This allows the construction of neural networks large enough to solve complex tasks at a speed comparable to that of commercially available neural-network hardware.
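A sketch of the underlying encoding idea (the paper's stream-generation hardware is not modeled): a value p in [0, 1] is encoded as the fraction of 1s in a bit stream. A deterministic stream of length N represents p exactly to within 1/N, whereas a stochastic (Bernoulli) stream of the same length carries sampling noise:

```python
import random

def deterministic_stream(p, n):
    """Exactly round(p*n) ones, spread evenly through the stream."""
    ones = round(p * n)
    return [1 if (i + 1) * ones // n > i * ones // n else 0 for i in range(n)]

def stochastic_stream(p, n, rng):
    """Each bit is 1 with probability p, independently."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(stream):
    return sum(stream) / len(stream)
```

This is why deterministic streams can match the accuracy of much longer stochastic ones: the coding error is quantization only, with no variance term.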

4.
The implementation of adaptive neural fuzzy networks (NFNs) using field-programmable gate arrays (FPGAs) is proposed in this study. Hardware implementation of NFNs with learning ability is very difficult: the backpropagation (BP) method widely used to train NFNs is hard to realize in hardware because calculating the backpropagation error of all parameters in a system is very complex. Instead, we use the simultaneous perturbation method as the learning scheme for the NFN hardware implementation. In order to reduce the chip area, we use a traditional nonlinear activation function to implement the Gaussian function. The soundness of the NFN's performance is confirmed through several examples.
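Simultaneous perturbation avoids per-parameter error backpropagation by estimating the gradient from loss evaluations under one shared random perturbation. A minimal SPSA-style sketch (the paper's exact variant and constants may differ):

```python
import random

def sp_step(w, loss, a=0.1, c=0.01, rng=random.Random(0)):
    """One simultaneous-perturbation update of parameter vector w."""
    delta = [rng.choice((-1.0, 1.0)) for _ in w]
    w_plus = [wi + c * di for wi, di in zip(w, delta)]
    w_minus = [wi - c * di for wi, di in zip(w, delta)]
    g_hat = (loss(w_plus) - loss(w_minus)) / (2.0 * c)
    # with +/-1 perturbations, 1/delta_i == delta_i
    return [wi - a * g_hat * di for wi, di in zip(w, delta)]
```

Only two loss evaluations are needed per step regardless of the number of parameters, which is what makes the scheme hardware-friendly.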

5.
This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

6.

The development of hardware platforms for artificial neural networks (ANNs) has been hindered by high consumption of power and hardware resources. In this paper, we present a methodology for an optimized ANN implementation, of the learning vector quantization (LVQ) type, on a field-programmable gate array (FPGA) device. The aim is to provide an intelligent embedded system for real-time vigilance-state classification of a subject from analysis of the electroencephalogram signal. The approach applies an extension of the algorithm architecture adequacy (AAA) methodology with an arithmetic-accuracy constraint, allowing an optimized LVQ implementation on the FPGA. This extension improves the optimization phase of the AAA methodology by taking into account the wordlength required by each operation and creating groups of operations of approximately equal wordlength, where the operations in the same group are performed by the same operator. This LVQ implementation yields a considerable gain in circuit resources, power, and maximum frequency while respecting the time and accuracy constraints. To validate the approach, the LVQ implementation was tested for several network topologies on two Virtex devices. The relation between accuracy and success rate has been studied and reported.
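For reference, the core LVQ1 update that such an implementation computes (standard rule; the fixed-point wordlengths that the AAA extension optimizes are not modeled here): the winning prototype moves toward the sample if its class matches, away from it otherwise.

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update; mutates the winning prototype, returns its index."""
    dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in prototypes]
    w = min(range(len(prototypes)), key=dists.__getitem__)
    sign = 1.0 if labels[w] == y else -1.0
    prototypes[w] = [pi + sign * lr * (xi - pi)
                     for pi, xi in zip(prototypes[w], x)]
    return w
```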


7.
Recent advances in artificial neural networks (ANNs) have led to the design and construction of neuroarchitectures as simulators and emulators of a variety of problems in science and engineering. Such problems include pattern recognition, prediction, optimization, associative memory, and control of dynamic systems. This paper offers an analytical overview of the most successful design, implementation, and application of neuroarchitectures as neurosimulators and neuroemulators. It also outlines historical notes on the formulation of the basic biological neuron, artificial computational models, network architectures, and learning processes of the most common ANNs; describes and analyzes neurosimulation on parallel architectures both in software and hardware (neurohardware); presents the simulation of ANNs on parallel architectures; gives a brief introduction to ANNs in vector microprocessor systems; and presents ANNs in terms of the "new technologies". Specifically, it discusses cellular computing, cellular neural networks (CNNs), a new proposition for unsupervised neural networks (UNNs), and pulse coupled neural networks (PCNNs).

8.

The design of an analog modular neuron based on memristors is proposed here. Since neural networks are built by repeating basic blocks called neurons, modular neurons are essential for neural-network hardware. In this work, modularity of the neuron is achieved through a distributed-neuron structure. Major challenges in implementing the synaptic operation are weight programmability, multiplication of the weight by the input signal, and nonvolatile weight storage. The introduction of the memristor bridge synapse addresses all of these challenges. The proposed neuron is a modular neuron based on the distributed-neuron structure that uses the benefits of the memristor bridge synapse for synaptic operations. To test its operation, the proposed neuron is used in a real-world neural-network application. An off-chip method is used to train the network. The results show 86.7 % correct classification and a mean square error of about 0.0695 for a 4-5-3 neural network based on the proposed modular neuron.
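A sketch of the memristor bridge synapse idea (memristance values below are illustrative, not from the paper): four memristances form two voltage dividers, and the synaptic weight is the difference of their ratios, so it can be positive, negative, or zero, and is programmed by changing the memristances.

```python
def bridge_weight(m1, m2, m3, m4):
    """Effective synaptic weight of a memristor bridge."""
    return m2 / (m1 + m2) - m4 / (m3 + m4)

def synapse_out(v_in, m1, m2, m3, m4):
    """Synapse output voltage: input scaled by the bridge weight."""
    return bridge_weight(m1, m2, m3, m4) * v_in
```

A balanced bridge (all memristances equal) gives a zero weight; skewing the dividers in opposite directions gives the largest magnitudes.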


9.
The paper presents a method for FPGA implementation of Self-Organizing Map (SOM) artificial neural networks with on-chip learning algorithm. The method aims to build up a specific neural network using generic blocks designed in the MathWorks Simulink environment. The main characteristics of this original solution are: on-chip learning algorithm implementation, high reconfiguration capability and operation under real time constraints. An extended analysis has been carried out on the hardware resources used to implement the whole SOM network, as well as each individual component block.
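The on-chip learning step such blocks implement is the standard Kohonen rule (block-level details in the paper differ; the 1-D grid and Gaussian neighborhood below are assumptions): find the best matching unit, then pull it and its neighbors toward the input.

```python
import math

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One SOM update on a 1-D grid of units; mutates weights, returns BMU."""
    d = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    bmu = min(range(len(weights)), key=d.__getitem__)
    for j, w in enumerate(weights):
        h = math.exp(-((j - bmu) ** 2) / (2.0 * sigma ** 2))  # neighborhood
        weights[j] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu
```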

10.
A low-complexity fuzzy activation function for artificial neural networks
A novel fuzzy-based activation function for artificial neural networks is proposed. This approach provides easy hardware implementation and straightforward interpretability on the basis of IF-THEN rules. Backpropagation learning with the new activation function also has low computational complexity. Several application examples (XOR gate, chaotic time-series prediction, channel equalization, and independent component analysis) support the potential of the proposed scheme.
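The abstract does not reproduce the rule base, so the following is a hypothetical sketch of how three IF-THEN rules ("negative", "zero", "positive") with saturating memberships could realize a sigmoid-like activation; the universe bound of 4 and the rule consequents 0, 0.5, 1 are invented for illustration.

```python
def fuzzy_activation(x):
    """Weighted-average defuzzification of three rules over [-4, 4]."""
    mu_neg = max(0.0, min(1.0, -x / 4.0))      # IF x is negative THEN 0
    mu_pos = max(0.0, min(1.0, x / 4.0))       # IF x is positive THEN 1
    mu_zero = max(0.0, 1.0 - abs(x) / 4.0)     # IF x is zero THEN 0.5
    num = mu_zero * 0.5 + mu_pos * 1.0
    den = mu_neg + mu_zero + mu_pos
    return num / den
```

The result is monotone and saturates at 0 and 1, like a sigmoid, yet needs only comparisons, additions, and one division.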

11.
This work introduces a hardware implementation of artificial neural networks (ANNs) with learning ability on a field-programmable gate array (FPGA) for dynamic system identification. The learning phase is accomplished using an improved particle swarm optimization (PSO), obtained by modifying the velocity update function. Adding an extra term to the velocity update function reduces the possibility of getting stuck in a local minimum. The results indicate that an ANN trained with the improved PSO algorithm converges faster and produces more accurate results at a small extra cost in hardware utilization.
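For context, the baseline PSO velocity and position update that the paper modifies (the extra term is not specified in the abstract, so only the standard rule is sketched; the coefficient values are conventional defaults, not the paper's):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=random.Random(1)):
    """One particle's velocity/position update toward its personal best
    (pbest) and the swarm's global best (gbest)."""
    r1, r2 = rng.random(), rng.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```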

12.
In this paper we present a Multi-Element generalized Polynomial Chaos (ME-gPC) method to deal with stochastic inputs with arbitrary probability measures. Based on a decomposition of the random space of the stochastic inputs, we numerically construct a set of orthogonal polynomials with respect to a conditional probability density function (PDF) in each element and subsequently implement generalized Polynomial Chaos (gPC) locally. Numerical examples show that ME-gPC exhibits both p- and h-convergence for arbitrary probability measures.
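A minimal 1-D sketch of the ME-gPC idea, using a uniform measure for simplicity (the paper constructs bases numerically for arbitrary conditional PDFs): split the random space into elements and use a local orthogonal basis on each, here Legendre polynomials mapped to an element [a, b].

```python
def legendre(n, t):
    """Legendre polynomial P_n(t) on [-1, 1] via the Bonnet recurrence."""
    if n == 0:
        return 1.0
    if n == 1:
        return t
    return ((2 * n - 1) * t * legendre(n - 1, t)
            - (n - 1) * legendre(n - 2, t)) / n

def local_basis(n, x, a, b):
    """P_n mapped onto the element [a, b]."""
    t = 2.0 * (x - a) / (b - a) - 1.0
    return legendre(n, t)

def inner(n, m, a, b, steps=2000):
    """<P_n, P_m> under the conditional uniform density 1/(b-a),
    by the midpoint rule."""
    h = (b - a) / steps
    s = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        s += local_basis(n, x, a, b) * local_basis(m, x, a, b) * h / (b - a)
    return s
```

The inner products confirm orthogonality within the element, which is what makes the local gPC expansion well conditioned.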

13.
A neural network architecture which uses stochastic processing techniques to perform the weighted input multiplication, summation, and thresholding processes of a neuron using the optimal amount of hardware is described. It will be argued that the advantage of this approach is that it will allow large neural networks to be fabricated with relatively small amounts of hardware. The architecture allows a choice to be made between the speed and accuracy of processing, as well as a choice of hardware. Implementations of a bit stream neuron using electronic, optoelectronic and optical hardware are developed and their capabilities are compared based on speed of processing and network size. The aim of this study is to investigate the capabilities of optical logic in distributed processing systems and specifically the use of the optical thyristor as logic elements. It is shown that optical processing and optical interconnection allows a simplification of the processing sequence and allows the parallelism of distributed systems to be utilized. Experimental results of a detector matrix which can statistically quantify the occupancy ratio of optical ones and zeros in a spatial pattern and that can be considered to implement the sum, thresholding, and sigmoid translation functions of a neuron are given.

14.
A quantum neural network model and its algorithm are proposed. First, drawing on the meaning of the controlled-NOT gate, a controlled quantum rotation gate is proposed. Based on the physical meaning of this gate, a quantum neuron model is proposed; it contains two kinds of design parameters: the rotation angle applied to the phase of the input qubits and the control quantity governing that rotation angle. A quantum neural network model is then built from these quantum neurons, and its learning algorithm is designed in detail based on gradient descent. Finally, two simulations, pattern recognition and time-series prediction, …

15.
Deep neural networks have evolved remarkably over the past few years and are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a number of accelerators based on field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both the algorithm and the hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.
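As a concrete instance of one surveyed technique, here is a sketch of uniform post-training weight quantization to signed 8-bit integers (illustrative only, not any specific accelerator's scheme):

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization: scale weights into [-qmax, qmax]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate real weights from integers and the scale."""
    return [qi * scale for qi in q]
```

The integer weights enable cheap fixed-point multiply-accumulate units on an FPGA or ASIC, at the cost of a bounded per-weight reconstruction error.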

16.
This paper presents two digital circuits that allow the implementation of a fully parallel stochastic Hopfield neural network (SHNN). In a parallel SHNN with n neurons, the n*n stochastic signals s(ij), which pulse with probabilities proportional to the synapse inputs, are simultaneously available. The proposed circuits calculate the summation of the stochastic input pulses to neuron i (F(i) = sum over j of s(ij)). The resulting network achieves a considerable speed-up over the previous network.

17.
To enhance the approximation and generalization ability of artificial neural networks (ANNs) by employing the principles of the quantum rotation gate and the controlled-NOT gate, a quantum-inspired neuron with sequence input is proposed. In the proposed model, the discrete sequence input is represented by qubits, which, as the control qubits of the controlled-NOT gate after being rotated by the quantum rotation gates, control the target qubit for reversal. The model output is described by the probability amplitude of state |1⟩ in the target qubit. A quantum-inspired neural network (QINN) is then designed by employing the quantum-inspired neurons in the hidden layer and common neurons in the output layer. The training algorithm of the QINN is derived by employing the Levenberg–Marquardt algorithm. Simulation results on some benchmark problems show that, under certain conditions, the QINN is clearly superior to the classical ANN.

18.
This paper presents a hardware implementation of multilayer feedforward neural networks (NNs) using reconfigurable field-programmable gate arrays (FPGAs). Despite improvements in FPGA densities, the numerous multipliers in an NN limit the size of the network that can be implemented using a single FPGA, making NN applications commercially unviable. The proposed implementation aims to reduce resource requirements, without much compromise on speed, so that a larger NN can be realized on a single chip at lower cost. The sequential processing of the layers in an NN is exploited here to implement large NNs using a method of layer multiplexing. Instead of realizing a complete network, only the single largest layer is implemented. The same layer behaves as different layers with the help of a control block, which ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation function of the layer currently being computed. Multilayer networks have been implemented using the Xilinx FPGA XCV400hq240. The concept is shown to be very effective in reducing resource requirements at the cost of a moderate overhead in speed. This implementation is proposed to make NN applications viable in terms of cost and speed for online applications. An NN-based flux estimator is implemented on the FPGA and the results obtained are presented.
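A software analogue of the layer-multiplexing idea (a sketch of the scheme, not the paper's RTL): one "largest layer" compute unit is reused for every layer, with a control loop swapping in that layer's weights and biases each pass.

```python
def layer_unit(x, w, b, act):
    """The single physical layer: weighted sums plus activation."""
    return [act(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def run_network(x, layers, act):
    """Control block: time-multiplex the one unit over all layers."""
    for w, b in layers:
        x = layer_unit(x, w, b, act)
    return x
```

Hardware cost is set by the largest layer alone, while depth only adds passes through the same unit, trading speed for area exactly as the paper describes.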

19.
In recent years, spiking neural networks (SNNs), which originated in computational neuroscience, have attracted wide attention in neuromorphic engineering and brain-inspired computing for their rich spatio-temporal dynamics, diverse coding mechanisms, and hardware-friendly event-driven properties. The cross-fertilization of SNNs with the artificial neural networks of present-day computer science, represented by deep convolutional networks, is considered a promising route toward artificial general intelligence. This survey reviews the development of SNNs, dividing the field into five key directions: neuron models, training algorithms, programming frameworks, datasets, and hardware chips. It comprehensively introduces the latest advances and the substance of SNNs, and discusses and analyzes the opportunities and challenges in each of these directions. We hope this survey will attract researchers from different disciplines and, through interdisciplinary exchange and collaborative research, advance the field of spiking neural networks.

20.
This paper investigates the problem of predicting daily returns based on five Canadian exchange rates using artificial neural networks and EGARCH-M models. First, the statistical properties of five daily exchange-rate series (US Dollar, German Mark, French Franc, Japanese Yen, and British Pound) are analysed. EGARCH-M models on the Generalised Error Distribution (GED) are fitted to the return series and serve as comparison standards, along with random-walk models. Second, backpropagation networks (BPNs) using lagged returns as inputs are trained and tested. Estimated volatilities from the EGARCH-M models are also used as inputs to see whether performance is affected. The question of spillovers in interrelated markets is investigated with networks of multiple inputs and outputs. In addition, Elman-type recurrent networks are trained and tested. Comparison of the various methods suggests that, despite their simplicity, neural networks are similar to the EGARCH-M class of nonlinear models, and superior to random-walk models, in terms of in-sample fit and out-of-sample prediction performance.
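A sketch of the input construction used for lagged-return networks of this kind (the lag count and any volatility inputs from the paper are not reproduced): each training pattern is a window of past returns with the next return as the target.

```python
def lagged_patterns(returns, n_lags):
    """Build (input window, target) pairs from a return series."""
    return [(returns[i - n_lags:i], returns[i])
            for i in range(n_lags, len(returns))]
```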
