Similar Documents
20 similar documents found.
1.
In this paper, a modified learning algorithm for the multilayer neural network with multi-valued neurons (MLMVN) is presented. The MLMVN, a member of the family of complex-valued neural networks, has already demonstrated a number of important advantages over other techniques. The modified learning algorithm is based on the introduction of an acceleration step, performed by means of the complex QR decomposition, and on a new approach to calculating the errors of the output neurons: they are computed as the differences between the corresponding desired outputs and the actual values of the weighted sums. These modifications significantly improve the learning speed of the existing derivative-free backpropagation learning algorithm for the MLMVN. The modified algorithm requires two orders of magnitude fewer training epochs and less time to converge than the existing algorithm. Good performance is confirmed not only by the much quicker convergence of the learning algorithm, but also by comparable or even higher classification/prediction accuracy, obtained in tests on several benchmarks (the Mackey–Glass and Box–Jenkins time series) and on satellite spectral data examined in a comparison test.
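As a hedged illustration of the machinery the entry above refers to: a multi-valued neuron maps its complex weighted sum to the nearest of k sectors of the unit circle, and the modified error is taken against the raw weighted sum rather than the activated output. The sector formula and function names below are illustrative assumptions, not the paper's code.

```python
import cmath
import math

def mvn_activation(z: complex, k: int) -> complex:
    """Discrete k-valued MVN activation (illustrative sketch): map the
    weighted sum z to the k-th root of unity of the sector containing
    arg(z)."""
    angle = cmath.phase(z) % (2 * math.pi)      # arg(z) in [0, 2*pi)
    sector = int(angle / (2 * math.pi / k))     # sector index j
    return cmath.exp(1j * 2 * math.pi * sector / k)

def output_error(desired: complex, weighted_sum: complex) -> complex:
    """The modified error described above: difference between the
    desired output and the raw weighted sum, not the activated output."""
    return desired - weighted_sum
```

Because the error is linear in the weighted sum, no derivative of the activation function is needed, which is what keeps the learning rule derivative-free.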

2.
A multilayer neural network based on multi-valued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of the MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.

3.
In this paper, we examine some important aspects of the Hebbian and error-correction learning rules for complex-valued neurons. These learning rules, previously considered for the multi-valued neuron (MVN) whose inputs and output lie on the unit circle, are generalized to a complex-valued neuron whose inputs and output are arbitrary complex numbers. The Hebbian learning rule is also considered for the MVN with a periodic activation function. It is shown experimentally that Hebbian weights, even when they cannot yet implement the input/output mapping to be learned, are better starting weights for error-correction learning, which converges faster from the Hebbian weights than from random ones.
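A minimal sketch of a Hebbian rule of the kind discussed above, for a complex-valued neuron: each weight is the correlation of the desired output with the conjugated input. The normalization and the function name are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def hebbian_weights(X: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Hebbian weights for a complex-valued neuron: correlate the
    desired outputs d (one per sample) with the conjugated inputs X
    (samples in rows). Illustrative sketch; 1/n normalization assumed."""
    n_samples = X.shape[0]
    return (X.conj().T @ d) / n_samples
```

Weights produced this way can then seed an error-correction phase, which (per the entry above) tends to converge faster than from random initial weights.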

4.
A new derivation is presented for the bounds on the size of a multilayer neural network required to implement an arbitrary training set exactly; namely, the training set can be implemented with zero error with two layers and a number of hidden-layer neurons N1 >= p - 1, where p is the number of training patterns. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivations. The weights for the hidden layer can be chosen almost arbitrarily, and the weights for the output layer can be found by solving N1 + 1 linear equations. The method presented exactly solves (M), the multilayer neural network training problem, for any arbitrary training set.
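The linear-solve step can be sketched as follows: hidden weights chosen almost arbitrarily, output weights obtained from a system of linear equations, giving zero training error on p patterns. This is a generic illustration with random tanh hidden units, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# p training pairs; hidden weights (almost) arbitrary, as the derivation
# above allows; output weights from a linear solve over p equations.
p = 8
X = rng.normal(size=(p, 3))                     # p input vectors
y = rng.normal(size=p)                          # p target outputs

W_hidden = rng.normal(size=(3, p - 1))          # p - 1 hidden units
H = np.tanh(X @ W_hidden)                       # hidden activations
H1 = np.hstack([H, np.ones((p, 1))])            # + bias column -> p columns

w_out, *_ = np.linalg.lstsq(H1, y, rcond=None)  # p equations, p unknowns
residual = np.max(np.abs(H1 @ w_out - y))       # ~0: exact implementation
```

The hidden features only need to make the p-by-p system nonsingular, which random weights achieve generically; that is the sense in which they can be chosen "almost arbitrarily".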

5.
For multivariable complex systems that are nonlinear, have large time delays and uncertainties, and are difficult to describe with an exact mathematical model, traditional control theory struggles to achieve satisfactory control performance. Fuzzy neural network control does not depend on an exact mathematical model of the controlled plant and can adaptively adjust its control rules and membership-function parameters as the plant parameters change; on this basis, an applied study of control using a fuzzy neural network controller is carried out. Using a typical feedforward fuzzy neural network model, a design method for a multi-valued fuzzy neural network control system with learning capability is given. Simulation experiments show that the system achieves fairly good control performance.

6.
《Applied Soft Computing》2007,7(3):739-745
In this paper, a learning algorithm for a single integrate-and-fire neuron (IFN) is proposed and tested on various applications in which a multilayer perceptron neural network is conventionally used. It is found that a single IFN is sufficient for applications that would otherwise require a number of neurons in different hidden layers of a conventional neural network. Several benchmark and real-life problems of classification and time-series prediction are illustrated. It is observed that incorporating additional biological phenomena into an artificial neural network can make it more powerful.
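For readers unfamiliar with the model, a minimal leaky integrate-and-fire neuron can be sketched in a few lines; the leak factor, threshold, and reset rule below are illustrative choices, and the paper's learning rule is not reproduced.

```python
def simulate_ifn(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron (illustrative sketch).
    The membrane potential decays by `leak`, accumulates the input,
    and emits a spike (1) and resets when it crosses `threshold`."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of the input
        if v >= threshold:
            spikes.append(1)      # fire...
            v = 0.0               # ...and reset the potential
        else:
            spikes.append(0)
    return spikes
```

The spiking nonlinearity plus the internal state is what gives even a single such neuron more expressive power than a single static perceptron unit.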

7.
A high-order feedforward neural architecture, called the pi_t-sigma (π_tσ) neural network, is proposed for lossy digital image compression and reconstruction problems. The π_tσ network architecture is composed of an input layer, a single hidden layer, and an output layer. The hidden layer consists of classical additive neurons, whereas the output layer consists of translated multiplicative neurons (π_t-neurons). A two-stage learning algorithm is proposed to adjust the parameters of the π_tσ network: first, a genetic algorithm (GA) is used to avoid premature convergence to poor local minima; in the second stage, a conjugate gradient method is used to fine-tune the solution found by the GA. Experiments using the Standard Image Database and infrared satellite images show that the proposed π_tσ network performs better than the classical multilayer perceptron, improving the reconstruction precision (measured by the mean squared error) by about 56% on average.

8.
《Computers & chemistry》1998,21(5):385-391
A new neural network (NN) using potential functions (PF), named PFNN, is proposed for classifying complex chemical patterns. Correspondingly, a new algorithm, called the "+δ" algorithm, is proposed to train the network. On a benchmark classification problem, the conventional multilayer feedforward (MLF) neural network is tested and compared with the PFNN. Furthermore, experiments on classifying complex chemical patterns are performed. The results of these experiments demonstrate that the PFNN handles classification well owing to its precision and speed.

9.
We study the number of hidden layers required by a multilayer neural network with threshold units to compute a dichotomy from ℝ^d to {0,1} defined by a finite set of hyperplanes. We show that this question is far more intricate than computing Boolean functions, although that well-known problem underlies our research. We present advanced results on the characterization of dichotomies from ℝ² to {0,1} that require two hidden layers to be realized exactly.

10.
There are several neural network implementations using software, hardware, or hardware/software co-design. This work proposes a hardware architecture to implement an artificial neural network (ANN) whose topology is the multilayer perceptron (MLP). We explore the parallelism of neural networks and allow on-the-fly changes to the number of inputs, the number of layers, and the number of neurons per layer of the net. This reconfigurability means that any ANN application may be implemented on the proposed hardware. To reduce the processing time spent in arithmetic computation, a real number is represented as a fraction of integers. In this way, the arithmetic is limited to integer operations, performed by fast combinational circuits, and a simple state machine suffices to control the sums and products of fractions. The sigmoid is used as the activation function and is approximated by polynomials whose underlying computation requires only sums and products. A theorem is introduced and proved to justify this arithmetic strategy for computing the activation function. Thus, the arithmetic circuitry used to implement the neuron's weighted sum is reused for computing the sigmoid; this resource sharing drastically decreases the total area of the system. After modeling and simulation for functional validation, the proposed architecture was synthesized on reconfigurable hardware. The results are promising.
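The fraction-of-integers idea can be sketched in a few lines: represent reals as integer ratios so that every operation reduces to integer sums and products, and reuse the same arithmetic for a polynomial stand-in for the sigmoid. The cubic below is an illustrative approximation near zero, not the paper's polynomial.

```python
from fractions import Fraction

def poly_sigmoid(x: Fraction) -> Fraction:
    """Polynomial stand-in for 1/(1 + e^-x) near 0, needing only sums
    and products, so the integer-fraction arithmetic is reused.
    The coefficients are an illustrative assumption."""
    return Fraction(1, 2) + Fraction(1, 4) * x - Fraction(1, 48) * x**3

# All values are integer fractions; the weighted sum uses only
# integer multiplications and additions on numerators/denominators.
w = [Fraction(1, 2), Fraction(-1, 4)]
x = [Fraction(3, 2), Fraction(2, 1)]
s = sum(wi * xi for wi, xi in zip(w, x))   # weighted sum = 1/4
y = poly_sigmoid(s)                        # same arithmetic reused
```

In hardware, the Fraction objects correspond to numerator/denominator register pairs, and the polynomial evaluation reuses the multiply-accumulate circuitry of the weighted sum, which is the resource sharing the entry describes.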

11.
This paper proposes a constructive neural network with a piecewise linear or nonlinear local interpolation capability for approximating arbitrary continuous functions. The network is devised by introducing a space tessellation, a covering of the Euclidean space by nonoverlapping hyperpolyhedral convex cells. In the proposed network, a number of neural network granules (NNGs) are processed in parallel and repeated regularly with identical structures. Each NNG performs a local mapping with interpolation capability for the corresponding hyperpolyhedral convex cell of the tessellation. The plastic weights of an NNG can be calculated directly to implement the mapping for the training data; consequently, training time is reduced and the difficulties of local minima in training are alleviated. In addition, the interpolation capability of the NNG improves generalization for new data within a convex cell. The proposed network requires additional neurons for the tessellation compared with standard multilayer neural networks. This increases the network size but does not slow the retrieval response when implemented on a parallel architecture.

12.
The vestibulo-ocular reflex (VOR) is characterized by a short-latency, high-fidelity eye movement response to head rotations at frequencies up to 20 Hz. Electrophysiological studies of medial vestibular nucleus (MVN) neurons, however, show that their response to sinusoidal currents above 10 to 12 Hz is highly nonlinear and distorted by aliasing for all but very small current amplitudes. How can this system function in vivo when single cell response cannot explain its operation? Here we show that the necessary wide VOR frequency response may be achieved not by firing rate encoding of head velocity in single neurons, but in the integrated population response of asynchronously firing, intrinsically active neurons. Diffusive synaptic noise and the pacemaker-driven, intrinsic firing of MVN cells synergistically maintain asynchronous, spontaneous spiking in a population of model MVN neurons over a wide range of input signal amplitudes and frequencies. Response fidelity is further improved by a reciprocal inhibitory link between two MVN populations, mimicking the vestibular commissural system in vivo, but only if asynchrony is maintained by noise and pacemaker inputs. These results provide a previously missing explanation for the full range of VOR function and a novel account of the role of the intrinsic pacemaker conductances in MVN cells. The values of diffusive noise and pacemaker currents that give optimal response fidelity yield firing statistics similar to those in vivo, suggesting that the in vivo network is tuned to optimal performance. While theoretical studies have argued that noise and population heterogeneity can improve coding, to our knowledge this is the first evidence indicating that these parameters are indeed tuned to optimize coding fidelity in a neural control system in vivo.

13.
Alberto, Pierre, Damien, Mevludin, Louis. 《Neurocomputing》2007,70(16-18):2668
This paper investigates the possibility of estimating rotor angles in the time frame of transient (angle) stability of electric power systems, for use in real time. The proposed dynamic state estimation technique is based on voltage and current phasors obtained from a phasor measurement unit assumed to be installed on the extra-high-voltage side of the substation of a power plant, together with a multilayer perceptron trained off-line from simulations. We demonstrate that the intuitive approach of directly mapping the phasor measurements to the generator rotor angle with a single neural network does not give satisfactory results. We found that a good way to approach the angle estimation problem is to use two neural networks to estimate sin(δ) and cos(δ) and to recover the angle from these values by simple post-processing. Simulation results on a part of the Mexican interconnected system show that the approach can yield satisfactory accuracy for real-time monitoring and control of transient instability.
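The post-processing step admits a one-line sketch: given the two networks' estimates of sin(δ) and cos(δ), atan2 recovers δ over the full angular range and tolerates slightly unnormalized estimates. The function name is an assumption for illustration.

```python
import math

def recover_angle(sin_est: float, cos_est: float) -> float:
    """Recover the rotor angle delta (in radians, in (-pi, pi]) from
    the two networks' sin/cos estimates. atan2 uses only the ratio and
    the signs, so mildly inconsistent estimates are still handled."""
    return math.atan2(sin_est, cos_est)
```

Estimating the (sin, cos) pair instead of δ itself also avoids the discontinuity a network would face at the ±180° wrap-around, which is a plausible reason the direct mapping performed poorly.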

14.
We propose an algorithm for constructing a feedforward neural network with a single hidden layer. The algorithm is applied to image compression and is shown to give satisfactory results. The construction begins with a simple network topology containing a single unit in the hidden layer. An optimal set of weights for this network is obtained by applying a variant of the quasi-Newton method for unconstrained optimisation. If this set of weights does not give a network with the desired accuracy, one more unit is added to the hidden layer and the network is retrained. This process is repeated until the desired network is obtained. We show that each addition of a hidden unit is guaranteed to increase the signal-to-noise ratio of the compressed image.

15.
To address time delays in nonlinear systems, a new single-neuron Smith predictive control algorithm is presented. The neural predictive controller consists of a single-neuron adaptive PID controller with incomplete derivative action and a neural-network Smith predictor. The predictor performs multi-step prediction of the output, and the controller acts in advance to eliminate the effect of the time delay on the system. The single-neuron adaptive PID controller with incomplete derivative action adjusts its weights by an improved Hebb learning rule, and adaptive control is realized by online tuning of the weight coefficients. Simulation experiments show that the method achieves a fast response and good response performance.

16.
In this paper, a new multi-output neural model with a tunable activation function (TAF) and its general form are presented. It combines the traditional neural model with the TAF neural model. A recursive least squares algorithm is used to train a multilayer feedforward neural network built from the new multi-output tunable-activation-function (MO-TAF) model. Simulation results show that the MO-TAF-enabled multilayer feedforward network has better capability and performance than both the traditional multilayer feedforward neural network and the feedforward neural network with tunable activation functions: it significantly simplifies the network architecture, improves accuracy, and speeds up convergence.

17.
Let W be a simply connected region in the complex plane, f a function analytic in W, and γ a positively oriented Jordan curve in W that does not pass through any zero of f. We present an algorithm for computing all the zeros of f that lie in the interior of γ. It proceeds by evaluating certain integrals along γ numerically and is based on the theory of formal orthogonal polynomials. The algorithm requires only f, not its first derivative f′. We have found that it gives accurate approximations to the zeros. Moreover, it is self-starting in the sense that it does not require initial approximations. The algorithm works for simple as well as multiple zeros, although it cannot compute the multiplicity of a zero explicitly. Numerical examples illustrate the effectiveness of our approach. Received: November 2, 1998; revised March 30, 1999
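A derivative-free numerical flavour of this setting can be illustrated with the argument principle: the winding number of f along the curve, computed from values of f only, counts the enclosed zeros with multiplicity. This is a simplified illustration, not the formal-orthogonal-polynomial algorithm itself; the unit circle stands in for γ, and the discretization count is an arbitrary choice.

```python
import cmath
import math

def count_zeros(f, n=2000):
    """Count zeros of an analytic f inside the unit circle, with
    multiplicity, via the argument principle: accumulate the change in
    arg(f) around the curve and divide by 2*pi. Uses only values of f,
    no derivative. Assumes n is large enough that each step's phase
    change stays below pi."""
    total = 0.0
    prev = f(cmath.exp(2j * math.pi * 0 / n))
    for k in range(1, n + 1):
        cur = f(cmath.exp(2j * math.pi * k / n))
        total += cmath.phase(cur / prev)   # incremental argument change
        prev = cur
    return round(total / (2 * math.pi))
```

Note this gives only the number of zeros; locating them (as the algorithm above does, via integrals feeding formal orthogonal polynomials) requires more than the winding number.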

18.
This paper investigates the use of neural networks for the identification of linear time-invariant dynamical systems. Two classes of networks, the multilayer feedforward network and the recurrent network with linear neurons, are studied. A notation based on the Kronecker product and vector-valued functions of a matrix is introduced for neural models. It permits writing a feedforward network as a one-step-ahead predictor, as used in parameter estimation. Special attention is devoted to the system-theoretic interpretation of neural models. Sensitivity analysis can be formulated using derivatives based on the above-mentioned notation.
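The kind of notation mentioned above rests on the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X), which turns a matrix-valued layer into a predictor that is linear in the stacked parameters. The identity itself is a generic linear-algebra fact and can be checked numerically; the matrix sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))
X = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 2))

# vec(AXB) stacked column-major, as in the usual vec convention
lhs = (A @ X @ B).flatten(order="F")
# (B^T kron A) vec(X): the same quantity, linear in vec(X)
rhs = np.kron(B.T, A) @ X.flatten(order="F")
```

With X holding the unknown weights, the right-hand form is exactly the regression shape needed for one-step-ahead prediction error methods.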

19.
The computational power of a neuron lies in the spatial grouping of synapses belonging to a dendritic tree. Attempts to give a mathematical representation to this grouping process continue to be a fascinating field of work for researchers in the neural network community. In the literature, we generally find neuron models comprising a summation, radial basis, or product aggregation function as the basic unit of a feedforward multilayer neural network. All these models and their corresponding networks have their own merits and demerits. The MLP constructs a global approximation to the input-output mapping, while an RBF network, using exponentially decaying localized nonlinearities, constructs a local approximation. In this paper, we propose two novel compensatory aggregation functions for artificial neurons. They produce the net potential as a linear or nonlinear composition of the basic summation and radial basis operations over a set of input signals. The neuron models based on these aggregation functions ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of these neurons have been tested on various classification and functional mapping problems. The neurons have also shown excellent generalization ability on two-dimensional transformations.
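One hedged reading of a compensatory aggregation function is sketched below: the net potential mixes a global summation part with a local radial-basis part. The convex combination, the Gaussian form, and all parameter names are illustrative assumptions, not the paper's exact functions.

```python
import numpy as np

def compensatory_net_input(x, w, c, lam=0.5):
    """Net potential as a mix of a summation (global, MLP-like) part
    and a radial-basis (local, RBF-like) part. `lam` trades off the
    two; lam, c, and the Gaussian width are illustrative choices."""
    summation = float(np.dot(w, x))                 # global part
    radial = float(np.exp(-np.sum((x - c) ** 2)))   # local part
    return lam * summation + (1 - lam) * radial
```

Making lam a trainable parameter would let each neuron learn how global or local its response should be, which is the compensatory idea the entry describes.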

20.
Analysis of PID neural network multivariable control systems
舒怀林 《自动化学报》1999,25(1):105-111
The PID neural network is a new multilayer feedforward neural network whose hidden-layer units are proportional (P), integral (I), and derivative (D) units. The number of neurons in each layer, the connection pattern, and the initial connection weights are determined according to the basic principles of PID control, and the network can be used for decoupling control of multivariable systems. The structure and computation of the PID neural network are given, the convergence and stability of PID neural network multivariable control systems are proved theoretically, and computer simulations demonstrate that the PID neural network has good self-learning and adaptive decoupling-control performance.
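The hidden layer described above can be sketched as three stateful units: proportional (pass-through), integral (accumulator), and derivative (first difference). This discrete-time sketch is an assumed minimal reading of the P/I/D units, not the paper's network.

```python
class PIDHiddenLayer:
    """Minimal discrete-time P/I/D hidden units (illustrative sketch):
    at each step the proportional unit passes the input through, the
    integral unit accumulates it, and the derivative unit differences
    it against the previous input."""
    def __init__(self):
        self.acc = 0.0      # integral state
        self.prev = 0.0     # previous input for the difference

    def step(self, u: float):
        p = u               # proportional unit
        self.acc += u       # integral unit
        d = u - self.prev   # derivative unit
        self.prev = u
        return p, self.acc, d
```

In the full network these three signals would be weighted and combined by the output layer, so learning the output weights plays the role of tuning PID gains for each control channel.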

