Similar documents
 Found 20 similar documents (search time: 31 ms)
1.
Optimizing Boltzmann machines with a genetic algorithm   (cited 3 times: 0 self-citations, 3 by others)
The Boltzmann machine is a widely used stochastic neural network. It learns via simulated annealing, which can reach a global or near-global optimum; the weights are adjusted by comparing the desired network mode with the mode actually learned, so that the network reaches, or approaches as closely as possible, the desired mode. This paper applies a genetic algorithm to Boltzmann machine learning: after encoding the BM, the network is trained with selection, crossover, and mutation operators that adjust the weights so that networks with higher fitness survive, until the network reaches the desired mode. Examples verify that this is a simple and practical way to tune the network weights.
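A minimal sketch of the selection/crossover/mutation loop described above. The encoding, fitness function (distance of a candidate weight vector to a fixed target configuration, standing in for "matching the expected network mode"), and all GA settings here are illustrative, not the paper's:

```python
import random

random.seed(0)

# Hypothetical target configuration standing in for the desired network mode.
TARGET = [0.5, -0.3, 0.8, 0.1]

def fitness(weights):
    # Higher fitness for weight vectors closer to the desired configuration.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def evolve(pop_size=30, genes=4, generations=200,
           crossover_rate=0.8, mutation_rate=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)       # one-point crossover
            child = (a[:cut] + b[cut:]
                     if random.random() < crossover_rate else a[:])
            if random.random() < mutation_rate:    # mutation
                i = random.randrange(genes)
                child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Keeping the top half of the population unmodified makes the best fitness monotonically non-decreasing, which is the "networks with higher fitness survive" behaviour the abstract describes.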

2.
This study presents learning of nonlinear systems and functions using wavelet networks. Wavelet networks resemble neural networks in their training and structure, but their training algorithms typically require fewer iterations than those of neural networks. A Gaussian-based mother wavelet is used as the activation function. Wavelet networks have three main parameter sets: dilations, translations, and connection weights. These parameters are initialized and then optimized during the training (learning) phase. Fully random initialization is unsuitable for process modeling, because wavelet functions vanish rapidly; a heuristic initialization procedure is therefore used. A series-parallel identification model, which does not use feedback, is applied to system modeling: real system outputs are used to predict future system outputs, so the stability and approximation capability of the network are guaranteed. Parameters are updated by gradient methods with a momentum term, minimizing a quadratic cost function. Three example problems are examined in simulation: static nonlinear functions and a discrete dynamic nonlinear system.
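A toy version of such a network, assuming the Gaussian-derivative mother wavelet and fitting a static 1-D function. As a simplification, only the connection weights are trained (by gradient descent with momentum on a quadratic cost), and the translations are spread heuristically over the input range rather than drawn at random; the unit count and step sizes are illustrative:

```python
import math, random

random.seed(1)

def psi(x):
    # Gaussian-derivative mother wavelet used as the activation.
    return -x * math.exp(-x * x / 2.0)

class WaveletNetwork:
    def __init__(self, units):
        # Heuristic initialisation: spread translations over the input
        # range instead of random values, since wavelets vanish rapidly.
        self.t = [i / (units - 1) for i in range(units)]   # translations
        self.d = [0.3] * units                             # dilations
        self.w = [random.uniform(-0.5, 0.5) for _ in range(units)]

    def predict(self, x):
        return sum(w * psi((x - t) / d)
                   for w, t, d in zip(self.w, self.t, self.d))

    def train(self, data, epochs=800, lr=0.05, momentum=0.5):
        # Gradient descent with a momentum term on a quadratic cost.
        vel = [0.0] * len(self.w)
        for _ in range(epochs):
            for x, y in data:
                err = self.predict(x) - y
                for k in range(len(self.w)):
                    grad = err * psi((x - self.t[k]) / self.d[k])
                    vel[k] = momentum * vel[k] - lr * grad
                    self.w[k] += vel[k]

net = WaveletNetwork(units=8)
samples = [(i / 20.0, math.sin(2 * math.pi * i / 20.0)) for i in range(21)]
net.train(samples)
mse = sum((net.predict(x) - y) ** 2 for x, y in samples) / len(samples)
```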

3.
In this work we present a constructive algorithm capable of producing arbitrarily connected feedforward neural network architectures for classification problems. Architecture and synaptic weights of the neural network should be defined by the learning procedure. The main purpose is to obtain a parsimonious neural network, in the form of a hybrid and dedicate linear/nonlinear classification model, which can guide to high levels of performance in terms of generalization. Though not being a global optimization algorithm, nor a population-based metaheuristics, the constructive approach has mechanisms to avoid premature convergence, by mixing growing and pruning processes, and also by implementing a relaxation strategy for the learning error. The synaptic weights of the neural networks produced by the constructive mechanism are adjusted by a quasi-Newton method, and the decision to grow or prune the current network is based on a mutual information criterion. A set of benchmark experiments, including artificial and real datasets, indicates that the new proposal presents a favorable performance when compared with alternative approaches in the literature, such as traditional MLP, mixture of heterogeneous experts, cascade correlation networks and an evolutionary programming system, in terms of both classification accuracy and parsimony of the obtained classifier.  相似文献   

4.
Model-free learning for synchronous and asynchronous quasi-static networks is presented. The network weights are continuously perturbed, while the time-varying performance index is measured and correlated with the perturbation signals; the correlation output determines the changes in the weights. The perturbation may be either via noise sources or orthogonal signals. The invariance to detailed network structure mitigates large variability between supposedly identical networks as well as implementation defects. This local, regular, and completely distributed mechanism requires no central control and involves only a few global signals. Thus, it allows for integrated, on-chip learning in large analog and optical networks.
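The perturb-and-correlate mechanism can be sketched as simultaneous weight perturbation. Here the "measured performance index" is a toy quadratic with an illustrative target vector, and the perturbation uses random ±δ noise signals (the paper's orthogonal-signal variant would substitute deterministic waveforms):

```python
import random

random.seed(2)

# Toy performance index: a quadratic stand-in for the measured,
# time-varying error of a network (the target vector is illustrative).
TARGET = [0.3, -0.7, 0.5]

def perf(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def perturbation_learning(steps=2000, delta=0.01, lr=0.05):
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        # Perturb all weights simultaneously with small +/-delta signals...
        p = [random.choice((-delta, delta)) for _ in w]
        # ...measure the change in the performance index, and correlate
        # it with each perturbation signal to estimate the gradient.
        dj = perf([wi + pi for wi, pi in zip(w, p)]) - perf(w)
        w = [wi - lr * dj * pi / (delta * delta) for wi, pi in zip(w, p)]
    return w

w = perturbation_learning()
```

Note the update rule is purely local: each weight needs only its own perturbation signal and the single global performance measurement, matching the "few global signals, no central control" property.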

5.
Concerns the problem of finding weights for feed-forward networks in which threshold functions replace the more common logistic node output function. The advantage of such weights is that the complexity of the hardware implementation of such networks is greatly reduced. If the task to be learned does not change over time, it may be sufficient to find the correct weights for a threshold function network off-line and to transfer these weights to the hardware implementation. This paper provides a mathematical foundation for training a network with standard logistic function nodes and gradually altering the function to allow a mapping to a threshold unit network. The procedure is analogous to taking the limit of the logistic function as the gain parameter goes to infinity. It is demonstrated that, if the error in a trained network is small, a small change in the gain parameter will cause a small change in the network error. The result is that a network that must be implemented with threshold functions can first be trained as a traditional backpropagation network using gradient descent, and further trained with progressively steeper logistic functions. In theory, this process could require many repetitions. In simulations, however, the weights have been successfully mapped to a true threshold network after a modest number of slope changes. It is important to emphasize that this method is only applicable to situations for which off-line learning is appropriate.

6.
In this paper, a hybrid method is proposed to control a nonlinear dynamic system using a feedforward neural network. The learning procedure applies different learning algorithms to different layers: the weights connecting the input and hidden layers are first adjusted by a self-organized learning procedure, whereas the weights between the hidden and output layers are trained by a supervised learning algorithm, such as a gradient descent method. A comparison with backpropagation (BP) shows that the new algorithm can considerably reduce network training time.
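A minimal sketch of the two-stage idea on a function-fitting toy problem (not the paper's control task): a competitive, self-organized rule places the input-to-hidden prototypes, then only the hidden-to-output weights are trained by gradient descent. The Gaussian hidden units, unit count, and step sizes are assumptions for illustration:

```python
import math, random

random.seed(4)

xs = [i / 30.0 for i in range(31)]
ys = [math.sin(2 * math.pi * x) for x in xs]

# Stage 1: self-organised (competitive) learning of input-to-hidden
# prototypes -- each winning unit's centre drifts toward the input.
centers = random.sample(xs, 6)
for _ in range(50):
    for x in xs:
        j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
        centers[j] += 0.1 * (x - centers[j])

def hidden(x):
    return [math.exp(-((x - c) / 0.2) ** 2) for c in centers]

# Stage 2: supervised gradient descent on hidden-to-output weights only.
wout = [0.0] * len(centers)
for _ in range(500):
    for x, y in zip(xs, ys):
        h = hidden(x)
        err = sum(w * a for w, a in zip(wout, h)) - y
        wout = [w - 0.1 * err * a for w, a in zip(wout, h)]

mse = sum((sum(w * a for w, a in zip(wout, hidden(x))) - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
```

Because stage 2 only optimizes a linear output layer, each supervised step is cheap, which is one intuition for the reduced training time claimed above.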

7.
Effects of moving the centers in an RBF network   (cited 1 time: 0 self-citations, 1 by others)
In radial basis function (RBF) networks, the placement of centers is said to have a significant effect on the performance of the network. In some applications, supervised learning of center locations has proved superior to networks whose centers are located by unsupervised methods, but such networks can take as long to train as sigmoid networks: the time needed for supervised learning offsets the training-time advantage of regular RBF networks. One way to overcome this is to train the network with a set of centers selected by unsupervised methods and then fine-tune the center locations: first evaluate whether moving the centers would decrease the error and then, depending on the required level of accuracy, change the center locations. This paper provides new results on bounds for the gradient and Hessian of the error, considered first as a function of the independent set of parameters, namely the centers, widths, and weights, and then as a function of the centers and widths alone, where the linear weights become functions of the basis function parameters for networks of fixed size. Bounds for the Hessian are also provided along a line beginning at the initial set of parameters. Using these bounds, it is possible to estimate how much the error can be reduced by changing the centers, and a step size can be specified to achieve a guaranteed amount of error reduction.
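The fine-tuning procedure can be caricatured as follows: hold the network size fixed, treat the linear output weights as a function of the basis parameters (here refit by a few least-mean-squares passes, standing in for an exact solve), and move a center only when doing so is verified to decrease the error. The data, widths, and step size are illustrative:

```python
import math

xs = [i / 20.0 for i in range(21)]
ys = [math.sin(2 * math.pi * x) for x in xs]
centers = [0.1, 0.35, 0.65, 0.9]    # e.g. from an unsupervised pass
width = 0.25

def fit_weights(cs):
    # Output weights as a function of the basis parameters: a few
    # least-mean-squares passes stand in for an exact linear solve.
    w = [0.0] * len(cs)
    for _ in range(300):
        for x, y in zip(xs, ys):
            h = [math.exp(-((x - c) / width) ** 2) for c in cs]
            err = sum(wi * hi for wi, hi in zip(w, h)) - y
            w = [wi - 0.1 * err * hi for wi, hi in zip(w, h)]
    return w

def error(cs):
    w = fit_weights(cs)
    return sum((sum(wi * math.exp(-((x - c) / width) ** 2)
                    for wi, c in zip(w, cs)) - y) ** 2
               for x, y in zip(xs, ys))

e0 = error(centers)
step = 0.02
for k in range(len(centers)):
    for move in (step, -step):
        trial = centers[:]
        trial[k] += move
        if error(trial) < error(centers):   # move only if error decreases
            centers = trial
            break
e1 = error(centers)
```

The paper's gradient and Hessian bounds replace this brute-force check with an a priori estimate of the achievable error reduction and a guaranteed step size.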

8.
Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A), but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network where A and B are two parts of a single, stored pattern. We analyze a general, recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis of associative learning supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.

9.
A supervised learning algorithm for quantum neural networks (QNN), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to the QNNs published in the literature, the proposed model can both perform quantum learning and simulate classical models; this is partly because the neural models used elsewhere rely on weights and non-linear activation functions. Here, a quantum weightless neural network model is proposed as a quantisation of the classical weightless neural networks (WNN), so that theoretical and practical results on WNN can be inherited by these quantum weightless neural networks (qWNN). In the quantum learning algorithm proposed here, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has computational cost polynomial in the number of patterns in the training set.

10.
A type of optimized neural network with limited precision weights (LPWNN) is presented in this paper. Such neural networks, which require less memory for storing the weights and less expensive floating-point units to perform the computations involved, are better suited to embedded-systems implementation than real-weight ones. Based on an analysis of the learning capability of LPWNN, a Quantized Back-propagation Step-by-Step (QBPSS) algorithm is proposed for such networks to overcome the effects of limited precision. Methods of designing and training LPWNN are presented, including the quantization of the non-linear activation function and the selection of the learning rate, network architecture, and weight precision. The optimized LPWNN has been evaluated against conventional neural networks with double-precision floating-point weights on road-image recognition for an intelligent vehicle on an ARM9 embedded system; the results show the optimized LPWNN to be about 7 times faster than the conventional ones.
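One common way to train with limited-precision weights, sketched here on a one-parameter linear fit, is to run the forward pass through quantized copies while full-precision "shadow" weights accumulate the gradient updates. This is an illustrative scheme, not the paper's exact QBPSS rule; the bit width, scale, and task are assumptions:

```python
import random

random.seed(5)

def quantize(w, bits=8, scale=2.0):
    # Fixed-point quantisation: clamp w to [-scale, scale] and snap it
    # to a grid with 2**bits levels (illustrative, not QBPSS itself).
    levels = 2 ** (bits - 1)
    q = max(-levels, min(levels - 1, round(w / scale * levels)))
    return q * scale / levels

# Toy task: fit y = 0.4*x + 0.2 with a single linear unit whose forward
# pass always uses the limited-precision weights.
data = [(x / 10.0, 0.4 * x / 10.0 + 0.2) for x in range(11)]
w = random.uniform(-0.5, 0.5)
b = 0.0
for _ in range(500):
    for x, y in data:
        err = quantize(w) * x + quantize(b) - y   # limited-precision forward
        w -= 0.05 * err * x                        # full-precision update
        b -= 0.05 * err

pred = quantize(w) * 0.5 + quantize(b)             # inference at 8 bits
```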

11.
Wang Tao, Zhuang Xinhua. Chinese Journal of Computers (《计算机学报》), 1993, 16(2): 97-105
This paper proposes an optimization-based learning algorithm for heteroassociative memory models. First, a criterion reflecting the performance of the neural network is converted into an easily controlled cost function, so that determining the weights naturally becomes a global optimization process, carried out with gradient descent. The learning algorithm guarantees that every training pattern becomes a stable attractor of the system, with a maximal basin of attraction in the optimization sense. Theoretically, we discuss the storage capacity of the heteroassociative memory model, the asymptotic stability of the training patterns, and the extent of the basins of attraction. Computer experiments fully demonstrate the effectiveness of the algorithm.

12.
An RBF neural network learning algorithm based on the extended Kalman filter   (cited 1 time: 1 self-citation, 0 by others)
Radial basis function (RBF) neural networks are widely applied to signal processing and pattern recognition problems, and several learning algorithms exist for determining the RBF centers and training the network. Since determining the center vectors and the network weights can be viewed as a single system-identification problem, this paper applies the extended Kalman filter (EKF) as the learning algorithm for a multi-input multi-output RBF network: once the number of network nodes is fixed, the EKF simultaneously estimates the center vectors and the weight matrix. To speed up convergence, an extended Kalman filter with a suboptimal fading factor (SFEKF) is further proposed for RBF learning. Simulation results show that applying the EKF during learning outperforms a conventional RBF network, with convergence markedly faster than gradient descent and a reduced computational burden.
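A minimal sketch of the joint estimation idea (plain EKF, without the fading factor): stack the centers and output weights into one state vector and update both from the same scalar measurement. The two-unit network, fixed width, noiseless data, and initial guesses are illustrative:

```python
import math

# Joint EKF state: theta = [c1, c2, w1, w2] -- RBF centres and output
# weights estimated together via the same measurement model.
def model(theta, x):
    c1, c2, w1, w2 = theta
    return (w1 * math.exp(-((x - c1) / 0.3) ** 2) +
            w2 * math.exp(-((x - c2) / 0.3) ** 2))

def jacobian(theta, x):
    c1, c2, w1, w2 = theta
    g1 = math.exp(-((x - c1) / 0.3) ** 2)
    g2 = math.exp(-((x - c2) / 0.3) ** 2)
    return [w1 * g1 * 2 * (x - c1) / 0.3 ** 2,   # d/dc1
            w2 * g2 * 2 * (x - c2) / 0.3 ** 2,   # d/dc2
            g1, g2]                               # d/dw1, d/dw2

true = [0.3, 0.7, 1.0, -1.0]
theta = [0.25, 0.75, 0.5, -0.5]                  # rough initial guess
P = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
R = 1e-3                                         # measurement noise variance

for _ in range(20):
    for i in range(21):
        x = i / 20.0
        y = model(true, x)                       # noiseless training sample
        H = jacobian(theta, x)
        PH = [sum(P[r][c] * H[c] for c in range(4)) for r in range(4)]
        S = sum(H[r] * PH[r] for r in range(4)) + R
        K = [ph / S for ph in PH]                # Kalman gain
        innov = y - model(theta, x)
        theta = [t + k * innov for t, k in zip(theta, K)]
        HP = [sum(H[r] * P[r][c] for r in range(4)) for c in range(4)]
        P = [[P[r][c] - K[r] * HP[c] for c in range(4)] for r in range(4)]

mse = sum((model(theta, i / 20.0) - model(true, i / 20.0)) ** 2
          for i in range(21)) / 21
```

The SFEKF variant would rescale P with a fading factor before each update to keep the filter responsive; that refinement is omitted here.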

13.
Neural networks based on metric recognition methods have a strictly determined architecture. The number of neurons and connections, as well as the weight and threshold values, are calculated analytically from the initial conditions of the task: the number of recognizable classes, the number of samples, and the metric expressions used. This paper discusses the possibility of transforming these networks so that classical learning algorithms can be applied to them, without using analytical expressions to calculate the weight values. In the resulting network, training is carried out by recognizing images in pairs. This approach simplifies the learning process and makes it easy to extend the neural network by adding new images to the recognition task. The advantages of these networks include: (1) simplicity and transparency of the network architecture; (2) simple and reliable training; (3) the possibility of using a large number of images in the recognition problem; (4) a consistent increase in the number of recognizable classes without changing the previously computed weights and thresholds.

14.
To meet the challenge of global competition, realising value-adding processes within networked production structures is gaining significance, and optimising the supply chain in these production networks is a major challenge for ensuring success. This contribution focuses on performance evaluation in production networks as one part of network controlling. The specific model of a non-hierarchical production network based on competence cells (CCs) serves as the theoretical background. Evaluating the performance of the involved CCs is an important task in this context, carried out during the final phase of an ideal-typical life cycle of a production network. To quantify the evaluation data, suitable performance attributes must be identified. The evaluation itself is realised through an adapted value-benefit analysis, yielding a measure for valuing the performance of the participating CCs. In case of unsatisfactory performance, the profit shares are reduced; a specific approach to profit distribution serves as the basis for this.

15.
Based on various approaches, several different learning algorithms have been given in the literature for neural networks. Almost all of these algorithms have constant learning rates or constant accelerative parameters, though they have been shown to be effective for some practical applications. The learning procedure of a neural network can be regarded as a problem of estimating (or identifying) constant parameters (i.e. the connection weights of the network) under a nonlinear or linear observation equation. Making use of Kalman filtering, we derive a new back-propagation algorithm whose learning rate is computed by a time-varying Riccati difference equation. Perceptron-like and correlational learning algorithms are obtained as special cases. Furthermore, a self-organising algorithm for feature maps is constructed within a similar framework.
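For a linear observation equation the construction reduces to recursive least squares: the Kalman gain plays the role of a time-varying learning rate, and the covariance follows a Riccati difference equation. A self-contained sketch on an illustrative two-parameter regression (not the paper's derivation for full back-propagation):

```python
# Recursive least squares viewed as a Kalman filter estimating constant
# weights under a linear observation equation y = phi . w + noise.
def rls(samples, dim, lam=1.0):
    w = [0.0] * dim
    P = [[100.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for phi, y in samples:
        Pphi = [sum(P[r][c] * phi[c] for c in range(dim)) for r in range(dim)]
        denom = lam + sum(phi[r] * Pphi[r] for r in range(dim))
        gain = [p / denom for p in Pphi]        # time-varying learning rate
        err = y - sum(wi * pi for wi, pi in zip(w, phi))
        w = [wi + g * err for wi, g in zip(w, gain)]
        # Riccati difference equation for the covariance update.
        phiP = [sum(phi[r] * P[r][c] for r in range(dim)) for c in range(dim)]
        P = [[(P[r][c] - gain[r] * phiP[c]) / lam for c in range(dim)]
             for r in range(dim)]
    return w

true_w = [0.8, -0.4]                            # slope and bias to recover
samples = [([x / 10.0, 1.0], 0.8 * x / 10.0 - 0.4) for x in range(20)]
w = rls(samples, dim=2)
```

The gain starts large (high prior uncertainty) and shrinks as evidence accumulates, which is exactly the behaviour a hand-tuned constant learning rate cannot reproduce.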

16.
A hybrid algorithm combining Regrouping Particle Swarm Optimization (RegPSO) with a wavelet radial basis function neural network, referred to as the RegPSO-WRBF-NN algorithm, is presented and used to detect, identify, and characterize the acoustic signals produced by surface discharge activity, thereby differentiating abnormal operating conditions from normal ones. The tests were carried out on clean and polluted high-voltage glass insulators using the surface tracking and erosion test procedure of International Electrotechnical Commission standard 60587. A laboratory experiment was conducted by preparing prototypes of the discharges. An important step in training the WRBF network is choosing a proper number of hidden nodes; determining the centers, spreads, and network weights can be viewed as a system-identification problem, so PSO is used to optimize the WRBF neural network parameters in this work, and a combination method based on the WRBF neural network is adopted. The regrouping technique of RegPSO helps the swarm escape premature convergence; RegPSO was able to solve the stagnation problem on the surface-discharge dataset tested and to approximate the true global minimizer. Testing results indicate that the proposed approach responds quickly and yields accurate solutions as soon as the inputs are given. Learning performance is compared with that of existing conventional networks. The method proved effective in classifying the surface-discharge fault dataset using the wavelet radial basis function network based on RegPSO, and the test results show that the proposed approach is efficient, with a very high classification rate.

17.
Learning without local minima in radial basis function networks   (cited 54 times: 0 self-citations, 54 by others)
Learning from examples plays a central role in artificial neural networks. The success of many learning schemes is not guaranteed, however, since algorithms like backpropagation may get stuck in local minima, thus providing suboptimal solutions. For feedforward networks, optimal learning can be achieved provided that certain conditions on the network and the learning environment are met. This principle is investigated for the case of networks using radial basis functions (RBF). It is assumed that the patterns of the learning environment are separable by hyperspheres. In that case, we prove that the attached cost function is local minima free with respect to all the weights. This provides us with some theoretical foundations for a massive application of RBF in pattern recognition.

18.
Transductive classification using labeled and unlabeled objects in a heterogeneous information network is an interesting and challenging problem for knowledge extraction. Most real-world networks are heterogeneous in their natural setting, and traditional classification methods for homogeneous networks are not suitable for them. In a heterogeneous network, the various meta-paths connecting objects of the target type, on which classification is to be performed, make the classification task more challenging, and the semantics of each meta-path lead to different classification accuracy. Therefore, weight learning over meta-paths is required to leverage their semantics simultaneously through a weighted combination. In this work, we propose a novel meta-path based framework, HeteClass, for transductive classification of target-type objects. HeteClass explores the network schema of the given network and can also incorporate the knowledge of a domain expert to generate a set of meta-paths. The regularization-based weight learning method proposed in HeteClass effectively computes the weights of both symmetric and asymmetric meta-paths in the network, and the generated weights are consistent with real-world understanding. Using the learned weights, a homogeneous information network is formed over the target-type objects by weighted combination, and transductive classification is performed. HeteClass is flexible enough to utilize any suitable classification algorithm for transductive classification and can be applied to heterogeneous information networks with arbitrary network schema. Experimental results on real-world data sets show the effectiveness of HeteClass for classifying unlabeled objects in heterogeneous information networks.
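The final two steps above can be sketched on a toy graph: combine per-meta-path similarity matrices with (here, hand-picked illustrative) weights into one homogeneous network over the target objects, then run a simple label propagation as the pluggable transductive classifier. The matrices, weights, and labels are all hypothetical:

```python
# Weighted combination of per-meta-path similarity matrices into one
# homogeneous network over the target objects.
def combine(mats, weights):
    n = len(mats[0])
    return [[sum(w * m[i][j] for w, m in zip(weights, mats))
             for j in range(n)] for i in range(n)]

# Two meta-path based similarity matrices over 4 target objects.
A1 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
A2 = [[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]]
A = combine([A1, A2], [0.8, 0.2])     # learned weights would go here

labels = {0: 1, 2: 0}                 # objects 1 and 3 are unlabeled
scores = {i: labels.get(i, 0.5) for i in range(4)}
for _ in range(20):                   # propagate labels over the network
    for i in range(4):
        if i in labels:
            continue
        num = sum(A[i][j] * scores[j] for j in range(4))
        den = sum(A[i][j] for j in range(4))
        if den:
            scores[i] = num / den

pred = {i: int(scores[i] > 0.5) for i in (1, 3)}
```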

19.
This paper presents new theoretical results on global exponential stability of recurrent neural networks with bounded activation functions and time-varying delays. The stability conditions depend on external inputs, connection weights, and time delays of recurrent neural networks. Using these results, the global exponential stability of recurrent neural networks can be derived, and the estimated location of the equilibrium point can be obtained. As typical representatives, the Hopfield neural network (HNN) and the cellular neural network (CNN) are examined in detail.

20.
To address the problem that heterogeneous network representations based on a single meta-path miss both the structural information of the heterogeneous information network and the semantic information carried by other meta-paths, this paper proposes a representation learning method for heterogeneous networks based on fused meta-path weights. The method learns weights over a set of meta-paths in the heterogeneous information network, and then fuses the low-dimensional representations derived from the different meta-paths through a weighted combination, yielding a heterogeneous network representation that integrates the semantic information of the different meta-paths. Experimental results show that representation learning based on fused meta-path weights has good representational ability and can be effectively applied to data mining.


Copyright © Beijing Qinyun Technology Development Co., Ltd.   京ICP备09084417号