Similar Literature
 20 similar documents found (search time: 31 ms)
1.
Yu  Xin  Deng  Fei 《Neural computing & applications》2012,22(1):333-339

The ridge polynomial neural network is one of the most popular higher-order neural networks; it can approximate reasonable functions well while avoiding the combinatorial explosion in the number of required weights. In this paper, we study the convergence of the gradient method with a batch updating rule for the ridge polynomial neural network, proving a monotonicity theorem and two convergence theorems (one for weak and one for strong convergence). The experimental results demonstrate that the proposed theorems are valid.
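A minimal sketch may help fix the setting: the Python block below implements a toy ridge polynomial network (the output is a sum of blocks, each a product of ridge terms w·x + b) and trains it with a full-batch gradient step on squared error. The dimensions, learning rate, and toy target are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, order = 2, 3                          # input dimension, network order (assumed)
W = [[rng.normal(size=d) * 0.1 for _ in range(i + 1)] for i in range(order)]
b = [[0.0] * (i + 1) for i in range(order)]

def forward(X):
    y = np.zeros(len(X))
    for Wi, bi in zip(W, b):             # degree-i block: product of ridge terms
        block = np.ones(len(X))
        for w, c in zip(Wi, bi):
            block *= X @ w + c
        y += block
    return y

def batch_step(X, t, lr=0.05):
    err = forward(X) - t                 # full-batch residual
    for Wi, bi in zip(W, b):
        factors = [X @ w + c for w, c in zip(Wi, bi)]
        for j in range(len(Wi)):
            others = np.ones(len(X))     # product of the remaining factors
            for k, f in enumerate(factors):
                if k != j:
                    others *= f
            coef = err * others
            Wi[j] -= lr * (coef[:, None] * X).mean(axis=0)
            bi[j] -= lr * coef.mean()

X = rng.uniform(-1, 1, size=(200, d))
target = X[:, 0] * X[:, 1] + 0.5 * X[:, 0]     # toy polynomial target
for _ in range(3000):
    batch_step(X, target)
print(np.mean((forward(X) - target) ** 2))     # training MSE shrinks
```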


2.
A polynomial-function recurrent neural network model and its applications
周永权 《计算机学报》2003,26(9):1196-1200
Exploiting the fact that a recurrent neural network has both feedforward and feedback paths, this paper sets the activation functions of the hidden-layer neurons to a sequence of tunable polynomial functions, yielding a new polynomial-function recurrent neural network model. The model keeps the characteristics of conventional recurrent neural networks while offering stronger function-approximation capability. A learning algorithm for the new model is proposed for recursive computation problems, and the model is applied to the approximate factorization of multivariate polynomials, where the algorithm shows clear advantages. Worked examples show that the algorithm is effective, converges fast, and computes with high accuracy, making it suitable for the domain of recursive computation. The proposed model and its learning algorithm offer useful guidance for approximate symbolic computation in algebra.
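As a loose illustration of the model class (not the paper's algorithm), the sketch below runs a small recurrent network whose hidden activation is a tunable polynomial; all sizes and coefficients are made-up examples.

```python
import numpy as np

rng = np.random.default_rng(5)
coeffs = np.array([0.0, 1.0, 0.3])      # tunable polynomial p(s) = s + 0.3 s^2

def p(s):
    return sum(c * s**k for k, c in enumerate(coeffs))

def run(x_seq, W, U):
    h = np.zeros(W.shape[0])
    for x in x_seq:
        h = p(W @ x + U @ h)            # feedforward path W x, feedback path U h
    return h

W = rng.normal(size=(3, 2)) * 0.3       # input-to-hidden weights
U = rng.normal(size=(3, 3)) * 0.1       # recurrent (feedback) weights
print(np.round(run(rng.normal(size=(4, 2)), W, U), 3))
```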

3.
Regular fuzzy neural networks are universal approximators of fuzzy-valued functions
By analyzing the approximation properties of multivariate fuzzy-valued Bernstein polynomials, this paper proves the approximation capability of four-layer feedforward regular fuzzy neural networks (FNNs): these networks form a class of universal approximators of fuzzy-valued functions, i.e., on any compact set in Euclidean space, any continuous fuzzy-valued function can be approximated by such an FNN to arbitrary accuracy. Finally, a worked example illustrates the concrete steps for realizing this approximation.

4.
A survey of the learning and generalization performance of CMAC
The cerebellar model articulation controller (CMAC) is a locally learning feedforward network that is structurally simple, converges fast, and is easy to implement. Viewed at the level of individual neurons, the relations among neurons are linear, yet the network as a whole realizes a nonlinear mapping, and the model generalizes from the input stage onward. Since the learning and generalization abilities of the network remain active research topics, this paper gives a comprehensive discussion of CMAC's generalization ability, learning ability, and several avenues for improvement. Finally, a training strategy for improving CMAC generalization is presented: it avoids the learning-interference problem, speeds up learning, and increases the effective number of training samples by raising the number of training cycles. MATLAB simulations show that this strategy improves the generalization ability of CMAC networks; the method is simple, effective, and practical.
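For readers unfamiliar with CMAC, the sketch below shows the core mechanism behind its built-in generalization: each input activates C overlapping table cells, so nearby inputs share weights and untrained inputs inherit predictions. The table size, widths, and target function are illustrative assumptions, not the survey's experiments.

```python
import numpy as np

C, n_cells = 8, 64                       # generalization width, table size

w = np.zeros(n_cells)

def active(x):                           # the C overlapping cells covering x in [0,1)
    base = int(x * (n_cells - C))
    return slice(base, base + C)

def predict(x):
    return w[active(x)].sum()

# Train on a coarse grid; nearby untrained inputs share cells, hence generalize.
for _ in range(100):
    for x in np.linspace(0.0, 0.99, 12):
        w[active(x)] += 0.2 * (np.sin(2 * np.pi * x) - predict(x)) / C

x_new = 0.37                             # an input never seen during training
print(predict(x_new), np.sin(2 * np.pi * x_new))
```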

5.
The ability to model the behaviour of arbitrary dynamic systems is one of the most useful properties of recurrent networks. The dynamic ridge polynomial neural network (DRPNN) is a recurrent neural network used for time series forecasting. Despite the potential and capability of the DRPNN, stability problems can occur because of the recurrent feedback. Therefore, in this study, a sufficient condition based on an adaptive learning rate is developed by introducing a Lyapunov function. To compare the performance of the proposed solution with the existing solution, which is derived from the stability theorem for a feedback network, we used six time series: Darwin sea level pressure, monthly smoothed sunspot numbers, Lorenz, Santa Fe laser, daily Euro/Dollar exchange rate, and the Mackey-Glass time-delay differential equation. Simulation results confirmed the stability of the proposed solution and showed an average 21.45% improvement in root mean square error (RMSE) over the existing solution. Furthermore, the proposed solution is faster than the existing solution, because it removes the network-size restriction of the existing solution and exploits the already-computed dynamic system variable to check stability, whereas the existing solution needs additional calculation steps.
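The general flavor of a Lyapunov-guided adaptive learning rate can be sketched on a plain linear model: pick the step size each iteration so that a quadratic Lyapunov function of the error keeps decreasing. The normalization rule below is a common textbook form, not the paper's exact condition for the DRPNN.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
t = X @ np.array([1.0, -2.0, 0.5])          # noise-free linear target (assumed)
w = np.zeros(3)

for _ in range(200):
    e = X @ w - t                           # batch error, V = mean(e^2) / 2
    g = X.T @ e / len(X)                    # gradient of V
    lr = 0.5 / (1.0 + g @ g)                # bounded, Lyapunov-style step size
    w -= lr * g                             # keeps V non-increasing along the run

print(np.round(w, 3))                       # approaches [1, -2, 0.5]
```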

6.
We investigate the optimization of linear impulse systems with the reinforcement-learning-based adaptive dynamic programming (ADP) method. For linear impulse systems, the optimal objective function is shown to be a quadratic form of the pre-impulse states. The ADP method provides solutions that iteratively converge to the optimal objective function. If an initial guess of the pre-impulse objective function is selected as a quadratic form of the pre-impulse states, the objective function iteratively converges to the optimal one through ADP. Though direct use of the quadratic objective function of the states within the ADP method is theoretically possible, a numerical singularity problem may occur, due to the matrix inversion involved, as the system dimensionality increases. A neural-network-based ADP method can circumvent this problem. A neural network with polynomial activation functions is selected to approximate the pre-impulse objective function and trained iteratively using the ADP method to achieve optimal control. After successful training, optimal impulse control can be derived. Simulations are presented for illustrative purposes.
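The iterative convergence to a quadratic optimal cost is easiest to see on an ordinary discrete-time linear-quadratic problem; the sketch below runs that value iteration (the paper's impulse-system setting and neural approximation are not reproduced, and the plant matrices are made up).

```python
import numpy as np

# Toy discrete-time plant and cost matrices (illustrative assumptions).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.eye(1)

P = np.zeros((2, 2))                         # initial quadratic guess V_0 = 0
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # greedy feedback gain
    Acl = A - B @ K
    P = Q + K.T @ R @ K + Acl.T @ P @ Acl                # value-iteration update
print(np.round(P, 3))                        # converges to the Riccati solution
```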

7.
In this study, we introduce a new topology of radial basis function-based polynomial neural networks (RPNNs) that is based on a genetically optimized multi-layer perceptron with radial polynomial neurons (RPNs). This paper offers a comprehensive design methodology involving various mechanisms of optimization, especially fuzzy C-means (FCM) clustering and particle swarm optimization (PSO). In contrast to the typical architectures encountered in polynomial neural networks (PNNs), our main objective is to develop a topology and establish a comprehensive design strategy of RPNNs: (a) The architecture of the proposed network consists of radial polynomial neurons (RPNs). These neurons fully reflect the structure encountered in the numeric data, which are granulated with the aid of FCM clustering. RPNs combine a collection of radial basis functions with function-based nonlinear polynomial processing. (b) The PSO-based design procedure applied to each layer of the RPNN leads to the selection of preferred nodes of the network whose local parameters (such as the number of input variables, a collection of the specific subset of input variables, the order of the polynomial, the number of clusters in FCM clustering, and the fuzzification coefficient of the FCM method) are properly adjusted. The performance of the RPNN is quantified through a series of experiments using several modeling benchmarks, namely synthetic three-dimensional data and machine learning data (computer hardware data, abalone data, MPG data, and Boston housing data) already used in neuro-fuzzy modeling. A comparative analysis shows that the proposed RPNN exhibits higher accuracy than some previous models available in the literature.
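Of the optimization mechanisms mentioned, the FCM granulation step is simple enough to sketch; below is a plain fuzzy C-means loop (the membership exponent m and the cluster count are the usual free parameters, and the data are made up). The PSO-driven structure search is not reproduced.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=7):
    """Plain fuzzy C-means: alternate prototype and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(loc, 0.1, size=(30, 2))
               for loc in ([0, 0], [1, 1], [0, 1])])
centers, U = fcm(X)
print(np.round(centers, 2))      # three prototypes near the true cluster means
```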

8.
The Perron–Frobenius (PF) theorem provides a simple characterization of the eigenvectors and eigenvalues of irreducible nonnegative square matrices. A generalization of the PF theorem to nonsquare matrices, which can be interpreted as representing systems with additional degrees of freedom, was recently presented in [1]. This generalized theorem requires a notion of irreducibility for nonsquare systems. A suitable definition, based on the property that every maximal square (legal) subsystem is irreducible, is provided in [1] and is shown to be necessary and sufficient for the generalized theorem to hold. This note shows that irreducibility of a nonsquare system can be tested in polynomial time. The analysis uses a graph representation of the nonsquare system, termed the constraint graph, which captures the flow of influence between the constraints of the system.
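For intuition on why such a test can run in polynomial time: for square nonnegative matrices, irreducibility is equivalent to strong connectivity of the associated directed graph, checkable in O(V+E) time with two reachability searches. The sketch below covers only this classical square case, not the paper's nonsquare constraint graph.

```python
from collections import defaultdict

def is_irreducible(M):
    """Irreducibility of a nonnegative square matrix via strong connectivity."""
    n = len(M)
    adj, radj = defaultdict(list), defaultdict(list)
    for i in range(n):
        for j in range(n):
            if M[i][j] != 0:
                adj[i].append(j)      # edge i -> j
                radj[j].append(i)     # reversed edge

    def reaches_all(graph):           # DFS from node 0
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    # Strongly connected iff node 0 reaches all nodes forward and backward.
    return reaches_all(adj) and reaches_all(radj)

print(is_irreducible([[0, 1], [1, 0]]))   # True: the 2-cycle is irreducible
print(is_irreducible([[1, 1], [0, 1]]))   # False: state 1 cannot reach state 0
```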

9.
To address the problem that, when modeling with Gaussian processes, different kernel-function forms yield different learning performance, a custom squared-exponential kernel function is proposed and evaluated numerically on polynomial function fitting. The simulation results show that the kernel not only improves the accuracy and validity of the model, but also improves its learning and generalization abilities. Finally, the Gaussian-process modeling method based on this kernel is applied to the optimal design of a dual-band rectangular microstrip antenna and a dual-band WLAN monopole antenna, further demonstrating that the method is feasible and effective.
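A minimal Gaussian-process regression sketch shows where the kernel choice enters; the standard squared-exponential kernel is used below (the paper's custom variant is not reproduced), and the data, length-scale, and noise level are made-up examples.

```python
import numpy as np

def se_kernel(A, B, length=1.0, var=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)     # noisy observations

Xs = np.linspace(-3, 3, 100)[:, None]               # test inputs
K = se_kernel(X, X) + 1e-2 * np.eye(30)             # add observation noise
mean = se_kernel(Xs, X) @ np.linalg.solve(K, y)     # GP posterior mean
print(np.round(mean[:5], 3))
```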

10.
The cerebellar model articulation controller (CMAC) has some attractive features, namely fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several questions have remained open to this day. The most important ones concern its modeling and generalization capabilities. The limits of its modeling capability were addressed in the literature, and recently, certain questions of its generalization property were also investigated. This paper deals with both the modeling and the generalization properties of CMAC. First, a new interpolation model is introduced. Then, a detailed analysis of the generalization error is given, and an analytical expression of this error for some special cases is presented. It is shown that this generalization error can be rather significant, and a simple regularized training algorithm to reduce it is proposed. The results related to the modeling capability show that there are differences between the one-dimensional (1-D) and the multidimensional versions of CMAC. This paper discusses the reasons for this difference and suggests a new kernel-based interpretation of CMAC. The kernel interpretation gives a unified framework: applying this approach, both the 1-D and the multidimensional CMACs can be constructed with similar modeling capability. Finally, this paper shows that the regularized training algorithm can be applied to the kernel interpretations too, which results in a network with significantly improved approximation capabilities.
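The effect of regularized training on a CMAC-style table can be sketched directly: add a weight-decay term to the LMS update of the active cells and compare interpolation error between training points. This toy setup illustrates the idea only; it is not the paper's algorithm, and the penalty value is a made-up choice.

```python
import numpy as np

C, n_cells = 8, 64

def active(x):                                   # C overlapping cells for x in [0,1)
    base = int(x * (n_cells - C))
    return slice(base, base + C)

def train(lam, epochs=200, lr=0.2):
    w = np.zeros(n_cells)
    for _ in range(epochs):
        for x in np.linspace(0.0, 0.99, 12):
            idx = active(x)
            err = w[idx].sum() - np.sin(2 * np.pi * x)
            w[idx] -= lr * err / C + lr * lam * w[idx]   # LMS step + weight decay
    return w

grid = np.linspace(0.0, 0.99, 200)
for lam in (0.0, 1e-3):
    w = train(lam)
    mse = np.mean([(w[active(x)].sum() - np.sin(2 * np.pi * x)) ** 2 for x in grid])
    print(lam, mse)      # compare interpolation error with and without decay
```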

11.
An obvious Bayesian nonparametric generalization of ridge regression assumes that the coefficients are exchangeable, from a prior distribution of unknown form, which is given a Dirichlet process prior with a normal base measure. The purpose of this paper is to explore the predictive performance of this generalization, which does not seem to have received detailed attention, despite related applications of the Dirichlet process to shrinkage estimation of multivariate normal means, analysis of randomized block experiments, and nonparametric extensions of random effects models in longitudinal data analysis. We consider issues of prior specification and computation, as well as applications in penalized spline smoothing. With a normal base measure in the Dirichlet process, letting the precision parameter approach infinity makes the procedure equivalent to ridge regression, whereas for finite values of the precision parameter the discreteness of the Dirichlet process means that some predictors can be estimated as having the same coefficient. Estimating the precision parameter from the data gives a flexible method for shrinkage estimation of mean parameters which can work well when ridge regression does, but also adapts well to sparse situations. We compare our approach with ridge regression, the lasso, and the recently proposed elastic net in simulation studies, and we also consider applications to penalized spline smoothing.
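The ridge-regression limit referred to above has the familiar closed form beta = (X'X + lam I)^{-1} X'y; a quick numeric check (with a made-up design matrix and penalty) is:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
y = X @ np.array([2.0, 0.0, -1.0, 0.0]) + 0.1 * rng.normal(size=50)

lam = 1.0                                   # illustrative penalty value
beta = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
print(np.round(beta, 3))                    # coefficients shrunk toward zero
```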

12.
13.
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm and, second, further adjusting the weights of the best discovered network with a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial-network algorithms in processing benchmark time series.
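The two-stage idea (search over polynomial structures, then fit weights) can be miniaturized; the sketch below swaps genetic programming for exhaustive search over small monomial subsets and fits each candidate by least squares. Everything here is an illustrative assumption, not the paper's method.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
x = rng.uniform(-1, 1, 200)
y = 1.5 * x**2 - 0.5 * x**3 + 0.05 * rng.normal(size=200)

terms = [x**k for k in range(6)]          # candidate monomials x^0 .. x^5

def sse(powers):                          # least-squares fit of one structure
    A = np.column_stack([terms[k] for k in powers])
    return np.linalg.lstsq(A, y, rcond=None)[1][0]

best = min(combinations(range(6), 3), key=sse)
print(best)                               # selected powers; 2 and 3 should appear
```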

14.
A three-layer feedforward neural network method for approximating multivariate polynomial functions
This paper first proves, by a constructive method, that for any multivariate polynomial of degree r there exists a three-layer feedforward neural network, with explicitly determined weights and number of hidden units, that approximates the polynomial to arbitrary accuracy. The weights are determined by the coefficients of the given polynomial and the activation function, while the number of hidden units is determined by r and the input dimension. An algorithm and worked examples are given, showing that the constructed networks approximate multivariate polynomial functions very efficiently. Specialized to univariate polynomials, the results are simpler and more efficient than the networks and algorithms proposed by Cao Feilong et al. The results are of theoretical and practical significance for constructing feedforward networks that approximate classes of multivariate polynomial functions, and they offer one route toward a constructive theory of neural-network approximation of arbitrary functions.

15.
Credit-assignment-based parallel ensemble CMAC and its application to modeling
The Albus CMAC (cerebellar model articulation controller) neural network emulates the learning structure of the human cerebellum and has strong memory and output-generalization capabilities, but for online learning it still struggles to meet speed requirements. Building on the conventional CMAC and targeting its trade-off between learning accuracy and storage capacity, this paper introduces the notion of credit assignment and proposes a credit-assignment-based parallel ensemble CMAC, which splits a large network into several sub-networks that are trained separately and then combined, greatly improving computational efficiency. Simulation studies on modeling complex nonlinear functions show that the scheme improves both the generalization ability of the model and the convergence speed of the algorithm. Finally, the effects of the learning constant and the generalization parameter on the network's online learning performance are discussed.

16.
Extreme learning machine for regression and multiclass classification
Due to the simplicity of their implementations, the least squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used in regression and multiclass classification applications directly, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework of LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (or so-called feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a widespread type of feature mappings and can be applied in regression and multiclass classification applications directly; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and to achieve similar (for regression and binary-class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
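The core ELM recipe is short enough to sketch: draw a random, fixed hidden layer and solve only the output weights by regularized least squares. The sizes, target, and ridge term below are made-up choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) * X[:, 1]                  # toy regression target

L = 100                                        # number of hidden nodes
Wh = rng.normal(size=(2, L))                   # random input weights (never tuned)
bh = rng.normal(size=L)
H = np.tanh(X @ Wh + bh)                       # random, untrained feature map
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(L), H.T @ y)   # ridge solve
print(np.mean((H @ beta - y) ** 2))            # small training error
```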

17.
This paper presents a proof that the existence of a polynomial Lyapunov function is necessary and sufficient for exponential stability of a sufficiently smooth nonlinear vector field on a bounded set. The main result states that if there exists an $n$-times continuously differentiable Lyapunov function which proves exponential stability on a bounded subset of $\mathbb{R}^{n}$, then there exists a polynomial Lyapunov function which proves exponential stability on the same region. Such a continuous Lyapunov function will exist if, for example, the vector field is at least $n$-times continuously differentiable. The proof is based on a generalization of the Weierstrass approximation theorem to differentiable functions in several variables. Specifically, polynomials can approximate a differentiable function in the Sobolev norm $W^{1,\infty}$ to any desired accuracy. This approximation result is combined with the second-order Taylor series expansion to show that polynomial Lyapunov functions can approximate continuous Lyapunov functions arbitrarily well on bounded sets. The investigation is motivated by the use of polynomial optimization algorithms to construct polynomial Lyapunov functions.
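As a concrete (and deliberately trivial) instance of a polynomial Lyapunov function on a bounded set, consider dx/dt = -x + x^3 on |x| <= 0.5 with V(x) = x^2; the sketch below spot-checks that dV/dt < 0 away from the origin. This is a numeric illustration, not the paper's proof technique.

```python
import numpy as np

xs = np.linspace(-0.5, 0.5, 101)
xs = xs[xs != 0]                     # exclude the equilibrium itself
f = -xs + xs**3                      # vector field dx/dt = -x + x^3
dV = 2 * xs * f                      # dV/dt = V'(x) f(x) = -2 x^2 (1 - x^2)
print(bool(np.all(dV < 0)))          # True: V decreases on the region
```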

18.
Using a constructive method, it is proved that for a given polynomial function of degree r, a three-layer functional network can be explicitly constructed that approximates the polynomial to arbitrary accuracy. The number of intermediate neurons in the constructed network depends only on the degree r of the polynomial basis functions and can be expressed in terms of r. These results provide theoretical guidance for the concrete construction and approximation behavior of functional networks, based on polynomial basis functions, that approximate arbitrary function classes.

19.
A new approach to the design of a neural network (NN) based navigator is proposed in which the mobile robot travels to a pre-defined goal position safely and efficiently without any prior map of the environment. This navigator can be optimized for any user-defined objective function through the use of an evolutionary algorithm. The motivation of this research is to develop an efficient methodology for general goal-directed navigation in generic indoor environments as opposed to learning specialized primitive behaviors in a limited environment. To this end, a modular NN has been employed to achieve the necessary generalization capability across a variety of indoor environments. Herein, each NN module takes charge of navigating in a specialized local environment, which is the result of decomposing the whole path into a sequence of local paths through clustering of all the possible environments. We verify the efficacy of the proposed algorithm over a variety of both simulated and real unstructured indoor environments using our autonomous mobile robot platform.

20.
Support-Vector Networks
Cortes  Corinna  Vapnik  Vladimir 《Machine Learning》1995,20(3):273-297
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
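A minimal modern rendering of a soft-margin SVM with a polynomial input transformation, using scikit-learn's SVC (the paper predates this library; kernel="poly" plays the role of the polynomial mapping described above, and the dataset and hyperparameters are made-up choices):

```python
from sklearn import datasets
from sklearn.svm import SVC

# Two-class toy data; the cubic polynomial kernel induces the nonlinear map.
X, y = datasets.make_moons(noise=0.2, random_state=0)
clf = SVC(kernel="poly", degree=3, C=1.0).fit(X, y)
print(clf.score(X, y))      # training accuracy of the polynomial-kernel SVM
```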
