Similar Documents (20 results)
1.
Projection pursuit optimized by a genetic algorithm is first used to reduce the dimensionality of the neural network's learning matrix; Bagging and different neural-network learning algorithms are then used to generate the ensemble individuals, and a genetic-algorithm-evolved projection pursuit is applied once more to combine the individual networks, yielding a neural-network ensemble model based on genetic-algorithm-optimized projection pursuit. A case study on the opening and closing prices of the Shanghai Composite Index shows that the method has good learning and generalization ability, with high prediction accuracy and good stability in stock-market forecasting.
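The ensemble-generation step described in this abstract can be illustrated with a generic Bagging sketch. A linear least-squares learner stands in for the neural-network individuals, and all names (`bagging_predict`, the seed values) are illustrative, not code from the paper:

```python
import numpy as np

def bagging_predict(X, Y, X_new, n_models=15, rng=None):
    """Bagging sketch: train each ensemble individual on a bootstrap
    resample and average the predictions. A linear least-squares
    learner stands in for the neural networks of the abstract."""
    if rng is None:
        rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample
        w, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
        preds.append(X_new @ w)
    return np.mean(preds, axis=0)

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
Y = X @ w_true + 0.1 * rng.normal(size=100)
y_hat = bagging_predict(X, Y, X, rng=rng)
mse = np.mean((y_hat - Y) ** 2)
print(mse)                                              # near the 0.01 noise floor
```

Averaging over bootstrap-trained individuals mainly reduces the variance component of the error, which is why a second combination stage (here, the GA-evolved projection pursuit) can still improve on the individuals.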

2.
One nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). In this method, the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL), proposed by Hwang et al., formulates PPR using a two-layer feedforward neural network. One of the main differences between PPR and PPL is that the smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. While the convergence property of PPR is already known, that for PPL has not been thoroughly studied. In this paper, we demonstrate that PPL networks do not have the universal approximation and strong convergence properties for any finite R. But, by including a bias term in each linear combination of the predictor variables, PPL networks can regain these capabilities, independent of the exact choice of R. It is also shown experimentally that this modification improves the generalization performance in regression problems and creates smoother decision surfaces for classification problems.
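The PPL model form described here, a sum of Hermite smoothers applied to biased linear projections, can be sketched as follows. Parameter values are random placeholders and `ppl_forward` is an illustrative name, not code from the paper:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def ppl_forward(X, W, b, C):
    """Sum of Hermite smoothers (highest order R) applied to biased
    linear projections of the predictors -- the PPL model form with
    the bias term that restores universal approximation."""
    out = np.zeros(X.shape[0])
    for w_k, b_k, c_k in zip(W, b, C):
        z = X @ w_k + b_k           # biased projection of the predictors
        out += hermval(z, c_k)      # Hermite expansion, R = len(c_k) - 1
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))         # 5 samples, 3 predictors
W = rng.normal(size=(2, 3))         # 2 hidden units (projection directions)
b = rng.normal(size=2)              # the bias terms discussed above
C = rng.normal(size=(2, 4))         # Hermite coefficients, so R = 3
y = ppl_forward(X, W, b, C)
print(y.shape)                      # (5,)
```

Dropping `b` recovers the original PPL form that the paper shows is not a universal approximator for any finite R.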

3.
Neural-network design for small training sets of high dimension
We introduce a statistically based methodology for the design of neural networks when the dimension d of the network input is comparable to the size n of the training set. If one proceeds straightforwardly, then one is committed to a network of complexity exceeding n. The result will be good performance on the training set but poor generalization performance when the network is presented with new data. To avoid this we need to select carefully the network architecture, including control over the input variables. Our approach to selecting a network architecture first selects a subset of input variables (features) using the nonparametric statistical process of difference-based variance estimation and then selects a simple network architecture using projection pursuit regression (PPR) ideas combined with the statistical idea of sliced inverse regression (SIR). The resulting network, which is then retrained without regard to the PPR/SIR determined parameters, is one of moderate complexity (number of parameters significantly less than n) whose performance on the training set can be expected to generalize well. The application of this methodology is illustrated in detail in the context of short-term forecasting of the demand for electric power from an electric utility.
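The direction-finding idea of sliced inverse regression can be sketched minimally. Per-coordinate standardization stands in for the full whitening used in practice, and `sir_directions` is an illustrative name:

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=2):
    """Minimal sliced inverse regression: standardize X, slice the data
    by sorted response, average X within each slice, and take leading
    eigenvectors of the weighted covariance of slice means.
    (Per-coordinate standardization stands in for full whitening.)"""
    Xs = (X - X.mean(0)) / X.std(0)
    slices = np.array_split(np.argsort(y), n_slices)
    means = np.stack([Xs[idx].mean(0) for idx in slices])
    props = np.array([len(idx) for idx in slices]) / len(y)
    M = (means * props[:, None]).T @ means
    _, vecs = np.linalg.eigh(M)
    return vecs[:, ::-1][:, :n_dirs]        # columns = leading directions

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = X[:, 0] ** 3 + 0.1 * rng.normal(size=200)   # signal lives along feature 0
dirs = sir_directions(X, y)
print(dirs.shape)                                # (6, 2)
```

With the response depending only on feature 0, the leading column of `dirs` loads almost entirely on that coordinate, which is how SIR suggests a small set of projection directions for the subsequent network.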

4.
In a regression problem, one is given a multidimensional random vector X, the components of which are called predictor variables, and a random variable, Y, called response. A regression surface describes a general relationship between X and Y. A nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). The regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL) formulates PPR using a 2-layer feedforward neural network. The smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. We demonstrate that PPL networks in the original form do not have the universal approximation property for any finite R, and thus cannot converge to the desired function even with an arbitrarily large number of hidden units. But, by including a bias term in each linear projection of the predictor variables, PPL networks can regain these capabilities, independent of the exact choice of R. Experimentally, it is shown in this paper that this modification increases the rate of convergence with respect to the number of hidden units, improves the generalization performance, and makes it less sensitive to the setting of R. Finally, we apply PPL to chaotic time series prediction, and obtain superior results compared with the cascade-correlation architecture.

5.
A particle swarm optimization (PSO) algorithm and projection pursuit are first used to construct the neural network's learning matrix; ensemble individuals are then generated by a sample-reconstruction method based on negative-correlation learning, and PSO with projection pursuit regression is used to combine the individuals into the ensemble output, yielding a PSO/projection-pursuit sample-reconstruction neural-network ensemble model. Applied to monthly precipitation forecasting for the whole Guangxi region, the method effectively constructs the network's learning matrix from a large number of meteorological factors, and the ensemble achieves high prediction accuracy, good stability, and a degree of transferability.

6.
We present a novel regression method that combines projection pursuit regression with feedforward neural networks. The algorithm is presented and compared to standard neural network learning. Connectionist projection pursuit regression (CPPR) is applied to modelling the U.S. average dollar-Deutschmark exchange rate movement using several economic indicators. The performance of CPPR is compared with the performances of other approaches to this problem.

7.
Projection pursuit regression is introduced into remote-sensing image classification, and the construction and implementation of a projection pursuit regression classification model for remote-sensing imagery are described in detail. A TM image of the Guangzhou area was used in classification experiments, with the shuffled frog leaping algorithm employed to optimize the parameter matrix of the model, and satisfactory classification results were obtained. The effects of the setting and adjustment of the projection centers, the optimization algorithm, and the number of ridge functions on the classification accuracy of the model were further analyzed. The experiments show that the model is easy to optimize and implement and is highly stable, and that the number of ridge functions has no significant effect on the classification accuracy of the projection pursuit regression model.

8.
Lesa M., Mitra. Pattern Recognition, 2000, 33(12): 2019-2031
Projection pursuit learning networks (PPLNs) have been used in many fields of research but have not been widely used in image processing. In this paper we demonstrate how this highly promising technique may be used to connect edges and produce continuous boundaries. We also propose the application of PPLN to deblurring a degraded image when little or no a priori information about the blur is available. The PPLN was successful at developing an inverse blur filter to enhance blurry images. Theory and background information on projection pursuit regression (PPR) and PPLN are also presented.

9.
A hybrid learning system for image deblurring
Min, Mitra. Pattern Recognition, 2002, 35(12): 2881-2894
In this paper we propose a 3-stage hybrid learning system with unsupervised learning to cluster data in the first stage, supervised learning in the middle stage to determine network parameters, and finally a decision-making stage using a voting mechanism. We take this opportunity to study the role of various supervised learning systems that constitute the middle stage. Specifically, we focus on a one-hidden-layer neural network with sigmoidal activation function, a radial basis function network with Gaussian activation function, and a projection pursuit learning network with Hermite polynomial as the activation function. These learning systems rank in increasing order of complexity. We train and test each system with identical data sets. Experimental results show that the learning ability of a system is controlled by the shape of the activation function when other parameters remain fixed. We observe that clustering in the input space leads to better system performance. Experimental results provide compelling evidence in favor of using the hybrid learning system and the committee machines with gating network.
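The final decision-making stage by voting can be sketched generically. This is plain majority voting over class labels, not the paper's specific committee machine with gating network:

```python
import numpy as np

def majority_vote(predictions):
    """Decision stage of a hybrid system: combine the class labels
    produced by several trained learners by simple majority vote."""
    predictions = np.asarray(predictions)    # shape (n_learners, n_samples)
    n_classes = predictions.max() + 1
    votes = np.stack([(predictions == c).sum(0) for c in range(n_classes)])
    return votes.argmax(0)                   # winning class per sample

# three middle-stage learners, four samples
preds = [[0, 1, 1, 2],
         [0, 1, 2, 2],
         [1, 1, 1, 0]]
print(majority_vote(preds).tolist())         # [0, 1, 1, 2]
```

A gating network generalizes this by learning per-input weights for the voters instead of counting each vote equally.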

10.
Graphical inspection of multimodality is demonstrated using unsupervised lateral-inhibition neural networks. Three projection pursuit indexes are compared on low-dimensional simulated and real-world data: principal components, Legendre polynomial, and projection pursuit network.

11.
In this paper, a feedforward neural network with sigmoid hidden units is used to design a neural-network-based iterative learning controller for nonlinear systems with state-dependent input gains. No prior offline training phase is necessary, and only a single neural network is employed. All the weights of the neurons are tuned during the iteration process in order to achieve the desired learning performance. The adaptive laws for the weights of the neurons and the analysis of learning performance are determined via Lyapunov-like analysis. A projection learning algorithm is used to prevent drifting of the weights. It is shown that the tracking error vector converges asymptotically to zero as the iteration number goes to infinity, and that all adjustable parameters as well as internal signals remain bounded.
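Parameter projection of the kind used here to prevent weight drift is typically a projection of the updated weights back onto a known bounded set. A standard sketch, assuming a simple norm ball rather than the paper's exact adaptive law:

```python
import numpy as np

def project_weights(w, bound=5.0):
    """Parameter projection used in adaptive/iterative learning schemes:
    if the updated weight vector leaves the known bound, project it back
    onto the ball ||w|| <= bound. The bound value is illustrative."""
    norm = np.linalg.norm(w)
    return w if norm <= bound else w * (bound / norm)

w = np.array([3.0, 4.0])            # ||w|| = 5, already inside: unchanged
print(project_weights(w))           # [3. 4.]
w2 = np.array([6.0, 8.0])           # ||w|| = 10, outside: rescaled to radius 5
print(project_weights(w2))          # [3. 4.]
```

Applying this after every weight update keeps the parameter estimates bounded regardless of disturbances, which is what makes the boundedness claims in such analyses go through.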

12.
An online projection algorithm for neural networks and its application to nonlinear modeling
To address the difficulty of training neural networks online, the network is treated as a nonlinear system of known structure, and learning the weights is viewed as parameter estimation for that system. Based on a projection algorithm for online parameter estimation of nonlinear systems under a new estimation criterion, an online projection learning algorithm for feedforward neural networks is given. The global convergence of the algorithm is proved theoretically, and the physical meaning and admissible ranges of its parameters are discussed. Simulations of neural-network modeling of two nonlinear time-varying systems verify the algorithm's global convergence and its ability to run online.
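The classic projection algorithm for online parameter estimation that this abstract builds on can be sketched in the linear-in-parameters case. The step-size values `a` and `c` are illustrative choices within the usual admissible ranges:

```python
import numpy as np

def projection_update(theta, phi, y, a=0.99, c=1.0):
    """One step of the classic projection algorithm for online
    parameter estimation:
        theta <- theta + a * phi * (y - phi @ theta) / (c + phi @ phi)
    with 0 < a < 2 and c > 0. The abstract's idea is to treat the network
    weights as the parameters of a known nonlinear structure; this
    sketch shows the linear-in-parameters case."""
    return theta + a * phi * (y - phi @ theta) / (c + phi @ phi)

rng = np.random.default_rng(2)
true_theta = np.array([1.5, -2.0, 0.5])
theta = np.zeros(3)
for _ in range(2000):
    phi = rng.normal(size=3)                    # regressor vector at this step
    theta = projection_update(theta, phi, phi @ true_theta)
print(np.round(theta, 2))                       # recovers true_theta on noise-free data
```

Each update moves the estimate only as far as needed to explain the newest observation, which is why the algorithm is cheap enough to run online.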

13.
To address network-flow monitoring (classification) in complex network environments, and in order to classify multiple classes directly and speed up training, a randomized artificial neural network learning method is proposed. Drawing on the planar Gaussian (PG) neural network model and introducing the idea of random projection, the network connection matrix is obtained analytically by computing a matrix pseudoinverse, and the network can be shown theoretically to have global approximation capability. Simulations on artificial data and standard network-flow monitoring data, compared against the extreme learning machine (ELM) and the PG network (both of which also use randomization), show that: 1) having inherited the geometric properties of the PG network, the method is more effective on planar-distributed data; 2) thanks to the randomized approach, its training speed is comparable to ELM and much faster than the PG network; 3) of the three methods, it is the best suited to the network-flow monitoring problem.

14.
In the present work, a constructive learning algorithm was employed to design a near-optimal one-hidden-layer neural network structure that best approximates the dynamic behavior of a bioprocess. The method determines not only a proper number of hidden neurons but also the particular shape of the activation function for each node. Here, the projection pursuit technique was applied in association with the optimization of the solvability condition, giving rise to a more efficient and accurate computational learning algorithm. As each activation function of a hidden neuron is defined according to the peculiarities of each approximation problem, better rates of convergence are achieved, leading to parsimonious neural network architectures. The proposed constructive learning algorithm was successfully applied to identify a MIMO bioprocess, providing a multivariable model that was able to describe the complex process dynamics, even in long-range horizon predictions. The resulting identification model was considered as part of a model-based predictive control strategy, producing high-quality performance in closed-loop experiments.

15.
Dimensionality-reducing mappings, often also denoted as multidimensional scaling, are the basis for multivariate data projection and visual analysis in data mining. Topology- and distance-preserving mapping techniques, e.g., Kohonen's self-organizing feature map (SOM) or Sammon's nonlinear mapping (NLM), are available to achieve multivariate data projections for the subsequent interactive visual analysis process. For large databases, however, NLM computation becomes intractable. Also, if additional data points or data sets are to be included in the projection, a complete recomputation of the mapping is required. In general, a neural network could learn the mapping and serve for arbitrary additional data projection. However, the computational costs would also be high, and convergence is not easily achieved. In this work, a convenient hierarchical neural projection approach is introduced, where first an unsupervised neural network, e.g., a SOM, quantizes the database, followed by fast NLM mapping of the quantized data. In the second stage of the hierarchy, an enhancement of the NLM by a recall algorithm is applied. The training and application of a second neural network, which learns the mapping by function approximation, is quantitatively compared with this new approach. Efficient interactive visualization and analysis techniques, exploiting the achieved hierarchical neural projection for data mining, are presented.
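The motivation for quantizing first can be seen from Sammon's stress, whose cost is quadratic in the number of mapped points, so paying it only on the m prototypes rather than the full database is the key saving. A small sketch; the random `proto` matrix stands in for SOM codebook vectors:

```python
import numpy as np

def sammon_stress(X_high, X_low):
    """Sammon's stress: mismatch between pairwise distances in the
    original space and in the low-dimensional projection. The cost is
    O(m^2) in the number of points, which is why the hierarchical
    scheme quantizes the database first and maps only the prototypes."""
    def pdist(Z):
        D = np.sqrt(((Z[:, None] - Z[None]) ** 2).sum(-1))
        return D[np.triu_indices(len(Z), 1)]
    dh, dl = pdist(X_high), pdist(X_low)
    return ((dh - dl) ** 2 / dh).sum() / dh.sum()

rng = np.random.default_rng(4)
proto = rng.normal(size=(30, 5))              # stand-ins for SOM prototypes
perfect = sammon_stress(proto, proto)         # distances preserved exactly -> 0
naive = sammon_stress(proto, proto[:, :2])    # crude 2-D projection -> positive
print(perfect, naive > perfect)
```

NLM iteratively moves the low-dimensional points to minimize this stress; the recall stage mentioned in the abstract then places new points relative to the already-mapped prototypes without recomputing the whole mapping.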

16.
The artificial fish swarm algorithm is an optimization algorithm that achieves global optimization by imitating fish behaviors such as foraging, swarming, and tail-chasing to find the best feeding waters. The projection pursuit coupled regression model based on neural networks involves an optimization problem; the artificial fish swarm algorithm is applied during learning to obtain the optimal projection directions, thresholds, and orthogonal Hermite polynomial coefficients. This paper describes the neural-network projection pursuit coupled regression algorithm optimized by the artificial fish swarm algorithm. Simulation results show that the algorithm achieves satisfactory prediction performance.

17.
In this brief, by combining an efficient wavelet representation with a coupled map lattice model, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNNs), is introduced for spatio-temporal system identification. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimization (PSO) algorithm, is proposed for augmenting the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the OPP algorithm, significant wavelet neurons are adaptively and successively recruited into the network, where adjustable parameters of the associated wavelet neurons are optimized using a particle swarm optimizer. The resultant network model, obtained in the first stage, however, may be redundant. In the second stage, an orthogonal least squares algorithm is then applied to refine and improve the initially trained network by removing redundant wavelet neurons from the network. An example for a real spatio-temporal system identification problem is presented to demonstrate the performance of the proposed new modeling framework.

18.
Support vector machine theory and programming-based neural network learning algorithms
Zhang Ling. Chinese Journal of Computers, 2001, 24(2): 113-118
In recent years, support vector machine (SVM) theory has received great attention from researchers abroad and is widely regarded as a new research direction for neural-network learning; it has recently begun to attract the attention of researchers in China as well. This paper studies the relationship between SVM theory and programming-based neural-network algorithms. First, it is shown that Vapnik's SVM-based algorithm is equivalent to the programming-based neural-network algorithm proposed by the author in 1994: when the sample set is linearly separable, both find the maximal-margin solution. The difference is that the complexity of the former (usually solved with Lagrange multipliers) grows exponentially with problem size, whereas the complexity of the latter is a polynomial function of the size. Second, the programming algorithm is reformulated as finding the projection of a point onto a convex set; using this geometric intuition, a constructive iterative algorithm, the "simplex iterative algorithm", is given. The new algorithm has strong geometric intuitiveness, which deepens the understanding of neural-network learning (in the linearly separable case) and yields a necessary and sufficient condition for a sample set to be linearly separable. In addition, the new algorithm provides a very convenient incremental learning procedure for the knowledge-extension problem. Finally, it is pointed out that the principle of "turning the conditions that must be satisfied into constraints, taking some performance measure of the network as the objective function, and casting network learning as a programming problem" is a very effective way to study neural-network learning.

19.
Cascade-correlation (Cascor) is a popular supervised learning architecture that dynamically grows layers of hidden neurons of fixed nonlinear activations (e.g., sigmoids), so that the network topology (size, depth) can be efficiently determined. Similar to a cascade-correlation learning network (CCLN), a projection pursuit learning network (PPLN) also dynamically grows the hidden neurons. Unlike a CCLN, where cascaded connections from the existing hidden units to the new candidate hidden unit are required to establish high-order nonlinearity in approximating the residual error, a PPLN approximates the high-order nonlinearity by using trainable parametric or semi-parametric nonlinear smooth activations based on a minimum mean squared error criterion. An analysis is provided to show that the maximum correlation training criterion used in a CCLN tends to produce hidden units that saturate, making it more suitable for classification tasks than for regression tasks, as evidenced in the simulation results. It is also observed that this critical weakness in CCLN can potentially carry over to classification tasks, such as the two-spiral benchmark used in the original CCLN paper.

20.
In order to improve the learning ability of a feedforward neural network, in this article we incorporate feedback back-propagation (FBBP) and grey system theory to consider the learning and training of a neural network from a new perspective. By reducing the grey degree of the input we optimise the input of the neural network to make it more suitable for learning and training. Simulation results verified the efficiency of the proposed algorithm by comparing its performance with that of FBBP and classic back-propagation (BP). The results showed that the proposed algorithm has the characteristics of fast training and a strong ability to generalise, and that it is an effective learning method.
