Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
A class of variable step-size learning algorithms for complex-valued nonlinear adaptive finite impulse response (FIR) filters is proposed. To achieve this, first a general complex-valued nonlinear gradient-descent (CNGD) algorithm with a fully complex nonlinear activation function is derived. To improve the convergence and robustness of CNGD, we further introduce a gradient-adaptive step size, giving a class of variable step-size CNGD (VSCNGD) algorithms. Analysis and simulations show that the proposed class of algorithms exhibits fast convergence and can track nonlinear and nonstationary complex-valued signals. To support the derivation, an analysis of the stability and computational complexity of the proposed algorithms is provided. Simulations on colored, nonlinear, and real-world complex-valued signals support the analysis.
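As a rough illustration of the idea in this abstract, the sketch below trains a complex-valued nonlinear FIR filter (a fully complex tanh after the filter) with a gradient-adaptive step size. The `vscngd` function, the Mathews-style step-size rule, and all parameter values are hypothetical stand-ins, not the authors' exact VSCNGD update.

```python
import numpy as np

def vscngd(x, d, M=4, mu=0.1, rho=1e-3):
    """Variable step-size complex nonlinear gradient descent (sketch).

    Complex FIR filter followed by a fully complex tanh activation,
    trained by CNGD; the step size mu is adapted by correlating
    consecutive gradients (a Mathews-style rule). Illustrative only.
    """
    w = np.zeros(M, dtype=complex)
    g_prev = np.zeros(M, dtype=complex)
    errs = []
    for n in range(M, len(x)):
        u = x[n - M:n][::-1]                 # regressor, most recent sample first
        net = np.dot(u, w)
        y = np.tanh(net)                     # fully complex activation
        e = d[n] - y
        # CNGD gradient term: error * conj(activation derivative) * conj(input)
        g = e * np.conj(1.0 / np.cosh(net) ** 2) * np.conj(u)
        # gradient-adaptive step size, clipped to a stable range
        mu = float(np.clip(mu + rho * np.real(np.dot(g, np.conj(g_prev))), 1e-4, 0.5))
        w = w + mu * g
        g_prev = g
        errs.append(abs(e) ** 2)
    return w, np.array(errs)
```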

2.
This paper presents the deduction of the enhanced gradient descent, conjugate gradient, scaled conjugate gradient, quasi-Newton, and Levenberg–Marquardt methods for training quaternion-valued feedforward neural networks, using the framework of the HR calculus. The performance of these algorithms in the real- and complex-valued cases motivated extending them to the quaternion domain as well. Experiments using the proposed training methods on time series prediction applications showed a significant performance improvement over the quaternion gradient descent algorithm.

3.
Goh SL, Mandic DP. Neural Computation, 2004, 16(12): 2699-2713
A complex-valued real-time recurrent learning (CRTRL) algorithm for the class of nonlinear adaptive filters realized as fully connected recurrent neural networks is introduced. The proposed CRTRL is derived for a general complex activation function of a neuron, which makes it suitable for nonlinear adaptive filtering of complex-valued nonlinear and nonstationary signals and complex signals with strong component correlations. In addition, this algorithm is generic and represents a natural extension of the real-valued RTRL. Simulations on benchmark and real-world complex-valued signals support the approach.

4.
Many traditional methods exist for solving systems of nonlinear equations, such as Newton's method and gradient descent, but they have drawbacks: they require the system to be continuously differentiable, and their success depends on a suitable choice of initial values. To address these shortcomings, the solution problem is recast as an optimization problem, and a new hybrid optimization algorithm is proposed that combines the local search ability of the bacterial foraging algorithm with the global search ability of particle swarm optimization, exploiting the respective strengths of the two algorithms. Numerical experiments show that the new algorithm compensates for the weak local search of particle swarm optimization and the weak global search of bacterial foraging, and is an effective method for solving nonlinear equations.
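The optimization reformulation described in this abstract, minimizing ||F(x)||² instead of solving F(x) = 0 directly, can be sketched with a plain particle swarm; the `pso_solve` function and its parameters are illustrative, and the paper's bacterial-foraging local search is omitted here.

```python
import numpy as np

def pso_solve(F, dim, n_particles=30, iters=200, seed=0):
    """Solve F(x) = 0 by minimizing sum(F(x)**2) with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-2, 2, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    cost = lambda v: float(np.sum(np.asarray(F(v)) ** 2))
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()   # global best
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([cost(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, cost(g)
```

A sample system: the intersection of the unit circle with the line x = y has roots at (±1/√2, ±1/√2).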

5.
Two blind signal separation algorithms based on nonlinear PCA criteria
Starting from the nonlinear PCA criterion J1(W) defined by Oja, this paper first derives a blind signal separation algorithm through a recursion based on the matrix generalized inverse. Then, for the nonlinear PCA weighted sum-of-squared-errors criterion J2(W) given by Karhunen, another adaptive blind signal separation algorithm is obtained by gradient descent with a line search. Computer simulations of both separation algorithms demonstrate their effectiveness.

6.
Goh SL, Mandic DP. Neural Computation, 2007, 19(4): 1039-1055
An augmented complex-valued extended Kalman filter (ACEKF) algorithm for the class of nonlinear adaptive filters realized as fully connected recurrent neural networks is introduced. This is achieved based on some recent developments in the so-called augmented complex statistics and the use of general fully complex nonlinear activation functions within the neurons. This makes the ACEKF suitable for processing general complex-valued nonlinear and nonstationary signals and also bivariate signals with strong component correlations. Simulations on benchmark and real-world complex-valued signals support the approach.

7.
Liu Yan, Yang Dakun, Li Long, Yang Jie. Neural Processing Letters, 2019, 50(2): 1589-1609

In order to broaden the study of the most popular and general Takagi–Sugeno (TS) system, we propose and develop a complex-valued neuro-fuzzy inference system that realises the zero-order TS system in a complex-valued network architecture. In the complex domain, boundedness and analyticity cannot be achieved together. A splitting strategy is therefore adopted: the gradients of the real-valued error function are computed with respect to the real and imaginary parts of the weight parameters independently. Specifically, the system has four layers: in the Gaussian layer, the L-dimensional complex-valued input features are mapped to a Q-dimensional real-valued space, and in the output layer, complex-valued weights project the result back to the complex domain. The resulting split-complex gradients of the real-valued error function form the split-complex-valued neuro-fuzzy (split-CVNF) learning algorithm based on gradient descent. Another contribution of this paper is an analysis of the deterministic convergence of the split-CVNF algorithm. It is proved that the error function decreases monotonically during training and that the sum of gradient norms tends to zero. Under a moderate additional condition, the weight sequence itself is also proved to converge.


8.
We present a new block adaptive algorithm as a variant of the Toeplitz-preconditioned block conjugate gradient (TBCG) algorithm. The proposed algorithm is formulated by combining the TBCG algorithm with a data-reusing scheme that is realized by processing blocks of data in an overlapping manner, as in the optimum block adaptive shifting (OBAS) algorithm. Simulation results show that the proposed algorithm is superior to the block conjugate gradient shifting (BCGS), TBCG, and Toeplitz-OBAS (TOBAS) algorithms in both convergence rate and tracking property of input signal conditioning.

9.
This article presents a direct adaptive fuzzy control scheme for a class of uncertain continuous-time multi-input multi-output (MIMO) nonlinear dynamic systems. Within this scheme, fuzzy systems are employed to approximate an unknown ideal controller that can achieve the control objectives. The adjustable parameters of the fuzzy systems are updated using a gradient descent algorithm designed to minimize the error between the unknown ideal controller and the fuzzy controller. The stability analysis of the closed-loop system is performed using a Lyapunov approach. In particular, it is shown that the tracking errors are bounded and converge to a neighborhood of the origin. Simulations performed on a two-link robot manipulator illustrate the approach and exhibit its performance.

10.
Blind equalizers based on complex-valued feedforward neural networks, for linear and nonlinear communication channels, yield better performance than linear equalizers. The learning algorithms are generally based on stochastic gradient descent, as they are simple to implement; however, these algorithms show a slow convergence rate. In the blind equalization problem, the unavailability of the desired output signal and the presence of nonlinear activation functions make the application of the recursive least squares algorithm difficult. In this letter, a new scheme using the recursive least squares algorithm is proposed for blind equalization. The weights of the output layer are learned using a modified version of the constant modulus algorithm cost function, while the weights of the hidden layer are learned using a neuron-space adaptation approach. The proposed scheme results in faster convergence of the equalizer.

11.
Algorithms for accelerated convergence of adaptive PCA
We derive and discuss adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja and Karhunen (1985), Sanger (1989), and Xu (1993). It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: (1) gradient descent; (2) steepest descent; (3) conjugate direction; and (4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
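For context, a classical adaptive PCA baseline of the kind this paper accelerates is Oja's gradient-style rule, with one weight update per sample. The `oja_pca` sketch below extracts only the first principal component and is not one of the paper's new algorithms.

```python
import numpy as np

def oja_pca(X, eta=0.005, epochs=10, seed=0):
    """Extract the first principal component with Oja's adaptive rule."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)   # Hebbian term plus weight decay
    return w / np.linalg.norm(w)
```

The gain `eta` is fixed here; choosing it well is exactly the difficulty that motivates the paper's automatic gain selection.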

12.
In this paper, the evolution and visualization of saddle points of nonlinear functionals or multi-variable functions in finite-dimensional spaces are presented. New algorithms are developed based on the mountain pass lemma and link theory in nonlinear analysis. Furthermore, a simple comparison of the steepest descent algorithm and the genetic algorithm is given. The saddle-point search process is visualized in an interactive graphical interface.

13.
Research advances in stochastic gradient descent algorithms
In machine learning, gradient descent is the most important and fundamental method for solving optimization problems. As data sets grow, the traditional gradient descent algorithm can no longer solve large-scale machine learning problems efficiently. Stochastic gradient descent (SGD) replaces the full gradient with the gradient of one or a few randomly selected samples at each iteration, reducing the computational cost. In recent years, SGD has become a focus of machine learning research, especially in deep learning. Continued exploration of search directions and step sizes has produced many improved variants of SGD, and this paper surveys the main advances. The improvement strategies are grouped into four classes: momentum, variance reduction, incremental gradient, and adaptive learning rates. The first three mainly correct the gradient or search direction, while the fourth adapts the step size separately for each component of the parameter vector. The core ideas and principles of the SGD variants under each strategy are presented, and the differences and connections between the algorithms are discussed. The main SGD algorithms are applied to machine learning tasks such as logistic regression and deep convolutional neural networks, and their practical performance is compared quantitatively. The paper closes with a summary of the work and an outlook on future directions for stochastic gradient descent.
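Of the four strategy families surveyed, momentum is the simplest to sketch. The mini-batch least-squares example below (the function name and parameter values are illustrative) shows the heavy-ball update that the variance-reduction, incremental-gradient, and adaptive-step-size variants modify.

```python
import numpy as np

def sgd_momentum(X, y, lr=0.05, beta=0.9, batch=8, epochs=50, seed=0):
    """Mini-batch SGD with heavy-ball momentum on a least-squares loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    v = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch):
            # mini-batch gradient replaces the full gradient
            g = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            v = beta * v + g          # momentum accumulates the search direction
            w = w - lr * v
    return w
```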

14.
In this paper a general class of fast learning algorithms for feedforward neural networks is introduced and described. The approach exploits the separability of each layer into linear and nonlinear blocks and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed using any preferred optimization strategy. In the second step, each linear block is optimized separately using a least squares (LS) criterion. To demonstrate the effectiveness of the new approach, a detailed treatment of gradient descent in the neuron space is conducted. The main properties of this approach are a higher speed of convergence than methods that employ ordinary gradient descent in the weight space, such as backpropagation (BP); better numerical conditioning; and lower computational cost than techniques based on the Hessian matrix. Numerical stability is assured by the use of robust LS linear system solvers operating directly on the input data of each layer. Experimental results obtained on three problems are described, which confirm the effectiveness of the new method.

15.
蒋珉, 柴干. 《自动化学报》(Acta Automatica Sinica), 1993, 19(4): 487-492
For a class of nonlinear dynamic systems common in industrial control, this paper proposes a simulation algorithm that, compared with other algorithms of the same order of accuracy, offers higher precision, better stability, and a smaller computational load. Five simulation examples of different types are given, and the results show that the algorithm is effective and practical.

16.
Parametric uncertainties in adaptive estimation and control have been dealt with, by and large, in the context of linear parameterizations. Algorithms based on the gradient descent method either lead to instability or inaccurate performance when the unknown parameters occur nonlinearly. Complex dynamic models are bound to include nonlinear parameterizations which necessitate the need for new adaptation algorithms that behave in a stable and accurate manner. The authors introduce, in this paper, an error model approach to establish these algorithms and their global stability and convergence properties. A number of applications of this error model in adaptive estimation and control are included, in each of which the new algorithm is shown to result in global boundedness. Simulation results are presented which complement the authors' theoretical derivations.

17.
We analyze and compare the well-known gradient descent algorithm and the more recent exponentiated gradient algorithm for training a single neuron with an arbitrary transfer function. Both algorithms are easily generalized to larger neural networks, and the generalization of gradient descent is the standard backpropagation algorithm. We prove worst-case loss bounds for both algorithms in the single neuron case. Since local minima make it difficult to prove worst-case bounds for gradient-based algorithms, we must use a loss function that prevents the formation of spurious local minima. We define such a matching loss function for any strictly increasing differentiable transfer function and prove worst-case loss bounds for any such transfer function and its corresponding matching loss. The different forms of the two algorithms' bounds indicate that exponentiated gradient outperforms gradient descent when the inputs contain a large number of irrelevant components. Simulations on synthetic data confirm these analytical results.
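The two updates compared in this abstract differ in essentially one line, additive versus multiplicative. The sketch below contrasts them on a linear neuron with the identity transfer function (whose matching loss is the squared loss), with the exponentiated gradient (EG) weights kept on the probability simplex; the function name and parameters are illustrative.

```python
import numpy as np

def gd_vs_eg(X, y, lr=0.1, steps=500):
    """Gradient descent vs. exponentiated gradient on a linear neuron."""
    n, d = X.shape
    w_gd = np.full(d, 1.0 / d)
    w_eg = np.full(d, 1.0 / d)
    for _ in range(steps):
        g_gd = X.T @ (X @ w_gd - y) / n
        w_gd -= lr * g_gd                 # additive update
        g_eg = X.T @ (X @ w_eg - y) / n
        w_eg *= np.exp(-lr * g_eg)        # multiplicative update
        w_eg /= w_eg.sum()                # re-normalize onto the simplex
    return w_gd, w_eg
```

A sparse target (many irrelevant components) is the regime where the bounds favor EG.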

18.
In this paper, we propose an implicit gradient descent algorithm for the classic k-means problem. The implicit gradient step, or backward Euler, is solved via stochastic fixed-point iteration, in which we randomly sample a mini-batch gradient in every iteration. It is the average of the fixed-point trajectory that is carried over to the next gradient step. We draw connections between the proposed stochastic backward Euler and the recent entropy stochastic gradient descent for improving the training of deep neural networks. Numerical experiments on various synthetic and real datasets show that the proposed algorithm provides better clustering results than standard k-means algorithms, in the sense that it attains a lower objective value and is much more robust to initialization.
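A loose sketch of the scheme described here, assuming a backward Euler step solved by a mini-batch fixed-point iteration whose trajectory average is carried forward; `sbe_kmeans`, the step size, and the loop counts are guesses for illustration, not the paper's exact algorithm.

```python
import numpy as np

def sbe_kmeans(X, k=3, C0=None, step=1.0, outer=30, inner=10, batch=32, seed=0):
    """Implicit-gradient (backward Euler) k-means, solved stochastically."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy() if C0 is None else C0.astype(float)
    for _ in range(outer):
        C_prev = C.copy()
        traj = np.zeros_like(C)
        for _ in range(inner):
            B = X[rng.choice(len(X), batch, replace=False)]
            lab = np.argmin(((B[:, None, :] - C) ** 2).sum(-1), axis=1)
            G = np.zeros_like(C)
            for j in range(k):                  # mini-batch k-means gradient
                m = lab == j
                if m.any():
                    G[j] = C[j] - B[m].mean(axis=0)
            C = C_prev - step * G               # one fixed-point sweep for the implicit step
            traj += C
        C = traj / inner                        # carry over the trajectory average
    return C
```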

19.
Most of the cost functions used for blind equalization are nonconvex and nonlinear functions of the tap weights when implemented using linear transversal filter structures. Therefore, a blind equalization scheme with a nonlinear structure that can form nonconvex decision regions is desirable. The efficacy of complex-valued feedforward neural networks for blind equalization of linear and nonlinear communication channels has been confirmed by many studies. In this paper we present a complex-valued neural network for blind equalization with M-ary phase shift keying (PSK) signals. The complex nonlinear activation functions used in the network are defined especially for handling M-ary PSK signals. A training algorithm based on the constant modulus algorithm (CMA) cost function is derived. The improved performance of the proposed neural network in both stationary and nonstationary environments is confirmed through computer simulations.
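The CMA cost mentioned here can be illustrated on the simpler linear-equalizer case; the `cma_equalizer` sketch below uses the standard CMA 2-2 stochastic gradient on E[(|y|² − R2)²], whereas the paper applies a modified CMA cost to a neural network's output layer.

```python
import numpy as np

def cma_equalizer(x, M=5, mu=0.005, R2=1.0):
    """Constant modulus algorithm (CMA 2-2) for a linear blind equalizer."""
    w = np.zeros(M, dtype=complex)
    w[M // 2] = 1.0                        # center-spike initialization
    errs = []
    for n in range(M, len(x)):
        u = x[n - M:n][::-1]
        y = np.dot(w, u)
        e = abs(y) ** 2 - R2               # constant-modulus error, no training signal
        w -= mu * e * y * u.conj()         # stochastic gradient of the CMA cost
        errs.append(e ** 2)
    return w, np.array(errs)
```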

20.
To address the low accuracy and overly complex model structures of classical methods for forecasting society-wide total electricity consumption, this paper combines power-sector big data with deep learning. A hierarchical deep neural network is built; a rectified linear activation function, motivated by bionics, is introduced to mitigate vanishing gradients and slow network convergence, and the model is optimized by gradient descent. An exponential decay schedule lets the network set its own learning rate, improving prediction accuracy and reducing the number of iterations. The mathematics behind gradient descent is derived from the gradient of a scalar field together with Taylor's formula. To combat overfitting, early stopping is introduced, improving training speed and generalization. Finally, compared with a classical linear regression baseline, the deep learning model is clearly superior in the accuracy and stability of monthly total-consumption forecasts, and its predictions of future society-wide electricity demand are highly credible.
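Two of the training tricks described, an exponentially decayed learning rate and early stopping on a validation loss, can be sketched on a linear model; the function below and its schedule parameters are illustrative, not the paper's deep network.

```python
import numpy as np

def train_with_decay(X, y, lr0=0.1, decay=0.95, decay_every=10,
                     patience=20, max_epochs=500):
    """Gradient descent with exponential learning-rate decay and early stopping."""
    split = int(0.8 * len(X))
    Xt, yt, Xv, yv = X[:split], y[:split], X[split:], y[split:]
    w = np.zeros(X.shape[1])
    best_val, best_w, wait = np.inf, w.copy(), 0
    for epoch in range(max_epochs):
        lr = lr0 * decay ** (epoch // decay_every)       # exponential decay schedule
        w -= lr * Xt.T @ (Xt @ w - yt) / len(Xt)
        val = np.mean((Xv @ w - yv) ** 2)                # validation loss
        if val < best_val - 1e-9:
            best_val, best_w, wait = val, w.copy(), 0
        else:
            wait += 1
            if wait >= patience:                         # early stopping
                break
    return best_w, best_val
```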
