Similar Documents
20 similar documents found (search time: 31 ms)
1.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can extract the principal and minor components from input data online, so it is of interest to develop generalized learning algorithms for them; several novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero, an assumption that is unrealistic in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used to study the dynamical behavior of the proposed algorithms. The DDT method is better suited to convergence analysis because it does not impose the vanishing-learning-rate constraint of the DCT method. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.
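As a minimal illustration of the DDT viewpoint, the following numpy sketch iterates the classical Oja PCA rule (not the paper's generalized algorithms) as a deterministic discrete-time system with a constant learning rate; the covariance matrix and all parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed covariance matrix C of the input process; the DDT system iterates
# the averaged update with a constant (non-vanishing) learning rate.
A = rng.standard_normal((5, 5))
C = A @ A.T / 5.0

eta = 0.05                      # constant learning rate, as in the DDT setting
w = rng.standard_normal(5)
w /= np.linalg.norm(w)

for k in range(2000):
    # Oja's PCA rule in deterministic discrete-time (DDT) form:
    # w(k+1) = w(k) + eta * (C w(k) - (w(k)^T C w(k)) w(k))
    Cw = C @ w
    w = w + eta * (Cw - (w @ Cw) * w)

# Compare with the principal eigenvector from a direct eigendecomposition.
vals, vecs = np.linalg.eigh(C)
v1 = vecs[:, -1]
print("alignment |cos| with principal eigenvector:",
      abs(w @ v1) / np.linalg.norm(w))
```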

2.
Principal component analysis (PCA) and minor component analysis (MCA) are powerful methodologies for a wide variety of applications such as pattern recognition and signal processing. In this paper, we first propose a differential equation for the generalized eigenvalue problem and prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of PCA and MCA learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and that the class includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms in this class have the same order of convergence speed and are robust to implementation error.
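A hedged sketch of the idea: one natural differential equation of this kind is the generalized Rayleigh-quotient flow dw/dt = Aw - ((w^T A w)/(w^T B w)) Bw, whose stationary points satisfy Aw = lambda*Bw. The exact equation proposed in the paper may differ; the matrices and step size below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4

# Symmetric A and symmetric positive-definite B define A w = lambda B w.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)

w = rng.standard_normal(n)
dt = 0.01
for _ in range(20000):
    # Euler integration of the flow; note w^T(dw/dt) = 0, so the norm of w
    # is preserved along the continuous-time trajectory.
    lam = (w @ A @ w) / (w @ B @ w)
    w = w + dt * (A @ w - lam * (B @ w))

vals, vecs = eigh(A, B)             # generalized eigendecomposition
v_top = vecs[:, -1]                 # eigenvector of the largest eigenvalue
print("flow lambda:", (w @ A @ w) / (w @ B @ w), " vs lambda_max:", vals[-1])
print("alignment:", abs(w @ v_top) / (np.linalg.norm(w) * np.linalg.norm(v_top)))
```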

3.
In this paper, we first propose a differential equation for the generalized eigenvalue problem and prove that the stable points of this differential equation are the eigenvectors corresponding to the largest eigenvalue. Based on this generalized differential equation, a class of principal component analysis (PCA) and minor component analysis (MCA) learning algorithms can be obtained. We demonstrate that many existing PCA and MCA learning algorithms are special cases of this class, and that the class includes some new and simpler MCA learning algorithms. Our results show that all learning algorithms in this class have the same order of convergence speed and are robust to implementation error.

4.
Principal component analysis (PCA) and minor component analysis (MCA) are similar but exhibit different dynamical performances. Unexpectedly, a sequential extraction algorithm proposed for MCA by Luo and Unbehauen [11] does not work for MCA, although it works for PCA. We propose a different sequential-addition algorithm that does work for MCA. We also show a conversion mechanism by which any PCA algorithm can be converted into a dynamically equivalent MCA algorithm, and vice versa.
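One standard conversion mechanism (possibly differing in detail from the paper's) replaces the covariance C by sigma*I - C, with sigma an upper bound on the largest eigenvalue, so that a PCA algorithm run on the transformed matrix extracts the minor component of C. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
C = A @ A.T / 6.0                      # covariance of the input signal

def oja_pca(M, eta=0.05, steps=3000):
    """Run Oja's PCA rule on matrix M; returns the extracted direction."""
    w = rng.standard_normal(M.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        Mw = M @ w
        w = w + eta * (Mw - (w @ Mw) * w)
    return w / np.linalg.norm(w)

# PCA: run the rule on C directly.
w_pca = oja_pca(C)

# MCA via conversion: run the *same* PCA rule on sigma*I - C, where sigma
# upper-bounds the largest eigenvalue, so minor directions of C become
# principal directions of the transformed matrix.
sigma = np.trace(C)                    # trace >= lambda_max for PSD C
w_mca = oja_pca(sigma * np.eye(6) - C)

vals, vecs = np.linalg.eigh(C)
print("|cos| with principal eigenvector:", abs(w_pca @ vecs[:, -1]))
print("|cos| with minor eigenvector:   ", abs(w_mca @ vecs[:, 0]))
```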

5.
Principal component analysis (PCA) by neural networks is one of the most frequently used feature extraction methods. To process huge data sets, many neural-network-based learning algorithms for PCA have been proposed. However, traditional algorithms are not globally convergent. In this paper, a new PCA learning algorithm based on the cascade recursive least squares (CRLS) neural network is proposed. The algorithm guarantees that the network weight vector converges globally to an eigenvector associated with the largest eigenvalue of the input covariance matrix. A rigorous mathematical proof is given, and simulation results show the effectiveness of the algorithm.

6.
Recently, many unified learning algorithms have been developed to solve the tasks of principal component analysis (PCA) and minor component analysis (MCA). These unified algorithms extract the principal component and, with a simple sign change, can also serve as minor component extractors, which is of practical significance for implementations. However, convergence of the existing unified algorithms is guaranteed only under the condition that their learning rates approach zero, which is impractical in many applications. In this paper, we propose a unified PCA and MCA algorithm with a constant learning rate, and we derive sufficient conditions that guarantee convergence by analyzing the discrete-time dynamics of the proposed algorithm. The theoretical results lay a solid foundation for applications of the proposed algorithm.
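The following toy sketch shows the flavor of a unified rule: a single update whose sign parameter switches between PCA and MCA, run with a constant learning rate. Explicit renormalization is added here for numerical safety, whereas the paper instead derives step-size conditions that keep the raw iteration bounded; the rule itself is an illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
C = A @ A.T / 5.0

def unified(C, s, eta=0.02, steps=5000):
    """One update rule for both tasks: s=+1 extracts the principal
    component, s=-1 the minor component (sign switch only)."""
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        Cw = C @ w
        w = w + s * eta * (Cw - (w @ Cw) * w)   # projected Rayleigh ascent/descent
        w /= np.linalg.norm(w)                  # safety renormalization
    return w

vals, vecs = np.linalg.eigh(C)
print("PCA |cos|:", abs(unified(C, +1) @ vecs[:, -1]))
print("MCA |cos|:", abs(unified(C, -1) @ vecs[:, 0]))
```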

7.
A principal component analysis (PCA) neural network is developed for online extraction of multiple minor directions of an input signal. The network extracts the minor directions in parallel by computing the principal directions of a transformed input signal, so that the stability-speed problem of computing the minor directions directly is avoided to a certain extent. Moreover, the learning algorithms for updating the network weights use constant learning rates, which overcomes the shortcoming of learning rates that approach zero. In addition, the proposed algorithms are globally convergent, so choosing the initial values of the learning parameters is very simple. This paper presents a convergence analysis of the proposed algorithms by studying the corresponding deterministic discrete-time (DDT) equations; a rigorous mathematical proof of global convergence is given. The theoretical results are further confirmed via simulations.
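A hedged sketch of the transformation idea: if the minor directions of C are recast as the principal directions of sigma*I - C, a parallel subspace rule (here Oja's subspace rule, chosen for illustration rather than taken from the paper) can extract several of them at once with a constant learning rate.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
C = A @ A.T / 6.0
p = 2                                   # number of minor directions wanted

# Transform: minor directions of C are principal directions of sigma*I - C.
sigma = np.trace(C)
M = sigma * np.eye(6) - C

# Oja's subspace rule extracts the top-p subspace of M in parallel,
# here with a constant learning rate.
eta = 0.02
W = rng.standard_normal((6, p)) * 0.1
for _ in range(8000):
    MW = M @ W
    W = W + eta * (MW - W @ (W.T @ MW))

# Orthonormalize the result for comparison against the true minor subspace.
Q, _ = np.linalg.qr(W)
vals, vecs = np.linalg.eigh(C)
minor = vecs[:, :p]
# Singular values close to 1 mean the two subspaces coincide.
print("subspace overlap (singular values of Q^T minor):",
      np.linalg.svd(Q.T @ minor, compute_uv=False))
```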

8.
This paper examines the applicability of several learning techniques to the classification of phonemes. The methods tested were artificial neural networks (ANN), support vector machines (SVM), and Gaussian mixture modeling (GMM). We compare these methods with a traditional hidden Markov phoneme model (HMM), all working with linear prediction-based cepstral coefficient (LPCC) features. We also combined the learners with linear/nonlinear and unsupervised/supervised feature space transformation methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA), springy discriminant analysis (SDA), and their nonlinear kernel-based counterparts. We found that the discriminative learners can attain the efficiency of the HMM and that, after the transformations, they retain the same performance despite severe dimension reduction. The kernel-based transformations brought only marginal improvements over their linear counterparts.
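This experimental design can be mimicked with a small scikit-learn pipeline; the data below are a synthetic stand-in for the phoneme features, and the particular classifiers and dimensions are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for LPCC-style phoneme features: 10 classes, 40 dims.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           n_classes=10, n_clusters_per_class=1, random_state=0)

pipelines = {
    "SVM (raw features)": make_pipeline(StandardScaler(), SVC()),
    "PCA(12) + SVM":      make_pipeline(StandardScaler(), PCA(n_components=12), SVC()),
    "LDA(9) + SVM":       make_pipeline(StandardScaler(),
                                        LinearDiscriminantAnalysis(n_components=9), SVC()),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:22s} accuracy = {scores.mean():.3f}")
```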

9.
Minor component analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix of input signals. Convergence is essential for MCA algorithms in practical applications. Traditionally, the convergence of MCA algorithms is analyzed indirectly via their corresponding deterministic continuous-time (DCT) systems. However, the DCT method requires the learning rate to approach zero, which is unreasonable in many applications owing to round-off limitations and tracking requirements. This paper studies the convergence of the deterministic discrete-time (DDT) system associated with the OJAn MCA learning algorithm. Unlike the DCT method, the DDT method does not require the learning rate to approach zero. Some important convergence results are obtained for the OJAn MCA learning algorithm via the DDT method, and simulations are carried out to illustrate the theoretical results.

10.
A Globally Convergent Learning Algorithm for PCA Neural Networks
Principal component analysis (PCA), also known as the Karhunen-Loève (K-L) transform, is an important method for feature extraction. In recent years, many PCA neural networks based on Hebbian learning algorithms have been proposed to process massive data sets. Traditional algorithms, however, usually either cannot guarantee convergence or converge slowly. Based on the CRLS neural network, this paper proposes a new learning algorithm that ensures convergence of the weight vector without normalizing it during computation. It is also proven that the learning algorithm makes the weight vector converge to the eigenvector associated with the largest eigenvalue. Experiments show that, compared with the traditional CRLS neural network, the proposed algorithm achieves greatly improved accuracy.

11.
The classical analysis of a stochastic signal into principal components compresses the signal using an optimal selection of linear features. Noisy principal component analysis (NPCA) extends PCA under the assumption that the extracted features are unreliable, with the unreliability modeled by additive noise. This assumption arises, for instance, in communication problems with noisy channels. The level of noise in the NPCA features affects the reconstruction error in a way that resembles the water-filling analogy in information theory. Robust neural network models for noisy PCA can be defined with respect to certain synaptic weight constraints. In this paper we present the NPCA theory related to a particularly simple and tractable constraint, which allows us to evaluate the robustness of older PCA Hebbian learning rules. It turns out that those algorithms are not optimally robust, in the sense that they produce a zero solution when the noise power reaches half the limit set by NPCA; in fact, they are NPCA-optimal at no noise level except zero. Finally, we propose new NPCA-optimal robust Hebbian learning algorithms for multiple adaptive noisy principal component extraction.

12.
A Class of Self-Stabilizing MCA Learning Algorithms
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA), which includes several well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector's length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is discussed. By combining the positive properties of these algorithms, a new learning algorithm with improved performance is proposed. Simulations are employed to confirm our theoretical results.
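To make the definition concrete, the sketch below runs one MCA-type rule with the self-stabilizing structure (an illustration, not necessarily one of the letter's algorithms): the update satisfies w^T(dw) = 0 identically, so the first-order change of the weight-vector length is zero for every input, and the averaged iteration still converges to the minor component.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))
C = A @ A.T / n

def mca_update(w, Cw, eta):
    # Self-stabilizing MCA-type update: w^T(dw) = 0 identically, so the
    # first-order change of ||w|| is independent of the input -- the
    # defining self-stabilizing property.
    return w - eta * ((w @ w) * Cw - (w @ Cw) * w)

w = rng.standard_normal(n)
w /= np.linalg.norm(w)
for k in range(20000):
    w = mca_update(w, C @ w, eta=0.02)

vals, vecs = np.linalg.eigh(C)
print("final norm:", np.linalg.norm(w))          # stays close to 1
print("|cos| with minor eigenvector:", abs(w @ vecs[:, 0]) / np.linalg.norm(w))
```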

13.
This paper examines the applicability of several learning techniques to speech recognition, more precisely to the classification of phonemes represented by a particular segment model. The methods compared were the IB1 algorithm (TiMBL), ID3 tree learning (C4.5), oblique tree learning (OC1), artificial neural networks (ANN), and Gaussian mixture modeling (GMM); as a reference, a hidden Markov model (HMM) recognizer was also trained on the same corpus. Before being fed to the learners, the segmental features were additionally transformed using linear discriminant analysis (LDA), principal component analysis (PCA), or independent component analysis (ICA), and each learner was tested with each transformation to find the best combination. Furthermore, we experimented with several feature sets, such as filter-bank energies, mel-frequency cepstral coefficients (MFCC), and gravity centers. We found that LDA helped all the learners, in several cases quite considerably. PCA was beneficial only for some of the algorithms, and ICA improved the results only rarely and degraded performance for certain learning methods. From the learning viewpoint, ANN was the most effective and attained the same results regardless of the transformation applied. GMM behaved worse, which shows the advantage of discriminative over generative learning. TiMBL produced reasonable results; C4.5 and OC1 could not compete, no matter which transformation was tried.

14.
Principal/minor component analysis (PCA/MCA), generalized principal/minor component analysis (GPCA/GMCA), and singular value decomposition (SVD) algorithms are important techniques for feature extraction. In the convergence analysis of these algorithms, the deterministic discrete-time (DDT) method can reveal the dynamic behavior of PCA/MCA and GPCA/GMCA algorithms effectively. However, the dynamic behavior of SVD algorithms has not been studied quantitatively because of their special structure. In this paper, for the first time, we exploit the advantages of the DDT method in PCA algorithm analysis to study the dynamics of SVD algorithms. First, taking the cross-coupled Hebbian algorithm as an example, by concatenating the two cross-coupled variables into a single vector, we obtain a PCA-like DDT system. Second, we analyze the discrete-time dynamic behavior and stability of this PCA-like DDT system in detail based on the DDT method, and obtain bounds on the weight vectors and the learning rate. Further discussion shows the universality of the proposed method for analyzing other SVD algorithms. As a result, the proposed method provides a new way to study the dynamical convergence properties of SVD algorithms.
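A sketch of the cross-coupled Hebbian iteration and the concatenation trick (parameter values are assumptions): updating u and v jointly is equivalent to an Oja/PCA-like DDT system for z = [u; v] driven by the symmetric block matrix [[0, A], [A^T, 0]], whose top eigenvector encodes the top singular pair.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 4))

eta = 0.02
u = rng.standard_normal(5); u /= np.linalg.norm(u)
v = rng.standard_normal(4); v /= np.linalg.norm(v)

for _ in range(5000):
    # Cross-coupled Hebbian updates for the top singular pair of A;
    # both updates use the old (u, v) via simultaneous assignment.
    y = u @ A @ v
    u, v = (u + eta * (A @ v - y * u),
            v + eta * (A.T @ u - y * v))

U, s, Vt = np.linalg.svd(A)
print("estimated sigma_1:", u @ A @ v, "  true sigma_1:", s[0])
print("|cos| left :", abs(u @ U[:, 0]) / np.linalg.norm(u))
print("|cos| right:", abs(v @ Vt[0]) / np.linalg.norm(v))
```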

15.
Independent component analysis (ICA) neural networks can estimate independent components from a mixed signal. The dynamical behavior of the learning algorithms for ICA neural networks is crucial for applying these networks effectively in practice. This paper presents the stability and chaotic dynamical behavior of a class of ICA learning algorithms with constant learning rates. Some invariant sets are obtained so that the non-divergence of these algorithms can be guaranteed. Within these invariant sets, the stability and chaotic behaviors are analyzed and the conditions for stability and chaos are derived. Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior.

16.
Zhang Mingyue, Wang Jing. Computer Science, 2019, 46(2): 279-285
To address the insufficient accuracy of traditional video tracking algorithms and the weak nonlinear fitting ability of principal component analysis (PCA), this paper combines convolutional neural networks with an interacting likelihood (IL) algorithm to optimize and improve the particle filter algorithm on the basis of deep learning. A kernel principal component analysis (KPCA) network is applied to video tracking to obtain deep feature representations of the target, and a new interacting-likelihood image tracker is adopted, which computes non-iteratively and samples tracking over different regions to reduce the data-association requirements. The proposed algorithm is evaluated against several improved algorithms on image sets; the results show that it achieves very good robustness and accuracy.

17.
Mixtures of local principal component analysis (PCA) have attracted attention due to a number of benefits over global PCA. The performance of a mixture model usually depends on the data partition and the local linear fitting. In this paper, we propose a mixture model with the properties of optimal data partition and robust local fitting. Data partition is realized by a soft competition algorithm called neural 'gas', and robust local linear fitting is achieved by a nonlinear extension of the PCA learning algorithm. Based on this mixture model, we describe a modular classification scheme for handwritten digit recognition, in which each module or network models the manifold of one of the ten digit classes. Experiments demonstrate a very high recognition rate.
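A compact sketch of the two ingredients, under assumed toy data and parameters: neural 'gas' soft competition for the partition, followed by a plain local PCA fit per partition (the paper's robust nonlinear PCA extension is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: two noisy 1-D manifolds embedded in 2-D.
t = rng.uniform(-1, 1, 400)
X = np.vstack([np.c_[t, 0.05 * rng.standard_normal(400)],
               np.c_[0.05 * rng.standard_normal(400), t]])

K, lam, eta = 4, 1.0, 0.05
centers = X[rng.choice(len(X), K, replace=False)].copy()

for epoch in range(30):
    for x in X[rng.permutation(len(X))]:
        # Neural 'gas' soft competition: every center moves toward the
        # sample, weighted by exp(-rank/lambda) of its distance rank.
        d = np.linalg.norm(centers - x, axis=1)
        ranks = np.argsort(np.argsort(d))
        centers += (eta * np.exp(-ranks / lam))[:, None] * (x - centers)
    lam *= 0.9                      # anneal the neighborhood range

# Local PCA: fit a principal direction inside each center's partition.
labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
for k in range(K):
    Xk = X[labels == k]
    if len(Xk) > 2:
        _, _, Vt = np.linalg.svd(Xk - Xk.mean(0), full_matrices=False)
        print(f"cluster {k}: {len(Xk):4d} points, local PC = {Vt[0].round(2)}")
```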

18.
Traditional dimensionality reduction algorithms are divided into linear and manifold learning methods, but in practical applications it is hard to determine which class of algorithm is needed. We design a unified dimensionality reduction algorithm that guarantees a linear reduction performance at least as good as principal component analysis while, in manifold learning, revealing the data structure of the manifold. By constructing a Markov transition matrix over the high-dimensional data, so that more similar nodes have higher transition probabilities, the mapping from the high-dimensional data down to the low-dimensional manifold is discovered. Experimental results show that on synthetic...
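In the spirit of this construction, the sketch below builds a row-normalized Markov transition matrix from Gaussian affinities and embeds the data with its leading non-trivial eigenvectors; the kernel choice and data are assumptions, and the paper's PCA lower-bound guarantee is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# Noisy circle: a 1-D manifold in 2-D where linear PCA is uninformative.
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((300, 2))

# Markov transition matrix: more similar points get higher transition
# probability (Gaussian kernel, rows normalized to sum to 1).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(D2)
P = np.exp(-D2 / eps)
P /= P.sum(axis=1, keepdims=True)

# The top non-trivial eigenvectors of P give the low-dimensional manifold
# coordinates (the constant eigenvector with eigenvalue 1 is skipped).
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
embedding = vecs[:, order[1:3]].real
print("leading eigenvalues:", vals.real[order[:4]].round(3))
print("embedding shape:", embedding.shape)
```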

19.
Neural network algorithms for principal component analysis (PCA) and minor component analysis (MCA) are important in signal processing. A unified (dual-purpose) algorithm is capable of both PCA and MCA, and is thus valuable for reducing the complexity and cost of hardware implementations. A coupled algorithm can mitigate the speed-stability problem that exists in most noncoupled algorithms. Although unified and coupled algorithms have these advantages over single-purpose and noncoupled algorithms, respectively, only a few of either have been proposed, and, to the best of the authors' knowledge, no algorithm that is both unified and coupled has been proposed. In this paper, based on a novel information criterion, we propose two self-stabilizing algorithms that are both unified and coupled. In deriving our algorithms, the results are easier to obtain than with traditional methods because the inverse Hessian matrix need not be calculated. Experimental results show that the proposed algorithms perform better than existing coupled and unified algorithms.

20.
Pattern recognition techniques have been widely used in a variety of scientific disciplines, including computer vision, artificial intelligence, and biology. Although many methods achieve satisfactory performance, they still have several weak points, leaving considerable room for further improvement. In this paper, we propose two performance-driven subspace learning methods by extending principal component analysis (PCA) and kernel PCA (KPCA). Both methods adopt a common structure in which genetic algorithms are employed to pursue optimal subspaces. Because the proposed feature extractors aim at achieving high classification accuracy, enhanced generalization ability can be expected. Extensive experiments evaluate the effectiveness of the proposed algorithms on real-world problems, including object recognition and a number of machine learning tasks. Comparative studies with other state-of-the-art techniques show that the proposed methods are capable of enhancing the generalization ability of pattern recognition systems.
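A minimal sketch of the performance-driven idea, with an assumed dataset, classifier, and GA settings: a small genetic algorithm searches binary masks over PCA components, scoring each candidate subspace by cross-validated classification accuracy.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
X, y = load_digits(return_X_y=True)
Z = PCA(n_components=30).fit_transform(X)    # candidate PCA features

def fitness(mask):
    # Performance-driven criterion: classification accuracy of the subspace.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(), Z[:, mask.astype(bool)],
                           y, cv=3).mean()

# A tiny genetic algorithm over binary component-selection masks.
pop = rng.integers(0, 2, size=(12, 30))
for gen in range(8):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(-scores)[:6]]                       # selection
    children = elite[rng.integers(0, 6, 6)].copy()
    cut = rng.integers(1, 29)
    children[:, cut:] = elite[rng.integers(0, 6, 6)][:, cut:]  # crossover
    flip = rng.random(children.shape) < 0.05                   # mutation
    children[flip] ^= 1
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected components:", np.flatnonzero(best))
print("accuracy:", round(fitness(best), 3))
```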

