Similar documents
20 similar documents found (search time: 562 ms).
1.
We investigate sparse non-linear denoising of functional brain images by kernel principal component analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed. In many applications, including functional magnetic resonance imaging (fMRI) data, which is the application used for illustration in the present work, it is of interest to denoise a sparse signal. To meet this objective we investigate sparse pre-image reconstruction by Lasso regularization. We find that sparse estimation provides better brain state decoding accuracy and a more reproducible pre-image. These two important metrics are combined in an evaluation framework which allows us to optimize both the degree of sparsity and the non-linearity of the kernel embedding. The latter result provides evidence of signal manifold non-linearity in the specific fMRI case study.
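As a rough illustration of the sparse pre-image idea, here is a minimal proximal-gradient sketch assuming an RBF kernel, where the denoised feature-space point is treated as a fixed kernel expansion with coefficients `beta` (in practice these would come from the kernel-PCA projection). The function name and all parameters are placeholders; the paper's actual Lasso formulation and fMRI pipeline differ in detail.

```python
import numpy as np

def sparse_preimage_rbf(X_train, beta, gamma, lam, x0, n_iter=300, step=0.05):
    """Proximal-gradient sketch of a Lasso-regularized pre-image.

    Minimizes ||phi(x) - sum_i beta_i phi(x_i)||^2 + lam * ||x||_1 for an
    RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).  In practice beta would
    come from projecting a noisy point onto the leading kernel-PCA components.
    """
    x = x0.copy()
    for _ in range(n_iter):
        diff = x[None, :] - X_train                      # rows: x - x_i
        k = np.exp(-gamma * np.sum(diff ** 2, axis=1))   # k(x, x_i)
        grad = 4.0 * gamma * (beta * k) @ diff           # gradient of the data-fit term
        x = x - step * grad
        # proximal operator of lam * ||x||_1: soft thresholding enforces sparsity
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```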

2.
A novel fuzzy nonlinear classifier, called kernel fuzzy discriminant analysis (KFDA), is proposed to deal with linearly non-separable problems. With kernel methods, KFDA can perform efficient classification in kernel feature space: through a nonlinear mapping, the input data are mapped implicitly into a high-dimensional kernel feature space in which nonlinear patterns appear linear. Different from fuzzy discriminant analysis (FDA), which is based on Euclidean distance, KFDA uses the kernel-induced distance. Theoretical analysis and experimental results show that the proposed classifier compares favorably with FDA.
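The kernel-induced distance mentioned above can be evaluated without ever forming the feature map explicitly; a minimal sketch with an RBF kernel as a placeholder choice:

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_distance(x, y, kernel=rbf_kernel):
    # ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y), via the kernel trick
    return np.sqrt(kernel(x, x) - 2 * kernel(x, y) + kernel(y, y))

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(feature_space_distance(x, y))   # distance in the implicit feature space
```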

3.
The pre-image problem in kernel methods (total citations: 2; self-citations: 0; citations by others: 2)
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on kernel PCA and kernel clustering on the USPS data set show much improved performance.
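A hedged sketch of the distance-constraint idea for an RBF kernel: feature-space distances are converted back into input-space distances, and the pre-image is located by a simple least-squares multilateration against the nearest training points. This simplifies the paper's MDS-style construction; the function name and all parameters are assumptions.

```python
import numpy as np

def preimage_by_distances(X_train, d2_feat, gamma, n_neighbors=10):
    """Non-iterative pre-image estimate from feature-space distance constraints.

    d2_feat[i] is the squared feature-space distance between the denoised point
    and phi(X_train[i]) for an RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    """
    # invert the RBF relation  d2_feat = 2 - 2 * k  =>  k = 1 - d2_feat / 2
    k = np.clip(1.0 - d2_feat / 2.0, 1e-12, 1.0)
    d2_in = -np.log(k) / gamma                     # squared input-space distances

    # use the closest training points as anchors
    idx = np.argsort(d2_in)[:n_neighbors]
    Xn, dn2 = X_train[idx], d2_in[idx]

    # multilateration: 2 (x_i - x_mean) . x = ||x_i||^2 - mean||x_j||^2 - d_i^2 + mean d_j^2
    norms = np.sum(Xn ** 2, axis=1)
    A = 2.0 * (Xn - Xn.mean(axis=0))
    b = norms - norms.mean() - dn2 + dn2.mean()
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x_hat
```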

4.
A nonlinear feature extraction method is presented which can reduce the data dimension down to the number of classes, providing dramatic savings in computational costs. The dimension-reducing nonlinear transformation is obtained by implicitly mapping the input data into a feature space using a kernel function, and then finding a linear mapping, based on an orthonormal basis of centroids in the feature space, that maximizes between-class separation. The experimental results demonstrate that our method is capable of extracting nonlinear features effectively, so that competitive classification performance can be obtained with linear classifiers in the dimension-reduced space.
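A rough illustration of reducing the dimension to the number of classes via kernel evaluations against class centroids, skipping the orthonormalization step described above; the data, kernel choice, and helper name are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import LogisticRegression

def centroid_kernel_features(X, X_train, y_train, gamma=0.1):
    # feature j of x is <phi(x), centroid_j> = mean_i { k(x, x_i) : y_i = j },
    # so the output dimension equals the number of classes
    K = rbf_kernel(X, X_train, gamma=gamma)
    classes = np.unique(y_train)
    return np.column_stack([K[:, y_train == c].mean(axis=1) for c in classes])

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 5))
y_train = rng.integers(0, 3, size=60)
Z_train = centroid_kernel_features(X_train, X_train, y_train)
clf = LogisticRegression().fit(Z_train, y_train)   # linear classifier in the reduced space
```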

5.
To address problems in existing local tangent space algorithms, this paper proposes a kernel-based feature extraction method: kernel orthogonal discriminant local tangent space alignment (KOTSDA). The algorithm first uses a kernel method to project face images into a high-dimensional nonlinear space and extract their nonlinear information; the objective function then applies tangent space discriminant analysis to preserve the within-class local geometric structure of the samples while maximizing between-class differences; finally, an orthogonality constraint is added, yielding the kernel orthogonal discriminant local tangent space alignment algorithm. The algorithm requires no preliminary PCA dimensionality reduction and thus avoids the loss of discriminant information. Experiments on the ORL and Yale face databases verify its effectiveness.

6.
A novel ant-based clustering algorithm using the kernel method (total citations: 1; self-citations: 0; citations by others: 1)
A novel ant-based clustering algorithm integrated with the kernel method (ACK) is proposed. There are two aspects to the integration. First, kernel principal component analysis (KPCA) is applied to modify the random projection of objects when the algorithm is run initially; this projection creates rough clusters and improves the algorithm's efficiency. Second, ant-based clustering is performed in the feature space rather than in the input space: the distance between objects in the feature space, calculated by the kernel function of the object vectors in the input space, is used as the similarity measure. The algorithm uses an ant movement model in which each object is viewed as an ant, and the ant determines its movement according to the fitness of its local neighbourhood. The proposed algorithm incorporates the merits of kernel-based clustering into ant-based clustering. Comparisons with other classic algorithms on several synthetic and real datasets demonstrate that the ACK method exhibits high performance in terms of efficiency and clustering quality.

7.
A kernel-based clustering algorithm is proposed and applied to intrusion detection, yielding a new detection model. Using a Mercer kernel, samples from the input space are mapped into a high-dimensional feature space, where clustering is then performed. The kernel mapping makes previously hidden structure stand out, so the data can be clustered more effectively. In addition, a data-segmentation scheme is used to initialize the cluster centers. This clustering method improves substantially on classical clustering algorithms, with faster convergence and more accurate clusters. Simulation results confirm the feasibility and effectiveness of the method.
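A minimal kernel k-means sketch in the spirit of the abstract (the data-segmentation initialisation and any intrusion-detection-specific features are not reproduced; the kernel, data, and parameters are placeholders). The squared feature-space distance to each cluster mean is computed entirely from the kernel matrix.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(X, n_clusters=3, gamma=0.5, n_iter=50, seed=0):
    K = rbf_kernel(X, gamma=gamma)                 # Mercer kernel matrix
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, size=len(X))
    for _ in range(n_iter):
        dist = np.zeros((len(X), n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            if not mask.any():
                dist[:, c] = np.inf                # skip empty clusters
                continue
            n_c = mask.sum()
            # ||phi(x_j) - m_c||^2 = K_jj - 2 mean_i K_ji + mean_il K_il  (i, l in cluster c)
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / n_c
                          + K[np.ix_(mask, mask)].sum() / n_c ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```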

8.
Support vector machines using hyper-ellipsoidal parametric coordinates (total citations: 1; self-citations: 0; citations by others: 1)
Based on the coordinate transformation formula for the n-dimensional hyper-ellipsoid, a class of kernel functions is constructed: the n-dimensional hyper-ellipsoidal coordinate transformation kernel. Because the mapping preserves the dimension while increasing the between-class distance, this class of kernels improves the performance of support vector machines to a certain extent. Compared with other kernels (such as the Gaussian kernel), using the constructed kernel in a support vector machine produces far fewer support vectors, which greatly speeds up learning and improves generalization. Numerical experiments demonstrate the validity and correctness of the constructed kernel.

9.
Block-wise 2D kernel PCA/LDA for face recognition (total citations: 1; self-citations: 0; citations by others: 1)
Direct extension of (2D) matrix-based linear subspace algorithms to the kernel-induced feature space is computationally intractable and also fails to exploit local characteristics of the input data. In this letter, we develop a 2D generalized framework which integrates the concept of kernel machines with 2D principal component analysis (PCA) and 2D linear discriminant analysis (LDA). To remedy these drawbacks, we propose a block-wise approach based on the assumption that the data are multi-modally distributed in so-called block manifolds. The proposed methods, namely block-wise 2D kernel PCA (B2D-KPCA) and block-wise 2D generalized discriminant analysis (B2D-GDA), attempt to find local nonlinear subspace projections in each block manifold or, alternatively, search for linear subspace projections in the kernel space associated with each blockset. Experimental results on the ORL face database attest to the reliability of the proposed block-wise approach compared with related published methods.

10.
Radar target recognition based on the kernel optimal transformation and cluster centers algorithm (total citations: 1; self-citations: 0; citations by others: 1)
Extracting effective discriminative features is the key to recognizing radar one-dimensional high-resolution range profiles. Based on the kernelization principle of statistical learning theory, a new discriminative feature extraction method is proposed: the kernel optimal transformation and cluster centers algorithm. Through a nonlinear transformation, the data are mapped into a kernel space, where the optimal transformation and cluster centers algorithm is carried out, extracting robust nonlinear discriminative features from the range profiles. In addition, based on a set of basis vectors for the subspace spanned by the training samples in the kernel space, a fast computation scheme is given that speeds up feature extraction. Experiments on data measured in a microwave anechoic chamber demonstrate the effectiveness of the method.

11.
Traditional PCA and LDA algorithms suffer from the "small sample size" problem and are insensitive to higher-order correlations among pixels. This paper combines the kernel function approach with normalized LDA: the original image space is transformed into a high-dimensional feature space through a nonlinear mapping, and discriminant analysis is applied in the new space with the help of the "kernel trick". Extensive experiments on the ORL face database show that the method outperforms PCA, KPCA, LDA, and other methods for feature extraction, and achieves high recognition rates while simplifying the classifier.
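As a rough stand-in for the kernel-trick-plus-discriminant-analysis pipeline described here, KernelPCA followed by LDA is a common approximation; this is not the paper's exact normalized-LDA formulation, and all shapes and parameters below are placeholders.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# toy stand-in for vectorized face images (e.g. 32x32 pixels flattened)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))
y = rng.integers(0, 40, size=200)

model = make_pipeline(
    KernelPCA(n_components=60, kernel="rbf", gamma=1e-4),  # nonlinear mapping via the kernel trick
    LinearDiscriminantAnalysis(),                          # discriminant analysis in the new space
    KNeighborsClassifier(n_neighbors=1),                   # simple classifier on the extracted features
)
model.fit(X, y)
```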

12.
A new kernel linear discriminant analysis algorithm and its application to face recognition (total citations: 1; self-citations: 0; citations by others: 1)
Kernel Fisher discriminant analysis (KFD), based on the kernel strategy, has become one of the most effective methods for nonlinear feature extraction. However, previous KFD-based feature extraction procedures were formulated for binary classification problems, and the question of how to extract effective classification features from overlapping (outlying) samples had not been satisfactorily resolved. Drawing on fuzzy set theory, this paper uses the concept of a fuzzy membership function to incorporate the distribution information of the samples into the feature extraction process, and proposes a new kernel Fisher discriminant analysis method: the fuzzy kernel discriminant analysis algorithm. Experimental results on the ORL face database verify the effectiveness of the algorithm.

13.
Application of feature-sample-based KPCA to fault diagnosis (total citations: 8; self-citations: 0; citations by others: 8)
Kernel principal component analysis (KPCA) can be used for nonlinear process monitoring. Building a KPCA model first requires computing the kernel matrix K, whose dimension equals the number of training samples, so for large sample sets computing K becomes difficult. To address this, a feature-sample-based KPCA (SKPCA) is proposed. Its basic idea is first to map the input space to a feature subspace using a nonlinear mapping function, and then to compute the principal components in that feature subspace. SKPCA is applied to monitoring the Tennessee Eastman process and compared with KPCA based on all samples. Simulation results show that the two give essentially the same diagnosis, yet the feature samples are only a small fraction of the training set, so the dimension of K is reduced and the problem of computing K is resolved.
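A hedged sketch of the computational point: fitting KPCA on a small set of representative samples keeps the kernel matrix small, while all data can still be projected. The random subset below merely stands in for the paper's feature-sample selection, and the data and parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                 # placeholder for process data

# choose a small set of "feature samples" (here: random; the paper selects them
# so that they approximately span the training set in feature space)
idx = rng.choice(len(X), size=200, replace=False)

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
kpca.fit(X[idx])                                # kernel matrix is only 200 x 200
scores_all = kpca.transform(X)                  # every sample can still be projected
```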

14.
Traditional PCA and LDA algorithms suffer from the "small sample size" problem and are insensitive to higher-order correlations among pixels. This paper combines the kernel function approach with normalized LDA: the original image space is transformed into a high-dimensional feature space through a nonlinear mapping, and discriminant analysis is applied in the new space with the help of the "kernel trick". Extensive experimental study on the ORL face database shows that the proposed method clearly outperforms PCA, KPCA, LDA, and other traditional face recognition methods for feature extraction, and achieves high recognition rates while simplifying the classifier.

15.
A nonlinear fault detection method based on kernel principal component analysis (KPCA) and chaos particle swarm optimization (CPSO) is proposed. A kernel function performs the nonlinear transformation, mapping the variables from the nonlinear input space to a linear feature space in which the principal components are computed, and the squared prediction error (SPE) statistic is constructed to detect whether a fault has occurred. To avoid premature convergence of particle swarm optimization, the search behaviour of chaotic optimization is exploited and CPSO is applied to optimize the KPCA kernel parameters. Transformer fault detection results show that, compared with fault detection methods based on PCA, KPCA, and PSO-KPCA, the proposed method achieves a higher detection accuracy.
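A sketch of the SPE statistic in kernel feature space, with the control limit set as an empirical quantile of the training SPE and a fixed kernel width instead of the CPSO tuning described above; all data and parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

def kpca_spe(kpca, X_fit, X, gamma):
    """SPE = ||phi_c(x)||^2 minus the energy retained by the kept components."""
    scores = kpca.transform(X)                     # projections on unit eigenvectors
    K_fit = rbf_kernel(X_fit, gamma=gamma)
    K_x = rbf_kernel(X, X_fit, gamma=gamma)
    # squared norm of the centred feature map:
    # k(x,x) - (2/n) sum_i k(x,x_i) + (1/n^2) sum_ij k(x_i,x_j), with k(x,x)=1 for RBF
    norm2 = 1.0 - 2.0 * K_x.mean(axis=1) + K_fit.mean()
    return norm2 - (scores ** 2).sum(axis=1)

gamma = 0.05
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 10))               # placeholder "normal operation" data
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=gamma).fit(X_train)

spe_train = kpca_spe(kpca, X_train, X_train, gamma)
limit = np.quantile(spe_train, 0.99)               # empirical 99% control limit
spe_new = kpca_spe(kpca, X_train, X_train + 0.5, gamma)
print("faults flagged:", int((spe_new > limit).sum()))
```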

16.
This paper presents a method for solving the inverse mapping of a continuous function learned by a multilayer feedforward mapping network. The method is based on iteratively updating the input vector toward a solution while escaping from local minima. The input vector update is determined by the pseudo-inverse of the gradient of a Lyapunov function and, when an optimal solution is sought, by the projection of the gradient of a performance index onto the null space of the gradient of the Lyapunov function. The update rule can detect an input vector approaching local minima through a phenomenon called "update explosion". At or near local minima, the input vector is either guided by an escape trajectory generated from "global information", where global information refers to predefined or known information on the forward mapping, or relocated to a new position based on a probability density function (PDF) constructed over the input vector space by a Parzen estimate. The constructed PDF reflects the history of local minima detected during the search process and represents the probability that a particular input vector can lead to a solution under the update rule. The proposed method has a substantial advantage in computational complexity as well as convergence properties over conventional methods based on the Jacobian pseudo-inverse or Jacobian transpose.
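A simplified sketch of inverting a trained forward mapping by iteratively updating the input: plain gradient descent on the squared output error stands in for the paper's pseudo-inverse/Lyapunov update and its local-minima escape machinery, and the tiny network below is a random placeholder rather than a trained one.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)     # stand-in for trained weights
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def invert(y_target, x0, lr=0.05, n_iter=2000):
    x = x0.copy()
    for _ in range(n_iter):
        h = np.tanh(W1 @ x + b1)
        err = (W2 @ h + b2) - y_target
        # backpropagate the error to the *input* (weights stay fixed)
        grad_x = W1.T @ ((W2.T @ err) * (1.0 - h ** 2))
        x -= lr * grad_x
    return x

y_target = np.array([0.3])
x_sol = invert(y_target, rng.normal(size=2))
print(forward(x_sol), "target:", y_target)
```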

17.
Fuzzy kernel perceptron (total citations: 12; self-citations: 0; citations by others: 0)
A new learning method, the fuzzy kernel perceptron (FKP), which incorporates the fuzzy perceptron (FP) and Mercer kernels, is proposed in this paper. The proposed method first maps the input data into a high-dimensional feature space using an implicit mapping function. Then the FP is adopted to find a linear separating hyperplane in the high-dimensional feature space. Compared with the FP, the FKP is more suitable for solving linearly nonseparable problems. In addition, it is also more efficient than the kernel perceptron (KP). Experimental results show that the FKP has better classification performance than the FP, the KP, and the support vector machine.
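A minimal kernel-perceptron sketch with per-sample fuzzy memberships scaling the updates, to convey the flavour of combining the fuzzy perceptron with Mercer kernels; it is not the exact FKP update rule, and the memberships, kernel, and data are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def fuzzy_kernel_perceptron(X, y, membership, gamma=0.5, n_epochs=20):
    """Kernel perceptron whose updates are scaled by fuzzy memberships.

    y in {-1, +1}; membership in (0, 1] downweights unreliable/overlapping samples.
    """
    K = rbf_kernel(X, gamma=gamma)
    alpha = np.zeros(len(X))
    for _ in range(n_epochs):
        mistakes = 0
        for i in range(len(X)):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += membership[i]      # smaller step for fuzzy/outlying points
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

def predict(alpha, X_train, y_train, X_new, gamma=0.5):
    K = rbf_kernel(X_new, X_train, gamma=gamma)
    return np.sign(K @ (alpha * y_train))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)         # a nonlinearly separable toy problem
m = np.full(len(X), 0.9)                           # uniform memberships as a placeholder
alpha = fuzzy_kernel_perceptron(X, y, m)
print((predict(alpha, X, y, X) == y).mean())       # training accuracy
```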

18.
Kernel selection directly affects the performance of kernel methods. Existing Gaussian kernel selection methods have computational complexity Ω(n²), which hinders the development of large-scale kernel methods. This paper proposes a linearity property-testing approach to Gaussian kernel selection; unlike traditional kernel selection methods, its query complexity is O(ln(1/δ)/ε²) and its computational complexity is independent of the sample size. The paper first defines the ε-linearity level of a function and proves that it can be used to approximately measure the distance between a function and the class of linear functions; on this basis, a linearity property-testing criterion for Gaussian kernel selection is proposed. The criterion is then applied to efficiently evaluate and select Gaussian kernels in the random Fourier feature space. Theoretical analysis and experiments show that Gaussian kernel selection via property testing is effective and feasible.
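The random Fourier feature space mentioned above can be built explicitly; a small sketch approximating the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2) (the property-testing criterion itself is not reproduced, and all sizes are placeholders):

```python
import numpy as np

def random_fourier_features(X, gamma, n_features=500, seed=0):
    """Map X so that z(x) . z(y) approximates exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_features, d))   # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)                 # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, gamma=0.5)
approx = Z @ Z.T
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.max(np.abs(approx - exact)))          # shrinks as n_features grows
```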

19.
To improve the classification accuracy of hyperspectral remote sensing images and make full use of their spectral and local information, this paper proposes a feature extraction method for hyperspectral imagery based on wavelet-kernel local Fisher discriminant analysis. A wavelet kernel function maps the data set from the low-dimensional original space into a high-dimensional feature space; to account for the local information of the data, weighted matrices are used to compute the scatter matrices, and the optimal feature matrix is obtained by solving the local Fisher discriminant criterion, so that samples of different classes become more separable in the high-dimensional feature space. Experiments on two public hyperspectral data sets show that the method improves both overall classification accuracy and the Kappa coefficient.

20.
A new nonlinear dimensionality reduction method called kernel global–local preserving projections (KGLPP) is developed and applied for fault detection. KGLPP has the advantage of preserving global and local data structures simultaneously. Kernel principal component analysis (KPCA), which only preserves the global Euclidean structure of the data, and kernel locality preserving projections (KLPP), which only preserves the local neighborhood structure of the data, are unified in the KGLPP framework; both can easily be derived from KGLPP by choosing particular parameter values. As a result, KGLPP is more powerful than KPCA and KLPP in capturing useful data characteristics. A KGLPP-based monitoring method is proposed for nonlinear processes, in which T² and SPE statistics are constructed in the feature space for fault detection. Case studies on a nonlinear system and on the Tennessee Eastman process demonstrate that the KGLPP-based method significantly outperforms KPCA-, KLPP-, and GLPP-based methods, in terms of higher fault detection rates and better fault sensitivity.
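For reference, a generic sketch of the T² monitoring statistic computed on latent scores with an empirical control limit; this is not KGLPP itself, just the usual construction that such methods apply to their scores, and all data are placeholders.

```python
import numpy as np

def t2_statistic(scores_train, scores_new):
    """Hotelling-style T^2 on latent scores: t' S^{-1} t per sample."""
    mean = scores_train.mean(axis=0)
    cov = np.cov(scores_train - mean, rowvar=False)
    cov_inv = np.linalg.pinv(np.atleast_2d(cov))
    centred = scores_new - mean
    return np.einsum("ij,jk,ik->i", centred, cov_inv, centred)

rng = np.random.default_rng(0)
scores_train = rng.normal(size=(200, 4))           # placeholder latent scores
limit = np.quantile(t2_statistic(scores_train, scores_train), 0.99)
t2_new = t2_statistic(scores_train, scores_train + 1.0)
print("alarms:", int((t2_new > limit).sum()))
```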
