Similar Documents
1.
Kernel discriminant stochastic neighbor embedding analysis
To address the discrimination-efficiency and out-of-sample problems in nonlinear feature extraction, preserve as much of the observed information as possible, and further improve the dimensionality-reduction performance of related methods, kernel learning is applied to discriminant stochastic neighbor embedding, yielding a kernel discriminant stochastic neighbor embedding analysis method. By introducing a kernel function, samples in the original space are mapped into a high-dimensional kernel space, where a joint-probability expression reflecting the similarity between same-class and different-class data is constructed. On this basis, a linear projection matrix is introduced to generate the corresponding subspace data. Finally, an objective functional is built under the criterion of minimizing the within-class Kullback-Leibler (KL) divergence while maximizing the between-class KL divergence. The method emphasizes the feature differences between samples of different classes, making the samples linearly separable and thereby improving classification performance. Experiments on the COIL-20 image database and the classic ORL and Yale face databases verify the discriminative ability of the proposed method.
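The kernel-space probability construction described above can be sketched as follows; the Gaussian kernel, the `gamma` value, and the uniform normalization are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian kernel over all sample pairs.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def joint_probabilities(K):
    # Symmetric joint probabilities over pairs (i != j), as in stochastic
    # neighbor embedding, but with similarities measured in kernel space.
    P = K.copy()
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

X = np.random.RandomState(0).randn(8, 3)
P = joint_probabilities(rbf_kernel(X, gamma=0.5))
```

The resulting matrix is a valid joint distribution over sample pairs, on which within-class and between-class KL objectives can then be defined.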

2.
Because inner products cannot faithfully express the relationships between data points, classification results can be inconsistent. A support vector machine kernel-function algorithm based on fractal interpolation is therefore proposed. The sample data are preprocessed so that their norms lie between 0 and 1; distances between training samples are computed with the squared norm, and same-class and different-class data are distinguished by a 0-1 rule. New data are ordered by their distances to existing data, minimum and maximum separating distances between same-class and different-class labels are established, and in the space where the two label types overlap, an iterated function system and a fractal interpolation function are built by fractal interpolation. Experimental results show that the algorithm effectively increases the separability of the overlap region and reduces within-class differences, thereby improving classification accuracy.

3.
To overcome the limitations of a single kernel function in support vector machines (SVMs), mixed kernel functions are often used for prediction, but the weights of the component functions are difficult to determine. To solve this problem, a weight-solving method based on feature distance is proposed. Using the geometric interpretation of SVMs, an optimization function is derived from the principles of minimizing the feature distance between same-class samples and maximizing the feature distance between different-class samples; solving this optimization function yields the weight coefficients. Experimental results show that, compared with traditional cross-validation and PSO algorithms, the method reduces computation time by about 70% while maintaining prediction accuracy.
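A minimal sketch of the mixed-kernel setup whose weights such a method optimizes; the RBF/polynomial pair and their parameters are assumptions for illustration, not the paper's specific choice:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def mixed_kernel(X, Y, w, gamma=0.5, degree=2, c=1.0):
    # Convex combination of two base kernels; the paper's contribution is
    # choosing w from feature distances instead of cross-validation or PSO.
    return w * rbf_kernel(X, Y, gamma) + (1.0 - w) * poly_kernel(X, Y, degree, c)
```

With `w` fixed, the mixed Gram matrix plugs into any standard SVM solver unchanged.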

4.
Kernel function selection is a hot and difficult topic in support vector machine research. Most current selection methods rely on validation and seldom consider the distribution characteristics of the data, leaving the information implicit in the data underused. A kernel selection method that exploits sample distribution characteristics is therefore proposed: the sample distribution is analyzed first, and a suitable kernel is then chosen according to the geometric measure the kernel embodies, so that the feature space obtained by nonlinearly mapping the samples becomes more linearly separable, improving both separability and predictive power. Simulation results show that the proposed method provides effective guidance for SVM kernel selection and also improves generalization; the scheme is feasible and effective.

5.
Subspace learning is an important research direction in feature extraction. It maps the original data into a low-dimensional subspace through a linear or nonlinear transformation while preserving, as far as possible, the geometric structure and useful information of the original data. Improving its performance depends mainly on how similarity relations are measured and how the graph for feature embedding is constructed. Addressing these two problems, this paper proposes a kernel-preserving embedding based subspace learning algorithm (KESL), which adaptively learns the similarity information between data points and a kernel-preserving graph via self-representation. First, to address the inability of traditional dimensionality-reduction methods to uncover the internal structure of high-dimensional nonlinear data, a kernel function is introduced and the reconstruction error of the samples is minimized to constrain the optimal representation coefficients, with the aim of mining structural relations useful for classification. Then, since most existing graph-based subspace learning methods consider only within-class similarity, the learned similarity matrix is used to build within-class and between-class graphs, so that in the projected subspace the kernel-preserving relations of same-class samples are strengthened and those between different-class samples are further suppressed. Finally, joint optimization of the kernel-preserving matrix and the graph embedding dynamically solves for the subspace projection under the optimal representation. Experiments on multiple datasets show that the proposed algorithm outperforms mainstream subspace learning algorithms on classification tasks.
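The self-representation step admits a simple closed-form sketch when a ridge regularizer stands in for whatever constraints the paper actually uses; the objective expands entirely in terms of the Gram matrix:

```python
import numpy as np

def kernel_self_representation(K, lam=0.1):
    # Ridge-regularized self-expression in the induced feature space:
    #   min_C ||Phi(X) - Phi(X) C||_F^2 + lam ||C||_F^2
    # expands via the Gram matrix K = Phi(X)' Phi(X) and has the
    # closed-form solution C = (K + lam I)^{-1} K.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), K)
```

The learned coefficient matrix `C` can then serve as the similarity matrix from which within-class and between-class graphs are built.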

7.
Based on the idea of constructive kernel covering learning, a constructive kernel covering clustering algorithm is proposed. The samples to be classified are first mapped from the original space into a high-dimensional feature space in which they become linearly separable; constructive covering is then performed in the kernel space to build a set of covering neighborhoods that separate dissimilar samples and group similar ones. By defining a suitable similarity measure and objective function, clustering is achieved. Simulation experiments verify the effectiveness and feasibility of the method.

8.
于文勇  康晓东  葛文杰  王昊 《计算机科学》2015,42(3):307-310, 320
An image classification method combining feature fields with a fuzzy kernel-clustering support vector machine is proposed. First, color and texture feature data fields consistent with human visual characteristics are constructed: a new threshold is introduced to build the image texture features, and, for the color features, the pixels in attention-attracting regions are weighted, with the spatial dispersion of color used to describe its spatial distribution. Second, a fuzzy kernel-clustering support vector machine is used for classification. In the feature space, not only the relation between each sample and the class center but also the relations among samples within a class are considered, measured by fuzzy connectedness, and the sub-classifiers are organized as a binary tree. Experimental results show that the method achieves good image classification performance.

9.
SVM face recognition based on conditionally positive definite kernels
To improve the capability of face recognition classifiers, an improved kernel usable in kernel learning methods is adopted: the conditionally positive definite kernel. Such kernels generally do not satisfy Mercer's condition, yet distances between samples can still be computed in the kernel space, highlighting the feature differences between samples. Simulations on the ORL, YALE, and ESSEX standard face databases show that, with no reduction in training time, the SVM face recognition algorithm based on conditionally positive definite kernels achieves a considerably higher recognition rate than other kernel methods and remains robust as the number of classes grows.
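A classic example of a conditionally positive definite kernel of the kind the abstract refers to is the negative distance kernel; the paper's exact kernel is not specified here, so this is illustrative:

```python
import numpy as np

def cpd_kernel(X, Y, beta=1.0):
    # The negative distance kernel k(x, y) = -||x - y||^beta, 0 < beta <= 2,
    # is conditionally positive definite: it violates Mercer's condition
    # (it is not PSD), yet distances between mapped samples remain well
    # defined, so SVM-style classifiers can still use it.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return -np.maximum(d2, 0.0) ** (beta / 2.0)
```

For `beta = 2` this reduces (up to inner-product terms) to the linear kernel, while smaller `beta` values emphasize local differences between samples.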

10.
徐鲲鹏  陈黎飞  孙浩军  王备战 《软件学报》2020,31(11):3492-3505
Most existing subspace clustering methods for categorical data assume that features are mutually independent, ignoring linear or nonlinear correlations among attributes. A kernel subspace clustering method for categorical data is proposed. First, kernel functions originally designed for continuous data are introduced to project categorical data into a kernel space, and a feature-weighted similarity measure for categorical data in that space is defined. Next, an objective function for categorical-data kernel subspace clustering is derived from this measure, along with an efficient optimization method for solving it. Finally, a categorical-data kernel subspace clustering algorithm is defined. The algorithm not only accounts for relations among attributes in a nonlinear space but also assigns each attribute a feature weight measuring its relevance to each cluster during clustering, achieving embedded feature selection for categorical attributes. A clustering validity index is also defined to evaluate the quality of categorical-data clustering results. Experiments on synthetic and real datasets show that, compared with existing subspace clustering algorithms, the kernel subspace clustering algorithm can uncover nonlinear relations among categorical attributes and effectively improves clustering quality.

11.
Nonlinear kernel-based feature extraction algorithms have recently been proposed to alleviate the loss of class discrimination after feature extraction. When considering image classification, a kernel function may not be sufficiently effective if it depends only on an information resource from the Euclidean distance in the original feature space. This study presents an extended radial basis kernel function that integrates multiple discriminative information resources, including the Euclidean distance, spatial context, and class membership. The concepts related to Markov random fields (MRFs) are exploited to model the spatial context information existing in the image. Mutual closeness in class membership is defined as a similarity measure with respect to classification. Any dissimilarity from the additional information resources will improve the discrimination between two samples that are only a short Euclidean distance apart in the feature space. The proposed kernel function is used for feature extraction through linear discriminant analysis (LDA) and principal component analysis (PCA). Experiments with synthetic and natural images show the effectiveness of the proposed kernel function with application to image classification.
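One way to realize such an integration is a weighted composite of RBF terms over the feature-space and spatial distances; the mixing weight and bandwidths below are assumptions, and the class-membership and MRF terms are omitted for brevity:

```python
import numpy as np

def extended_rbf(feat_a, feat_b, pos_a, pos_b,
                 gamma_f=1.0, gamma_s=0.1, alpha=0.5):
    # Illustrative composite kernel mixing a feature-space RBF with a
    # spatial-context RBF over pixel coordinates; alpha and the two
    # bandwidths are hypothetical parameters, not the paper's definition.
    df = np.sum((feat_a - feat_b) ** 2)   # Euclidean term in feature space
    ds = np.sum((pos_a - pos_b) ** 2)     # spatial-context term
    return alpha * np.exp(-gamma_f * df) + (1.0 - alpha) * np.exp(-gamma_s * ds)
```

Because each term is itself a valid kernel, the convex combination remains positive definite and can drive kernel LDA or kernel PCA directly.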

12.
Feature selection based on a kernel-space distance measure
A kernel-space distance measure is proposed as a separability criterion. The distance between samples of two classes is computed in the kernel space, and the magnitude of this distance is used to evaluate the classification performance of a feature subset. With sequential forward selection as the search algorithm, tests on artificial and real datasets show that the proposed criterion clearly outperforms traditional non-kernel separability criteria, is better than or comparable to Wang's kernel scatter-matrix measure, and runs an order of magnitude faster. Applied to the classification of endoscopic ultrasound images of the pancreas, the method achieves good classification results.
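The key identity behind such a criterion is that distances in the induced feature space are computable from kernel evaluations alone; a sketch with an RBF kernel and a simplified between-class score (the paper's exact measure may differ):

```python
import numpy as np

def kernel_distance2(x, y, k):
    # Squared distance between phi(x) and phi(y) in the induced space:
    # ||phi(x) - phi(y)||^2 = k(x, x) - 2 k(x, y) + k(y, y).
    return k(x, x) - 2.0 * k(x, y) + k(y, y)

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def class_separability(Xa, Xb, k):
    # Average kernel-space distance between two classes; a feature subset
    # scoring higher is considered more separable (illustrative criterion).
    return np.mean([kernel_distance2(a, b, k) for a in Xa for b in Xb])
```

Sequential forward selection then simply adds, at each step, the feature whose inclusion maximizes this score.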

13.
张成  李娜  李元  逄玉俊 《计算机应用》2014,34(10):2895-2898
To address the empirical choice of the Gaussian kernel parameter β in kernel principal component analysis (KPCA), a discriminant selection method for the kernel parameter is proposed. Within-class and between-class kernel widths are computed from the class labels of the training samples, and the kernel parameter is determined among these widths by discriminant selection. The kernel matrix determined by the selected parameter accurately describes the structural characteristics of the training space. Principal component analysis (PCA) then decomposes the feature space, extracting principal components for dimensionality reduction and feature extraction. The discriminant kernel-width method chooses smaller widths in densely classified regions and larger widths in sparse regions. Discriminant KPCA (Dis-KPCA) is applied to a simulated-data example and the Tennessee Eastman process (TEP); comparisons with KPCA and PCA show that Dis-KPCA effectively reduces the dimensionality of the sample data and separates the three classes of data 100%, so the proposed method achieves higher dimensionality-reduction accuracy.
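The within- and between-class widths can be sketched as mean pairwise distances grouped by label; the discriminant rule that then picks among these candidates is simplified away here, so treat this as an assumption-laden reading of the selection step:

```python
import numpy as np

def candidate_widths(X, y):
    # Mean pairwise distance within classes and between classes serve as
    # candidate Gaussian widths for the KPCA kernel; the paper's discriminant
    # step then chooses among such candidates (not reproduced here).
    within, between = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j])
            (within if y[i] == y[j] else between).append(d)
    return np.mean(within), np.mean(between)
```

On well-separated classes the between-class width exceeds the within-class width, which is what makes a label-aware choice of β informative.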

14.
To address problems with existing local tangent space algorithms, this paper proposes a kernel-based feature extraction method: kernel orthogonal discriminant local tangent space alignment (KOTSDA). The algorithm first uses a kernel method to project face images into a high-dimensional nonlinear space and extract their nonlinear information; then, in the objective function, tangent-space discriminant analysis maximizes between-class differences while preserving the within-class local geometric structure of the samples; finally, an orthogonality constraint is added, yielding the kernel orthogonal discriminant local tangent space alignment algorithm. The algorithm requires no PCA dimensionality reduction, effectively avoiding the loss of discriminant information. Experiments on the ORL and Yale face databases verify its effectiveness.

15.
Kernel sparse representation based classification
Sparse representation has attracted great attention in the past few years. The sparse representation based classification (SRC) algorithm was developed and successfully used for classification. In this paper, a kernel sparse representation based classification (KSRC) algorithm is proposed. Samples are first mapped into a high-dimensional feature space, and SRC is then performed in this new feature space by utilizing the kernel trick. Since the samples in the high-dimensional feature space are unknown, KSRC cannot be performed directly; to overcome this difficulty, we give a method for solving the sparse representation problem in the high-dimensional feature space. If an appropriate kernel is selected, a test sample in the high-dimensional feature space can probably be represented more accurately as a linear combination of training samples of the same class. Therefore, KSRC has more powerful classification ability than SRC. Experiments on face recognition, palmprint recognition and finger-knuckle-print recognition demonstrate the effectiveness of KSRC.
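Expanding ||φ(y) − Φa||² via the Gram matrix turns the sparse coding problem into one over kernel evaluations only, which is the trick that makes KSRC computable. A minimal ISTA solver for that kernelized problem (the paper's actual optimizer and parameters may differ):

```python
import numpy as np

def ksrc_code(K, ky, lam=0.01, lr=0.01, iters=500):
    # ISTA sketch for the kernelized sparse coding problem
    #   min_a  a' K a - 2 a' ky + lam ||a||_1,
    # where K = Phi' Phi is the training Gram matrix and
    # ky[i] = k(x_i, y) collects kernel values against the test sample.
    a = np.zeros(len(ky))
    for _ in range(iters):
        grad = 2.0 * (K @ a - ky)          # gradient of the smooth part
        a = a - lr * grad
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # soft threshold
    return a
```

Classification then proceeds as in SRC: assign the test sample to the class whose coefficients yield the smallest (kernel-space) reconstruction residual.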

16.
A novel ant-based clustering algorithm using the kernel method
A novel ant-based clustering algorithm integrated with the kernel (ACK) method is proposed. There are two aspects to the integration. First, kernel principal component analysis (KPCA) is applied to modify the random projection of objects when the algorithm is run initially. This projection can create rough clusters and improve the algorithm's efficiency. Second, ant-based clustering is performed in the feature space rather than in the input space. The distance between objects in the feature space, calculated by applying the kernel function to the object vectors in the input space, is used as the similarity measure. The algorithm uses an ant movement model in which each object is viewed as an ant that determines its movement according to the fitness of its local neighbourhood. The proposed algorithm incorporates the merits of kernel-based clustering into ant-based clustering. Comparisons with other classic algorithms on several synthetic and real datasets demonstrate that the ACK method exhibits high performance in terms of efficiency and clustering quality.

17.
An important issue involved in kernel methods is the pre-image problem. However, it is an ill-posed problem, as the solution is usually nonexistent or not unique. In contrast to direct methods aimed at minimizing the distance in feature space, indirect methods aimed at constructing approximate equivalent models have shown outstanding performance. In this paper, an indirect method for solving the pre-image problem is proposed. In the proposed algorithm, an inverse mapping process is constructed based on a novel framework that preserves local linearity. In this framework, a local nonlinear transformation is implicitly conducted by neighborhood subspace scaling transformation to preserve the local linearity between feature space and input space. By extending the inverse mapping process to test samples, we can obtain pre-images in input space. The proposed method is non-iterative, and can be used with any kernel function. Experimental results based on image denoising using kernel principal component analysis (KPCA) show that the proposed method outperforms the state-of-the-art methods for solving the pre-image problem.

18.
This paper proposes a new nonlinear feature extraction method: hidden-space feature extraction based on the scatter-difference criterion. The main idea is to first map the original input space nonlinearly into a hidden space using a kernel function, and then, in that hidden space, perform feature extraction using the difference between the between-class scatter and the within-class scatter as the discriminant criterion. Unlike existing kernel feature extraction methods, this method does not require the kernel to satisfy Mercer's theorem, widening the range of admissible kernels. More importantly, because the scatter difference is used as the criterion, the small-sample-size problem encountered by traditional Fisher linear discriminant analysis is fundamentally avoided. Experiments on the ORL face database and the AR standard face database verify the effectiveness of the method.
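The scatter-difference criterion J(w) = w'(Sb − Sw)w needs only an eigendecomposition, never the inverse of Sw, which is why the small-sample problem disappears. A sketch on data assumed to be already mapped into the hidden space:

```python
import numpy as np

def scatter_difference_direction(X, y):
    # Discriminant direction maximizing w'(Sb - Sw)w. Using the scatter
    # *difference* avoids inverting Sw, so the criterion stays defined
    # even when Sw is singular (the small-sample case). X is assumed to
    # already be the (empirical) hidden-space representation.
    classes = np.unique(y)
    m = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - m, mc - m)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)
    return vecs[:, -1]  # eigenvector with the largest eigenvalue
```

On two classes separated along one axis, the returned direction aligns with that axis even when Sw is rank-deficient.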

19.
Kernel Fisher discriminant analysis (KFDA) extracts a nonlinear feature from a sample by calculating as many kernel functions as there are training samples. Thus, its computational efficiency is inversely proportional to the size of the training sample set. In this paper we propose a more efficient approach to nonlinear feature extraction, FKFDA (fast KFDA). FKFDA consists of two parts. First, we select a portion of the training samples based on two criteria produced by approximating kernel principal component analysis (AKPCA) in the kernel feature space. Then, referring to the selected training samples as nodes, we formulate FKFDA to improve the efficiency of nonlinear feature extraction. In FKFDA, the discriminant vectors are expressed as linear combinations of nodes in the kernel feature space, and extracting a feature from a sample requires calculating only as many kernel functions as there are nodes. Therefore, the proposed FKFDA has a much faster feature extraction procedure than naive kernel-based methods. Experimental results on face recognition and benchmark classification datasets suggest that the proposed FKFDA generates well-classified features.

20.
Pruning least objective contribution in KMSE
Although kernel minimum squared error (KMSE) is computationally simple, i.e., it only requires solving a linear equation set, it suffers from the drawback that in the testing phase its computational efficiency decreases seriously as the number of training samples increases. The underlying reason is that the solution of naive KMSE is represented by all the training samples in the feature space. Hence, in this paper, a method of selecting significant nodes for KMSE is proposed. In each round, the presented algorithm prunes the training sample making the least contribution to the objective function, and is hence called PLOC-KMSE. To accelerate the training procedure, a batch of so-called nonsignificant nodes is pruned at a time instead of one by one; this speedup algorithm is named MPLOC-KMSE for short. To show the efficacy and feasibility of the proposed PLOC-KMSE and MPLOC-KMSE, experiments on benchmark datasets and real-world instances are reported. The experimental results demonstrate that PLOC-KMSE and MPLOC-KMSE require the fewest significant nodes compared with other algorithms; that is, their computational efficiency in the testing phase is the best, making them suitable for environments with strict demands on computational efficiency. In addition, the experiments show that MPLOC-KMSE accelerates the training procedure without sacrificing the computational efficiency of the testing phase, reaching almost the same generalization performance. Finally, although PLOC and MPLOC are proposed in the regression domain, they can easily be extended to classification problems and other algorithms such as kernel ridge regression.
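Naive KMSE's training step is a single regularized linear solve, which is exactly why its test-time cost scales with the full training set. A minimal sketch (bias term omitted; the regularizer `lam` is an assumption):

```python
import numpy as np

def kmse_train(K, y, lam=1e-3):
    # Naive KMSE: the coefficient vector solves (K + lam I) alpha = y,
    # so every training sample contributes one kernel term at test time --
    # the cost that PLOC-KMSE reduces by pruning low-contribution samples.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def kmse_predict(k_test, alpha):
    # k_test: kernel values between a test sample and the training samples.
    return k_test @ alpha
```

Pruning a node simply drops one row/column of `K` and one entry of `alpha`, shrinking the per-sample prediction cost accordingly.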


Copyright©北京勤云科技发展有限公司  京ICP备09084417号