Similar Documents
20 similar documents found.
1.
A metric, also called a distance function, is a special function on a metric space that satisfies certain conditions and is generally used to capture important distance relationships among data points. Since distances strongly affect classification and clustering problems, metric learning has a major impact on these machine learning tasks. Under the various kinds of noise present in real data, existing metric learning algorithms often suffer from low and unstable classification accuracy. To address this, this paper proposes a robust metric learning algorithm based on the maximum correntropy criterion. The core of the maximum correntropy criterion is the Gaussian kernel function; we introduce it into metric learning by building a loss function centered on the Gaussian kernel, optimize it by gradient descent, and tune the parameters through repeated testing to obtain the output metric matrix. A metric matrix learned this way is more robust and effectively improves classification accuracy on noise-corrupted classification problems. Validation experiments are conducted on common machine learning (UCI) datasets and on face datasets.
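The robustness mechanism above comes from bounding each pair's influence with a Gaussian kernel. The sketch below is an illustrative reconstruction of such a correntropy-induced loss for a metric matrix, not the paper's exact algorithm; the margin value and pair encoding are assumptions.

```python
import numpy as np

def correntropy_loss(M, pairs, labels, sigma=1.0):
    """Correntropy-induced loss for a metric matrix M (illustrative sketch).

    pairs  : array of shape (n, 2, d) holding sample pairs (x_i, x_j)
    labels : +1 for same-class pairs, -1 for different-class pairs
    The Gaussian kernel exp(-e^2 / (2 sigma^2)) saturates for large errors,
    so noisy pairs contribute a bounded amount -- the source of robustness.
    """
    loss = 0.0
    for (x, y), s in zip(pairs, labels):
        diff = x - y
        d2 = diff @ M @ diff                 # squared Mahalanobis distance
        target = 0.0 if s > 0 else 4.0       # hypothetical margin for impostor pairs
        e = d2 - target
        loss += 1.0 - np.exp(-e * e / (2 * sigma ** 2))
    return loss / len(labels)
```

Minimizing this loss by gradient descent over positive semidefinite M would follow the optimization scheme the abstract describes.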

2.
Research on Gaussian Wavelet Support Vector Machines   (cited 1 time: 0 self-citations, 1 external)
It is proved that even-order Gaussian wavelet functions satisfy the translation-invariant kernel condition of support vector machines. A corresponding Gaussian wavelet support vector machine is built with the wavelet kernel, and a cloud genetic algorithm is used to optimize the parameters of the SVM and its kernel. The algorithm is compared experimentally with SVMs using the common Gaussian kernel and the Morlet wavelet kernel. Nonlinear function approximation and short-term power-system load forecasting verify the effectiveness and superiority of the algorithm and show its practical value.

3.
The Gaussian kernel function implicitly defines the feature space of an algorithm and plays an essential role in the application of kernel methods. The parameter of the Gaussian kernel function is a scalar that has significant influence on final results. However, until now, it has remained unclear how to choose an optimal kernel parameter. In this paper, we propose a novel data-driven method to optimize the Gaussian kernel parameter, which depends only on the original dataset distribution and yields a simple solution to this complex problem. The proposed method is task irrelevant and can be used in any Gaussian kernel-based approach, including supervised and unsupervised machine learning. Simulation experiments demonstrate the efficacy of the proposed method. A user-friendly online calculator is implemented at: www.csbio.sjtu.edu.cn/bioinf/kernel/ for public use.
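The paper's own data-driven rule is not reproduced here; as an illustration of a heuristic in the same spirit, which depends only on the data distribution and not on any task or labels, the widely used median heuristic sets the kernel width from pairwise distances:

```python
import numpy as np

def median_heuristic_sigma(X):
    """Set the Gaussian kernel width to the median pairwise distance.

    Uses only the dataset's own geometry -- no labels, no downstream task --
    matching the design goal of a data-driven, task-irrelevant parameter.
    """
    n = X.shape[0]
    dists = [np.linalg.norm(X[i] - X[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.median(dists))

def gaussian_kernel(x, y, sigma):
    """Standard Gaussian (RBF) kernel with width sigma."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
```

The resulting sigma can then be plugged into any Gaussian kernel-based method, supervised or unsupervised.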

4.
To study the differences among the individual features of samples in least squares support vector machines (LSSVM), a multi-parameter Gaussian kernel is introduced. Building on an analysis of the geometric meaning of kernel polarization, a feature selection algorithm is proposed that iteratively optimizes the multi-parameter Gaussian kernel via the kernel-polarization gradient. The gradient iteration measures the importance of each feature in the samples; features are then selected for the LSSVM by importance, and the LSSVM is trained and tested on the selected feature subset. The method is called KP_LSSVM. Experimental results on UCI datasets show that, compared with PCA_LSSVM, KPCA_LSSVM, and LSSVM, the proposed method achieves more accurate classification, verifying its effectiveness.

5.
It is widely recognized that whether the selected kernel matches the data determines the performance of kernel-based methods. Ideally, the data should be linearly separable in the kernel-induced feature space, so the Fisher linear discriminant criterion can be used as a cost function to optimize the kernel function. In many applications, however, the data may not be linearly separable even after the kernel transformation, for example when it has a multimodal structure; in that case a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable kernel optimization rule. Motivated by this issue, we propose a localized kernel Fisher criterion, instead of the traditional Fisher criterion, as the kernel optimization rule, increasing the local margins between embedded classes in the kernel-induced feature space. Experimental results on benchmark data and on measured radar high-resolution range profile (HRRP) data show that the proposed method improves classification performance.

6.
The generalization performance of support vector machines (SVMs) with a Gaussian kernel is influenced by the model parameters: the error penalty parameter and the Gaussian kernel parameter. After studying, via a parameter analysis table, how Gaussian-kernel SVMs behave when the two parameters vary simultaneously, a new area distribution model is proposed, consisting of an optimal straight line, reference points on area boundaries, and optimal, transition, underfitting, and overfitting areas. To improve the classification performance of support vector machines, a genetic algorithm based on change-area search is proposed. Comparison experiments show that the test accuracy of the genetic algorithm based on change-area search is better than that of the two-linear search method.

7.
Accurate control chart pattern recognition (CCPR) plays an essential role in the implementation of control charts. However, it is a challenging problem since nonrandom control chart patterns (CCPs) are normally distorted by "common process variations". In this paper, a novel CCPR method integrating a fuzzy support vector machine (SVM) with a hybrid kernel function and a genetic algorithm (GA) is proposed. First, two shape features and two statistical features that do not depend on the distribution parameters or number of samples are presented to explicitly describe the characteristics of CCPs. Then, a novel multiclass method based on fuzzy SVM with a hybrid kernel function is proposed. In this method, the influence of outliers on the classification accuracy of SVM-based classifiers is weakened by assigning a degree of membership to every training sample. Meanwhile, a hybrid kernel function combining a Gaussian kernel and a polynomial kernel is adopted to further enhance the generalization ability of the classifiers. To solve the issue of feature selection and parameter optimization, a GA is used to simultaneously optimize the input feature subsets and the parameters of the fuzzy SVM-based classifier. Finally, several simulation experiments and a real example validate the feasibility and effectiveness of the proposed methodology. The simulation results demonstrate that it achieves excellent CCPR performance and outperforms other approaches, such as learning vector quantization networks, multi-layer perceptron networks, probabilistic neural networks, fuzzy clustering, and SVM, in terms of recognition accuracy. The practical cases show that the proposed method has application potential for control chart interpretation in real-world settings.
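A hybrid kernel of the kind described above can be sketched as a convex combination of a Gaussian (local) and a polynomial (global) kernel; the mixing weight and kernel parameters below are illustrative placeholders for the quantities the GA would search over, not values from the paper.

```python
import numpy as np

def hybrid_kernel(x, y, sigma=1.0, degree=2, c=1.0, lam=0.5):
    """Convex combination of a Gaussian and a polynomial kernel.

    A convex combination of positive semidefinite kernels is itself a
    valid kernel, so the mixture can be dropped into any SVM solver.
    lam trades local fit (Gaussian) against global smoothness (polynomial).
    """
    gauss = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
    poly = (np.dot(x, y) + c) ** degree
    return lam * gauss + (1.0 - lam) * poly
```

In a GA-tuned pipeline, sigma, degree, c, and lam would be encoded in the chromosome alongside the feature-subset mask.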

8.
To address the difficulty of extracting effective features in electronic-system fault diagnosis and the tedious selection of kernel functions and parameters in kernel attribute-reduction methods, a subspace feature extraction method based on a self-optimizing wavelet-kernel sparsity preserving projection is proposed. The kernel polarization criterion is improved so that the new criterion can handle multi-class information and preserve the local structural features among same-class samples. Taking the Mexican-hat wavelet kernel function as the target, an optimization objective is built on the improved kernel evaluation criterion, and particle swarm optimization is used to select the kernel parameters. The optimized wavelet kernel then serves as the kernel of the sparsity preserving projection, and effective features are finally extracted in the kernel subspace. Experimental results show that, compared with other manifold subspace feature extraction methods, the proposed method effectively improves classification accuracy and generalizes well.

9.
The choice of the Gaussian kernel parameter σ directly affects the classification performance of Gaussian-kernel support vector machines. By fusing a clustering method with minimum-distance classification, an optimization algorithm is constructed that effectively determines σ. A Gaussian-kernel SVM then classifies the test set, and classification accuracy is used to judge the quality of the selected σ. Experiments show that the method suits a wide range of data types, generalizes well, and effectively improves classification results.
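One minimal way to tie σ to cluster/minimum-distance geometry, in the spirit of the fusion described above (the scale factor and centroid rule are assumptions, not the paper's exact algorithm), is to set σ proportional to the smallest distance between class centroids:

```python
import numpy as np

def sigma_from_centroids(X, y, scale=0.5):
    """Pick sigma from class-centroid geometry (minimum-distance idea).

    sigma is set proportional to the smallest inter-centroid distance, so
    the Gaussian kernel can still resolve the closest pair of classes.
    'scale' is an illustrative factor, not taken from the paper.
    """
    centroids = [X[y == c].mean(axis=0) for c in np.unique(y)]
    d = [np.linalg.norm(a - b)
         for i, a in enumerate(centroids) for b in centroids[i + 1:]]
    return scale * min(d)
```

The resulting σ would then be scored, as in the abstract, by the test-set accuracy of the Gaussian-kernel SVM trained with it.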

10.
SVM Kernel Parameter Selection Based on the Fisher Criterion and the Maximum Entropy Principle   (cited 1 time: 0 self-citations, 1 external)
To address the difficulty of selecting kernel parameters for support vector machines (SVM), a kernel parameter selection method based on the Fisher criterion and the maximum entropy principle is proposed. First, starting from the principle of the SVM classifier, a standard for measuring the quality of SVM kernel parameters is proposed; then, under this standard, the Fisher criterion is used to select the kernel parameters, and the maximum entropy principle is introduced to further tune the selection performance. The whole model is optimized by particle swarm optimization (PSO). Experiments on standard UCI datasets show that the proposed method selects parameters well, and the chosen kernel parameters give the SVM high generalization performance.

11.
Classification with Support Vector Machines Optimized by the Artificial Bee Colony Algorithm   (cited 1 time: 0 self-citations, 1 external)
To improve SVM classification accuracy, the artificial bee colony (ABC) algorithm is used to optimize the SVM parameters, and the method is applied to recognizing three classes of wheat kernels: intact, moldy, and sprouted. The energies of the wavelet-transform decomposition of the signal are used as feature vectors, the reciprocal of the classification error rate is used as the fitness function, and the ABC algorithm optimizes the SVM penalty factor and kernel width. The optimized SVM classifies intact, moldy, and sprouted wheat kernels with over 86% accuracy. The results show strong practical value and provide a new approach to optimizing SVM performance.

12.
To solve the difficulty of parameter optimization in the traditional kernel extreme learning machine (KELM) and improve classification accuracy, a KELM algorithm with improved Bayesian optimization is proposed. The salp swarm algorithm is used to design the lower-confidence-bound strategy of the acquisition function in the Bayesian optimization framework, improving local search and optimization ability; the improved Bayesian optimization then tunes the KELM parameters, and the optimal parameters are used to build the KELM classifier. Simulation experiments on real UCI datasets…

13.
姜伟  毕婷婷  李克秋  杨炳儒 《软件学报》2015,26(7):1812-1823
Recent studies show that representing symmetric positive-definite (SPD) matrices as points on a Riemannian manifold yields better recognition performance in many computer vision tasks. However, most existing algorithms only approximate the Riemannian manifold locally by its tangent space and cannot characterize the sample distribution effectively. Inspired by kernel methods, a new Riemannian-kernel local linear coding method is proposed and successfully applied to visual classification. First, SPD matrices are mapped into a reproducing kernel Hilbert space via a recently proposed Riemannian kernel, and mathematical models for sparse coding and Riemannian dictionary learning are built using local linear coding theory. Second, a dictionary learning algorithm for Riemannian-kernel local linear coding is given using convex optimization. Finally, an iterative update algorithm is constructed to optimize the objective function, and a nearest-neighbor classifier discriminates the test samples. Experimental results on three visual classification datasets show a considerable improvement in classification accuracy.

14.
Neighborhood preserving embedding is a linear approximation of locally linear embedding that emphasizes preserving the local structure of the data manifold. The improved maximum margin criterion attends to the discriminative and geometric structure of the manifold and improves classification performance. The neighborhood-preserving maximum margin analysis with kernel ridge regression proposed here builds its objective function so as to both preserve the local structure of the manifold and keep a maximum margin between different classes. To handle highly nonlinear data manifolds, the algorithm computes the transformation matrix of the feature space by kernel ridge regression: the dimension-reduced mapping of the samples in the kernel subspace is solved first, and the kernel subspace is then obtained. Experiments on standard face databases show that the algorithm is correct and effective, and its recognition performance exceeds that of ordinary manifold learning algorithms.

15.
This paper presents a unified criterion, the Fisher + kernel criterion (FKC), for feature extraction and recognition. The new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm alleviates the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of the proposed algorithm is validated by a series of face-recognition experiments on several different databases.

16.
This paper presents a novel method for intensity normalization of DaTSCAN SPECT brain images. The proposed methodology is based on Gaussian mixture models (GMMs) and considers not only the intensity levels, but also the coordinates of voxels inside the so-defined spatial Gaussian functions. The model parameters are obtained according to a maximum likelihood criterion employing the expectation maximization (EM) algorithm. First, an averaged control subject image is computed to obtain a threshold-based mask that selects only the voxels inside the skull. Then, the GMM is obtained for the DaTSCAN-SPECT database, performing space quantization by populating it with Gaussian kernels whose linear combination approximates the image intensity. According to a probability threshold that measures the weight of each kernel or "cluster" in the striatum area, the voxels in the non-specific region are intensity-normalized by removing clusters whose likelihood is negligible.
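The maximum-likelihood/EM fitting step above can be sketched for the simplest case, a one-dimensional mixture over intensities only; the paper's full model also places the Gaussians over voxel coordinates, which this minimal sketch omits.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=50):
    """EM for a one-dimensional Gaussian mixture (intensities only).

    Means are initialized on a grid over the data range; each iteration
    alternates responsibilities (E-step) with maximum-likelihood parameter
    updates (M-step), exactly the EM scheme the abstract describes.
    """
    mu = np.linspace(x.min(), x.max(), k)        # spread initial means
    var = np.full(k, np.var(x) + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

The fitted component weights play the role of the per-cluster probabilities that the paper thresholds when normalizing the non-specific region.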

17.
Canonical support vector machines (SVMs) are based on a single kernel; recent publications have shown that using multiple kernels instead of a single one can enhance the interpretability of the decision function and improve classification accuracy. However, most existing approaches reformulate multiple kernel learning as a saddle-point optimization problem and concentrate on solving the dual. In this paper, we show that the multiple kernel learning (MKL) problem can be reformulated as a biconvex optimization and can also be solved in the primal. While the saddle-point method still lacks convergence results, our proposed method exhibits strong optimization convergence properties. To solve the MKL problem, a two-stage algorithm that alternately optimizes canonical SVMs and kernel weights is proposed. Since standard Newton and gradient methods are too time-consuming, we employ the truncated-Newton method to optimize the canonical SVMs. The Hessian matrix need not be stored explicitly, and the Newton direction can be computed using several preconditioned conjugate gradient steps on the Hessian operator equation; the algorithm is shown to be more efficient than current primal approaches in this MKL setting. Furthermore, we use Nesterov's optimal gradient method to optimize the kernel weights. One remarkable advantage of solving in the primal is that it achieves a much faster convergence rate than solving in the dual and does not require a two-stage algorithm even for the single-kernel LapSVM. Introducing the Laplacian regularizer, we also extend our primal method to the semi-supervised scenario. Extensive experiments on UCI benchmarks show that the proposed algorithm converges rapidly and achieves competitive accuracy.

18.
Motivated by practical applications, and targeting the supervised learning setting in which each class of samples follows a Gaussian distribution, a new method for constructing the Fisher kernel is proposed. By exploiting the class labels of the samples, the method replaces the EM algorithm with maximum likelihood estimation of the GMM parameters, effectively lowering the time complexity of Fisher kernel construction. Combined with kernel Fisher discriminant classification, simulation results on standard face databases show that the Fisher kernel constructed by the proposed method not only has low time complexity but also achieves higher recognition rates than the traditional Gaussian and polynomial kernels. This work helps extend the application of Fisher kernels from speech recognition to domains such as image recognition.

19.
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distributions by mapping the input space to a high-dimensional feature space. Recognition algorithms such as kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. Experimental results show that the kernel-based method is a good and feasible approach to tackling pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which strongly affects the generalization capability and stability of kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

20.
Maximum likelihood training of probabilistic neural networks   (cited 8 times: 0 self-citations, 8 external)
A maximum likelihood method is presented for training probabilistic neural networks (PNNs) using a Gaussian kernel, or Parzen window. The proposed training algorithm enables general nonlinear discrimination and is a generalization of Fisher's method for linear discrimination. Important features of maximum likelihood training for PNNs are: 1) it economizes the well-known Parzen window estimator while preserving the feedforward NN architecture, 2) it utilizes class pooling to generalize classes represented by small training sets, 3) it gives smooth discriminant boundaries that are often "piece-wise flat" for statistical robustness, 4) it is very fast computationally compared to backpropagation, and 5) it is numerically stable. The effectiveness of the proposed maximum likelihood training algorithm is assessed using nonparametric statistical methods to define tolerance intervals on PNN classification performance.
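The Parzen-window decision rule that the training method above economizes can be sketched minimally; this shows the base PNN classifier only, not the paper's maximum-likelihood training or class pooling.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Probabilistic neural network decision rule.

    Classify x into the class whose Parzen-window (Gaussian-kernel)
    density estimate at x is largest. Each training sample contributes
    one Gaussian kernel centered on itself.
    """
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        k = np.exp(-np.sum((Xc - x) ** 2, axis=1) / (2 * sigma ** 2))
        scores[c] = k.mean()        # per-class density estimate at x
    return max(scores, key=scores.get)
```

Maximum-likelihood training would replace the one-kernel-per-sample estimator with a smaller, fitted set of kernels while keeping this feedforward evaluation structure.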
