Similar Documents
 19 similar documents found (search time: 468 ms)
1.
《计算机工程与科学》2017,(10):1901-1907
Multiple kernel learning is a new research focus in the kernel-based branch of machine learning. Kernel methods map data into a high-dimensional space to increase the computational power of linear classifiers such as support vector machines, and are currently a convenient and efficient way to handle nonlinear pattern recognition and classification. In some situations, however, kernel learning based on a single kernel function cannot fully cope with practical issues such as heterogeneous or irregular data, large sample sizes, and uneven sample distributions, so combining several kernel functions into one weighted composite kernel, in pursuit of better recognition accuracy and efficiency, has become a clear research trend. This paper therefore proposes a sample-weighted composite multiple kernel learning method: each kernel function is weighted according to how well it fits and adapts to the samples (its learning accuracy on them), yielding a sample-weighted composite multi-kernel decision function. To verify the method's effectiveness and reliability, experiments were conducted on several data sets; the results show that the proposed method achieves better classification results than existing multiple kernel learning methods.
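The weighted composite kernel described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: as a stand-in for the per-sample fit score, each candidate kernel is weighted by its own cross-validated accuracy, and the gamma values are arbitrary placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy data (assumption: any labeled data set would do here).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Candidate kernels at several bandwidths (placeholder gammas).
gammas = [0.1, 1.0, 10.0]
kernels = [rbf_kernel(X, X, gamma=g) for g in gammas]

# Score each single kernel with a precomputed-kernel SVM, then weight
# the kernels by their (normalized) scores -- a convex combination.
scores = np.array([
    cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
    for K in kernels
])
weights = scores / scores.sum()
K_combined = sum(w * K for w, K in zip(weights, kernels))

# Train the final SVM on the composite kernel matrix.
clf = SVC(kernel="precomputed").fit(K_combined, y)
print(weights, clf.score(K_combined, y))
```

A convex combination of positive semidefinite kernel matrices is again a valid kernel, which is why the composite matrix can be passed to the SVM directly.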

2.
Multi-scale kernel methods are a current focus in kernel-based machine learning. Learning multi-scale kernels usually suffers from drawbacks such as plain averaging of the kernels, long iterative training times, and combination coefficients chosen by experience. Based on a kernel-target measure, this paper proposes an adaptive sequential learning algorithm for multi-scale kernels that computes the kernel weighting coefficients automatically and quickly. Experiments show that the method outperforms single-kernel support vector machines in regression accuracy and classification rate, and in function fitting and classifi...
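A common kernel-target measure of the kind this entry refers to is kernel-target alignment, the normalized Frobenius inner product between a kernel matrix and the label outer product. The sketch below is an assumption about the measure used, not the paper's algorithm; the scale values are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_target_alignment(K, y):
    """Alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F); for labels in
    {-1, +1}, ||yy^T||_F equals the number of samples."""
    yyT = np.outer(y, y)
    return (K * yyT).sum() / (np.linalg.norm(K, "fro") * np.linalg.norm(yyT, "fro"))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = np.sign(X[:, 0])  # labels in {-1, +1}

# Multi-scale RBF kernels; weights taken proportional to alignment.
scales = [0.01, 0.1, 1.0, 10.0]
aligns = np.array([kernel_target_alignment(rbf_kernel(X, X, gamma=g), y)
                   for g in scales])
weights = aligns / aligns.sum()
print(weights)
```

Because the alignment of each scale is computed directly from the training labels, the weights are obtained in one pass, without the long iterative search the entry criticizes.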

3.
沈健  蒋芸  张亚男  胡学伟 《计算机科学》2016,43(12):139-145
Multiple kernel learning is a new hot topic in machine learning. Kernel methods increase the computational power of linear classifiers by mapping data into a high-dimensional space, and are an effective route to nonlinear pattern analysis and classification. In complex settings, however, a method built on a single kernel function cannot fully meet practical needs such as heterogeneous or irregular data, large sample sizes, and uneven sample distributions, so combining multiple kernel functions in pursuit of better results is a natural trend. This paper accordingly proposes a sample-weighted multi-scale kernel support vector machine: kernel functions at different scales are weighted by how well they fit the samples, yielding a sample-weighted multi-scale-kernel SVM decision function. Experiments on several data sets show that the proposed method attains high classification accuracy on each of them.

4.
Kernel methods are an effective way to tackle nonlinear pattern analysis and a current research focus in machine learning. The kernel function is the key factor governing a kernel method's performance. Taking the support vector machine as the carrier of the kernel function, this paper systematically surveys the state of research on kernel selection from three angles: constructing kernel functions, choosing kernel parameters, and multiple kernel learning. It also points out three directions worth further study: selecting kernels for specific application domains, designing effective measures of kernel quality, and broadening the scope of kernel selection research.

5.
A speech emotion recognition algorithm based on improved multiple kernel learning
This paper proposes a speech emotion recognition algorithm based on improved multiple kernel learning. Taking the Gaussian radial basis kernel as the baseline, the algorithm improves classification performance by sampling different subsets, applying different evaluation criteria, and thereby obtaining different parameters. In addition, multi-kernel techniques are introduced: the resulting Gaussian kernels serve as the base kernels of a multiple kernel learner, and a soft-margin multi-kernel objective built with slack variables improves learning efficiency. Comparative simulations show that the proposed algorithm effectively improves speech emotion recognition performance.

6.
This paper proposes a feature-weighted kernel learning method, mainly to remedy the fact that current kernel methods treat all data features equally in classification tasks. The features of a sample do not all contribute equally to classification: features that help the task deserve more attention. The proposed algorithm inherits the strengths of multiple kernel learning by combining different kernel functions with weights, but at a lower computational cost. Experiments show that it outperforms both the support vector machine and a multiple kernel learning algorithm in classification accuracy, with a computational cost slightly above the SVM's but far below that of multiple kernel learning.

7.
This paper proposes a program semantic annotation method based on conditional-information-entropy dimensionality reduction and a multi-kernel support vector machine. Compared with traditional ontology-based semantic annotation, the method has the following features: it annotates software semantics automatically via machine learning; it balances positive and negative samples by resampling; it uses conditional information entropy to reduce the dimensionality of module-sample features of object-oriented programs, lowering computational complexity and cost, and gives a conversion method for algebraic reduction; and its kernel is a linear combination of several base kernels, balancing learning ability and generalization. Annotation examples show that the method maintains high annotation accuracy and is practical and generalizable.

8.
The fuzzy multi-kernel support vector machine combines the fuzzy support vector machine with multiple kernel learning. By constructing membership functions and combining several kernel functions, it effectively mitigates the traditional SVM's sensitivity to noisy data and its difficulty with heterogeneous multi-source data, and is widely applied in pattern recognition and artificial intelligence. This paper surveys the theoretical foundations and current research on fuzzy multi-kernel SVMs, details their two key issues, namely the design of fuzzy membership functions and multiple kernel learning, and finally discusses directions for future research.

9.
贾涵  连晓峰  潘兵 《测控技术》2019,38(8):43-47
To meet the accuracy and real-time requirements of product appearance defect inspection in current industrial production, this paper proposes a defect detection method based on fuzzy relaxation-constrained multiple kernel learning (FRC-MKL). The method extracts features from sample images of each defect class and uses fuzzy constraint theory to solve for the weight of each kernel in a combined kernel function; assigning each kernel a fuzzy weight yields the combined kernel, which then serves as the classifier's kernel for learning and classifying defects. Experiments show that fuzzy relaxation constraints and multi-kernel techniques improve detection accuracy and better satisfy real-time requirements in field inspection.

10.
To overcome the performance limits of single-kernel clustering, this paper combines the Gaussian, sigmoid, and polynomial kernels into a new multi-kernel function and applies it to fuzzy kernel clustering. The Gaussian kernel is widely used in clustering, while the sigmoid kernel has been shown to have good global classification behavior in neural networks. A multi-kernel built from different kernel functions combines their individual strengths, and its clustering performance exceeds that of single-kernel fuzzy kernel clustering (KFCM). Experimental results confirm the method's effectiveness.
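The combined kernel described in this entry can be sketched as a weighted sum of the three base kernels. The weights and kernel parameters below are arbitrary placeholders, not values from the paper; note also that the sigmoid (tanh) kernel is not positive semidefinite in general, a known caveat of such mixtures.

```python
import numpy as np
from sklearn.metrics.pairwise import (polynomial_kernel, rbf_kernel,
                                      sigmoid_kernel)

def multi_kernel(X, Y, w=(0.5, 0.2, 0.3)):
    """Convex combination of Gaussian, sigmoid, and polynomial kernels
    (placeholder weights and parameters)."""
    return (w[0] * rbf_kernel(X, Y, gamma=0.5)
            + w[1] * sigmoid_kernel(X, Y, gamma=0.01, coef0=0.0)
            + w[2] * polynomial_kernel(X, Y, degree=2))

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
K = multi_kernel(X, X)
print(K.shape)  # one combined Gram matrix for the clustering step
```

In a fuzzy kernel clustering algorithm such as KFCM, this combined Gram matrix would simply replace the single-kernel matrix in the membership updates.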

11.
The kernel method, especially the kernel-fusion method, is widely used in social networks, computer vision, bioinformatics, and other applications. It deals effectively with nonlinear classification problems: it can map linearly inseparable biological sequence data from a low- to a high-dimensional space for more accurate differentiation, enabling kernel methods to predict the structure and function of sequences. The kernel method is therefore significant in the solution of bioinformatics problems. Various kernels applied in bioinformatics are explained clearly, which can help readers select proper kernels for their tasks. Massive amounts of biological sequence data occur in practical applications. How to use machine learning methods to extract knowledge from these data, and how to predict the structure and function of biological sequences theoretically, have always been emphasized in bioinformatics. The kernel method has gradually become an important learning algorithm that is widely used in gene expression and biological sequence prediction. This review focuses on the requirements of classification tasks on biological sequence data. It studies kernel methods and optimization algorithms, including methods of constructing kernel matrices based on the characteristics of biological sequences and kernel fusion methods existing in a multiple kernel learning framework.

12.
Traditional classifier ensembles typically integrate only the single best individual classifier into the strong classifier at each iteration, simply discarding other individual classifiers that might still be of help. To address this, a non-sparse multiple kernel learning method within a Boosting framework, MKL-Boost, is proposed, drawing on the idea of ensemble learning. At each iteration a training subset is drawn from the training set, and a regularized non-sparse MKL method is used to train the optimal individual classifier, which considers the optimal non-sparse linear convex combination of M base kernels. By imposing an Lp-norm constraint on the kernel combination coefficients, good kernels are retained, preserving more useful feature information, while poor kernels are discarded, ensuring selective kernel fusion; the kernel-combination-based optimal individual classifier is then integrated into the strong classifier. The proposed algorithm enjoys the advantages of both Boosting ensemble learning and regularized non-sparse multiple kernel learning. Experiments show that, relative to other Boosting algorithms, MKL-Boost attains higher classification accuracy within fewer iterations.

13.
In recent years, several methods have been proposed to combine multiple kernels using a weighted linear sum of kernels. These different kernels may be using information coming from multiple sources or may correspond to using different notions of similarity on the same source. We note that such methods, in addition to the usual ones of the canonical support vector machine formulation, introduce new regularization parameters that affect the solution quality and, in this work, we propose to optimize them using response surface methodology on cross-validation data. On several bioinformatics and digit recognition benchmark data sets, we compare multiple kernel learning and our proposed regularized variant in terms of accuracy, support vector count, and the number of kernels selected. We see that our proposed variant achieves statistically similar or higher accuracy results by using fewer kernel functions and/or support vectors through suitable regularization; it also allows better knowledge extraction because unnecessary kernels are pruned and the favored kernels reflect the properties of the problem at hand.

14.
A hybrid kernel function for support vector machines
The kernel function is the core of a support vector machine, and different kernel functions produce different classification results. Since common kernel functions each have their strengths and weaknesses, and since the sum of two kernel functions is again a kernel function, a new kernel with stronger learning and generalization ability is constructed as a linear combination of a local kernel and a global kernel: the hybrid kernel. This kernel absorbs the advantages of both local and global kernels. Simulation experiments on supply-chain forecasting for a process enterprise verify the hybrid kernel's effectiveness and correctness.
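The closure property this entry relies on, that the sum of two valid kernels is itself a valid kernel, can be checked numerically. Below, an RBF kernel stands in for the local kernel and a polynomial kernel for the global one; the mixing weight and parameters are placeholder assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def hybrid_kernel(X, Y, lam=0.7, gamma=1.0, degree=2):
    """lam * RBF (local) + (1 - lam) * polynomial (global); for
    lam in [0, 1] this is again a positive semidefinite kernel."""
    return (lam * rbf_kernel(X, Y, gamma=gamma)
            + (1 - lam) * polynomial_kernel(X, Y, degree=degree))

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
K = hybrid_kernel(X, X)

# All eigenvalues of the Gram matrix are non-negative (up to
# numerical error), confirming the mixture is a valid kernel.
eigs = np.linalg.eigvalsh(K)
print(eigs.min())
```

In practice `lam` would be tuned by cross-validation to trade off the local kernel's interpolation ability against the global kernel's extrapolation ability.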

15.
Multiple kernel learning (MKL) approach has been proposed for kernel methods and has shown high performance for solving some real-world applications. It consists of learning the optimal kernel from one layer of multiple predefined kernels. Unfortunately, this approach is not rich enough to solve relatively complex problems. With the emergence and the success of the deep learning concept, multilayer multiple kernel learning (MLMKL) methods were inspired by the idea of deep architecture. They are introduced in order to improve the conventional MKL methods. Such architectures tend to learn deep kernel machines by exploring the combinations of multiple kernels in a multilayer structure. However, existing MLMKL methods often have trouble with the optimization of the network for two or more layers. Additionally, they do not always outperform the simplest method of combining multiple kernels (i.e., MKL). In order to improve the effectiveness of MKL approaches, we introduce, in this paper, a novel backpropagation MLMKL framework. Specifically, we propose to optimize the network over an adaptive backpropagation algorithm. We use the gradient ascent method instead of the dual objective function, or the estimation of the leave-one-out error. We test our proposed method through a large set of experiments on a variety of benchmark data sets. We have successfully optimized the system over many layers. Empirical results over an extensive set of experiments show that our algorithm achieves high performance compared to the traditional MKL approach and existing MLMKL methods.

16.
Kernel methods provide high performance in a variety of machine learning tasks. However, the success of kernel methods is heavily dependent on the selection of the right kernel function and proper setting of its parameters. Several sets of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good performance in the error rate, these kernel functions have only one parameter chosen from a small set of integers, which facilitates kernel selection greatly. Two sets of orthogonal polynomial kernel functions, namely the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of some orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets are performed for better illustration and comparison of these kernel functions in classification and regression scenarios. In general, there are differences among these orthogonal polynomial kernels in terms of accuracy, and most orthogonal polynomial kernels can match the commonly used kernels, such as the polynomial kernel, the Gaussian kernel and the wavelet kernel. Compared with these universal kernels, the orthogonal polynomial kernels each have a unique easily optimized parameter, and they store statistically significantly fewer support vectors in support vector classification. The newly presented kernels can obtain better generalization performance for both classification and regression tasks.

17.
Elastic multiple kernel learning
Multiple kernel learning (MKL) was proposed to fuse several kernel matrices: it solves for the optimal linear combination of the kernel matrices and, simultaneously, the support vector machine (SVM) problem for that combined matrix. Existing MKL frameworks tend to seek sparse combination coefficients, but when a large proportion of the kernels is informative, this preference for sparsity selects only a few kernels and discards considerable classification information. This paper proposes an elastic multiple kernel learning framework to realize adaptive MKL. The framework uses a mixed regularization function to balance sparsity and non-sparsity; both MKL and the SVM can be viewed as special cases of elastic MKL. From the gradient descent method for MKL, we derive a gradient descent method for elastic MKL. Results on simulated data show the advantage of elastic MKL over both MKL and the SVM; we further apply elastic MKL to gene set analysis and obtain meaningful results; finally, we compare elastic MKL with another MKL method that also exploits non-sparsity.
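The mixed regularizer that balances sparsity and non-sparsity can be illustrated with an elastic-net-style penalty on the kernel combination coefficients. This is a sketch of the general idea only; the exact functional form and the coefficient vector below are assumptions, not taken from the paper.

```python
import numpy as np

def elastic_penalty(d, mu):
    """Mixed regularizer Omega(d) = mu * ||d||_1 + (1 - mu) * ||d||_2^2.
    mu = 1 pushes toward sparse, MKL-style kernel selection; mu = 0
    keeps all kernels with smoothly shrunk, SVM-like uniform weights."""
    return mu * np.abs(d).sum() + (1 - mu) * (d ** 2).sum()

# Hypothetical kernel combination coefficients.
d = np.array([0.6, 0.3, 0.1, 0.0])
for mu in (0.0, 0.5, 1.0):
    print(mu, elastic_penalty(d, mu))
```

Sweeping `mu` between 0 and 1 interpolates between the two regimes, which is how the elastic framework subsumes both MKL and the plain SVM as special cases.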

18.
Kernel-based methods are effective for object detection and recognition. However, the computational cost when using kernel functions is high, except when using linear kernels. To realize fast and robust recognition, we apply normalized linear kernels to local regions of a recognition target, and the kernel outputs are integrated by summation. This kernel is referred to as a local normalized linear summation kernel. Here, we show that kernel-based methods that employ local normalized linear summation kernels can be computed by a linear kernel of local normalized features. Thus, the computational cost of the kernel is nearly the same as that of a linear kernel and much lower than that of radial basis function (RBF) and polynomial kernels. The effectiveness of the proposed method is evaluated in face detection and recognition problems, and we confirm that our kernel provides higher accuracy with lower computational cost than RBF and polynomial kernels. In addition, our kernel is also robust to partial occlusion and shadows on faces since it is based on the summation of local kernels.
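The equivalence this abstract states, that the local normalized linear summation kernel equals a linear kernel on concatenated normalized local features, can be verified directly. The region count and vector sizes below are arbitrary assumptions for illustration.

```python
import numpy as np

def lnls_kernel(x, y, n_regions):
    """Split x and y into local regions, L2-normalize each region,
    and sum the per-region dot products."""
    xs, ys = np.split(x, n_regions), np.split(y, n_regions)
    return sum(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))
               for a, b in zip(xs, ys))

def normalized_features(x, n_regions):
    """Concatenate the L2-normalized local regions into one vector."""
    return np.concatenate([r / np.linalg.norm(r)
                           for r in np.split(x, n_regions)])

rng = np.random.default_rng(4)
x, y = rng.normal(size=8), rng.normal(size=8)

k1 = lnls_kernel(x, y, 4)
k2 = np.dot(normalized_features(x, 4), normalized_features(y, 4))
print(abs(k1 - k2))  # ~0: the summation kernel is just a linear kernel
```

Since the normalization can be precomputed per feature vector, evaluating this kernel costs no more than an ordinary dot product, which is the source of the speedup the abstract claims over RBF and polynomial kernels.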

19.
To improve classification performance at low cost, it is necessary to exploit both labeled and unlabeled samples by applying semi-supervised learning methods, most of which are built upon the pair-wise similarities between the samples. While the similarities have so far been formulated in a heuristic manner such as by k-NN, we propose methods to construct similarities from the probabilistic viewpoint. The kernel-based formulation of a transition probability is first proposed via comparing kernel least squares to variational least squares in the probabilistic framework. The formulation results in a simple quadratic programming which flexibly introduces the constraint to improve practical robustness and is efficiently computed by SMO. The kernel-based transition probability is by nature favorably sparse even without applying k-NN and induces a similarity measure of the same characteristics. Besides, to cope with multiple types of kernel functions, the multiple transition probabilities obtained correspondingly from the kernels can be probabilistically integrated with prior probabilities represented by linear weights. We propose a computationally efficient method to optimize the weights in a discriminative manner. The optimized weights contribute to a composite similarity measure straightforwardly, as well as integrating the multiple kernels themselves as multiple kernel learning does, which consequently derives various types of multiple-kernel-based semi-supervised classification methods. In experiments on semi-supervised classification tasks, the proposed methods demonstrate favorable performance compared to the other methods, in terms of classification performance and computation time.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号