Similar Documents
20 similar documents found (search time: 0 ms)
1.
Multiple kernel learning (MKL) methods outperform single-kernel learning in both classification and regression tasks, but traditional MKL methods are designed for two-class or multi-class classification problems. To make MKL applicable to one-class classification (OCC) problems, a one-class support vector machine (OCSVM) based on centered kernel alignment (CKA) is proposed. CKA is first used to compute a weight for each kernel matrix; the resulting weights are then used as linear combination coefficients to combine kernels of different ty...
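Entry 1's idea, weighting base kernels by centered kernel alignment (CKA) and feeding the combination to a one-class SVM, can be sketched as follows. The abstract does not give the exact weighting scheme, so as an illustrative assumption each base kernel is weighted by its mean CKA with the other base kernels; the helper names are hypothetical.

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import OneClassSVM

def centered_alignment(K1, K2):
    """Centered kernel alignment (CKA) between two kernel matrices."""
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))                      # one-class training data

base = [rbf_kernel(X), polynomial_kernel(X, degree=2), linear_kernel(X)]
# Assumed weighting: each kernel's average alignment with the others.
w = np.array([np.mean([centered_alignment(Ki, Kj)
                       for j, Kj in enumerate(base) if j != i])
              for i, Ki in enumerate(base)])
w = np.clip(w, 0, None)
w /= w.sum()

K = sum(wi * Ki for wi, Ki in zip(w, base))       # combined kernel matrix
ocsvm = OneClassSVM(kernel="precomputed", nu=0.1).fit(K)
pred = ocsvm.predict(K)                           # +1 inlier, -1 outlier
```

The combined matrix is a valid kernel because a nonnegative linear combination of kernel matrices is again a kernel matrix.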

2.
This paper presents a novel algorithm to optimize the Gaussian kernel for pattern classification tasks, where it is desirable to have well-separated samples in the kernel feature space. We propose to optimize the Gaussian kernel parameters by maximizing a classical class separability criterion, and the problem is solved through a quasi-Newton algorithm by making use of a recently proposed decomposition of the objective criterion. The proposed method is evaluated on five data sets with two kernel-based learning algorithms. The experimental results indicate that it achieves the best overall classification performance, compared with three competing solutions. In particular, the proposed method provides a valuable kernel optimization solution in the severe small sample size scenario.
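Entry 2's approach, tuning the Gaussian kernel width by maximizing a class-separability criterion with a quasi-Newton method, can be sketched as below. The paper's exact criterion and its decomposition are not given in the abstract; a simplified criterion (mean between-class minus mean within-class squared distance in the kernel feature space) stands in for it.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics.pairwise import rbf_kernel

def neg_separability(log_gamma, X, y):
    """Negative class-separability criterion as a function of log(gamma).
    In the Gaussian feature space, ||phi(x) - phi(z)||^2 = 2 - 2*k(x, z)."""
    gamma = float(np.exp(np.ravel(log_gamma)[0]))
    D = 2.0 - 2.0 * rbf_kernel(X, gamma=gamma)
    same = y[:, None] == y[None, :]
    off_diag = ~np.eye(len(y), dtype=bool)
    between = D[~same].mean()                # pairs from different classes
    within = D[same & off_diag].mean()       # pairs from the same class
    return within - between                  # minimize => maximize separation

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
y = np.repeat([0, 1], 40)

# Quasi-Newton (BFGS) search over log(gamma), mirroring the quasi-Newton setting.
res = minimize(neg_separability, x0=np.array([0.0]), args=(X, y), method="BFGS")
best_gamma = float(np.exp(res.x[0]))
```

Optimizing over log(gamma) keeps the width positive without explicit bound constraints.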

3.
Relationship Between Support Vector Set and Kernel Functions in SVM   (total citations: 15; self-citations: 0; citations by others: 15)
Based on a constructive learning approach, covering algorithms, we investigate the relationship between support vector sets and kernel functions in support vector machines (SVM). An interesting result is obtained: in the linearly non-separable case, any sample of a given sample set K can become a support vector under a certain kernel function. The result shows that when the sample set K is linearly non-separable, although the chosen kernel function satisfies Mercer's condition, its corresponding support vector set is not necessarily the subset of K that plays a crucial role in classifying K. For a given sample set, what is the subset that plays the crucial role in classification? To explore this problem, a new concept, boundary (or boundary points), is defined and its properties are discussed. Given a sample set K, we show that the decision functions for classifying the boundary points of K are the same as those for classifying K itself. Moreover, the boundary points of K depend only on K and the structure of the space in which K is located, and are independent of the approach chosen for finding the boundary. Therefore, the boundary point set may serve as the subset of K that plays a crucial role in classification. These results are important for understanding the principle of the support vector machine (SVM) and for developing new learning algorithms.

4.
The kernel function is the core of the Support Vector Machine (SVM), and its selection directly affects the performance of SVM. There has been no theoretical basis on choosing a kernel function for speech recognition. In order to improve the learning ability and generalization ability of SVM for speech recognition, this paper presents the Optimal Relaxation Factor (ORF) kernel function, which is a set of new SVM kernel functions for speech recognition, and proves that the ORF function is a Mercer kernel function. The experiments show the ORF kernel function's effectiveness on mapping trend, bi-spiral, and speech recognition problems. The paper draws the conclusion that the ORF kernel function performs better than the Radial Basis Function (RBF), the Exponential Radial Basis Function (ERBF) and the Kernel with Moderate Decreasing (KMOD). Furthermore, the results of speech recognition with the ORF kernel function illustrate higher recognition accuracy.

5.
王朔琛, 汪西莉. 《计算机应用》 (Journal of Computer Applications), 2015, 35(10): 2974-2979
When constructing cluster kernels, semi-supervised composite-kernel support vector machines generally suffer from high complexity and are unsuitable for large-scale image classification; moreover, the parameters of K-means image clustering are difficult to estimate. To address these problems, a semi-supervised composite-kernel SVM image classification method based on Mean-Shift with adaptive parameters is proposed. Mean-Shift is used to cluster the pixels, avoiding the limitations of K-means image clustering; the algorithm parameters are adapted to the structural features of the image, avoiding instability; and a Mean Map cluster kernel is constructed from the Mean-Shift result to increase the likelihood that samples in the same cluster belong to the same class, so that the composite kernel better guides the SVM in classifying images. Experiments verify that the improved clustering algorithm and parameter-setting method capture the clustering information of images better: the classification accuracy on ordinary and noisy images is generally 1 to 7 percentage points higher than that of the compared semi-supervised algorithms, and the method also scales to larger images, classifying them more efficiently and more stably.
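Entry 5 combines Mean-Shift clustering, with a bandwidth estimated from the data itself (the "parameter-adaptive" part), and a cluster kernel. A minimal sketch of the idea, assuming a simple cluster-membership kernel as a stand-in for the paper's Mean Map construction:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

# Bandwidth estimated from the data -- no manual cluster count as in K-means.
bw = estimate_bandwidth(X, quantile=0.3)
labels = MeanShift(bandwidth=bw).fit(X).labels_

# Cluster kernel: 1 if two samples fall in the same Mean-Shift cluster, else 0
# (a simplified stand-in for the paper's Mean Map cluster kernel).
K_cluster = (labels[:, None] == labels[None, :]).astype(float)
K_composite = 0.5 * rbf_kernel(X) + 0.5 * K_cluster   # composite kernel for the SVM
```

The composite matrix can then be passed to an SVM with a precomputed kernel; the cluster term raises the similarity of samples that Mean-Shift places in the same cluster.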

6.
A common approach in structural pattern classification is to define a dissimilarity measure on patterns and apply a distance-based nearest-neighbor classifier. In this paper, we introduce an alternative method for classification using kernel functions based on edit distance. The proposed approach is applicable to both string and graph representations of patterns. By means of the kernel functions introduced in this paper, string and graph classification can be performed in an implicit vector space using powerful statistical algorithms. The validity of the kernel method cannot be established for edit distance in general. However, by evaluating theoretical criteria we show that the kernel functions are nevertheless suitable for classification, and experiments on various string and graph datasets clearly demonstrate that nearest-neighbor classifiers can be outperformed by support vector machines using the proposed kernel functions.
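Entry 6's idea of turning edit distance into a kernel can be sketched as follows. The exponential form k(x, y) = exp(-d_edit(x, y)) is one common choice (an assumption; the paper may use a different transformation), and, as the abstract notes, such kernels are not guaranteed to be positive semidefinite.

```python
import numpy as np
from sklearn.svm import SVC

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return int(d[m, n])

def edit_kernel(A, B):
    """k(x, y) = exp(-edit_distance(x, y)); not PSD in general."""
    return np.array([[np.exp(-edit_distance(x, y)) for y in B] for x in A])

strings = ["aaaa", "aaab", "aaba", "zzzz", "zzzy", "zzyz"]
y = np.array([0, 0, 0, 1, 1, 1])
K = edit_kernel(strings, strings)
clf = SVC(kernel="precomputed", C=10.0).fit(K, y)
```

At prediction time, the kernel matrix between test strings and the training strings is passed to `clf.predict`.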

7.
Speaker Recognition Using a Support Vector Machine with a Gaussian Sequence Kernel   (total citations: 2; self-citations: 1; citations by others: 2)
Speaker recognition is of both theoretical and practical importance. Building on the theory of support vector machine kernel methods, a support vector machine (SVM) with a Gaussian sequence kernel is constructed by combining the SVM with the traditional Gaussian mixture model (GMM). The flexibility and strong classification ability of the SVM stem largely from the freedom to choose a kernel function suited to the problem at hand. During recognition, Nuisance Attribute Projection (NAP), a feature-space normalization technique, is introduced to compensate for the feature differences caused by different channels and environments for the same speaker. Experiments on the 2004 evaluation data set of the National Institute of Standards and Technology (NIST) show that the method substantially improves the recognition rate.

8.
Starting from the conditions a support vector machine kernel function must satisfy, the reproducing kernel of the Sobolev Hilbert space is modified to give a new SVM kernel function, and an improved least-squares reproducing-kernel SVM regression model with fewer parameters is proposed. Simulation results show that using the improved reproducing kernel in a least-squares SVM is feasible: the improved kernel not only retains the nonlinear mapping property of a kernel function but also inherits the reproducing kernel's progressively refined approximation of nonlinear functions, producing finer regression results than common kernels.

9.
The fuzzy multiple kernel support vector machine combines the fuzzy support vector machine with multiple kernel learning. By constructing membership functions and combining several kernel functions, it effectively alleviates the traditional SVM's sensitivity to noisy data and its difficulty in learning from multi-source heterogeneous data, and it is widely used in pattern recognition and artificial intelligence. This paper surveys the theoretical foundations and research status of fuzzy multiple kernel SVMs, details the key issues, namely the design of fuzzy membership functions and multiple kernel learning methods, and concludes with an outlook on future research on fuzzy multiple kernel SVM algorithms.

10.
孙辉, 许洁萍, 刘彬彬. 《计算机应用》 (Journal of Computer Applications), 2015, 35(6): 1753-1756
To address the problem of selecting the optimal kernel function for different feature vectors, the multiple kernel learning support vector machine (MKL-SVM) is applied to automatic music genre classification, and a method is proposed that combines the optimal kernel functions into a weighted composite kernel for genre classification. Multiple kernel learning can adopt a different optimal kernel for each acoustic feature and learns each kernel's weight in the classification, thereby revealing the weight of each acoustic feature in genre classification and providing a clear, explicit basis for analyzing and selecting feature vectors. The proposed MKL-SVM classification method was validated on the ISMIR 2011 contest data set and compared with the traditional single-kernel SVM approach. Experimental results show that the MKL-SVM genre classification accuracy is 6.58% higher than that of the traditional single-kernel SVM. Compared with traditional feature selection, the method also explains more clearly how much each selected feature vector contributes to genre classification, and selecting the most influential feature combinations for classification yields a further clear improvement.

11.
Invariant kernel functions for pattern analysis and machine learning   (total citations: 1; self-citations: 0; citations by others: 1)
In many learning problems prior knowledge about pattern variations can be formalized and beneficially incorporated into the analysis system. The corresponding notion of invariance is commonly used in conceptually different ways. We propose a more distinguishing treatment, in particular in the active field of kernel methods for machine learning and pattern analysis. Additionally, the fundamental relation between invariant kernels and traditional invariant pattern analysis by means of invariant representations is clarified. After addressing these conceptual questions, we focus on practical aspects and present two generic approaches for constructing invariant kernels. The first approach is based on a technique called invariant integration; the second builds on invariant distances. In principle, our approaches support general transformations, in particular covering discrete, non-group, or even infinitely many pattern transformations. Additionally, both enable a smooth interpolation between invariant and non-invariant pattern analysis, i.e., they provide a general covering framework. The wide applicability and various possible benefits of invariant kernels are demonstrated in different kernel methods. Editor: Phil Long.

12.
Given n training examples, the training of a least squares support vector machine (LS-SVM) or kernel ridge regression (KRR) corresponds to solving a linear system of dimension n. In cross-validating LS-SVM or KRR, the training examples are split into two distinct subsets a number of times (l), wherein a subset of m examples is used for validation and the other subset of (n-m) examples is used for training the classifier; in this case l linear systems of dimension (n-m) need to be solved. We propose a novel method for cross-validation (CV) of LS-SVM or KRR in which, instead of solving l linear systems of dimension (n-m), we compute the inverse of an n-dimensional square matrix and solve l linear systems of dimension m, thereby reducing the complexity when l is large and/or m is small. Typical multi-fold, leave-one-out (LOO-CV) and leave-many-out cross-validations are considered. For five-fold CV used in practice with five repetitions over randomly drawn slices, the proposed algorithm is approximately four times as efficient as the naive implementation. For large data sets, we propose to evaluate the CV approximately by applying the well-known incomplete Cholesky decomposition technique; the complexity of these approximate algorithms scales linearly with the data size if the rank of the associated kernel matrix is much smaller than n. Simulations are provided to demonstrate the performance of LS-SVM and the efficiency of the proposed algorithm, with comparisons to the naive and some existing implementations of multi-fold and LOO-CV.
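The single-inverse idea of entry 12 is easiest to see in the leave-one-out special case of KRR, where the well-known shortcut e_i = alpha_i / (K + lambda*I)^{-1}_{ii} recovers all n LOO residuals from one n-dimensional inverse instead of n refits. A minimal sketch (variable names are mine, not the paper's):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def loo_residuals_krr(K, y, lam):
    """Exact leave-one-out residuals for kernel ridge regression from a
    single matrix inverse: e_i = alpha_i / G^{-1}_{ii}, with G = K + lam*I."""
    G_inv = np.linalg.inv(K + lam * np.eye(len(y)))
    alpha = G_inv @ y                     # dual coefficients
    return alpha / np.diag(G_inv)         # LOO residuals, no refitting

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
K = rbf_kernel(X)
loo = loo_residuals_krr(K, y, lam=0.1)
```

The shortcut is exact: the residual vector equals lam * alpha and the diagonal of the hat matrix K(K + lam*I)^{-1} equals 1 - lam * G^{-1}_{ii}, so the two forms coincide.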

13.
Research and Simulation on Kernel Function Selection for Support Vector Machines   (total citations: 2; self-citations: 0; citations by others: 2)
The support vector machine is a kernel-based learning method, and the choice of kernel function has a major influence on SVM performance; how to select the kernel effectively is an important problem in SVM research. Most existing kernel selection methods ignore the distribution characteristics of the data and fail to exploit the prior information hidden in it. This paper therefore introduces the concept of energy entropy and, with the help of a hypersphere description and the metric properties implicit in kernel functions, proposes an SVM kernel selection method based on the energy entropy of the sample distribution, aiming to improve the learning and generalization ability of the SVM. Numerical simulations demonstrate the feasibility and effectiveness of the method.

14.
For classification problems, an extension support vector classification machine based on the ideas of extenics is proposed. Unlike the standard support vector classifier, the extension SVM not only performs classification prediction but also focuses on finding the samples whose class can be changed by altering their feature values. Definitions of extension variables and of the extension classification problem are given, and two extension SVM algorithms for solving the extension classification problem are constructed. Combining extenics with the SVM is a new direction; the proposed algorithms still require further theoretical analysis, and future work will continue to explore how to build more complete extension SVM methods on the basis of extenics.

15.
The least squares twin support vector machine (LSTSVM) generates two non-parallel hyperplanes by directly solving a pair of linear equations, as opposed to solving two quadratic programming problems (QPPs) in the conventional twin support vector machine (TSVM), which makes the learning speed of LSTSVM faster than that of TSVM. However, LSTSVM fails to discover the underlying similarity information within samples, which may be important for classification performance. To address this problem, we incorporate the similarity information of samples into LSTSVM to build a novel non-parallel plane classifier, called the K-nearest neighbor based least squares twin support vector machine (KNN-LSTSVM). The proposed method not only retains the advantage of LSTSVM of being a simple and fast algorithm but also incorporates inter-class and intra-class graphs into the model to improve classification accuracy and generalization ability. Experimental results on several synthetic as well as benchmark datasets demonstrate the efficiency of the proposed method. Finally, we further investigated the effectiveness of our classifier for a human action recognition application.

16.
A mixed effects least squares support vector machine (LS-SVM) classifier is introduced to extend the standard LS-SVM classifier to longitudinal data. The mixed effects LS-SVM model contains a random intercept and makes it possible to classify highly unbalanced data, in the sense that there is an unequal number of observations for each case at non-fixed time points. The methodology consists of a regression modeling step and a classification step based on the obtained regression estimates. Regression and classification of new cases are performed in a straightforward manner by solving a linear system. It is demonstrated that the methodology can be generalized to deal with multi-class problems and extended to incorporate multiple random effects. The technique is illustrated on simulated data sets and real-life problems concerning human growth.

17.
A Mixed Kernel Function for Support Vector Machines   (total citations: 2; self-citations: 0; citations by others: 2)
The kernel function is the core of the support vector machine, and different kernels produce different classification results. Since common kernel functions each have their strengths and weaknesses, a kernel with strong learning and generalization ability is obtained by exploiting a basic property of kernels, namely that the sum of two kernel functions is again a kernel: a local kernel and a global kernel are linearly combined into a new kernel, the mixed kernel function, which inherits the advantages of both. The mixed kernel is applied to supply chain forecasting for process enterprises, and simulation results verify its validity and correctness.
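Entry 17's construction, a local kernel (RBF) plus a global kernel (polynomial) combined linearly using the closure of kernels under nonnegative sums, can be sketched as below; the mixing weight and kernel parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

def mixed_kernel(X, Y, lam=0.6, gamma=0.5, degree=2):
    """lam * local (RBF) + (1 - lam) * global (polynomial); a valid kernel
    because nonnegative combinations of kernels are kernels."""
    return (lam * rbf_kernel(X, Y, gamma=gamma)
            + (1.0 - lam) * polynomial_kernel(X, Y, degree=degree))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.repeat([0, 1], 40)
clf = SVC(kernel=mixed_kernel).fit(X, y)   # SVC accepts a callable kernel
```

The RBF term captures local structure near each sample, while the polynomial term contributes global extrapolation away from the training data.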

18.
黄华娟, 韦修喜, 周永权. 《智能系统学报》 (CAAI Transactions on Intelligent Systems), 2019, 14(6): 1271-1277
The traditional granular support vector machine (GSVM) granulates the training samples in the original space and then maps them into the kernel space, so the distribution of the data no longer matches the original space, which reduces the generalization ability of the GSVM. To address this problem, a granular support vector machine learning algorithm based on fuzzy kernel clustering granulation (FKC-GSVM) is proposed. FKC-GSVM uses fuzzy kernel clustering to partition the data into granules and select the support vector granules directly in the kernel space, and then trains the GSVM on the support vector granules in that same kernel space. Experiments on UCI data sets and the NDC big data set show that, compared with several other algorithms, FKC-GSVM obtains more accurate solutions in less time.

19.
For classification problems in science, economics, and many other fields where the data distribution is relatively complex, the traditional support vector machine (SVM) cannot capture the correlation among variables well, which degrades classification performance. To address this, a financial distress early-warning model is built using the Q-Gaussian function, a parametric generalization of the classical Gaussian function, as the SVM kernel. T-2 and T-3 financial early-warning models are built from the financial data of A-share manufacturing companies listed on the Shanghai and Shenzhen stock exchanges for empirical analysis; significance tests are used to select appropriate financial indicators, and cross-validation is used to determine the model parameters. Compared with the Gaussian-kernel SVM early-warning model, the T-2 and T-3 models built with the Q-Gaussian kernel SVM improve prediction accuracy by about 3%, and the costly Type I error is reduced by up to 14.29%.
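Entry 19's Q-Gaussian kernel generalizes the Gaussian kernel via the q-exponential. One common parameterization (an assumption; the paper's exact form is not in the abstract) is k_q(x, z) = [1 - (1 - q)·||x - z||²/σ²]₊^{1/(1-q)}, which recovers the Gaussian kernel exp(-||x - z||²/σ²) as q → 1:

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

def q_gaussian_kernel(X, Z, q=1.5, sigma=1.0):
    """Q-Gaussian kernel via the q-exponential; recovers the Gaussian
    kernel in the limit q -> 1.  (One common parameterization.)"""
    u = euclidean_distances(X, Z, squared=True) / sigma ** 2
    base = np.maximum(1.0 - (1.0 - q) * u, 0.0)   # clipping matters for q < 1
    return base ** (1.0 / (1.0 - q))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Kq = q_gaussian_kernel(X, X, q=1.0001)            # nearly the Gaussian kernel
```

For q > 1 the kernel has heavier (polynomially decaying) tails than the Gaussian, which is what lets it model stronger long-range correlation among variables.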

20.
A Kernel-Based Sample Selection Algorithm for Support Vector Machines   (total citations: 2; self-citations: 0; citations by others: 2)
Solving large-scale data classification with a support vector machine requires considerable memory to store the Hessian matrix, whose size depends on the number of samples, which to some extent makes it difficult to improve the efficiency and quality of SVM classification. Since only the samples that become support vectors affect the decision function, a kernel-based sample selection algorithm is proposed to reduce the space and time overhead of training and to improve the efficiency and quality of SVM classification. The algorithm selects the samples most likely to become support vectors, thereby reducing the space and time needed to store the Hessian matrix during training. Experimental results show that the selected samples not only improve training accuracy but also speed up classification and reduce storage overhead.
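Entry 20's selection of likely support vectors can be sketched with a simple heuristic (an assumption; the paper's exact criterion is not in the abstract): rank each sample by its kernel-feature-space distance to the nearest opposite-class sample and keep the closest fraction, then train the SVM on the reduced set.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def select_candidates(X, y, keep=0.3):
    """Keep the samples nearest (in kernel feature space) to the other class,
    i.e. those most likely to become support vectors."""
    K = rbf_kernel(X)
    D = 2.0 - 2.0 * K                       # squared feature-space distances
    opposite = y[:, None] != y[None, :]
    d_opp = np.where(opposite, D, np.inf).min(axis=1)
    idx = np.argsort(d_opp)[: max(1, int(keep * len(y)))]
    return np.sort(idx)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.repeat([0, 1], 100)
idx = select_candidates(X, y, keep=0.3)
clf_small = SVC().fit(X[idx], y[idx])       # train on 30% of the samples
```

Training then only needs the kernel (Hessian) block for the selected subset, which is the memory saving the abstract describes.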
