Similar Documents
19 similar documents found (search time: 93 ms)
1.
黄正华  王士同 《计算机工程与设计》2007,28(14):3501-3503,3507
In many cases, researchers possess information about how the classified data were generated, and this information can supply a kernel function with valuable discriminative features. Much work has combined generative models with kernel construction; the marginalized kernel is one of the more recent results. Building on marginalized-kernel theory, this paper introduces the distance between feature vectors in the marginalized-kernel feature space as a similarity measure and constructs a distance-based marginalized kernel. Both this kernel and the original marginalized kernel are then applied to classifying gyrB (gyrase subunit B) amino acid sequences. The experiments show that the distance-based marginalized kernel recognizes sequences more accurately than the original marginalized kernel and also generalizes reasonably well.

2.
A new support vector machine with a hybrid kernel function    Cited by: 1 (self: 0, others: 1)
To address the performance limits of single-kernel support vector machines, a new hybrid kernel is proposed that combines the sigmoid kernel with the Gaussian kernel. The Gaussian kernel is a typical local kernel, while the sigmoid kernel has been shown to give good global classification performance in neural networks. The hybrid kernel combines the strengths of both, and an SVM built on it classifies better than SVMs built on either single kernel; experimental results confirm the method's effectiveness.
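A minimal sketch of such a hybrid kernel, assuming a convex combination with weight `lam` (the abstract does not give the exact mixing scheme or parameter values, so `lam`, `gamma`, `a`, and `c` here are illustrative):

```python
import numpy as np

def hybrid_kernel(X, Y, lam=0.5, gamma=0.5, a=0.01, c=0.0):
    """Convex combination of a Gaussian (local) kernel and a sigmoid (global) kernel."""
    # Gaussian / RBF part: exp(-gamma * ||x - y||^2)
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    k_rbf = np.exp(-gamma * sq_dists)
    # Sigmoid part: tanh(a * <x, y> + c)
    k_sig = np.tanh(a * (X @ Y.T) + c)
    return lam * k_rbf + (1.0 - lam) * k_sig
```

Because scikit-learn's `SVC` accepts a callable kernel that returns the Gram matrix, `SVC(kernel=hybrid_kernel)` would train an SVM with this mix.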

3.
何亮  刘加 《计算机应用》2011,31(8):2083-2086
To improve text-independent speaker recognition, a speaker recognition system based on a linear log-likelihood kernel is proposed. The kernel compresses the spectral feature sequence with a Gaussian mixture model (GMM), converting the similarity between spectral feature sequences into a distance between GMM parameters; from the distance expression, the polarization identity yields the mapping of a spectral feature sequence into a high-dimensional vector space; finally, a support vector machine (SVM) models each target speaker in that space. Experiments on the NIST (National Institute of Standards and Technology) speaker recognition database show that the proposed kernel achieves excellent recognition performance.

4.
Financial time series analysis based on a hybrid-kernel support vector machine    Cited by: 2 (self: 0, others: 2)
The kernel function is a central component of the support vector machine (SVM) and directly affects every aspect of its performance. In financial time series analysis, SVMs mostly use the Gaussian radial basis function (RBF) kernel, followed by the polynomial kernel. Each kernel has its strengths and weaknesses, and combining two or more kernels is an effective route to better learning and generalization. A hybrid of the RBF and polynomial kernels is applied to financial time series prediction and compared against SVMs using each kernel alone; the results show the hybrid kernel performs better.
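A sketch of the setup, assuming one-step-ahead prediction from sliding windows and a weighted mix λ·K_RBF + (1−λ)·K_poly (the paper's actual weighting, window width, and kernel parameters are not given in the abstract):

```python
import numpy as np

def rbf_poly_kernel(X, Y, lam=0.7, gamma=0.1, degree=2, coef0=1.0):
    """Weighted mix of an RBF kernel and a polynomial kernel."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return lam * np.exp(-gamma * sq_dists) + (1.0 - lam) * (X @ Y.T + coef0) ** degree

def sliding_windows(series, width):
    """Embed a 1-D series as (window, next value) pairs for one-step-ahead regression."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X, y
```

The windowed samples `X` and targets `y` would then feed a kernel regressor (e.g. SVR with this Gram matrix) for prediction.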

5.
孙霞  王自强 《计算机工程》2012,38(11):139-142
To address the curse of dimensionality in face recognition, a face recognition algorithm based on adaptive kernel marginal Fisher analysis is proposed. Taking the manifold structure of the images into account, a data-dependent adaptive kernel is defined; kernel marginal Fisher analysis then performs nonlinear dimensionality reduction on the high-dimensional face images, and a least-squares support vector machine classifies in the resulting low-dimensional feature space. Experimental results show that the algorithm outperforms other common face recognition algorithms.

6.
The support vector machine is a relatively recent machine learning method; it satisfies structural risk minimization and copes with high-dimensional feature spaces, so it has been widely applied to biological sequence analysis. Exploiting the characteristics of gene sequences, a new kernel, the position-weighted subsequence kernel, is proposed. It fuses the compositional features of subsequences with their positional information, capturing sequence characteristics fairly fully. Applied to recognizing gene splice sites, an SVM with this kernel identifies splice sites well and achieves higher accuracy than other methods.
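The abstract does not specify the weighting scheme, so the sketch below is a hypothetical position-weighted k-mer kernel: a shared k-mer contributes more when it occurs at nearby positions in the two sequences. This toy score is illustrative only and is not guaranteed positive semi-definite.

```python
def pw_kmer_kernel(s, t, k=3, decay=0.5):
    """Position-weighted k-mer similarity between two strings (illustrative).

    A matching k-mer at positions i in s and j in t contributes decay**|i - j|,
    so both composition and position of subsequences influence the score."""
    total = 0.0
    for i in range(len(s) - k + 1):
        for j in range(len(t) - k + 1):
            if s[i:i + k] == t[j:j + k]:
                total += decay ** abs(i - j)
    return total
```

For a self-comparison of a string with no repeated k-mers, only the diagonal (i == j) terms fire, so the score equals the number of k-mers.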

7.
A kernel clustering and sequence analysis method for unsupervised anomaly detection    Cited by: 2 (self: 0, others: 2)
A kernel function is used to construct a feature space for the data, in which initial cluster centers are selected by combining the kernel with the RA algorithm. After kernel k-means clustering, the clusters are divided into large and small ones; outliers are then separated from the large clusters to uncover the R2L, U2R, and PROBE attacks that appear as low-probability events in the experimental data. Closed sequential patterns are mined within the large clusters to obtain sequence rules describing them, from which the presence of DoS attacks is judged. Algorithm analysis and experimental results show that the method achieves a high detection rate while lowering the false-alarm rate.
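The kernel k-means step relies on the fact that the squared feature-space distance from a sample to a cluster mean expands entirely in kernel evaluations: ||φ(x) − μ_C||² = K(x,x) − (2/|C|)·Σ_{i∈C} K(x,i) + (1/|C|²)·Σ_{i,j∈C} K(i,j). A sketch of just this distance computation, with an assumed RBF kernel (the RA center selection and closed-sequence mining are beyond a short example):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian kernel matrix between row-sample matrices X and Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def feature_space_dist2(x, cluster, kernel=rbf):
    """Squared distance from phi(x) to the mean of phi(cluster), via the kernel trick."""
    x = x[None, :]
    k_xx = kernel(x, x)[0, 0]
    k_xc = kernel(x, cluster)[0].mean()          # (1/|C|) sum_i K(x, i)
    k_cc = kernel(cluster, cluster).mean()       # (1/|C|^2) sum_ij K(i, j)
    return k_xx - 2.0 * k_xc + k_cc
```

Samples whose distance to every large-cluster mean is large are the anomaly candidates.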

8.
Statistical hypothesis tests are applied to kernel evaluation. Using the kernel, the normalized kernel, the centered kernel, and the kernel distance as geometric measures between samples in feature space, seven statistical tests, including the t-test and the F-test, are used to test whether the distribution of these measures differs between same-class sample pairs and different-class sample pairs, thereby reflecting the difference between within-class cohesion and between-class separation in feature space. Kernel-selection experiments on 11 UCI data sets show that the test-based kernel measures match or exceed methods such as kernel alignment and feature-space kernel measures, making them suitable for kernel evaluation; the distributional difference between the two kinds of pairs turns out to lie mainly in the variance. In addition, processing a kernel (normalizing or centering it) changes the feature space and distorts the measurement.
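The within-class versus between-class comparison can be sketched with a plain Welch t statistic (SciPy's `ttest_ind` would serve equally; NumPy is used here to stay self-contained). The toy labels and the RBF Gram matrix in the usage below are assumptions, not the paper's data:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

def kernel_class_separation(K, y):
    """Split off-diagonal kernel values into within-class and between-class groups
    and return the t statistic between the groups (large positive = good kernel)."""
    iu = np.triu_indices(len(y), k=1)            # each unordered pair once
    same = y[iu[0]] == y[iu[1]]
    return welch_t(K[iu][same], K[iu][~same])
```

For a kernel that separates the classes well, within-class kernel values are systematically larger than between-class ones, so the statistic is strongly positive.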

9.
A combined kernel function for support vector machines    Cited by: 11 (self: 0, others: 11)
张冰  孔锐 《计算机应用》2007,27(1):44-46
The kernel function is the core of the support vector machine: different kernels produce different classification results, and kernels are also one of the harder parts of SVM theory to grasp. By introducing a kernel, an SVM implements nonlinear algorithms easily. This paper first examines the essence of kernel functions and their relationship to the spaces they map into, then gives theorems and methods for constructing kernels, dividing them into two broad classes, local kernels and global kernels, and pointing out the differences and respective strengths of each. Finally, a new kernel, a combined kernel function, is proposed and applied in an SVM; face recognition experiments validate its effectiveness.

10.
A word sequence kernel applied to spam filtering    Cited by: 1 (self: 0, others: 1)
Common kernel functions in support vector machines (SVMs) ignore text structure and thereby lose much semantic information. To address this, a word sequence kernel (WSK) with a category-relevance measure is proposed and applied to spam filtering. Text features are first extracted from the emails and their category-relevance measures computed; an SVM is then trained with the word sequence kernel, using the category-relevance measure to set each word's decay factor during training; finally, the emails are classified. Experimental results show that, compared with common kernels and the string kernel, the improved word sequence kernel classifies more accurately, improving the accuracy of spam filtering.
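A sketch of a word sequence kernel of order n: each common n-word subsequence contributes decay**span, so gappy matches count less. In the paper each word's decay comes from its category-relevance measure; here a single global decay is assumed, and brute force over index tuples keeps the example short (the standard implementation uses dynamic programming):

```python
from itertools import combinations

def word_seq_kernel(doc1, doc2, n=2, decay=0.8):
    """Order-n word sequence kernel with a gap-penalizing decay (brute force)."""
    def subseq_weights(doc):
        # Weight of each n-word subsequence: decay ** (span it covers in the doc).
        w = {}
        for idx in combinations(range(len(doc)), n):
            sub = tuple(doc[i] for i in idx)
            span = idx[-1] - idx[0] + 1
            w[sub] = w.get(sub, 0.0) + decay ** span
        return w
    w1, w2 = subseq_weights(doc1), subseq_weights(doc2)
    # Inner product in the (implicit) subsequence feature space.
    return sum(v * w2[k] for k, v in w1.items() if k in w2)
```

Documents sharing many compactly occurring word pairs score high; documents with no shared word pairs score zero.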

11.
During the past few years, several works have been done to derive string kernels from probability distributions. For instance, the Fisher kernel uses a generative model M (e.g. a hidden Markov model) and compares two strings according to how they are generated by M. On the other hand, the marginalized kernels allow the computation of the joint similarity between two instances by summing conditional probabilities. In this paper, we adapt this approach to edit distance-based conditional distributions and we present a way to learn a new string edit kernel. We show that the practical computation of such a kernel between two strings x and x′ built from an alphabet Σ requires (i) learning edit probabilities in the form of the parameters of a stochastic state machine and (ii) calculating an infinite sum over Σ* by resorting to the intersection of probabilistic automata, as done for rational kernels. We show on a handwritten character recognition task that our new kernel outperforms not only state-of-the-art string kernels and string edit kernels but also the standard edit distance used by a neighborhood-based classifier.
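Learning the stochastic edit parameters of the paper's machine is beyond a short sketch, but the simpler baseline it improves upon can be shown: an edit-distance kernel k(x, x′) = exp(−γ·d(x, x′)) over plain Levenshtein distance. Such a kernel is not guaranteed positive semi-definite, which is one motivation for principled constructions like the one above.

```python
import math

def levenshtein(s, t):
    """Standard dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (cs != ct)))     # substitution / match
        prev = cur
    return prev[-1]

def edit_kernel(s, t, gamma=0.5):
    """Edit-distance similarity; NOT guaranteed to be a valid (PSD) kernel."""
    return math.exp(-gamma * levenshtein(s, t))
```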

12.
The kernel method, especially the kernel-fusion method, is widely used in social networks, computer vision, bioinformatics, and other applications. It deals effectively with nonlinear classification problems: it can map linearly inseparable biological sequence data from a low- to a high-dimensional space for more accurate differentiation, enabling kernel methods to predict the structure and function of sequences. The kernel method is therefore significant in solving bioinformatics problems. The various kernels applied in bioinformatics are explained clearly, which can help readers select a proper kernel for a given task. Mass biological sequence data occur in practical applications, and how to use machine learning to extract knowledge from them and to predict the structure and function of biological sequences has always been emphasized in bioinformatics. The kernel method has gradually become an important learning algorithm that is widely used in gene expression and biological sequence prediction. This review focuses on the requirements of classification tasks on biological sequence data. It studies kernel methods and optimization algorithms, including methods of constructing kernel matrices based on the characteristics of biological sequences and the kernel-fusion methods found in multiple kernel learning frameworks.

13.
A novel pattern recognition algorithm called an orthogonal kernel machine (OKM) is presented for the prediction of functional sites in proteins. Two novelties in OKM are that the kernel function is specially designed for measuring the similarity between a pair of protein sequences and the kernels are selected using the orthogonal method. Based on a set of well-recognized orthogonal kernels, this algorithm demonstrates its superior performance compared with other methods. An application of this algorithm to a real problem is presented.

14.
We propose a family of kernels based on the Binet-Cauchy theorem, and its extension to Fredholm operators. Our derivation provides a unifying framework for all kernels on dynamical systems currently used in machine learning, including kernels derived from the behavioral framework, diffusion processes, marginalized kernels, kernels on graphs, and the kernels on sets arising from the subspace angle approach. In the case of linear time-invariant systems, we derive explicit formulae for computing the proposed Binet-Cauchy kernels by solving Sylvester equations, and relate the proposed kernels to existing kernels based on cepstrum coefficients and subspace angles. We show efficient methods for computing our kernels which make them viable for the practitioner. Besides their theoretical appeal, these kernels can be used efficiently in the comparison of video sequences of dynamic scenes that can be modeled as the output of a linear time-invariant dynamical system. One advantage of our kernels is that they take the initial conditions of the dynamical systems into account. As a first example, we use our kernels to compare video sequences of dynamic textures. As a second example, we apply our kernels to the problem of clustering short clips of a movie. Experimental evidence shows superior performance of our kernels. Parts of this paper were presented at SYSID 2003 and NIPS 2004.

15.
Kernel-based methods are effective for object detection and recognition. However, the computational cost when using kernel functions is high, except when using linear kernels. To realize fast and robust recognition, we apply normalized linear kernels to local regions of a recognition target, and the kernel outputs are integrated by summation. This kernel is referred to as a local normalized linear summation kernel. Here, we show that kernel-based methods that employ local normalized linear summation kernels can be computed by a linear kernel of local normalized features. Thus, the computational cost of the kernel is nearly the same as that of a linear kernel and much lower than that of radial basis function (RBF) and polynomial kernels. The effectiveness of the proposed method is evaluated in face detection and recognition problems, and we confirm that our kernel provides higher accuracy with lower computational cost than RBF and polynomial kernels. In addition, our kernel is also robust to partial occlusion and shadows on faces since it is based on the summation of local kernels.
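The central identity, that a sum of normalized linear kernels over local regions equals a single linear kernel on concatenated locally normalized features, can be checked directly. Equal-width blocks stand in here for the local image regions (an assumption; blocks must be nonzero for the normalization to be defined):

```python
import numpy as np

def local_norm_features(x, n_regions):
    """L2-normalize each local block of x, then concatenate."""
    return np.concatenate([b / np.linalg.norm(b) for b in np.array_split(x, n_regions)])

def local_norm_sum_kernel(x, y, n_regions):
    """Sum over regions of the normalized linear kernel <a, b> / (||a|| ||b||)."""
    return sum((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
               for a, b in zip(np.array_split(x, n_regions), np.array_split(y, n_regions)))
```

Because the right-hand side of the identity is a plain linear kernel of precomputable features, evaluation cost matches a linear kernel rather than RBF or polynomial kernels, which is the abstract's point.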

16.
General-purpose computation on the graphics processing unit (GPU) is rapidly entering various scientific and engineering fields, and many applications are being ported to GPUs for better performance, with various optimizations, frameworks, and tools being developed for effective GPU programming. As part of communication and computation optimization for GPUs, this paper proposes and implements an optimization called kernel coalescing that further enhances GPU performance and also reduces CPU-to-GPU communication time. With the kernel coalescing methods proposed in this paper, kernel launch overheads are reduced by coalescing concurrent kernels, and data transfers are reduced when intermediate data are generated and used among kernels. Computation on the device (GPU) is optimized by tuning the number of blocks and threads launched to the architecture. Block-level kernel coalescing gives a prominent performance improvement on devices without support for concurrent kernels. Thread-level kernel coalescing is better than block-level coalescing when the grid structure (i.e., the number of blocks and threads) is not optimal for the device architecture and so underutilizes device resources. The two methods perform similarly when the number of threads per block is approximately the same across kernels and the total number of threads across blocks fills the streaming multiprocessor (SM) capacity of the device. The multi-clock-cycle coalescing method can be chosen to coalesce more than two concurrent kernels that together or individually exceed the thread capacity of the device; if the kernels have lightweight per-thread computations, it gives better performance than thread-level and block-level coalescing.
If the kernels to be coalesced are a combination of compute-intensive and memory-intensive kernels, warp interleaving gives higher device occupancy and improves performance. For micro-benchmark 1 considered in this paper, multi-clock-cycle kernel coalescing gave 10–40% and 80–92% improvement over separate kernel launches, without and with shared input and intermediate data among the kernels, respectively, on a Fermi-architecture device (GTX 470). A nearest-neighbor (NN) kernel from the Rodinia benchmark, coalesced with itself using thread-level coalescing and warp interleaving, gave 131.9% and 152.3% improvement over separate kernel launches and 39.5% and 36.8% improvement over block-level coalescing, respectively. Copyright © 2013 John Wiley & Sons, Ltd.

17.
Multiple kernel learning methods    Cited by: 56 (self: 5, others: 51)
Multiple kernel learning is a current hot topic in kernel-based machine learning. Kernel methods solve nonlinear pattern-analysis problems effectively, but in complex situations, such as heterogeneous or irregular data, very large sample sizes, or unevenly distributed samples, a kernel machine built on a single kernel cannot meet practical application needs, so combining several kernels to obtain better results becomes a natural choice. Organized by how the multiple kernels are formed, this paper systematically surveys the construction theory of multi-kernel methods from three angles, composite kernels, multi-scale kernels, and infinite kernels; it analyzes the characteristics and shortcomings of representative multiple kernel learning methods, summarizes their respective application domains, and distills directions for further research.

18.
Kernel methods provide high performance in a variety of machine learning tasks. However, the success of kernel methods is heavily dependent on the selection of the right kernel function and proper setting of its parameters. Several sets of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good error-rate performance, these kernel functions have only one parameter, chosen from a small set of integers, which greatly facilitates kernel selection. Two sets of orthogonal polynomial kernel functions, namely the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of some orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets are performed for better illustration and comparison of these kernel functions in classification and regression scenarios. In general, there are differences among these orthogonal polynomial kernels in terms of accuracy, and most orthogonal polynomial kernels can match commonly used kernels such as the polynomial kernel, the Gaussian kernel, and the wavelet kernel. Compared with these universal kernels, the orthogonal polynomial kernels each have a unique, easily optimized parameter, and they store statistically significantly fewer support vectors in support vector classification. The newly presented kernels obtain better generalization performance in both classification and regression tasks.
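The triangular modifications themselves are not reproduced here; the sketch below shows only the plain orthogonal-polynomial construction these kernels build on: with feature map [T_0(x), …, T_n(x)] per component, the kernel is an inner product and hence positive semi-definite, and the polynomial order n is the single integer parameter the paragraph refers to. Inputs are assumed scaled to [−1, 1]:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_features(x, n):
    """[T_0(x), ..., T_n(x)] evaluated per component, concatenated."""
    # chebval with an identity coefficient matrix evaluates every T_k at once.
    return np.concatenate([C.chebval(xi, np.eye(n + 1)) for xi in np.atleast_1d(x)])

def chebyshev_kernel(x, z, n=4):
    """Plain Chebyshev polynomial kernel: an explicit inner product of feature
    maps, hence positive semi-definite by construction."""
    return chebyshev_features(x, n) @ chebyshev_features(z, n)
```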

19.
In this paper we extend the conformal method of modifying a kernel function to improve the performance of Support Vector Machine classifiers [14, 15]. The kernel function is conformally transformed in a data-dependent way by using the information of Support Vectors obtained in primary training. We further investigate the performances of modified Gaussian Radial Basis Function and Polynomial kernels. Simulation results for two artificial data sets show that the method is very effective, especially for correcting bad kernels. This revised version was published online in August 2006 with corrections to the Cover Date.
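The conformal modification referenced above scales the kernel as K̃(x, x′) = c(x)·c(x′)·K(x, x′), with the factor c built from the support vectors found in primary training. A sketch assuming Gaussian bumps centred on the support vectors for c (the bump width `tau` is an assumption); positive conformal scaling preserves positive semi-definiteness:

```python
import numpy as np

def conformal_factor(X, support_vectors, tau=1.0):
    """c(x) = sum over support vectors of a Gaussian bump centred on each SV."""
    sq_dists = ((X[:, None, :] - support_vectors[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * tau ** 2)).sum(axis=1)

def conformal_kernel(K, c_x, c_y):
    """K~(x, y) = c(x) c(y) K(x, y); equivalent to D K D with D = diag(c)."""
    return np.outer(c_x, c_y) * K
```

The effect is to magnify the feature-space metric near the decision boundary (where support vectors concentrate), which is how the method "corrects" a bad kernel.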
