Similar Documents
 20 similar documents retrieved (search time: 171 ms)
1.
Building on the maximum-margin linear classifier and the idea of contracting closed convex hulls, this work addresses binary classification by using closed-convex-hull contraction to turn linearly nonseparable problems into linearly separable ones. The idea is then extended to multi-class problems, yielding a family of multi-class algorithms based on closed-convex-hull contraction. The method has a clear geometric interpretation and, to some extent, avoids the overly complex objective functions of earlier multi-class methods; the kernel trick extends it further to nonlinear classification.
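The hull-contraction step can be sketched as follows: each class's points are pulled toward their centroid by a factor λ, so the closed convex hull shrinks until the classes no longer overlap. A minimal sketch, assuming a simple uniform shrinking factor and made-up 1-D data:

```python
import numpy as np

def shrink_hull(X, lam):
    """Contract a point set toward its centroid by a factor lam in (0, 1].

    The convex hull shrinks around the centroid, so two overlapping
    classes can become linearly separable after contraction.
    """
    c = X.mean(axis=0)
    return c + lam * (X - c)

# Two overlapping 1-D classes: hulls [0, 2] and [1, 3]
A = np.array([[0.0], [2.0]])
B = np.array([[1.0], [3.0]])
A_s, B_s = shrink_hull(A, 0.3), shrink_hull(B, 0.3)  # hulls [0.7,1.3] and [1.7,2.3]
```

After contraction the hulls are disjoint, so a maximum-margin separator between them exists.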

2.
Kernel methods can cluster data in a high-dimensional feature space, but they lack an intuitive description of the cluster centers and results in the original input space. A kernel self-organizing map (SOM) competitive clustering algorithm is proposed. It uses kernel properties to derive the winning-neuron and weight-update rules of SOM, while the competitive learning mechanism remains in the original input space. This both addresses the degraded classification ability when the input distribution is highly nonlinear, and avoids the problem of Donald's algorithm [1], in which the feature-space winning neuron has no preimage in the original input space, so the clustering results cannot be interpreted with visualization techniques. Experiments show that the proposed kernel SOM competitive clustering algorithm can obtain better results than standard SOM on some data sets.
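The kernel-side winner selection can be illustrated with a sketch: each neuron's weight vector lives in feature space as a kernel expansion over the training points, and the winner is found with the kernel trick, without ever forming φ(x). The RBF kernel, the expansion-weight representation `A`, and the data are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def feature_space_winner(x, X, A, gamma=1.0):
    """Winning neuron for input x, computed entirely via the kernel trick.

    Neuron j is represented in feature space as w_j = sum_i A[j,i] phi(X[i]);
    ||phi(x) - w_j||^2 = k(x,x) - 2 a_j.k_x + a_j K a_j, and k(x,x)=1 for RBF.
    """
    K = rbf_kernel(X, X, gamma)
    kx = rbf_kernel(x[None, :], X, gamma)[0]          # k(x, x_i) for all i
    dists = 1.0 - 2.0 * A @ kx + np.einsum('ji,il,jl->j', A, K, A)
    return int(np.argmin(dists))
```

The competitive step stays a simple argmin, exactly as in input-space SOM, only with kernel-expressed distances.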

3.
沙秀艳, 辛杰. Computer Engineering (《计算机工程》), 2011, 37(10): 187-188
Traditional clustering algorithms easily get trapped in local extrema and classify poorly when the data are not linearly separable. A fuzzy kernel clustering method for image segmentation based on maximum entropy is therefore proposed. A maximum-entropy algorithm first produces a preliminary segmentation of the original image to obtain initial cluster centers; a Mercer kernel then maps samples from the input space to a high-dimensional feature space, where the image segmentation is performed. Experimental results show that the method reduces the number of iterations, stabilizes the classification results, and separates the target from the background more effectively.
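A minimal kernel fuzzy clustering sketch in the spirit of this entry, assuming an RBF kernel with centers kept in the input space; initialization here is simply the first c points rather than the paper's maximum-entropy presegmentation:

```python
import numpy as np

def kfcm(X, c=2, m=2.0, gamma=1.0, iters=50):
    """Kernel fuzzy c-means with an RBF kernel; cluster centers are kept
    in the input space, so results stay directly interpretable."""
    V = X[:c].copy()                                   # simple deterministic init
    for _ in range(iters):
        K = np.exp(-gamma * ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1))
        d2 = np.maximum(1.0 - K, 1e-12)                # feature-space distance (/2)
        U = (1.0 / d2) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]         # kernel-weighted center update
    return U, V
```

Hard labels are obtained by taking the maximal membership per row of `U`.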

4.
孔锐, 张冰. Journal of Computer Applications (《计算机应用》), 2005, 25(6): 1327-1329
This paper examines Kernel Fisher Discriminant Analysis (KFDA) and proposes a high-performance KFDA-based multi-class classification algorithm. For multi-class classification, the training samples are first mapped by a nonlinear function into a high-dimensional kernel space, where a KFDA subspace is built; in this space, the differences between samples of different classes grow while samples of the same class cluster together, so a simple nearest-neighbor rule suffices for multi-class classification. Experimental results show that the algorithm speeds up both classifier training and classification while maintaining accuracy.
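A compact two-class numpy sketch of the KFDA idea (the regularizer, RBF width, and toy concentric-ring data are assumptions, and classification here is by nearest class mean along the discriminant rather than the paper's full nearest-neighbor rule in the subspace):

```python
import numpy as np

def rbf(A, B, g=1.0):
    return np.exp(-g * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def kfda_fit(X, y, g=1.0, reg=1e-3):
    """Kernel Fisher discriminant (two classes): expansion coefficients
    alpha of the discriminant direction in feature space."""
    K = rbf(X, X, g)
    m0, m1 = K[:, y == 0].mean(1), K[:, y == 1].mean(1)
    N = np.zeros_like(K)
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - 1.0 / nc) @ Kc.T      # within-class scatter
    return np.linalg.solve(N + reg * np.eye(len(X)), m1 - m0)

def kfda_predict(Xnew, X, y, alpha, g=1.0):
    """Project onto the discriminant; assign the nearer class mean."""
    p = rbf(Xnew, X, g) @ alpha
    mu = [(rbf(X[y == c], X, g) @ alpha).mean() for c in (0, 1)]
    return (np.abs(p - mu[1]) < np.abs(p - mu[0])).astype(int)
```

On linearly nonseparable data such as two concentric rings, the discriminant found in the kernel space separates the classes.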

5.
Pattern recognition often requires classifying patterns, and classifying linearly separable patterns is the most basic case. The common linear classification algorithm is LMSE, which is essentially a geometric method and generally performs satisfactorily when the patterns are linearly separable. However, LMSE is not the simplest or most efficient linear classifier. Drawing on the single-layer perceptron from neural networks, which can divide the input space into two regions to classify input vectors, this paper proposes a new algorithm for partitioning linearly separable patterns with a single-layer perceptron. The principle and procedure of the algorithm are described, the method is implemented in MATLAB, and two different linearly separable pattern-partitioning problems are solved.
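The single-layer perceptron rule this entry describes can be sketched as follows (in Python rather than the paper's MATLAB; the AND-style toy data are illustrative):

```python
import numpy as np

def perceptron(X, y, lr=1.0, epochs=100):
    """Single-layer perceptron: learns a hyperplane w.x + b = 0 that
    splits the input space into two half-space regions (y in {-1, +1})."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:     # misclassified -> update
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:                    # converged on separable data
            break
    return w, b
```

By the perceptron convergence theorem, the loop terminates in finitely many updates whenever the patterns are linearly separable.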

6.
In multi-class problems, the quality of the classification algorithm directly determines the quality of the final results. Among existing multi-class algorithms, those based on support vector machines offer the best overall performance; however, even these face common multi-class issues such as unclassifiable regions and low efficiency. To address them, this paper proposes an improved binary-tree SVM multi-class algorithm that considers both the distance between two classes and their distributions when assessing separability, and builds the tree structure by splitting off the most easily separable class first. Tests on several data sets show that the method resolves the unclassifiable-region problem and improves both efficiency and accuracy, handling real-world multi-class problems better.
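The "most separable class first" ordering can be approximated with a simple criterion; the measure below (centre distance divided by summed spread) is a stand-in I chose for illustration, not the paper's exact distance-and-distribution criterion:

```python
import numpy as np

def peel_order(X, y):
    """Order in which classes would be split off a binary tree:
    the class most separable from all remaining classes goes first."""
    classes = list(np.unique(y))
    means = {c: X[y == c].mean(0) for c in classes}
    spread = {c: X[y == c].std(0).mean() + 1e-12 for c in classes}
    order = []
    while len(classes) > 1:
        sep = {c: min(np.linalg.norm(means[c] - means[k]) / (spread[c] + spread[k])
                      for k in classes if k != c)
               for c in classes}
        best = max(sep, key=sep.get)     # most separable remaining class
        order.append(best)
        classes.remove(best)
    order.append(classes[0])
    return order
```

Each peeled class would then get a one-vs-rest SVM node against everything below it in the tree.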

7.
To strengthen the nonlinear classification ability of the nearest-neighbor convex hull classifier, a kernel-based nearest-neighbor convex hull classification algorithm is proposed. The kernel method first maps the input space to a high-dimensional feature space, where the nearest-neighbor convex hull classifier is applied. This classifier uses the distance from a test point to each class's convex hull as the similarity measure and assigns the point by the nearest-neighbor rule. Face recognition experiments confirm that this fusion of the kernel method with the nearest-neighbor convex hull classifier is feasible and effective.
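The point-to-hull distance at the heart of this classifier is a small quadratic program over the simplex; a sketch using Frank-Wolfe (Gilbert-style) iterations in the input space, with the kernelized version left aside and the toy squares made up for illustration:

```python
import numpy as np

def dist_to_hull(P, q, iters=500):
    """Distance from q to conv(rows of P), by Frank-Wolfe iterations
    on the simplex weights (converges at rate O(1/t))."""
    w = np.full(len(P), 1.0 / len(P))       # simplex weights over hull points
    for t in range(iters):
        x = P.T @ w                         # current point inside the hull
        g = P @ (x - q)                     # gradient w.r.t. the weights
        s = np.zeros_like(w)
        s[np.argmin(g)] = 1.0               # best vertex of the linear subproblem
        w += 2.0 / (t + 2.0) * (s - w)      # standard FW step size
    return float(np.linalg.norm(P.T @ w - q))

def nn_hull_classify(q, class_point_sets):
    """Assign q to the class whose convex hull is nearest."""
    return int(np.argmin([dist_to_hull(P, q) for P in class_point_sets]))
```

Replacing the Euclidean distances with kernel-trick expressions gives the nonlinear variant the paper proposes.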

8.
Objective: To improve the performance of neutrosophic fuzzy C-means clustering on non-convex, irregular data and its segmentation of noise-corrupted images, a kernel-space neutrosophic fuzzy C-means clustering algorithm is proposed. Method: A kernel function satisfying Mercer's condition is introduced; a nonlinear transform maps input patterns that are linearly nonseparable in the low-dimensional space into a linearly separable high-dimensional feature space, where neutrosophic fuzzy clustering segmentation is performed. Results: Segmentation tests on a large number of images corrupted with various additive and multiplicative noises show that the kernel-space algorithm improves the noise robustness and classification performance of existing methods, raising peak signal-to-noise ratio by at least 0.8 dB. Conclusion: The algorithm segments effectively, is robust, and suits the needs of medical and remote-sensing image processing.

9.
Zero-shot learning addresses classification when samples of some classes are missing. Previous embedding-based zero-shot algorithms usually build the embedding space from the seen classes only, which inevitably overfits them at test time. This paper therefore proposes a multi-label classification loss based on inter-class semantic similarity: while the embedding space is being built, the loss guides the model to also consider unseen classes that are semantically similar to the current seen class, transferring the similarity structure of the semantic space into the embedding space where classification is finally performed. Moreover, most existing zero-shot methods take deep image features directly as input, ignoring semantic information during feature extraction; this work instead adopts Swin Transformer as the backbone, applying self-attention to raw images to obtain semantically informed visual features. Extensive experiments on three zero-shot learning benchmark data sets achieve the best harmonic-mean accuracy compared with current state-of-the-art methods.

10.
Information & Computer (《信息与电脑》), 2021, (1): 27-29
The SK algorithm on kernelized scaled convex hulls (the KSK-S algorithm) runs efficiently and classifies accurately, handling nonlinearly separable problems more effectively and with a clear geometric interpretation. This paper therefore applies the SK algorithm with single-class convex hull scaling to the specific problem of imbalanced classification. The algorithm only needs to change the scale factor of the majority-class convex hull to alter the sample distribution and achieve correct classification, making the method simpler and more intuitive.

11.
Optimizing the kernel in the empirical feature space    Cited by: 17 (self-citations: 0, by others: 17)
In this paper, we present a method of kernel optimization by maximizing a measure of class separability in the empirical feature space, a Euclidean space in which the training data are embedded in such a way that the geometrical structure of the data in the feature space is preserved. Employing a data-dependent kernel, we derive an effective kernel optimization algorithm that maximizes the class separability of the data in the empirical feature space. It is shown that there exists a close relationship between the class separability measure introduced here and the alignment measure defined recently by Cristianini. Extensive simulations are carried out which show that the optimized kernel is more adaptive to the input data, and leads to a substantial, sometimes significant, improvement in the performance of various data classification algorithms.
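The class separability measure can be computed directly from the kernel matrix: both tr(S_b) and tr(S_w) in the empirical feature space reduce to block sums of kernel entries (a standard identity; the RBF kernel and toy data below are illustrative, not the paper's data-dependent kernel):

```python
import numpy as np

def class_separability(K, y):
    """tr(S_b) / tr(S_w) in the kernel-induced feature space,
    computed purely from block sums of the kernel matrix K."""
    n = len(y)
    within = sum(K[np.ix_(y == c, y == c)].sum() / (y == c).sum()
                 for c in np.unique(y))
    tr_sw = np.trace(K) - within        # within-class scatter trace
    tr_sb = within - K.sum() / n        # between-class scatter trace
    return tr_sb / tr_sw

def rbf_gram(X, g=1.0):
    return np.exp(-g * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

The measure is larger when labels agree with the kernel-induced cluster structure, which is what a kernel optimizer maximizes.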

12.
Kernel-based fuzzy K-means clustering of non-convex data    Cited by: 4 (self-citations: 4, by others: 0)
The fuzzy K-means clustering algorithm is combined with a kernel function, and clustering is performed by a kernel-based fuzzy K-means algorithm. The kernel implicitly defines a nonlinear transform that maps the data into a high-dimensional feature space, increasing their separability. The algorithm overcomes the inability of standard fuzzy K-means to cluster non-convex data correctly.

13.
Recent research has shown the effectiveness of rich feature representations for tasks in natural language processing (NLP). However, exceedingly large numbers of features do not always improve classification performance: they may contain redundant information, lead to noisy feature representations, and render the learning algorithms intractable. In this paper, we propose a supervised embedding framework that modifies the relative positions between instances to increase the compatibility between the input features and the output labels, while preserving the local distribution of the original data in the embedded space. The proposed framework supports a flexible balance between preserving intrinsic geometry and enhancing class separability for both interclass and intraclass instances. It takes into account characteristics of linguistic features by using an inner product-based optimization template. (Dis)similarity features, also known as empirical kernel mapping, are employed to enable computationally tractable processing of extremely high-dimensional input, and to handle nonlinearities in embedding generation when necessary. Evaluated on two NLP tasks with six data sets, the proposed framework provides better classification performance than the support vector machine without using any dimensionality reduction technique. It also generates embeddings with better class discriminability than many existing embedding algorithms.
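The (dis)similarity feature / empirical kernel mapping that the paper relies on is simply representing each instance by its kernel values against a fixed reference set; a minimal sketch, assuming an RBF kernel for illustration:

```python
import numpy as np

def empirical_kernel_map(X, R, g=1.0):
    """Represent each row of X by its kernel similarities to a fixed
    reference (prototype) set R -- the empirical kernel mapping."""
    d2 = ((X[:, None, :] - R[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)
```

The output has one column per reference point, so even extremely high-dimensional inputs collapse to a tractable, fixed-width representation.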

14.
李琼, 陈利, 王维虎. Microcomputer Development (《微机发展》), 2014, (2): 205-208
Handwritten digit recognition is a research focus of high practical value in image processing and pattern recognition. To speed up recognition while keeping high accuracy, a fast SVM-based handwritten digit recognition method is proposed. The method determines the optimal SVM kernel parameter from the separability strength of the classes in feature space, then quickly trains an SVM classifier to recognize the handwritten digits. Because the separability strength is computed by a simple iteration whose runtime is far below that of training the SVM classifiers required by traditional parameter-optimization methods, parameter selection is greatly shortened and training correspondingly faster, accelerating recognition while preserving good classification accuracy. Experiments on the MNIST handwritten digit database show that the algorithm is feasible and effective.

15.
Feature selection with kernel class separability    Cited by: 2 (self-citations: 0, by others: 2)
Classification can often benefit from efficient feature selection. However, the presence of linearly nonseparable data, quick-response requirements, small sample sizes and noisy features makes feature selection quite challenging. In this work, a class separability criterion is developed in a high-dimensional kernel space, and feature selection is performed by maximizing this criterion. To make this feature selection approach work, the issues of automatic kernel parameter tuning, numerical stability, and regularization for multi-parameter optimization are addressed. Theoretical analysis uncovers the relationship of this criterion to the radius-margin bound of SVMs, to KFDA, and to the kernel alignment criterion, providing more insight into using this criterion for feature selection. The criterion is applied to a variety of selection modes with different search strategies. An extensive experimental study demonstrates its efficiency in delivering fast and robust feature selection.

16.
Input space versus feature space in kernel-based methods    Cited by: 21 (self-citations: 0, by others: 21)
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
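The preimage question can be illustrated with the standard fixed-point iteration for RBF kernels, in the style of the kernel-PCA denoising literature (the coefficients and data below are illustrative; the scheme assumes the iteration weights never all vanish):

```python
import numpy as np

def rbf_preimage(X, coeffs, g=1.0, iters=100):
    """Fixed-point iteration for an approximate preimage z of
    Psi = sum_i coeffs[i] * phi(X[i]) under an RBF kernel."""
    z = (coeffs[:, None] * X).sum(0) / coeffs.sum()   # start at the weighted mean
    for _ in range(iters):
        w = coeffs * np.exp(-g * ((X - z) ** 2).sum(1))
        z = (w[:, None] * X).sum(0) / w.sum()         # kernel-reweighted mean
    return z
```

When Psi is the image of an actual input point, the iteration recovers that point exactly; for general feature-space vectors it returns the approximate preimage the paper discusses.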

17.
Autoassociators are a special type of neural network which, by learning to reproduce a given set of patterns, grasp the underlying concept that is useful for pattern classification. In this paper, we present a novel nonlinear model referred to as kernel autoassociators based on kernel methods. While conventional nonlinear autoassociation models emphasize searching for nonlinear representations of input patterns, a kernel autoassociator takes a kernel feature space as the nonlinear manifold, and places emphasis on the reconstruction of input patterns from the kernel feature space. Two methods are proposed to address the reconstruction problem, using linear and multivariate polynomial functions, respectively. We apply the proposed model to novelty detection with or without novelty examples and study it on the promoter detection and sonar target recognition problems. We also apply the model to multiclass classification problems including wine recognition, glass recognition, handwritten digit recognition, and face recognition. The experimental results show that, compared with conventional autoassociators and other recognition systems, kernel autoassociators can provide better or comparable performance for concept learning and recognition in various domains.

18.
A kernel-based clustering algorithm is proposed and applied to intrusion detection, yielding a new detection model. Using a Mercer kernel, the samples are mapped from the input space into a high-dimensional feature space, where clustering is performed. The kernel mapping brings out features that were previously hidden, enabling better clustering. Initial cluster centers are chosen by a data-segmentation method. This clustering method improves considerably on classical clustering algorithms, converging faster and clustering more accurately. Simulation results confirm the feasibility and effectiveness of the method.

19.
This paper presents a novel algorithm to optimize the Gaussian kernel for pattern classification tasks, where it is desirable to have well-separated samples in the kernel feature space. We propose to optimize the Gaussian kernel parameters by maximizing a classical class separability criterion, and the problem is solved through a quasi-Newton algorithm by making use of a recently proposed decomposition of the objective criterion. The proposed method is evaluated on five data sets with two kernel-based learning algorithms. The experimental results indicate that it achieves the best overall classification performance, compared with three competing solutions. In particular, the proposed method provides a valuable kernel optimization solution in the severe small sample size scenario.
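The paper's quasi-Newton optimization can be mimicked crudely with a grid search over the Gaussian width, maximizing a trace-ratio separability criterion; the criterion, the grid, and the concentric-ring toy data are stand-ins for illustration, not the paper's exact objective:

```python
import numpy as np

def separability(X, y, g):
    """tr(S_b)/tr(S_w) of the RBF-kernel feature space, from kernel sums."""
    K = np.exp(-g * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    within = sum(K[np.ix_(y == c, y == c)].sum() / (y == c).sum()
                 for c in np.unique(y))
    return (within - K.sum() / len(y)) / max(np.trace(K) - within, 1e-12)

def best_gamma(X, y, grid):
    # crude stand-in for the quasi-Newton search: best value on a grid
    return max(grid, key=lambda g: separability(X, y, g))
```

Very small widths blur all points together and very large widths make every point its own island; the criterion peaks at an intermediate width matched to the class structure.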

20.
An algorithm for optimizing data clustering in feature space is studied in this work. Using the graph Laplacian and an extreme learning machine (ELM) mapping technique, we develop an optimal weight matrix W for feature mapping. This work explicitly maps the original data into an optimal feature space for clustering, which further increases the separability of the original data in the feature space while keeping points of the same cluster closely grouped. Our method, which can be easily implemented, obtains better clustering results than popular clustering algorithms, such as k-means on the original data, kernel clustering, spectral clustering, and ELM k-means, on three UCI real-data benchmarks (the Iris data, the Wisconsin breast cancer database, and the Wine database).
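A stripped-down version of this pipeline, keeping only a random ELM-style feature map followed by plain k-means; the paper's graph-Laplacian optimization of the weight matrix W is omitted, and the hidden-layer width is an arbitrary choice:

```python
import numpy as np

def elm_map(X, n_hidden=50, seed=0):
    """ELM-style random feature map: fixed random weights, tanh activation."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return np.tanh(X @ W + b)

def kmeans_labels(H, k, iters=50):
    """Lloyd's k-means on the mapped features, deterministic init."""
    C = H[:k].copy()                      # first k rows as initial centers
    for _ in range(iters):
        lbl = ((H[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        C = np.array([H[lbl == j].mean(0) if (lbl == j).any() else C[j]
                      for j in range(k)])
    return lbl
```

Clustering happens entirely in the explicit mapped space, which is the point the paper stresses in contrast to implicit kernel clustering.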


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号