Similar Documents
 18 similar documents found (search time: 171 ms)
1.
Kernel-kNN: A Kernel k-Nearest-Neighbor Algorithm Based on an Information-Energy Measure   Total citations: 2 (self-citations: 0, other citations: 2)
刘松华  张军英  许进  贾宏恩 《自动化学报》2010,36(12):1681-1688
A kernel k-nearest-neighbor algorithm is proposed. First, an information-energy measure for nearest-neighbor learning is given; it overcomes the difficulty that high-dimensional data are poorly represented by conventional distance metrics and improves the consistency between class similarity and distance. On this basis, the traditional kNN is extended to a nonlinear form, and semidefinite programming is used to learn a globally optimal metric matrix. The main features of the algorithm are that it adapts well to high-dimensional data and effectively improves the classification performance of kNN. Experiments and analysis on multiple data sets show that, compared with the traditional kNN, the proposed Kernel-kNN achieves comparable classification accuracy on low-dimensional data and markedly better performance on high-dimensional data.
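The common core of any kernelized kNN is replacing the Euclidean distance with the distance induced by a kernel in feature space. A minimal sketch, assuming an RBF kernel as a stand-in (the paper's information-energy measure and SDP-learned metric matrix are not reproduced here):

```python
import math
from collections import Counter

def kernel_induced_dist(x, y, k):
    # ||phi(x) - phi(y)||^2 = K(x,x) - 2 K(x,y) + K(y,y)
    return math.sqrt(max(0.0, k(x, x) - 2.0 * k(x, y) + k(y, y)))

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_knn(train, labels, query, k_neighbors=3, kernel=rbf):
    # rank training points by kernel-induced distance, vote among the k nearest
    ranked = sorted(range(len(train)),
                    key=lambda i: kernel_induced_dist(train[i], query, kernel))
    votes = Counter(labels[i] for i in ranked[:k_neighbors])
    return votes.most_common(1)[0][0]
```

With the plain RBF kernel, K(x,x) = 1, so the induced distance is monotone in the Euclidean distance; the payoff of the kernelized form comes from learned or non-stationary kernels such as the one the paper fits by semidefinite programming.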

2.
The mutual k-nearest-neighbor algorithm (mKnnc) is an improvement of the k-nearest-neighbor classifier (Knn): it applies the mutual k-nearest-neighbor principle to remove noise from the training samples and from the k nearest neighbors, thereby improving classification. However, the mutual k-nearest-neighbor principle ignores class labels when removing noise, so genuinely valid data may be discarded as noise, degrading classification. The mutual k-nearest-neighbor algorithm based on class-subspace distance weighting takes the neighbors' distance weights into account: it eliminates the influence of redundant or irrelevant attributes on the similarity measure that nearest-neighbor classification depends on, while also removing noisy points among the neighbors. Experimental results on public UCI data sets verify the effectiveness of the algorithm.
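The mutual k-nearest-neighbor principle itself is easy to state: x and y are mutual neighbors when each lies in the other's k-nearest-neighbor list. A minimal, label-aware filtering sketch (the paper's class-subspace distance weighting is not reproduced; plain Euclidean distance is assumed):

```python
import math

def knn_indices(points, i, k):
    # indices of the k nearest points to points[i], excluding i itself
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: math.dist(points[i], points[j]))
    return set(order[:k])

def mutual_knn_filter(points, labels, k):
    # keep a sample only if it has at least one mutual k-nearest neighbor
    # of the SAME class; isolated or cross-class points are treated as noise
    neigh = [knn_indices(points, i, k) for i in range(len(points))]
    kept = []
    for i in range(len(points)):
        mutual_same = [j for j in neigh[i]
                       if i in neigh[j] and labels[j] == labels[i]]
        if mutual_same:
            kept.append(i)
    return kept
```

Making the filter label-aware, as here, is exactly the refinement the abstract argues for: a mislabeled point sitting inside another class's cluster has mutual neighbors, but none of the same class.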

3.
A K-Modes Clustering Algorithm Based on a New Distance Measure   Total citations: 4 (self-citations: 1, other citations: 4)
The traditional K-Modes clustering algorithm uses simple 0-1 matching to compute the distance between two values of the same categorical attribute, without adequately considering their similarity. To address this, a new distance measure based on rough set theory is proposed. When measuring the difference between two values of the same categorical attribute, it overcomes the shortcoming of simple 0-1 matching by considering both the values' own similarity and the discriminating power of the other related categorical attributes. The proposed measure is applied to the traditional K-Modes algorithm. Experimental comparison with K-Modes variants based on other distance measures shows that the new measure is more effective.
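For reference, the baseline that the paper improves on is easy to state: the 0-1 matching dissimilarity counts mismatched attributes, and a cluster's "mode" is the per-attribute majority value. A minimal sketch:

```python
from collections import Counter

def simple_matching(x, y):
    # 0-1 matching: distance = number of attributes whose values differ
    return sum(1 for a, b in zip(x, y) if a != b)

def mode_of(cluster):
    # per-attribute most frequent value among the cluster's records
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))
```

The paper's criticism is visible directly in `simple_matching`: any two distinct values of an attribute contribute the same distance 1, regardless of how similar they are under the other attributes.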

4.
When the nearest feature space embedding (NFSE) method selects the nearest feature spaces during training, the conventional Euclidean metric causes within-class scatter and between-class scatter to change synchronously; at test time, the nearest-neighbor rule also uses the Euclidean metric, yet straight-line distances between samples in high-dimensional space tend to concentrate. Both effects lower the recognition rate. To solve this, a nearest feature space embedding method combining a nonlinear distance and an angle measure is proposed. In the training stage, a nonlinear distance metric is used to select the nearest feature spaces, so that within-class scatter changes far more slowly than between-class scatter; in the transformed space, samples of the same class are therefore closer and samples of different classes farther apart. In the matching stage, a nearest-neighbor classifier combined with an angle measure exploits the relationship between sample similarity and the angle between samples, which is better suited to classification in high-dimensional spaces. Simulations show that the proposed method generally outperforms the compared algorithms.

5.
《计算机科学与探索》2019,(7):1165-1173
To address the currently low classification performance on categorical (symbolic) data, a classification method based on spatial-correlation analysis is proposed, which mines the possible spatial structural relationships between attribute values and labels. The method first expands the features of categorical data by one-hot encoding, and then, based on mutual information and conditional entropy, defines a spatial-relation representation for categorical data. On this basis, combined with support vector machine (SVM) and K-nearest neighbor (KNN) classifiers respectively, two algorithms are proposed: an SVM classification algorithm based on space correlation analysis (SCA_SVM) and a KNN classification algorithm based on space correlation analysis (SCA_KNN). The method both reflects the association between attribute values and labels and effectively measures the distance or difference between attribute values. Experimental results on standard UCI data sets show that the method achieves better classification performance.
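The first step of the method, one-hot expansion of categorical records, can be sketched in a few lines (the mutual-information/conditional-entropy spatial-relation representation and the SCA_SVM/SCA_KNN classifiers themselves are not reproduced here):

```python
def one_hot(records):
    # collect each attribute's value domain, in a fixed (sorted) order
    domains = [sorted({r[j] for r in records}) for j in range(len(records[0]))]
    encoded = []
    for r in records:
        row = []
        for j, v in enumerate(r):
            # one indicator column per value of attribute j
            row.extend(1 if v == d else 0 for d in domains[j])
        encoded.append(row)
    return encoded, domains
```

After this expansion every record is a fixed-length binary vector, so numeric models such as SVM or a Euclidean KNN can be applied directly; the paper then reweights this space using information-theoretic measures.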

6.
《计算机工程》2018,(3):132-137
The current p-sensitive k-anonymity model does not consider the semantic similarity of sensitive attributes and therefore cannot resist similarity attacks. To address this, a (p,k,d)-anonymity model that resists similarity attacks is proposed. Sensitive attribute values are analyzed semantically against a semantic hierarchy tree, and semantic dissimilarity values between them are computed, so that every equivalence class, in addition to satisfying k-anonymity, contains at least p sensitive values that are pairwise d-dissimilar, blocking similarity attacks. To preserve data utility, the model partitions equivalence classes with a distance-based measure to reduce information loss. Experimental results show that, compared with the p-sensitive k-anonymity model, the proposed (p,k,d)-anonymity model not only lowers the probability of sensitive-attribute disclosure and protects individual privacy more effectively, but also improves data utility.
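The model's core constraint can be checked mechanically: an equivalence class passes when it holds at least k records and contains at least p sensitive values that are pairwise at least d-dissimilar. A sketch with the semantic dissimilarity `dis` left abstract (in the paper it comes from a semantic hierarchy tree); the greedy selection below may under-count the dissimilar set, so it is a conservative check, not the paper's exact procedure:

```python
def satisfies_pkd(sensitive_values, p, k, d, dis):
    # k-anonymity: the equivalence class must hold at least k records
    if len(sensitive_values) < k:
        return False
    # greedily grow a set of pairwise d-dissimilar sensitive values;
    # exact verification is a max-clique problem, so this is conservative
    chosen = []
    for v in sorted(set(sensitive_values)):
        if all(dis(v, u) >= d for u in chosen):
            chosen.append(v)
    return len(chosen) >= p
```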

7.
The traditional K-modes algorithm computes the distance between different values of the same attribute by simple matching and gives all attributes equal weight when computing sample distances. Building on this, an improved clustering algorithm better suited to mixed categorical data is proposed, which jointly considers the order relation among values of ordinal attributes, the similarity between values of nominal attributes, and the relationships among attributes. The algorithm uses different distance measures for nominal and ordinal data and assigns the corresponding weights using average entropy. Experimental results show that on both synthetic and real data sets the improved algorithm clusters better than K-modes and its existing variants.
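The two per-attribute distances being combined are standard: normalized rank difference for ordinal attributes and 0-1 matching for nominal ones, with attribute weights derived from entropy. A sketch assuming ordinal values are given as ranks 0..m-1 (the paper's exact weighting scheme is not reproduced):

```python
import math
from collections import Counter

def ordinal_dist(a, b, m):
    # normalized rank difference for an ordinal attribute with m levels
    return abs(a - b) / (m - 1)

def nominal_dist(a, b):
    # simple 0-1 matching for a nominal attribute
    return 0.0 if a == b else 1.0

def entropy(column):
    # Shannon entropy of an attribute's value distribution (bits)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in Counter(column).values())
```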

8.
The original distance measure between categorical values in the K-modes algorithm cannot reflect the differences between attribute values. A distance measure based on intermediate results of the naive Bayes classifier is therefore proposed: it builds a feature vector representing each categorical value and uses the Euclidean distance between the vectors as the distance between values. The proposed measure was plugged into K-modes clustering and compared with other measures on several public UCI data sets; the experimental results show that it is more effective.
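One plausible reading of "naive-Bayes intermediate results" is that each value v of an attribute becomes the vector of empirical class-conditional probabilities P(class | v), and values are compared by the Euclidean distance between these vectors. A minimal sketch under that assumption:

```python
import math
from collections import Counter, defaultdict

def value_vectors(column, labels):
    # map each attribute value to its vector of P(class | value)
    classes = sorted(set(labels))
    by_value = defaultdict(Counter)
    for v, c in zip(column, labels):
        by_value[v][c] += 1
    return {v: [cnt[c] / sum(cnt.values()) for c in classes]
            for v, cnt in by_value.items()}

def value_dist(vecs, a, b):
    # Euclidean distance between the two values' probability vectors
    return math.dist(vecs[a], vecs[b])
```

Two values that occur in the same class-mix get distance near 0 even though they are distinct symbols, which is exactly the behavior 0-1 matching cannot express.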

9.
Application of an l1-Norm Nearest Convex Hull Classifier to Face Recognition   Total citations: 2 (self-citations: 2, other citations: 0)
The l1 norm is an important distance measure with wide application in pattern recognition. Under different norm definitions, classification algorithms with the same mechanism generally behave differently. This paper proposes an l1-norm nearest convex hull face-recognition algorithm: it generalizes the norm of the nearest convex hull classifier from the l2 norm to the l1 norm, using the l1-norm distance from a test point to each training class's convex hull as the similarity measure for nearest-neighbor classification. In validation experiments on the standard ORL face database, the method achieved good recognition results.

10.
The density peaks clustering algorithm struggles to produce good clusters on categorical data. This paper analyzes the causes in detail: the overlap problem in distance computation and the aggregation problem in density computation. To solve these problems, a density peaks clustering algorithm for categorical data, CDPCD (Cauchy kernel-based density peaks clustering for categorical data), is proposed. The algorithm first observes that the ordinal property of categorical data (the order relation among attribute values) is rarely considered in distance measures, and proposes a weighted ordinal distance measure based on probability distributions to alleviate the overlap problem. By incorporating a Cauchy kernel, data densities are re-estimated on top of the shared-nearest-neighbor density peaks algorithm; the density computation and the secondary assignment are improved, density diversity is increased, and the impact of the aggregation problem is reduced. Experimental results on several real data sets show that CDPCD outperforms traditional partition-based and density-based clustering algorithms.
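The Cauchy-kernel density estimate replaces the Gaussian or cut-off kernel of standard density peaks with a heavy-tailed kernel. A sketch with a hypothetical bandwidth parameter `sigma` (the paper's shared-nearest-neighbor refinement and ordinal distance are not reproduced):

```python
def cauchy_density(dist_matrix, sigma=1.0):
    # rho_i = sum_j 1 / (1 + (d_ij / sigma)^2): heavier tails than a Gaussian,
    # so ties between identical categorical records dominate density less
    n = len(dist_matrix)
    return [sum(1.0 / (1.0 + (dist_matrix[i][j] / sigma) ** 2)
                for j in range(n) if j != i)
            for i in range(n)]
```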

11.
A novel similarity, the neighborhood counting measure, was recently proposed, which counts the number of neighborhoods covering a pair of data points. This similarity can handle numerical and categorical attributes in a conceptually uniform way, can be calculated efficiently through a simple formula, and gives good performance when tested in the framework of the k-nearest neighbor classifier. In particular, it consistently outperforms a combination of the classical Euclidean distance and the Hamming distance. The measure was also shown to be related to a probability formalism, the G probability, which is induced from a target probability function P. It was unclear, however, how G is related to P, especially for classification, so some characteristic features of the neighborhood counting measure could not be explained. In this paper we show that G is a linear function of P, and that G-based Bayes classification is equivalent to P-based Bayes classification. We also show that the k-nearest neighbor classifier, when weighted by the neighborhood counting measure, is in fact an approximation of the G-based Bayes classifier and, furthermore, of the P-based Bayes classifier. Additionally, we show that the neighborhood counting measure remains unchanged when binary attributes are treated as categorical or as numerical data, a feature that, to the best of our knowledge, is not shared by other distance measures. This study provides a theoretical insight into the neighborhood counting measure.

12.
In this paper, we propose a novel method to measure the dissimilarity of categorical data. The key idea is to consider the dissimilarity between two categorical values of an attribute as a combination of dissimilarities between the conditional probability distributions of other attributes given these two values. Experiments with real data show that our dissimilarity estimation method improves the accuracy of the popular nearest neighbor classifier.
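The key idea above can be sketched directly: the dissimilarity between two values of attribute j is the sum, over the other attributes, of a distance between the conditional distributions of that attribute given each value. Total variation is used below as an assumed choice of distributional distance, not necessarily the paper's:

```python
from collections import Counter

def cond_dist(records, j, v, other):
    # empirical distribution of attribute `other` among records with r[j] == v
    rows = [r[other] for r in records if r[j] == v]
    cnt = Counter(rows)
    return {u: c / len(rows) for u, c in cnt.items()}

def value_dissim(records, j, v1, v2):
    # sum of total-variation distances over all other attributes
    total = 0.0
    for other in range(len(records[0])):
        if other == j:
            continue
        p = cond_dist(records, j, v1, other)
        q = cond_dist(records, j, v2, other)
        support = set(p) | set(q)
        total += 0.5 * sum(abs(p.get(u, 0.0) - q.get(u, 0.0)) for u in support)
    return total
```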

13.
Nearest neighbors by neighborhood counting   Total citations: 2 (self-citations: 0, other citations: 2)
Finding nearest neighbors is a general idea that underlies many artificial intelligence tasks, including machine learning, data mining, natural language understanding, and information retrieval. The idea is used explicitly in the k-nearest neighbors algorithm (kNN), a popular classification method. In this paper, the idea is adopted in the development of a general methodology, neighborhood counting, for devising similarity functions. We turn our focus from neighbors to neighborhoods: regions in the data space covering the data point in question. To measure the similarity between two data points, we consider all neighborhoods that cover both points and propose to use the number of such neighborhoods as a measure of similarity. Neighborhoods can be defined for different types of data in different ways. Here, we consider one definition of neighborhood for multivariate data and derive a formula for the resulting similarity, called the neighborhood counting measure, or NCM. NCM was tested experimentally in the framework of kNN. Experiments show that NCM is generally comparable to VDM and its variants, the state-of-the-art distance functions for multivariate data, while being consistently better for relatively large k values. Additionally, NCM consistently outperforms HEOM (a mixture of Euclidean and Hamming distances), the "standard" and most widely used distance function for multivariate data. NCM has a computational complexity of the same order as the standard Euclidean distance function, is task independent, and works for numerical and categorical data in a conceptually uniform way. The neighborhood counting methodology is thus shown experimentally to be sound for multivariate data; we expect it to work for other types of data as well.
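As a concrete illustration of neighborhood counting on categorical data, one common instantiation takes a neighborhood on an attribute with domain size m to be a subset of the domain: the number of subsets containing both attribute values is 2^(m-1) when the values are equal and 2^(m-2) when they differ, and the overall similarity is the product over attributes. The sketch below follows this reading; it should be checked against the paper's formula before reuse:

```python
def ncm_categorical(x, y, domain_sizes):
    # product over attributes of the number of domain subsets
    # (neighborhoods) containing both attribute values
    total = 1
    for a, b, m in zip(x, y, domain_sizes):
        total *= 2 ** (m - 1) if a == b else 2 ** (m - 2)
    return total
```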

14.
Kernel-based methods have been widely investigated in the soft-computing community, but they focus mainly on numeric data. In this paper, we propose a novel method for kernel learning on categorical data and show how it can be used to derive effective classifiers for linear classification. Based on kernel density estimation for categorical attributes, three popular classification methods, namely naive Bayes, nearest neighbor, and prototype-based classification, are extended to categorical data. We also propose two data-driven approaches to the bandwidth selection problem, one aimed at minimizing the mean squared error of the kernel estimate and the other at optimizing attribute weights. Theoretical analysis indicates that, as in the numeric case, kernel learning on categorical attributes can make the classes more separable, which results in the outstanding performance of the new classifiers on various real-world data sets.
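A standard choice for kernel density estimation on a categorical attribute, and a plausible reading of the construction here (whether the paper uses exactly this kernel is an assumption), is the Aitchison-Aitken kernel: mass 1-λ on the observed value and λ/(m-1) spread over the other m-1 values of a domain of size m:

```python
def aitchison_aitken(v, u, m, lam):
    # kernel weight at value u around observed value v, domain size m
    return 1.0 - lam if u == v else lam / (m - 1)

def kde_categorical(sample, domain, lam):
    # smoothed probability estimate for each value in the domain
    n = len(sample)
    return {u: sum(aitchison_aitken(v, u, len(domain), lam) for v in sample) / n
            for u in domain}
```

At λ = 0 this reduces to the raw empirical frequencies; larger λ smooths probability mass onto unseen values, which is what makes the extended naive Bayes and nearest-neighbor classifiers robust on sparse categorical data.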

15.
古凌岚  彭利民 《计算机科学》2016,43(12):213-217
To address the problem that similarity measures based on the traditional Euclidean distance cannot fully reflect the distribution of data with complex structure, a clustering algorithm based on relative density and k nearest neighbors on a manifold is proposed. Building on the manifold distance, which captures global consistency, and the k-nearest-neighbor concept, which reflects local similarity and compactness, the algorithm measures the similarity between data objects via k-nearest-neighbor similarity on the manifold, discovers clusters of different densities via the relative compactness of k nearest neighbors, and designs neighbor-pair constraint rules to search the neighbor chains formed by k-nearest-neighbor pairs, classifying data objects and identifying outliers. Performance was compared with the standard k-means algorithm and a manifold-distance-improved k-means; simulation results on both synthetic and UCI data sets show that the algorithm handles clustering of complex-structured data effectively and produces better clusters.

16.
Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms   Total citations: 2 (self-citations: 0, other citations: 2)
Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of test examples to the stored training examples. Since some attributes are more discriminative while others are less relevant or wholly irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper, we attempt to derive theoretically optimal weights that minimize the predictive error of NN algorithms. Assuming a uniform distribution of examples in a 2-d continuous space, we first derive the average predictive error introduced by a linear classification boundary and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline, or lower bound, for comparing empirical weight-setting methods.
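The object of study, an attribute-weighted distance inside an NN classifier, can be sketched minimally as follows (the paper's derivation of the optimal weights is not reproduced; the weights are simply inputs here):

```python
import math

def weighted_dist(x, y, w):
    # weighted Euclidean distance: w_i scales attribute i's contribution
    return math.sqrt(sum(wi * (a - b) ** 2 for wi, a, b in zip(w, x, y)))

def nn_predict(train, labels, w, query):
    # 1-NN under the weighted distance
    i = min(range(len(train)), key=lambda i: weighted_dist(train[i], query, w))
    return labels[i]
```

The test below shows why weighting matters: down-weighting an uninformative attribute flips the predicted class.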

17.
Development of classification methods using case-based reasoning systems is an active area of research. In this paper, two new case-based reasoning systems are proposed, with two similarity measures that support mixed categorical and numerical data as well as purely categorical data. The principal difference between the two measures lies in how distance is calculated for categorical data. The first, distance in unsupervised learning (DUL), is derived from the co-occurrence of values; the other, distance in supervised learning (DSL), calculates the distance between two values of the same feature with respect to every other feature for a given class. The distance between numerical data is computed using the Euclidean distance. Furthermore, the importance of numeric features is determined by linear discriminant analysis (LDA), and the weights assigned to categorical features depend on the co-occurrence of feature values when calculating the similarity between a new case and an old one. The performance of the proposed case-based reasoning systems was investigated on University of California, Irvine (UCI) data sets by 5-fold cross-validation. The results indicate that these systems achieve good predictive accuracy and interpretability.

18.
The mutual k-nearest-neighbor algorithm MKnn is an effective improvement of the k-nearest-neighbor algorithm, but it typically handles categorical attributes by assigning distance 0 to identical values and 1 to different values, which limits classification performance on data sets with many categorical attributes. To remedy this shortcoming, the concept of a class-wise Gini index is introduced for categorical data: for samples of the same class, the Gini index measures the contribution of a categorical attribute's value distribution to that class, and this contribution is used as the attribute's weight when estimating the similarity between samples, thereby optimizing MKnn and broadening its applicability. Experimental results verify the effectiveness of the method.
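The Gini statistic behind the weights can be sketched as the Gini impurity of a categorical attribute's value distribution within one class: the more concentrated the values, the more the attribute says about that class. Exactly how the paper turns this into a weight is not reproduced; the sketch below just computes the per-class Gini index:

```python
from collections import Counter

def gini(values):
    # Gini impurity of a value distribution: 1 - sum_v p_v^2
    n = len(values)
    return 1.0 - sum((c / n) ** 2 for c in Counter(values).values())

def class_gini(column, labels, cls):
    # Gini impurity of the attribute's values within a single class
    return gini([v for v, c in zip(column, labels) if c == cls])
```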
