Similar Literature
A total of 20 similar documents were retrieved.
1.
To address the weakness of traditional prototype selection algorithms, which are easily disturbed by the sample reading order and by outlier samples, this paper analyzes the learning rules of prototype algorithms, borrows the idea of the nearest feature line method, and proposes a prototype selection algorithm based on adaptive boundary approximation. During prototype learning, the algorithm improves the same-class neighbor absorption strategy of the condensed nearest neighbor rule by retaining same-class samples that are better than the current nearest boundary prototype; it also establishes a prototype update criterion and applies it to periodically and dynamically update the prototype set. The algorithm not only overcomes the influence of the reading order and of outlier samples on prototype selection, but also reduces the size of the prototype set. Finally, the algorithm is validated on artificial data and on UCI benchmark data sets. Experiments show that the prototype sets it selects reflect the distribution characteristics of the data sets better than those produced by other algorithms, that the average compression rate improves, and that both classification accuracy and running time are better than those of the other algorithms.
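For context, the condensed nearest neighbor (CNN) rule whose absorption strategy the abstract improves on is easy to state in code. Below is a minimal sketch of Hart's classic CNN prototype selection, not the paper's algorithm itself; all names are illustrative.

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's condensed nearest neighbor rule: a training sample joins the
    prototype set only if the current prototypes misclassify it. Note the
    result depends on the sample reading order -- exactly the sensitivity
    the paper's adaptive boundary approximation is designed to remove."""
    proto_idx = [0]                      # seed with the first sample
    changed = True
    while changed:                       # repeat until a full pass absorbs everything
        changed = False
        for i in range(len(X)):
            P = X[proto_idx]
            nearest = proto_idx[int(np.argmin(np.linalg.norm(P - X[i], axis=1)))]
            if y[nearest] != y[i]:       # misclassified -> keep as a prototype
                proto_idx.append(i)
                changed = True
    return np.array(proto_idx)
```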

2.
k-nearest-neighbor classification is a popular and successful nonparametric classification method, but its performance is harmed by the presence of outliers. To overcome the adverse effect of outliers, a variant of k-nearest-neighbor classification is proposed: a nearest-neighbor method based on local mean vectors and class mean vectors. The method exploits both the local means of the unclassified sample's k nearest neighbors within each training class and the knowledge of the overall class means; it not only overcomes the influence of outliers on classification performance but also achieves consistently better performance than traditional k-nearest-neighbor classification.
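A minimal sketch of the decision rule described here, assuming Euclidean distance and a simple mixing weight `alpha` between the local-mean and class-mean distances (the paper's exact combination rule may differ):

```python
import numpy as np

def lm_knn_predict(X_train, y_train, x, k=5, alpha=0.5):
    """Assign x to the class minimizing a blend of (a) the distance to the
    local mean of x's k nearest neighbors within that class and (b) the
    distance to the class mean. alpha is a hypothetical mixing weight."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        local_mean = Xc[np.argsort(d)[:min(k, len(Xc))]].mean(axis=0)
        class_mean = Xc.mean(axis=0)
        scores.append(alpha * np.linalg.norm(x - local_mean)
                      + (1 - alpha) * np.linalg.norm(x - class_mean))
    return classes[int(np.argmin(scores))]
```

Averaging the k neighbors before measuring distance is what dampens the pull of a single outlier on the decision.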

3.
To improve the classification accuracy of the traditional nearest-neighbor algorithm based on local means and class means, the weight assigned to the class mean vector in classification needs to be studied. The weight of the class mean vector is expressed as a mathematical formula derived from objective decision information, and a unified iterative method with step-size optimization is used to select the weights. Extensive simulation experiments on UCI data sets show that the improved algorithm outperforms the traditional one in accuracy.

4.
张清华  周靖鹏  代永杨  王国胤 《软件学报》2023,34(12):5629-5648
Density peaks clustering (DPC) is a density-based clustering algorithm that can intuitively determine the number of clusters, identify clusters of arbitrary shape, and automatically detect and exclude outliers. However, DPC still has shortcomings. On the one hand, it considers only the global distribution, so it clusters poorly on data sets whose clusters differ greatly in density; on the other hand, its point-allocation strategy easily causes a "domino effect". To address this, the RKNN-DPC algorithm is proposed, based on representative points and K-nearest neighbors (KNN). First, a K-nearest-neighbor density is constructed, representative points are introduced to characterize the global distribution of the samples, and a new local density is proposed. Then, a weighted K-nearest-neighbor allocation strategy exploiting each sample's KNN information is proposed to alleviate the domino effect. Finally, comparative experiments against five clustering algorithms on artificial and real data sets show that RKNN-DPC identifies cluster centers more accurately and obtains better clustering results.
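A minimal sketch of the two quantities at the heart of any DPC variant, here with a KNN-based local density in the spirit of (but not identical to) the paper's definition; the representative-point term is omitted:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_density_dpc(X, k=8):
    """Compute rho (a KNN-based local density) and delta (distance to the
    nearest denser point), the two DPC decision quantities."""
    D = cdist(X, X)
    knn_d = np.sort(D, axis=1)[:, 1:k + 1]     # distances to k nearest neighbors
    rho = np.exp(-knn_d.mean(axis=1))          # closer neighbors -> higher density
    n = len(X)
    delta = np.zeros(n)
    for i in range(n):
        denser = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if len(denser) == 0 else D[i, denser].min()
    return rho, delta
```

Cluster centers are then the points where both rho and delta are large.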

5.
To address the problem that the local-mean pseudo nearest neighbor algorithm (LMPNN) remains sensitive to outliers and noise points, a pseudo nearest neighbor algorithm based on bidirectional selection (BS-PNN) is proposed. The k nearest neighbors are selected with a proximity measure, and the test sample and the neighbor samples perform bidirectional selection through the definition of mutual nearest neighbors; the Euclidean distance from the test sample to the pseudo nearest neighbor is obtained by counting the mutual neighbors in each class and computing the weighted distance of their local means; and an improved class confidence is used as the voting measure to classify the test sample. When handling complex classification tasks, BS-PNN can accurately identify noise points, reduce sensitivity to the number of neighbors k, and improve classification accuracy. Simulation experiments on 15 real data sets from UCI and KEEL, with comparisons against the KNN, WKNN, LMKNN, PNN, LMPNN, DNN, and P-KNN algorithms, show that the classification performance of the bidirectional-selection pseudo nearest neighbor algorithm is clearly better than that of the other nearest-neighbor classifiers.
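The bidirectional (mutual) selection step can be sketched as follows: a training point survives only if it is among the test sample's k nearest neighbors and the test sample would also rank among its k nearest neighbors. This is one standard reading of mutual nearest neighbors, not the paper's exact procedure:

```python
import numpy as np

def mutual_nearest_neighbors(X_train, x, k=5):
    """Return indices of training points that are k-NN of x while x is also
    among their own k nearest points (x treated as appended to the set)."""
    d_x = np.linalg.norm(X_train - x, axis=1)
    cand = np.argsort(d_x)[:k]                  # direction 1: x -> neighbors
    mutual = []
    for i in cand:
        d_i = np.linalg.norm(X_train - X_train[i], axis=1)
        d_i[i] = np.inf                         # exclude the point itself
        d_ix = np.linalg.norm(X_train[i] - x)
        kth = np.sort(np.append(d_i, d_ix))[k - 1]
        if d_ix <= kth:                         # direction 2: neighbor -> x
            mutual.append(i)
    return np.array(mutual)
```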

6.
Label noise filtering based on k nearest neighbors is sensitive to the choice of the neighborhood parameter k. To address this problem, a neighbor-aware label noise filtering algorithm is proposed, which effectively handles within-class label noise in binary classification data sets. The algorithm considers positive and negative samples separately, turning the label-noise detection problem into two single-class outlier detection problems. First, a neighbor-aware strategy automatically determines a personalized neighborhood parameter for each sample, avoiding the parameter-sensitivity problem. Then, samples are divided into core and non-core samples according to a noise factor, and the non-core samples are taken as the label-noise candidate set. Finally, noise is identified and filtered by combining the neighbor label information of the candidate samples. Experiments show that the method achieves good noise filtering and classification performance.
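For reference, the fixed-k baseline this work improves on can be sketched in a few lines; the paper's contribution is replacing the global k below with a per-sample, neighbor-aware k (labels are assumed to be nonnegative integers):

```python
import numpy as np

def knn_noise_filter(X, y, k=5):
    """Flag a sample as a label-noise suspect when the majority label of its
    k nearest neighbors disagrees with its own label."""
    n = len(X)
    suspect = np.zeros(n, dtype=bool)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                 # never count a point as its own neighbor
    for i in range(n):
        nn = np.argsort(D[i])[:k]
        votes = np.bincount(y[nn])
        suspect[i] = votes.argmax() != y[i]
    return suspect
```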

7.
The density peaks clustering (DPC) algorithm performs poorly on data whose density distribution varies widely; the clustering result is affected by the local density and the relative distance, and the cluster centers must be selected manually, which reduces the algorithm's accuracy and stability. To this end, a density peaks algorithm based on weighted shared nearest neighbors and an accumulation sequence, DPC-WSNN, is proposed. The local density is redefined based on weighted shared nearest neighbors, avoiding the impact of a badly chosen cutoff distance on the clustering result while effectively handling unevenly distributed clusters. On top of the decision values of the original DPC algorithm, an accumulation sequence is generated, and its mean is used as the threshold between cluster centers and non-centers, so that cluster centers are selected automatically. DPC-WSNN is tested and evaluated on synthetic data sets and real UCI data sets and compared with FKNN-DPC, DPC, DBSCAN, and other algorithms; the results show that DPC-WSNN clusters better, with higher accuracy and stronger robustness.
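The shared-nearest-neighbor (SNN) idea underlying the redefined density is simple: the similarity of two points is the overlap of their k-NN lists. A minimal sketch (the paper's weighting scheme is not reproduced here):

```python
import numpy as np

def shared_nn_counts(X, k=10):
    """Return the matrix of shared-nearest-neighbor counts; a point's local
    density can then be defined from its SNN similarities."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]
    knn_sets = [set(row) for row in knn]
    n = len(X)
    snn = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            snn[i, j] = snn[j, i] = len(knn_sets[i] & knn_sets[j])
    return snn
```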

8.
To address the defect that the K-medoids clustering algorithm is sensitive to the initial cluster centers and that its results depend on them, a K-medoids clustering algorithm optimized by local variance is proposed, with the aim of placing the initial K-medoids centers in different sample-dense regions so that the clustering result converges to the global optimum as far as possible. The algorithm introduces the concept of local variance, defining each sample's local variance from the local sample distribution at its location; taking a sample's local standard deviation as the neighborhood radius, it selects as initial K-medoids centers the samples with minimal local variance that lie in different regions, making full use of the distribution information provided by the variance. Experiments on UCI data sets of unequal sizes and on artificial data sets of different scales with different proportions of noise, measured with six clustering performance indices, show that the algorithm clusters well, is strongly noise-resistant, and is suitable for clustering large-scale data sets. The proposed Num-nearest-neighbor variance-optimized K-medoids clustering algorithm outperforms the fast K-medoids clustering algorithm and the neighborhood-based improved K-medoids clustering algorithm.
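One way to realize "low local variance, different regions" as an initialization rule is sketched below; the neighborhood size and the separation threshold are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def local_variance_seeds(X, k_clusters, n_neighbors=10):
    """Rank samples by the variance of their local neighborhood (low variance
    ~ dense region) and greedily pick low-variance samples far from the seeds
    already chosen."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nn_idx = np.argsort(D, axis=1)[:, :n_neighbors]
    local_var = np.array([X[idx].var(axis=0).sum() for idx in nn_idx])
    order = np.argsort(local_var)                # densest candidates first
    seeds = [order[0]]
    min_sep = np.median(D[np.isfinite(D)])       # assumed separation threshold
    for i in order[1:]:
        if len(seeds) == k_clusters:
            break
        if all(D[i, s] > min_sep for s in seeds):
            seeds.append(i)
    for i in order:                              # fallback if the rule was too strict
        if len(seeds) == k_clusters:
            break
        if i not in seeds:
            seeds.append(i)
    return np.array(seeds)
```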

9.
The traditional K-nearest-neighbor classifier suffers from excessive time and space complexity on large-scale data sets. Prototype selection addresses this: representative prototypes (samples) are picked from the original data set and used for K-nearest-neighbor classification without lowering its accuracy. Building on the CURE clustering algorithm, and targeting CURE's difficulty in identifying noise points and the poor scattering of its representative points, this paper gives a denoising method based on a shared-neighbor density measure and improves representative-point selection with the max-min distance rule, yielding a new prototype selection algorithm, PSCURE (improved prototype selection algorithm based on the CURE algorithm). Experiments on UCI data sets show that, compared with related prototype algorithms, PSCURE not only selects fewer prototypes but also achieves higher classification accuracy.
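The max-min distance rule named here is classic farthest-point sampling; it is what keeps the representative points well scattered. A minimal sketch (the seed choice is an assumption):

```python
import numpy as np

def max_min_select(X, m):
    """Pick m well-scattered points: repeatedly add the point whose distance
    to the nearest already-selected point is largest."""
    selected = [0]                               # assumed seed: the first sample
    d_to_sel = np.linalg.norm(X - X[0], axis=1)  # distance to nearest selected point
    for _ in range(m - 1):
        nxt = int(np.argmax(d_to_sel))
        selected.append(nxt)
        d_to_sel = np.minimum(d_to_sel, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)
```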

10.
To address the problems that the fuzzy C-means (FCM) clustering algorithm is sensitive to the initial cluster centers and to noise, clusters boundary samples inaccurately, and easily converges to local minima, a fusion clustering algorithm (KDPC-FCM) is proposed that combines FCM with a density peaks (DPC) algorithm optimized by K nearest neighbors (KNN). The algorithm defines each sample's local density from its K-nearest-neighbor information and quickly and accurately finds the density-peak samples to serve as initial cluster centers, remedying the shortcomings of FCM and thereby optimizing its clustering. Experimental results on multiple UCI data sets, a synthetic data set, several benchmark data sets, and six larger data sets from the Geolife project show that, compared with the traditional FCM and DSFCM algorithms, the improved algorithm has better noise resistance, better clustering results, and faster global convergence, demonstrating its feasibility and effectiveness.
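The FCM loop itself is standard; KDPC-FCM's contribution is supplying good initial centers (density peaks found via KNN density) instead of random ones. A minimal sketch of the standard iteration that those centers would be fed into:

```python
import numpy as np

def fcm(X, centers, m=2.0, n_iter=100, tol=1e-5):
    """Standard fuzzy C-means: alternate membership and center updates until
    the centers stop moving. X is (n, d); centers is (c, d)."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # fuzzy memberships (n, c)
        um = u ** m
        new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
        moved = np.linalg.norm(new_centers - centers)
        centers = new_centers
        if moved < tol:
            break
    return u, centers
```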

11.
Representative point selection is an important part of data preprocessing for data mining and pattern recognition, and an important way to improve the classification accuracy and execution efficiency of classifiers. A voting-based representative point selection algorithm is proposed. The algorithm makes the selected representative points lie as much as possible on class boundaries, and the voting mechanism readily excludes outliers and reduces the amount of data, which helps improve the accuracy and efficiency of the nearest-neighbor classifier. Experimental comparison with several classic representative point selection algorithms shows that the proposed voting-based algorithm has clear advantages in both the nearest-neighbor classifier's accuracy and the data reduction rate.

12.
In this paper, a further generalization of the differential evolution based data classification method is proposed, demonstrated, and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The earlier version of the differential evolution classifier applied an individually optimized distance measure for each new data set to be classified; it is generalized here so that, instead of optimizing a single distance measure for a given data set, distance measures are optimized individually for each feature of the data set. In particular, the distance measure for each feature is selected optimally from a predefined pool of alternative distance measures. The optimal distance measures are determined by the differential evolution algorithm, which in parallel also determines the optimal values for all free parameters of the selected distance measures. After the optimal distance measures and their parameters have been determined for each feature, all feature-wise distance measures are combined to form a single total distance measure, which is applied for the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector, as measured by the optimized total distance measure. During training, the differential evolution algorithm optimally determines the class vectors, selects the optimal distance metric for each data feature, and determines the optimal values for the free parameters of each selected distance measure. Based on experimental results with nine well-known classification benchmark data sets, the proposed approach yields a statistically significant improvement in the classification accuracy of the differential evolution classifier.
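The core data structure is easy to picture: a pool of candidate per-feature measures and, for each feature, a chosen measure plus its parameter, summed into one total distance. The pool below is a toy stand-in for the paper's; in the paper, `chosen`, `params`, and the prototypes are all set by differential evolution rather than by hand:

```python
# A toy pool of per-feature distance measures (assumed, not the paper's pool).
measures = {
    "abs":    lambda a, b, p: abs(a - b),          # parameter p unused
    "square": lambda a, b, p: (a - b) ** 2,        # parameter p unused
    "minkow": lambda a, b, p: abs(a - b) ** p,     # p is the measure's free parameter
}

def total_distance(x, prototype, chosen, params):
    """Sum feature-wise distances, each computed with its own measure and
    parameter, into a single total distance for nearest-prototype decisions.
    chosen[f] names the measure for feature f; params[f] is its parameter."""
    return sum(measures[chosen[f]](x[f], prototype[f], params[f])
               for f in range(len(x)))
```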

13.
Local feature weighting in nearest prototype classification.
The distance metric is the cornerstone of nearest neighbor (NN)-based methods, and therefore of nearest prototype (NP) algorithms, because they classify according to the similarity of the data. When the data are characterized by a set of features that may contribute to the classification task to different degrees, feature weighting or selection is required, sometimes in a local sense. However, local weighting is typically restricted to NN approaches. In this paper, we introduce local feature weighting (LFW) in NP classification. LFW provides each prototype its own weight vector, in contrast to the typical global weighting methods found in the NP literature, where all prototypes share the same one. Giving each prototype its own weight vector has a novel effect on the borders of the generated Voronoi regions: they become nonlinear. We have integrated LFW with a previously developed evolutionary nearest prototype classifier (ENPC). Experiments on both artificial and real data sets demonstrate that the resulting algorithm, which we call LFW in nearest prototype classification (LFW-NPC), avoids overfitting the training data in domains where the features may contribute differently to the classification task in different areas of the feature space. This generalization capability is also reflected in the automatic discovery of an accurate and reduced set of prototypes.
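The decision rule itself is compact. A minimal sketch, assuming the weighted squared Euclidean distance that is usual in this literature:

```python
import numpy as np

def lfw_classify(x, prototypes, weights, labels):
    """Each prototype p_i carries its own weight vector w_i; x gets the label
    of the prototype minimizing sum_f w_if * (x_f - p_if)^2. Because weights
    differ per prototype, the borders between Voronoi regions are nonlinear."""
    d = ((x - prototypes) ** 2 * weights).sum(axis=1)   # (m,) weighted distances
    return labels[int(np.argmin(d))]
```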

14.
Relief is a filter-type feature selection algorithm that greedily maximizes the instance margin of the nearest-neighbor classifier. Combining it with local weighting, some authors have proposed the class-dependent Relief algorithm (CDRELIEF), which trains a separate feature weight vector for each class. The method reflects feature relevance better, but the trained weights are only effective for measuring a feature's relevance to one particular class, and the classification accuracy in practice is not high enough. To apply CDRELIEF to the classification process, this paper changes the weight-update procedure and assigns an instance weight to each training instance; by incorporating the instance weights into the update formula, data points far from the classification boundary and outliers are excluded from influencing the update, thereby improving classification accuracy. The proposed instance-weighted class-dependent Relief (IWCDRELIEF) is tested and compared with CDRELIEF on several UCI binary-class data sets. Experimental results show that the proposed algorithm clearly improves on CDRELIEF.
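For orientation, the classic binary Relief update that CDRELIEF and IWCDRELIEF build on is sketched below; the class-dependent and instance-weighted variants modify this hit/miss update:

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Classic binary Relief: for a random instance, find its nearest hit
    (same class) and nearest miss (other class), then push each feature
    weight by |x - miss| - |x - hit|."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        same, diff = (y == y[i]), (y != y[i])
        same[i] = False                          # an instance is not its own hit
        hit = X[same][np.linalg.norm(X[same] - X[i], axis=1).argmin()]
        miss = X[diff][np.linalg.norm(X[diff] - X[i], axis=1).argmin()]
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n_iter
```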

15.
Research on an incremental learning vector quantization algorithm based on sample density and classification error rate
李娟  王宇平 《自动化学报》2015,41(6):1187-1200
As a simple and mature classification method, the K-nearest-neighbor (KNN) algorithm is widely applied in data mining, pattern recognition, and other fields, but it still suffers from heavy computation, high space consumption, and long running times. Addressing these problems, this paper builds on the single-layer competitive learning of incremental learning vector quantization (ILVQ) and fuses in neighborhood notions of sample density and classification error rate, proposing a new incremental learning vector quantization method. Through a competitive learning strategy, representative-point neighborhoods are adaptively added, deleted, merged, and split, quickly extracting a prototype set of the original data set and achieving high compression of large-scale data while preserving classification accuracy. In addition, the traditional nearest-neighbor classification rule is improved by incorporating the sample density and classification error rate of the prototype neighbor set into the decision criterion. The proposed algorithm generates an effective set of representative prototypes in a single scan of the training set and is quite general. Experimental results show that, compared with other algorithms, the method maintains or even improves classification accuracy and compression ratio, and has the advantage of fast classification.
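The competitive-learning core that incremental LVQ schemes extend is the classic LVQ1 update, sketched here; the paper's density- and error-driven insertion, deletion, merging, and splitting operate on top of a rule like this:

```python
import numpy as np

def lvq1_update(prototypes, proto_labels, x, y, lr=0.05):
    """Classic LVQ1 step: move the winning prototype toward x if their labels
    agree, and away from x otherwise."""
    i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if proto_labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return prototypes
```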

16.
A nearest-neighbor convex hull classifier based on sample selection
The nearest-neighbor convex hull classifier is a nearest-neighbor algorithm that uses the distance from a test point to the convex hull of each class's samples as the classification measure. However, the high computational complexity of solving its convex quadratic programming problem limits its application to larger data sets. This paper proposes a sample selection method, subclass convex hull growing: the point farthest from the convex hull of the already-selected samples is iteratively selected until a termination condition is met, effectively reducing the data set. Experiments on the ORL database and the MIT-CBCL face recognition training-synthetic set show that the convex hull generated from the few samples selected by this method represents the training set well, greatly speeding up the algorithm without degrading the performance of the nearest-neighbor convex hull classifier.
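The classification measure is the distance from a test point to each class hull, a small convex QP per class. A minimal sketch, using scipy's SLSQP as a simple stand-in for a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_convex_hull(x, S):
    """Distance from x to the convex hull of the rows of S, via
    min ||S^T a - x||^2  s.t.  a >= 0, sum(a) = 1."""
    n = len(S)
    obj = lambda a: np.sum((S.T @ a - x) ** 2)
    cons = ({"type": "eq", "fun": lambda a: a.sum() - 1.0},)
    res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, 1)] * n, constraints=cons)
    return np.sqrt(res.fun)
```

The paper's contribution is shrinking S: hull growing keeps only the few samples needed for their convex hull to represent the full class, so each QP stays small.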

17.
Choosing appropriate classification algorithms for a given data set is very important and useful in practice, but is also full of challenges. In this paper, a method of recommending classification algorithms is proposed. First, the feature vectors of data sets are extracted using a novel method, and the performance of classification algorithms on those data sets is evaluated. Then the feature vector of a new data set is extracted and its k nearest data sets are identified. Afterwards, the classification algorithms of those nearest data sets are recommended for the new data set. The proposed data set feature extraction method uses structural and statistical information to characterize data sets, which is quite different from existing methods. To evaluate the performance of the proposed recommendation method and the data set feature extraction method, extensive experiments with 17 different types of classification algorithms, three different types of data set characterization methods, and all possible numbers of nearest data sets are conducted on the 84 publicly available UCI data sets. The results indicate that the proposed method is effective and can be used in practice.
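The recommendation step itself reduces to k-NN over data set meta-feature vectors. A minimal sketch, where `per_dataset_best` is an assumed table listing the best-performing algorithms per known data set:

```python
import numpy as np

def recommend_algorithms(new_feat, dataset_feats, per_dataset_best, k=3):
    """Find the k known data sets whose meta-feature vectors are nearest to
    the new data set's, and return the union of their best algorithms,
    nearest data set first."""
    d = np.linalg.norm(dataset_feats - new_feat, axis=1)
    recs = []
    for i in np.argsort(d)[:k]:
        for alg in per_dataset_best[i]:
            if alg not in recs:
                recs.append(alg)
    return recs
```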

18.
The logical analysis of data (LAD) is one of the most promising data mining methods developed to date for extracting knowledge from data. The key feature of LAD is its capability of detecting hidden patterns in the data. Because patterns are basically combinations of certain attributes, they can be used to build a decision boundary for classification in LAD by providing important information for distinguishing observations in one class from those in the other. The use of patterns may result in more stable performance in classifying both positive and negative classes, owing to their robustness to measurement errors. The LAD technique, however, tends to choose too many patterns when solving a set covering problem to build a classifier; this is especially the case when outliers exist in the data set. In the set covering problem of LAD, each observation must be covered by at least one pattern, even when the observation is an outlier. Existing approaches therefore tend to select too many patterns to cover these outliers, resulting in overfitting. Here, we propose new pattern selection approaches for LAD that take both outliers and the coverage of a pattern into account. The proposed approaches avoid overfitting by building a sparse classifier. Their performance is compared with that of existing LAD approaches on several public data sets. The computational results show that the sparse classifiers built on the patterns selected by the proposed approaches yield improved classification performance compared to the existing approaches, especially when outliers exist in the data set.
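One coverage-aware selection heuristic in the spirit described (not the paper's formulation) is a greedy cover that simply declines to chase observations only a near-useless pattern would cover. The representation of patterns as index sets and the stopping threshold are assumptions:

```python
def select_patterns(patterns, n_obs, min_cover=2):
    """Greedy coverage-aware pattern selection: repeatedly pick the pattern
    covering the most still-undercovered observations; stop when the best
    remaining pattern would help only a single observation (likely an
    outlier). patterns[j] is the set of observation indices pattern j covers."""
    need = {i: min_cover for i in range(n_obs)}
    chosen = []
    while True:
        gains = [(sum(1 for i in p if need.get(i, 0) > 0), j)
                 for j, p in enumerate(patterns) if j not in chosen]
        gain, j = max(gains, default=(0, -1))
        if gain <= 1:                    # assumed threshold: lone points stay uncovered
            break
        chosen.append(j)
        for i in patterns[j]:
            if need.get(i, 0) > 0:
                need[i] -= 1
    return chosen
```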

19.
A local boosting algorithm for solving classification problems
Based on the boosting-by-resampling version of Adaboost, a local boosting algorithm for classification tasks is proposed in this paper. Its main idea is that in each iteration a local error is calculated for every training instance, and a function of this local error is used to update the probability that the instance is selected for the next classifier's training set. When classifying a novel instance, the similarity information between it and each training instance is taken into account. Meanwhile, a parameter is introduced into the process of updating the probabilities assigned to training instances, so that the algorithm can be more accurate than Adaboost. Experimental results on synthetic data sets and several benchmark real-world data sets from the UCI repository show that the proposed method improves Adaboost's prediction accuracy and robustness to classification noise. Furthermore, the diversity-accuracy patterns of the ensemble classifiers are investigated by means of kappa-error diagrams.
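A sketch of the two local steps, under stated assumptions: the local error is read as the current classifier's error rate over each instance's neighborhood, and the exponential reweighting with parameter `beta` is an illustrative function, not the paper's formula:

```python
import numpy as np

def local_errors(X, y_true, y_pred, k=5):
    """Local error of each training instance: the error rate of the current
    classifier over the instance's k nearest neighbors (including itself)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    nn = np.argsort(D, axis=1)[:, :k]
    return np.array([(y_pred[idx] != y_true[idx]).mean() for idx in nn])

def update_sampling_probs(probs, local_err, beta=1.0):
    """Raise the resampling probability of instances in poorly classified
    neighborhoods, then renormalize."""
    probs = probs * np.exp(beta * local_err)
    return probs / probs.sum()
```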

20.
We present a new classifier fusion method for combining soft-level classifiers, which can be considered a generalized decision templates method. Previous combining methods based on decision templates employ a single prototype for each class, but this global point of view mostly fails to properly represent the decision space. This drawback severely affects the classification rate in cases such as an insufficient number of training samples, island-shaped decision-space distributions, and classes with highly overlapped decision spaces. To represent the decision space better, we utilize a prototype selection method to obtain a set of local decision prototypes for each class. Afterward, to determine the class of a test pattern, its decision profile is computed and then compared to all decision prototypes. In other words, for each class, the larger the number of decision prototypes near the decision profile of a given pattern, the higher the chance for that class. The efficiency of the proposed method is evaluated on several well-known classification data sets, and the results suggest its superiority in comparison with other proposed techniques.
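Once the local decision prototypes exist, the "more nearby prototypes, higher chance" rule reads naturally as k-NN voting in decision-profile space. A minimal sketch under that assumption:

```python
import numpy as np

def classify_by_local_templates(profile, templates, template_labels, k=3):
    """templates holds several local decision prototypes per class (found by
    prototype selection over training decision profiles); the test pattern's
    decision profile is assigned the class with the most prototypes among
    its k nearest. template_labels is a numpy array of class labels."""
    d = np.linalg.norm(templates - profile, axis=1)
    nn = np.argsort(d)[:k]
    vals, counts = np.unique(template_labels[nn], return_counts=True)
    return vals[counts.argmax()]
```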
