Similar Literature
20 similar documents retrieved (search time: 171 ms)
1.
Traditional density-based algorithms cannot handle mixed-attribute data, and most existing mixed-attribute clustering algorithms yield low clustering quality. To address these problems, a clustering algorithm based on density and a mixed distance measure is proposed. By analysing the characteristics of mixed-attribute data, the algorithm divides such data into three types: numerically dominant, categorically dominant, and balanced mixed-attribute data, and selects a corresponding distance measure for each case. With preset parameters it discovers dense regions of the data and determines core points, then uses the core points to find density-connected objects and produce the final clustering. Experimental results on a variety of data sets show that the algorithm achieves high clustering quality and handles mixed-attribute data effectively.
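The abstract describes the method only at a high level. As a hedged illustration of the general idea, a minimal sketch follows, assuming a weighted combination of normalized Euclidean distance (numerical part) and simple-matching dissimilarity (categorical part) fed to a DBSCAN-style density clusterer; the weight `w_num` and both component distances are stand-ins, not the paper's three case-specific measures.

```python
# Hedged sketch: density-based clustering over mixed-attribute data.
import numpy as np
from sklearn.cluster import DBSCAN

def mixed_distance_matrix(X_num, X_cat, w_num=0.5):
    """Pairwise mixed distance: weighted sum of a normalized Euclidean part
    (numerical attributes) and a simple-matching part (categorical attributes)."""
    d_num = np.linalg.norm(X_num[:, None, :] - X_num[None, :, :], axis=-1)
    if d_num.max() > 0:
        d_num = d_num / d_num.max()
    # Fraction of categorical attributes on which the two objects differ.
    d_cat = (X_cat[:, None, :] != X_cat[None, :, :]).mean(axis=-1)
    return w_num * d_num + (1.0 - w_num) * d_cat

X_num = np.array([[1.0, 2.0], [1.1, 1.9], [8.0, 8.0], [8.2, 7.9]])
X_cat = np.array([["a", "x"], ["a", "x"], ["b", "y"], ["b", "y"]])
D = mixed_distance_matrix(X_num, X_cat)
labels = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)  # two dense regions -> two clusters
```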

2.
A rough-set-based clustering algorithm for mixed-attribute data (total citations: 2; self-citations: 0; citations by others: 2)
范黎林, 王娟. 《计算机应用》 (Journal of Computer Applications), 2010, 30(12): 3377-3379
Traditional clustering methods assign each object strictly to one cluster, but boundary objects often cannot be partitioned so strictly. The rough-set-based k-means and rough-set-based leader clustering algorithms use rough set theory to assign a data object to the upper or lower approximation of a cluster, offering a new perspective on handling uncertainty and solving this boundary-uncertainty problem well. Their drawbacks are that they cannot handle mixed-attribute data and that their results depend markedly on the initial values. To address these shortcomings, this paper gives a distance definition suitable for mixed-attribute data, improves the selection of initial values, and proposes a rough-set-based clustering algorithm for mixed-attribute data. Simulation experiments show that, when the number of clusters is uncertain, the algorithm's clustering accuracy is clearly higher than that of the traditional k-means algorithm.
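As a hedged sketch of the rough k-means idea the abstract builds on (not the paper's improved algorithm): an object goes into a cluster's lower approximation when one center is clearly nearest, and into the upper approximations of all near-tied clusters otherwise; centers are updated with separate weights for lower-approximation and boundary members. The threshold `epsilon` and the weights `w_lower`/`w_upper` are illustrative assumptions.

```python
# Hedged sketch of rough k-means (numeric data only; parameters illustrative).
import numpy as np

def rough_kmeans(X, k, epsilon=1.3, w_lower=0.7, w_upper=0.3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        nearest = d.argmin(axis=1)
        lower = [[] for _ in range(k)]   # objects certainly in cluster j
        upper = [[] for _ in range(k)]   # objects possibly in cluster j
        for i, j in enumerate(nearest):
            # Clusters whose distance is within a factor epsilon of the nearest.
            ties = np.where(d[i] <= epsilon * d[i, j])[0]
            if len(ties) == 1:
                lower[j].append(i)
            for t in ties:
                upper[t].append(i)
        for j in range(k):
            lo = X[lower[j]].mean(axis=0) if lower[j] else centers[j]
            boundary = [i for i in upper[j] if i not in lower[j]]
            up = X[boundary].mean(axis=0) if boundary else lo
            centers[j] = w_lower * lo + w_upper * up  # weighted center update
    return centers, lower, upper

X = np.array([[0.0], [0.2], [5.0], [5.3], [2.6]])
centers, lower, upper = rough_kmeans(X, k=2)
```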

3.
To meet the need in data analysis for detecting cluster boundaries in data sets with mixed attributes, a cluster-boundary detection algorithm for mixed-attribute data sets (BERGE) is proposed. The algorithm defines a boundary factor from fuzzy clustering membership degrees to identify a candidate boundary set, then extracts the cluster boundaries using the idea of evidence accumulation. Experimental results on synthetic and real data sets show that BERGE effectively detects the cluster boundaries of mixed-attribute, numerical, and categorical data sets, with higher precision than existing algorithms of the same kind.

4.
Clustering is an unsupervised machine learning method whose task is to discover the natural clusters in data. The shared nearest neighbor (SNN) clustering algorithm performs well on data sets whose clusters differ in size, shape, and density, but it has the following shortcomings: (1) its time complexity is O(n²), making it unsuitable for large data sets; (2) it gives no simple practical guidance for choosing the parameter thresholds; (3) it can only handle data sets with numerical attributes. This paper improves the shared nearest neighbor algorithm so that it can handle mixed-attribute data sets, and gives a simple method for choosing the parameter thresholds; the running time of the improved algorithm is approximately linear in the size of the data set, making it suitable for large, high-dimensional data sets. Experimental results on real and synthetic data sets show that the proposed improvement is effective and feasible.
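The core of the SNN approach is the shared-nearest-neighbor similarity. A minimal sketch follows, assuming a precomputed distance matrix `D`; the mixed-attribute extension described in the abstract would plug a mixed distance matrix (e.g., the one sketched under item 1) in for `D`, and the choice of `k` is an assumption.

```python
# Hedged sketch of shared-nearest-neighbor (SNN) similarity.
import numpy as np

def snn_similarity(D, k=3):
    """SNN similarity: how many of two points' k nearest neighbors coincide."""
    n = D.shape[0]
    knn = [set(np.argsort(D[i])[1:k + 1]) for i in range(n)]  # skip self at 0
    S = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = len(knn[i] & knn[j])
    return S

X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
D = np.abs(X - X.T)                  # 1-D pairwise distances for illustration
print(snn_similarity(D, k=2))
```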

5.
This paper presents a multi-ant-colony clustering ensemble algorithm based on undirected hypergraphs. The algorithm combines the results of single-ant-colony clustering algorithms into a multi-ant-colony ensemble, represents the ensemble as an undirected hypergraph, and obtains the final clustering result with the hypergraph partitioning algorithm Hmetis. Experimental data sets and results are given, showing that the algorithm improves clustering quality and reduces the number of outliers.

6.
To meet data-preprocessing needs in complex information environments, a double-nearest-neighbor clustering method that can handle mixed-attribute data sets is proposed. The method is realized by an algorithm (and an improved variant) for constructing the double-nearest-neighbor undirected graph, together with three clustering algorithms over that graph: one based on disjoint-set union, one based on breadth-first search, and one based on depth-first search. Simulation experiments on synthetic data sets and UCI benchmark data sets verify that, although the three clustering algorithms use different search strategies, they produce identical results. The experiments also show that, for data sets with a clear cluster structure and no nearest-neighbor noise, the method often achieves better clustering accuracy than the K-means and AP algorithms, demonstrating its effectiveness. Finally, several promising research directions are given to promote the method and uncover its practical value.
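The paper's double-nearest-neighbor graph is not specified in the abstract; as a hedged stand-in, the sketch below builds a mutual k-nearest-neighbor graph and reads clusters off as its connected components via breadth-first search, one of the three equivalent search strategies mentioned. `k` and the mutual-kNN construction are assumptions.

```python
# Hedged sketch: mutual-kNN graph + BFS connected components as clusters.
import numpy as np
from collections import deque

def mutual_knn_clusters(D, k=3):
    n = D.shape[0]
    knn = [set(np.argsort(D[i])[1:k + 1]) for i in range(n)]
    # Keep only mutual (double) nearest-neighbor edges.
    adj = [[j for j in knn[i] if i in knn[j]] for i in range(n)]
    labels, cid = [-1] * n, 0
    for s in range(n):
        if labels[s] != -1:
            continue
        q = deque([s])
        labels[s] = cid
        while q:                          # BFS over the mutual-kNN graph
            u = q.popleft()
            for v in adj[u]:
                if labels[v] == -1:
                    labels[v] = cid
                    q.append(v)
        cid += 1
    return labels

X = np.array([[0.0], [0.2], [0.4], [5.0], [5.2]])
D = np.abs(X - X.T)
print(mutual_knn_clusters(D, k=2))        # e.g. [0, 0, 0, 1, 1]
```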

7.
To obtain a dissimilarity measure better matched to the space of mixed-attribute data points, and thus to detect more meaningful cluster structure, a K-medoids clustering algorithm with progressively optimized feature weights is proposed. The algorithm is discussed in detail, including an analysis of its time complexity and its convergence. To realize its feature-weight optimization step, two different feature-weight optimization methods are given, along with several methods for adaptively optimizing the distance weight coefficients and the objective-function coefficients. At a theoretical level, these methods address the adaptive optimization of the dissimilarity measure. Experiments on several UCI benchmark data sets show that the algorithm can sometimes achieve better clustering quality, indicating that the weighted clustering algorithm has a degree of effectiveness. Several research directions are given for future work.

8.
黄德才, 钱潮恺. 《计算机科学》 (Computer Science), 2015, 42(Z11): 55-57, 71
To address the inability of the affinity propagation clustering algorithm to handle mixed-attribute data sets, a new distance measure is proposed and applied to affinity propagation, yielding a mixed-attribute affinity propagation clustering algorithm based on per-dimension attribute distances. Unlike traditional clustering algorithms, this algorithm does not need to compute virtual centroids, and it takes into account the influence of the overall distribution of the data set on the clustering result. The algorithm is validated on two mixed-attribute data sets from the UCI repository and compared with the classical K-Prototypes and K-Modes algorithms. Experimental results show that the improved algorithm achieves better clustering quality and execution efficiency, confirming its advantages.
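A minimal sketch of the overall scheme: build a mixed-attribute similarity matrix (here a negated sum of normalized Euclidean and simple-matching distances, an assumption standing in for the paper's per-dimension measure) and hand it to affinity propagation with a precomputed affinity, so exemplars are always real data points and no virtual centroid is computed.

```python
# Hedged sketch: affinity propagation over a mixed-attribute similarity matrix.
import numpy as np
from sklearn.cluster import AffinityPropagation

X_num = np.array([[1.0, 2.0], [1.1, 2.1], [9.0, 9.0], [9.1, 8.9]])
X_cat = np.array([["a"], ["a"], ["b"], ["b"]])
d_num = np.linalg.norm(X_num[:, None] - X_num[None, :], axis=-1)
if d_num.max() > 0:
    d_num = d_num / d_num.max()
d_cat = (X_cat[:, None] != X_cat[None, :]).mean(axis=-1)
S = -(d_num + d_cat)                       # AP expects similarities, not distances
labels = AffinityPropagation(affinity="precomputed",
                             random_state=0).fit_predict(S)
print(labels)                              # exemplars are actual data points
```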

9.
To meet the data-preprocessing needs of mixed-attribute data sets, this paper gives several definitions and related properties and then proposes a two-stage clustering algorithm based on nearest-neighbor joining. Ideas and techniques for improving the algorithm's time efficiency are given. Simulation results on several synthetic data sets and UCI benchmark data sets show that, for data sets with a clear cluster structure, the algorithm often achieves better clustering accuracy than the k-means and AP algorithms, demonstrating a degree of effectiveness. Finally, several research directions are given to promote the algorithm and uncover its practical value.

10.
Traditional clustering algorithms handle only single-type attributes and deal poorly with mixed-attribute data, and most existing mixed-attribute clustering algorithms are sensitive to initialization and cannot handle arbitrarily shaped data. To address these problems, an information-entropy-based spectral clustering algorithm for mixed-attribute data is proposed. First, a new similarity measure is introduced: the traditional similarity matrix is replaced by combining the Gaussian kernel matrix built from the numerical attributes, as in spectral clustering, with a new information-entropy-based influence-factor matrix built from the categorical attributes; the new similarity matrix avoids conversions between numerical and categorical attributes and parameter tuning. Then the new similarity matrix is used in the spectral clustering algorithm, so that arbitrarily shaped data can be handled, and the final clustering result is obtained. Experiments on UCI data sets show that the algorithm handles the clustering of mixed-attribute data effectively, with high stability and good robustness.
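As a hedged sketch of the similarity construction described (the combination rule and the entropy weighting scheme are assumptions): a Gaussian kernel on the numerical attributes is combined with an entropy-weighted categorical agreement score, and the resulting matrix is passed to spectral clustering as a precomputed affinity.

```python
# Hedged sketch: entropy-weighted mixed similarity fed to spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def entropy(col):
    _, counts = np.unique(col, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

X_num = np.array([[0.0], [0.1], [5.0], [5.2]])
X_cat = np.array([["x", "u"], ["x", "u"], ["y", "v"], ["y", "v"]])

sigma = 1.0
K_num = np.exp(-np.linalg.norm(X_num[:, None] - X_num[None, :], axis=-1) ** 2
               / (2 * sigma ** 2))
# Assumption: attributes with higher entropy get proportionally more weight.
w = np.array([entropy(X_cat[:, j]) for j in range(X_cat.shape[1])])
w = w / w.sum()
agree = (X_cat[:, None, :] == X_cat[None, :, :]).astype(float) @ w
S = K_num * agree                          # combined mixed-attribute affinity
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(labels)
```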

11.
Development of classification methods using case-based reasoning systems is an active area of research. In this paper, two new case-based reasoning systems are proposed, with two similarity measures that support mixed categorical and numerical data as well as purely categorical data. The principal difference between the two measures lies in how distance is calculated for categorical data. The first, named distance in unsupervised learning (DUL), is derived from the co-occurrence of values; the other, named distance in supervised learning (DSL), calculates the distance between two values of the same feature with respect to every other feature for a given class. The distance between numerical data is computed using the Euclidean distance. Furthermore, the importance of numeric features is determined by linear discriminant analysis (LDA), and the weights assigned to categorical features depend on the co-occurrence of feature values when calculating the similarity between a new case and an old one. The performance of the proposed case-based reasoning systems has been investigated on University of California, Irvine (UCI) data sets by 5-fold cross-validation. The results indicate that these case-based reasoning systems deliver good performance in both predictive accuracy and interpretability.

12.
Mixed data sets containing numerical and categorical attributes are nowadays ubiquitous. Converting them to one attribute type may lead to a loss of information. We present an approach for handling numerical and categorical attributes in a holistic view. For data sets with many attributes, dimensionality reduction (DR) methods can help to generate visual representations involving all attributes. While automatic DR for mixed data sets is possible using weighted combinations, the impact of each attribute on the resulting projection is difficult to measure. Interactive support allows the user to understand the impact of data dimensions in the formation of patterns. Star Coordinates is a well-known interactive linear DR technique for multi-dimensional numerical data sets. We propose to extend Star Coordinates and its initial configuration schemes to mixed data sets. In conjunction with analysing numerical attributes, our extension allows for exploring the impact of categorical dimensions and individual categories on the structure of the entire data set. The main challenge when interacting with Star Coordinates is typically to find a good configuration of the attribute axes. We propose a guided mixed data analysis based on maximizing projection quality measures by the use of recommended transformations, named hints, in order to find a proper configuration of the attribute axes.
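For readers unfamiliar with the underlying technique, a minimal sketch of the basic Star Coordinates mapping that the paper extends: each attribute j gets a 2-D axis vector a_j, and a min-max-normalized point x projects to the sum over j of x_j · a_j. The categorical extension (e.g., one axis per category via one-hot coding) is an assumption consistent with, but not identical to, the paper's proposal.

```python
# Hedged sketch of the basic Star Coordinates projection.
import numpy as np

def star_coordinates(X, axes=None):
    """Map n x d data to the plane: each attribute j has a 2-D axis vector a_j
    and a point x maps to sum_j x_j * a_j (attributes min-max normalized)."""
    d = X.shape[1]
    if axes is None:                       # default: evenly spaced unit axes
        theta = 2 * np.pi * np.arange(d) / d
        axes = np.column_stack([np.cos(theta), np.sin(theta)])
    rng = np.ptp(X, axis=0)
    Xn = (X - X.min(axis=0)) / np.where(rng == 0, 1.0, rng)
    return Xn @ axes                       # interaction = editing rows of axes

X = np.array([[1.0, 10.0, 0.0], [2.0, 20.0, 1.0], [3.0, 30.0, 0.0]])
print(star_coordinates(X))                 # 3 points in the 2-D plane
```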

13.
To solve the problem of resource discovery in dynamic grid environments, a grid resource discovery algorithm based on feature-weighted fuzzy K-prototypes clustering is proposed. According to how much a resource request cares about each resource dimension, the algorithm partitions the set of grid resource nodes, which mixes numerical and categorical attributes, using feature-weighted fuzzy K-prototypes clustering. It then uses the static numerical and categorical features of the resources to determine the cluster most similar to the attribute features of the resource request, and finally selects the optimal resource node by taking the resources' dynamic numerical features into account. Simulation results show that, compared with similar algorithms, the algorithm improves the precision and robustness of resource discovery and reduces the average response time.
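A hedged sketch of the feature-weighted K-prototypes-style dissimilarity at the heart of the matching step: weighted squared Euclidean distance on the numerical features plus gamma times the weighted count of categorical mismatches. The weights and gamma are illustrative, not the paper's learned values.

```python
# Hedged sketch of a feature-weighted K-prototypes mixed dissimilarity.
import numpy as np

def kproto_dissim(x_num, p_num, x_cat, p_cat, w_num=None, w_cat=None, gamma=1.0):
    w_num = np.ones(len(x_num)) if w_num is None else np.asarray(w_num)
    w_cat = np.ones(len(x_cat)) if w_cat is None else np.asarray(w_cat)
    d_num = np.sum(w_num * (np.asarray(x_num) - np.asarray(p_num)) ** 2)
    d_cat = np.sum(w_cat * (np.asarray(x_cat) != np.asarray(p_cat)))
    return d_num + gamma * d_cat

# A resource request weighted toward the first numeric dimension (e.g. CPU):
print(kproto_dissim([2.4, 8.0], [2.0, 16.0], ["linux"], ["linux"],
                    w_num=[2.0, 0.5], gamma=1.0))
```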

14.
Rough set reduction has been used as an important preprocessing tool for pattern recognition, machine learning and data mining. As classical Pawlak rough sets can only evaluate categorical features, a neighborhood rough set model is introduced to deal with numerical data sets. Three-way decision theory, proposed by Yao, derives from Pawlak rough sets and probabilistic rough sets, trading off different types of classification error to obtain a minimum-cost ternary classifier. In this paper, we discuss reduction problems based on three-way decisions and neighborhood rough sets. First, the three-way decision reducts of positive region preservation, boundary region preservation and negative region preservation are introduced into the neighborhood rough set model. Second, three condition entropy measures are constructed from the three-way decision regions by considering variants of neighborhood classes. The monotonicity of these entropy measures is proved, from which heuristic reduction algorithms for neighborhood systems are obtained. Finally, experimental results show that the three-way decision reduction approaches are effective feature selection techniques for numerical data sets.
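A minimal sketch of the neighborhood positive region, the building block of the positive-region-preservation reduct mentioned above: a sample belongs to the positive region when every sample in its delta-neighborhood carries the same class label. The radius `delta` is an assumption.

```python
# Hedged sketch: neighborhood positive region for numerical data.
import numpy as np

def positive_region(X, y, delta=0.5):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    pos = []
    for i in range(len(X)):
        nbrs = np.where(D[i] <= delta)[0]   # delta-neighborhood of x_i
        if np.all(y[nbrs] == y[i]):         # neighborhood is class-consistent
            pos.append(i)
    return pos

X = np.array([[0.0], [0.2], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(positive_region(X, y, delta=0.3))     # -> [0, 1, 2, 3]
```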

15.
Cluster analysis, or clustering, refers to the analysis of the structural organization of a data set. This analysis is performed by grouping together objects of the data that are more similar among themselves than to objects of different groups. The sampled data may be described by numerical features or by a symbolic representation, known as categorical features. These features often require a transformation into numerical data in order to be properly handled by clustering algorithms. The transformation usually assigns each feature a weight calculated by a measure of importance (e.g., frequency or mutual information). A problem with this weight assignment is that the values are calculated with respect to the whole set of objects and features, which is problematic when a subset of the features has a higher degree of importance for one subset of objects but a lower degree for another. One way to deal with this is to measure the importance of each subset of features only with respect to a subset of objects. This is known as co-clustering: similarly to clustering, it is the task of finding subsets of objects and features that present a higher similarity among themselves than to other subsets of objects and features. As one might notice, this task has a higher complexity than traditional clustering and, if not properly handled, may present a scalability issue. In this paper we propose a novel co-clustering technique, called HBLCoClust, with the objective of extracting a set of co-clusters from a categorical data set, without the guarantees of an enumerative algorithm but with the benefit of scalability. This is done by using a probabilistic technique, Locality Sensitive Hashing, together with the enumerative algorithm InClose. The experimental results are competitive when applied to labeled categorical data sets and text corpora. Additionally, it is shown that the extracted co-clusters can be of practical use to expert systems such as recommender systems and topic extraction.

16.
In this paper, we develop a semi-supervised regression algorithm to analyze data sets which contain both categorical and numerical attributes. The algorithm partitions the data set into several clusters and at the same time fits a multivariate regression model to each cluster. This framework incorporates both multivariate regression models for the numerical variables (supervised learning) and k-modes clustering for the categorical variables (unsupervised learning). The estimates of the regression models and the k-modes parameters are obtained simultaneously by minimizing a function that is the weighted sum of the least-squares errors of the multivariate regression models and the dissimilarity measures among the categorical variables. Both synthetic and real data sets are presented to demonstrate the effectiveness of the proposed method.
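A hedged sketch of the kind of objective described, for one cluster: a weighted sum of the regression least-squares error (numerical part) and the simple-matching dissimilarity to the cluster mode (categorical part). The weight `alpha` and the matching dissimilarity are assumptions about the exact formulation.

```python
# Hedged sketch of a per-cluster objective combining regression error and
# k-modes dissimilarity; alpha is an illustrative trade-off weight.
import numpy as np

def cluster_objective(X_num, y, X_cat, beta, mode, alpha=0.5):
    """alpha * ||y - X beta||^2 + (1 - alpha) * categorical mismatches."""
    resid = y - X_num @ beta                  # multivariate regression error
    mismatch = (X_cat != mode).sum()          # k-modes dissimilarity to mode
    return alpha * float(resid @ resid) + (1 - alpha) * float(mismatch)

X_num = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.1, 3.9, 6.2])
X_cat = np.array([["a"], ["a"], ["b"]])
print(cluster_objective(X_num, y, X_cat, beta=np.array([2.0]),
                        mode=np.array(["a"])))
```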

17.
Many pattern classification algorithms such as Support Vector Machines (SVMs), Multi-Layer Perceptrons (MLPs), and K-Nearest Neighbors (KNNs) require the data to consist of purely numerical variables. However, many real-world data sets consist of both categorical and numerical variables. In this paper we suggest an effective method of converting such mixed data into purely numerical data for binary classification. Since the suggested method is based on the theory of learning Bayesian Network Classifiers (BNCs), it is computationally efficient and robust to noise and data losses. The suggested method is also expected to extract sufficient information for estimating a minimum-error-rate (MER) classifier. Simulations on artificial and real-world data sets demonstrate the competitiveness of the suggested method when the number of values of each categorical variable is large and BNCs accurately model the data.
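The BNC construction itself is not reproduced here; as a hedged sketch in its spirit, the snippet below converts a categorical variable to a numerical one for binary classification by mapping each category to a smoothed estimate of P(y = 1 | value). The Laplace-style smoothing is an assumption.

```python
# Hedged sketch: categorical-to-numeric conversion for binary classification.
import numpy as np

def categorical_to_numeric(col, y, smoothing=1.0):
    """Map each category value to a smoothed estimate of P(y=1 | value)."""
    mapping = {}
    for v in np.unique(col):
        mask = col == v
        mapping[v] = (y[mask].sum() + smoothing) / (mask.sum() + 2 * smoothing)
    return np.array([mapping[v] for v in col]), mapping

col = np.array(["red", "blue", "red", "green", "blue"])
y = np.array([1, 0, 1, 1, 0])
x_num, mapping = categorical_to_numeric(col, y)
print(mapping)   # e.g. {'blue': 0.25, 'green': 0.67, 'red': 0.75}
```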

18.
This paper proposes the extended set difference degree, which measures the overall degree of difference among multiple sets, together with related theorems, and gives CAESD, a new algorithm for clustering high-dimensional categorical data. Based on the extended set difference degree and extended set feature vectors, the algorithm completes the whole clustering process in two stages built on CABOSFV_C clustering. Comparative experiments on UCI data sets against K-modes and its improved variants and against the CABOSFV_C algorithm show that CAESD achieves higher clustering accuracy.

19.
Because symbolic (categorical) data lack a clear spatial structure, it is hard to construct a reasonable similarity measure for them, which makes many numerical clustering algorithms difficult to extend to symbolic data. For this situation, this paper introduces a spatial-structure representation that converts symbolic data into numerical data and reconstructs the similarity between samples while preserving the structural characteristics of the original symbolic data. On this basis, the affinity propagation (AP) clustering algorithm is transferred to symbolic data clustering, yielding a spatial-structure-based AP algorithm for symbolic data (SBAP). Experiments on several symbolic data sets from the UCI repository show that SBAP enables AP to handle symbolic data clustering effectively and improves performance.

20.
Analysing and clustering units described by a mixture of sets of quantitative, categorical and frequency variables is a relevant challenge. Multiple factor analysis is extended to include these three types of variables in order to balance the influence of the different sets when a global distance between units is computed. Suitable coding is adopted to keep as close as possible to the approach offered by principal axes methods, that is, principal component analysis for quantitative sets, multiple correspondence analysis for categorical sets and correspondence analysis for frequency sets. In addition, the presence of frequency sets poses the problem of selecting the unit weighting, since this is fixed by the user (usually uniform) in principal component analysis and multiple correspondence analysis, but imposed by the table margin in correspondence analysis. The method's main steps are presented and illustrated by an example extracted from a survey that aimed to cluster respondents to a questionnaire that included both closed and open-ended questions.

