Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper describes an R package, SPATCLUS, that implements a recently proposed method for spatial cluster detection in case event data. The method is based on a data transformation defined by a trajectory, which assigns each point a selection order and the distance to its nearest neighbour, where the nearest point is searched among the points not yet selected in the trajectory. To compensate for trajectory effects, the distance is weighted by the expected distance under the hypothesis of a uniform distribution. Potential clusters are located using multiple structural change models and a dynamic programming algorithm, and the double maximum test selects the best model. The significance of potential clusters is assessed by Monte Carlo simulation. The method enables the detection of multiple clusters of arbitrary shape.
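The trajectory construction this abstract describes can be sketched as follows. This is a minimal illustration of the data transformation only, not the SPATCLUS implementation: the weighting by the expected distance under uniformity and the structural change models are omitted, and the function name and `start` parameter are invented.

```python
import numpy as np

def trajectory_transform(points, start=0):
    """Build a trajectory: from a starting point, repeatedly jump to the
    nearest point not yet selected, recording the selection order and
    each jump distance."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    remaining = set(range(n))
    remaining.remove(start)
    order, dists = [start], []
    cur = start
    while remaining:
        idx = list(remaining)
        d = np.linalg.norm(pts[idx] - pts[cur], axis=1)
        j = idx[int(np.argmin(d))]          # nearest not-yet-selected point
        dists.append(float(d.min()))
        order.append(j)
        remaining.remove(j)
        cur = j
    return order, dists
```

A long jump distance in `dists` (relative to what uniformity would predict) signals that the trajectory has left one cluster and entered another.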

2.
A statistical procedure is proposed for estimating the interaction radius between points of a non-stationary point process that can present locally aggregated and regular patterns. The model under consideration is a hierarchical process with two levels, points and clusters of points: points represent individuals, clusters represent groups of individuals. Points or clusters do not interact when they are located beyond a given interaction radius, and are assumed to interact when their distance is less than this radius. The interaction radius is estimated as follows. For a given distance, observations are split into clusters whose in-between distances are larger than this distance. For each cluster, a neighbourhood and an area in which the cluster is randomly located are defined under the assumption that the distance between the cluster and its neighbourhood is larger than the interaction radius. The p-value of a test of this assumption is then computed for each cluster. Modelling the expectation of this p-value as a function of the distance leads to an estimate of the interaction radius by a least-squares method. The approach is shown to be robust against non-stationarity, and unlike most classical approaches it makes no assumption about the spatial distribution of points inside the clusters. Two applications are presented in animal and plant ecology.

3.
Major problems exist in both crisp and fuzzy clustering algorithms. Fuzzy c-means type algorithms use weights determined by a power m of inverse distances that remains fixed over all iterations and all clusters, even though smaller clusters should have a larger m. Our method uses a different "distance" for each cluster that changes over the early iterations to fit the clusters; comparisons show improved results. We also address other perplexing problems in clustering: (i) finding the optimal number K of clusters; (ii) assessing the validity of a given clustering; (iii) preventing the selection of seed vectors as initial prototypes from affecting the clustering; (iv) preventing the order of merging from affecting the clustering; and (v) permitting the clusters to form more natural shapes rather than forcing them into normed balls of the distance function. We employ a relatively large number K of uniformly randomly distributed seeds and then thin them to leave fewer uniformly distributed seeds. The main loop then iterates by assigning the feature vectors and computing new fuzzy prototypes, and our fuzzy merging merges any clusters that are too close to each other. We use a modified Xie-Beni validity measure as the goodness-of-clustering measure for multiple values of K in a user-interaction approach, where the user selects two parameters (for eliminating clusters and merging clusters after viewing the results thus far). The algorithm is compared with fuzzy c-means on the iris data and on the Wisconsin breast cancer data.
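For contrast with the adaptive per-cluster "distance" this abstract argues for, the standard fuzzy c-means baseline with a fixed fuzzifier m can be sketched in a few lines (function name and defaults are illustrative, not from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain FCM: alternate between updating fuzzy prototypes from the
    memberships and updating memberships from inverse distances raised
    to the fixed power 2/(m-1)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        p = d ** (-2.0 / (m - 1.0))
        U = p / p.sum(axis=0)               # u_ik = d_ik^(-2/(m-1)) / sum_j ...
    return centers, U
```

The fixed exponent `2/(m-1)` applied uniformly to every cluster is precisely what the paper's per-cluster, iteration-dependent "distance" replaces.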

4.
In this paper, we present a modified filtering algorithm (MFA) that exploits center variations to speed up the clustering process. Our method first divides clusters into static and active groups and uses the information on cluster displacements to reject unlikely cluster centers for all nodes in the kd-tree. We reduce the computational complexity of the filtering algorithm (FA) by finding candidates for each node mainly from the set of active cluster centers, and we develop two conditions for determining that candidate set from the active clusters. Our approach differs from the major available algorithm, which passes no information from one stage of iteration to the next. Theoretical analysis shows that our method reduces the computational complexity of FA, in terms of the number of distance calculations, at each stage of iteration by a factor of FC/AC, where FC and AC are the numbers of total clusters and active clusters, respectively. Compared with FA, our algorithm effectively reduces the computing time and the number of distance calculations, and it generates the same clusters as those produced by hard k-means clustering. The advantage of our method is more pronounced for larger data sets of higher dimension.
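The static/active distinction can be illustrated with plain Lloyd iterations that track which centers moved in the previous pass. This is only a sketch of the bookkeeping (names are invented and the initialization is naive); the actual MFA additionally prunes candidate centers per kd-tree node using these displacements.

```python
import numpy as np

def kmeans_active(X, k, n_iter=50, tol=1e-9):
    """Lloyd's k-means recording, at each iteration, how many clusters are
    'active' (their centers moved more than tol last update). MFA restricts
    candidate search mostly to this shrinking active set."""
    X = np.asarray(X, dtype=float)
    centers = X[:k].copy()                  # naive init, fine for a sketch
    active_history = []
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        moved = np.linalg.norm(new - centers, axis=1)
        active_history.append(int((moved > tol).sum()))
        centers = new
        if active_history[-1] == 0:         # all clusters static: converged
            break
    return centers, labels, active_history
```

Typically the active count drops quickly across iterations, which is where the claimed FC/AC saving in distance calculations comes from.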

5.
In this paper, we present a fast global k-means clustering algorithm, referred to as MFGKM, that makes use of the cluster membership and geometrical information of each data point. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS: MCS determines more than one starting point for each step of cluster split, whereas the available fast and modified global k-means clustering algorithms select a single starting point per split. Our proposed MFGKM obtains the least distortion, while MFGKM+MCS may give the least computing time. Compared to the modified global k-means clustering algorithm, MFGKM reduces the computing time and the number of distance calculations by factors of 3.78-5.55 and 21.13-31.41, respectively, with an average distortion reduction of 5,487 on the Statlog data set. Compared to the fast global k-means clustering algorithm, MFGKM+MCS reduces the computing time by a factor of 5.78-8.70 with an average distortion reduction of 30,564 on the same data set. The advantages of our proposed methods are more pronounced when a data set of higher dimension is divided into more clusters.

6.
In this paper the problem of automatically clustering a data set is posed as a multiobjective optimization (MOO) problem in which a set of cluster validity indices is optimized simultaneously. The proposed multiobjective clustering technique uses a recently developed simulated annealing based multiobjective optimization method as the underlying optimization strategy. A variable number of cluster centers is encoded in each string, so the number of clusters varies across strings within a range. Points are assigned to clusters based on a newly developed point symmetry based distance rather than the usual Euclidean distance. Two cluster validity indices, the Euclidean distance based XB-index and a recently developed point symmetry distance based index, Sym-index, are optimized simultaneously to determine the appropriate number of clusters in a data set. The proposed technique can thus detect both the proper number of clusters and the appropriate partitioning for data sets with either hyperspherical or point-symmetric clusters. A new semi-supervised method is also proposed for selecting a single solution from the final Pareto-optimal front. The efficacy of the proposed algorithm is shown on seven artificial and six real-life data sets of varying complexity, and results are compared with those obtained by another multiobjective clustering technique, MOCK, and by two single-objective genetic-algorithm-based automatic clustering techniques, VGAPS clustering and GCUK clustering.
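A minimal version of a point-symmetry-based distance (the general idea, not this paper's exact definition, and with invented names) reflects a point through a candidate center and checks whether the reflection lands near actual data:

```python
import numpy as np

def point_symmetry_distance(x, center, X, knear=2):
    """Reflect x through the candidate center; the symmetry term is the
    mean distance from the reflected point to its knear nearest data
    points, scaled by the Euclidean distance from x to the center.
    The choice knear=2 is an assumption for illustration."""
    x = np.asarray(x, dtype=float)
    center = np.asarray(center, dtype=float)
    X = np.asarray(X, dtype=float)
    reflected = 2.0 * center - x            # mirror image of x about center
    d = np.sort(np.linalg.norm(X - reflected, axis=1))
    d_sym = d[:knear].mean()
    return d_sym * np.linalg.norm(x - center)
```

A center that is a true symmetry point of the data yields a small value even for non-hyperspherical clusters, which is why such a distance can replace the Euclidean one in the assignment step.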

7.
To address the k-prototypes algorithm's inability to determine the number of clusters automatically or to discover clusters of arbitrary shape, a new method for mixed-type data is proposed: a clustering algorithm that searches for density peaks. First, the CFSFDP (Clustering by Fast Search and Find of Density Peaks) algorithm is extended to mixed-type data sets: after a distance between mixed-type data objects is defined, CFSFDP is used to determine the cluster centers, which automatically fixes the number of clusters, and the remaining points are then assigned in order of decreasing density. Second, the selection of the algorithm's threshold (cut-off distance) and weights is studied: the threshold in the density formula is extracted automatically by computing the potential entropy of the data field, and the weights in the distance formula are defined using statistics that measure the clustering tendency of the numerical and categorical attributes. Finally, tests on three real mixed-type data sets show that, compared with the traditional k-prototypes algorithm, the density-peak-seeking clustering algorithm effectively improves clustering accuracy.
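The two quantities CFSFDP computes for every object, a local density rho and the distance delta to the nearest denser object, can be sketched for a plain distance matrix as follows (cut-off kernel; the entropy-based threshold selection and the mixed-type distance from this paper are omitted):

```python
import numpy as np

def density_peaks(D, dc):
    """From a pairwise distance matrix D: rho[i] counts neighbours within
    the cut-off dc (excluding i itself); delta[i] is the minimum distance
    from i to any point of higher density (for the densest point, the
    maximum distance is used). Cluster centers are points where both rho
    and delta are large."""
    n = len(D)
    rho = (D < dc).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(n)
    order = np.argsort(-rho)                # indices, densest first
    delta[order[0]] = D[order[0]].max()
    for i in range(1, n):
        p = order[i]
        delta[p] = D[p, order[:i]].min()
    return rho, delta
```

After the centers are picked from the rho-delta decision graph, each remaining point inherits the label of its nearest denser neighbour, in order of decreasing density, exactly as the abstract describes.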

8.
Some decision-making problems involve grouping and then selecting among a set of alternatives. Traditional decision-making approaches treat different sets of alternatives with the same method of analysis and selection. In this paper, we propose clustering alternatives into different sets so that a different method of analysis, selection, and implementation can be applied to each set. We consider multiple-criteria decision-making alternatives in which the decision-maker faces several conflicting and non-commensurate objectives (or criteria); for example, consider buying a set of computers for a company that vary in their functions, prices, and computing power. We develop theories and procedures for clustering and selecting discrete multiple-criteria alternatives. The clustered sets of alternatives are mutually exclusive and are based on (1) similar features among alternatives, and (2) the preferential structure of the decision-maker. The decision-making process comprises three steps: (1) generating alternatives; (2) grouping or clustering alternatives based on the similarity of their features; and (3) choosing one or more alternatives from each cluster. We utilize unsupervised-learning clustering artificial neural networks (ANN) with variable weights for clustering the alternatives, and feedforward ANN for selecting the best alternatives from each cluster. The decision-maker is interactively involved, comparing and contrasting alternatives within each group so that the best alternative can be selected from each group. For the learning mechanism of the ANN, we propose a generalized Euclidean distance whose coefficients can be changed to form new clusters of alternatives. The algorithm is interactive, and the results are independent of the initial set-up information. Some examples and computational results are presented.
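The variable-weight generalized Euclidean distance mentioned above can be written down directly; re-weighting the criteria changes which alternatives group together. A sketch, with an invented function name:

```python
import numpy as np

def weighted_euclidean(a, b, w):
    """Generalized Euclidean distance between alternatives a and b with
    per-criterion coefficients w; setting a coefficient to zero removes
    that criterion from the comparison entirely."""
    a, b, w = (np.asarray(v, dtype=float) for v in (a, b, w))
    return float(np.sqrt((w * (a - b) ** 2).sum()))
```

Varying `w` interactively is what lets the decision-maker steer the cluster formation toward the criteria they care about.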

9.
As one of the most fundamental and important data clustering methods, the center-based partitioning approach clusters a dataset into k subsets, each represented by a centroid or medoid. In this paper, we propose a new medoid-based k-partitions approach called Clustering Around Weighted Prototypes (CAWP), which works with a similarity matrix. In CAWP, each cluster is characterized by multiple objects with different representative weights. With this new cluster representation scheme, CAWP aims to simultaneously produce clusters of improved quality and a set of ranked representative objects for each cluster. An efficient algorithm is derived to alternately update the clusters and the representative weights of objects with respect to each cluster. An annealing-like optimization procedure is incorporated to alleviate the local optimum problem, yielding better clustering results and at the same time making the algorithm less sensitive to parameter settings. Experimental results on benchmark document datasets show that CAWP achieves favorable effectiveness and efficiency in clustering and also provides useful information for cluster-specific analysis.

10.
Very large scale networks have become common in distributed systems, and the distributed and networking research communities are developing various techniques to manage them efficiently. In this paper, we focus on one of those techniques, network clustering: the partitioning of a system into connected subsystems. The clustering we compute is size-oriented: given a parameter K of the algorithm, we compute, as far as possible, clusters of size K. We present an algorithm to compute a binary hierarchy of nested disjoint clusters. A token browses the network and recruits nodes to its cluster. When a cluster reaches the maximal size defined by a parameter of the algorithm, it is divided when possible, and tokens are created in both of the new clusters; the new clusters are then built and divided in the same fashion. The token browsing scheme chosen is a random walk, in order to ensure local load balancing. To allow the division of clusters, a spanning tree is built for each cluster, and at each division, information on how to route messages between the clusters is stored. The naming process used for the clusters, together with the information stored during each division, allows routing between any two clusters.

11.
Fuzzy c-means (FCM) is one of the most popular data clustering techniques. Since FCM tends to balance the number of data points in each cluster, the centers of smaller clusters are forced to drift toward larger adjacent clusters, so for datasets with unbalanced clusters the partition results of FCM are usually unsatisfactory. Cluster size insensitive FCM (csiFCM) addressed this "cluster-size sensitivity" problem by dynamically adjusting the condition value for the membership of each data point based on cluster size after the defuzzification step in each iterative cycle. However, the performance of csiFCM is sensitive both to the initial positions of the cluster centers and to the "distance" between adjacent clusters. In this paper, we present a cluster size insensitive integrity-based FCM method called siibFCM to remedy these deficiencies of csiFCM. The siibFCM method determines the membership contribution of every data point to each individual cluster by considering the cluster's integrity, a combination of compactness and purity: "compactness" represents the distribution of data points within a cluster, while "purity" represents how far a cluster is from its adjacent cluster. We tested siibFCM and compared it with the traditional FCM and csiFCM methods extensively using artificially generated datasets with different shapes and data distributions, synthetic images, real images, and the Escherichia coli dataset. Experimental results show that siibFCM is superior to both traditional FCM and csiFCM in terms of its tolerance for the "distance" between adjacent clusters and its flexibility in selecting initial cluster centers when dealing with datasets with unbalanced clusters.

12.
Prototype-based methods are commonly used in cluster analysis, and the results may depend heavily on the prototype used. We propose a two-level fuzzy clustering method that adaptively expands and merges convex polytopes, where the convex polytopes are considered a "flexible" prototype; the dependency on a specified prototype can therefore be eliminated. The proposed method also makes it possible to effectively represent an arbitrarily distributed data set without a priori knowledge of the number of clusters it contains. In the first level of the method, each cluster is represented by a convex polytope described by its set of vertices; specifically, nonlinear membership functions are utilized to determine whether an input pattern creates a new cluster or modifies an existing one. In the second level, the expandable clusters selected by an intercluster distance measure are merged to improve clustering efficiency and to reduce the dependency on the order of the incoming input patterns. Several experimental results are given to show the validity of our method.

13.
To address the high dimensionality, low accuracy, and partial overlap encountered in image clustering, an efficient multi-label image clustering method based on link hierarchical clustering is proposed. The method computes similarity from image distances and detects overlapping clusters through link clustering, so that each image may belong to several clusters and the cluster labels become more meaningful. To validate the method, clustering was performed on image sets returned by a search engine for specific keywords. The results show that the method effectively discovers clusters with overlapping partitions, and that the clusters have relatively clear meanings.

14.
A novel clustering method based on non-parametric local shrinking is proposed. Each data point is transformed so that it moves a specific distance toward a cluster center, with the direction and size of each movement determined by the median of its K nearest neighbors. This process is repeated until a pre-defined convergence criterion is satisfied. The optimal number of neighbors is determined by optimizing commonly used index functions that measure the strength of the clusters generated by the algorithm. The number of clusters and the final partition are determined automatically, with no input parameter other than the stopping rule for convergence. Experiments on simulated and real data sets suggest that the proposed algorithm achieves relatively high accuracy compared with classical clustering algorithms.
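One shrinking pass of the kind described, moving each point toward the coordinate-wise median of its K nearest neighbours, can be sketched as follows. The convergence criterion and the index-based choice of K are omitted, and including the point itself among its neighbours is an assumption of this sketch.

```python
import numpy as np

def shrink_step(X, k):
    """Move every point to the coordinate-wise median of its k nearest
    neighbours (itself included). Iterating this collapses the data
    onto cluster cores, after which the clusters can be read off."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    nn = np.argsort(D, axis=1)[:, :k]       # k nearest, incl. self
    return np.median(X[nn], axis=1)
```

Using the median rather than the mean makes each step robust to stray points from neighbouring clusters, which is the "non-parametric" appeal of the approach.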

15.
Rough K-means clustering and its derivatives require the number of clusters to be given in advance, and their random selection of initial cluster centers leads to low accuracy when partitioning data in the regions where clusters overlap. To address these problems, this paper proposes a rough fuzzy K-means clustering algorithm based on a hybrid measure and adaptive cluster adjustment. When computing the degree to which data objects in the boundary region belong to different clusters, a hybrid measure combining local density and distance is used, and the number of clusters is adjusted adaptively to obtain the optimal cluster count. The midpoint of the two closest samples in a dense region of data objects is selected as an initial cluster center, nearby objects whose local density exceeds the average density are assigned to that cluster, and the remaining initial centers are then selected in the same way, making the choice of initial centers more reasonable. Experiments on artificial data sets and UCI benchmark data sets show that the proposed algorithm is adaptive and achieves better clustering accuracy when handling spherical data sets with heavily overlapping clusters.

16.
In this paper a fuzzy point symmetry based genetic clustering technique (Fuzzy-VGAPS) is proposed which can automatically determine both the number of clusters present in a data set and a good fuzzy partitioning of the data. The clusters can be of any size, shape, or convexity as long as they possess the property of symmetry. The membership values of points to different clusters are computed using the newly proposed point symmetry based distance, and a variable number of cluster centers is encoded in the chromosomes. A new fuzzy symmetry based cluster validity index, FSym-index, is first proposed and then utilized to measure the fitness of the chromosomes. The proposed index can detect non-convex as well as convex non-hyperspherical partitionings with a variable number of clusters, and it is mathematically justified via its relationship to a well-defined hard cluster validity function, Dunn's index, for which the condition of uniqueness has already been established. The results of Fuzzy-VGAPS are compared with those obtained by seven other algorithms, both fuzzy and crisp, on four artificial and four real-life data sets. Real-life applications of Fuzzy-VGAPS to automatically clustering gene expression data and to segmenting a magnetic resonance brain image with multiple sclerosis lesions are also demonstrated.

17.
As the first major step in any object-oriented feature extraction approach, segmentation plays an essential preliminary role for further and higher levels of image processing. The primary objective of this paper is to illustrate the potential of Polarimetric Synthetic Aperture Radar (PolSAR) features extracted from Compact Polarimetry (CP) SAR data for image segmentation using a Markov Random Field (MRF). The proposed method takes advantage of both spectral and spatial information to segment the CP SAR data. In the first step, k-means clustering is applied to over-segment the image using features optimally selected with a Genetic Algorithm (GA). A probabilistic distance is then used as the similarity criterion for an agglomerative hierarchical merging of the small clusters into an appropriate number of larger clusters. In the agglomerative approach, the estimate of the appropriate number of clusters obtained with the data log-likelihood algorithm depends on the distance criterion used: the Wishart Chernoff distance, which is independent of the samples (pixels), tends to yield a larger number of clusters than the Wishart test statistic distance, because it preserves detailed data information corresponding to small clusters. The probabilistic distance used in this study is the Wishart Chernoff distance, which evaluates the similarity of clusters by measuring the distance between their complex Wishart probability density functions. The output of this step, the initial segmentation of the image, is fed into a Markov Random Field model to improve the final segmentation using vicinity information; the method combines Wishart clustering and the enhanced initial clusters to define the posterior MRF energy function. The contextual image classifier adopts the Iterated Conditional Modes (ICM) approach to converge to a local minimum, representing a good trade-off between segmentation accuracy and computational burden. The results showed that the PolSAR features extracted from the CP mode provide acceptable overall segmentation accuracy compared to full polarimetry (FP) and dual polarimetry (DP) data, and that the proposed algorithm is superior to existing image segmentation techniques in terms of segmentation accuracy.

18.
An ideal video database organization should store the feature vectors of semantically related and visually similar videos adjacently. Targeting the characteristics of large-scale video databases, the database is hierarchically clustered on low-level visual features under semantic supervision; when a cluster contains videos of only one semantic category, an index entry is built for that cluster, and the raw feature data of each cluster are stored contiguously on disk. The probabilistic links between low-level and high-level features are collected statistically to build a Bayes classifier. At query time, given a user's query example, the most likely candidate clusters are determined first, and similar video segments are then searched within those candidate clusters. Experimental results show that this method improves both retrieval speed and the semantic sensitivity of retrieval.

19.
Wang, Yizhang; Wang, Di; Zhang, Xiaofeng; Pang, Wei; Miao, Chunyan; Tan, Ah-Hwee; Zhou, You. Neural Computing & Applications (2020) 32(17): 13465-13478

Density peak clustering (DPC) is a recently developed density-based clustering algorithm that achieves competitive performance in a non-iterative manner. DPC can effectively handle clusters with a single density peak (single center): under DPC's hypothesis, exactly one data point is chosen as the center of any cluster. However, DPC may fail to identify clusters with multiple density peaks (multi-centers) and may not identify natural clusters whose centers have relatively low local density. To address these limitations, we propose a novel hierarchical clustering algorithm named multi-center density peak clustering (McDPC). First, following the widely adopted hypothesis that potential cluster centers are relatively far away from each other, McDPC obtains the centers of the initial micro-clusters (named representative data points), whose minimum distances to other higher-density data points are relatively large. Second, the representative data points are autonomously categorized into different density levels. Finally, McDPC deals with the micro-clusters at each level and, where necessary, merges the micro-clusters at a specific level into one cluster to identify multi-center clusters. To evaluate the effectiveness of McDPC, we conduct experiments on both synthetic and real-world datasets and benchmark its performance against other state-of-the-art clustering algorithms. We also apply McDPC to image segmentation and facial recognition to further demonstrate its capability in real-world applications. The experimental results show that our method achieves promising performance.


20.
K-means clustering can be highly accurate when the number of clusters and the initial cluster centres are appropriate, and an inappropriate determination of either decreases its accuracy; however, determining these values is problematic. To solve this, we used density-based spatial clustering of applications with noise (DBSCAN), because it does not require a predetermined number of clusters; but it has significant drawbacks of its own: with high-dimensional data, or data whose clusters have potentially different densities, its accuracy decreases to some degree. The objective of this research is therefore to improve the efficiency of DBSCAN through a selection of region clusters based on density, so as to automatically find the appropriate number of clusters and the initial cluster centres for K-means clustering. In the proposed method, DBSCAN performs the clustering and the appropriate clusters are selected by considering the density of each cluster; the appropriate region data are then chosen from the selected clusters. The experimental results yield the appropriate number of clusters and the appropriate initial cluster centres for K-means clustering. In addition, the results of the proposed selection of region clusters based on density of DBSCAN are more accurate than those obtained by traditional methods, including DBSCAN and K-means, and by related methods such as Partitioning-based DBSCAN (PDBSCAN) and PDBSCAN with the Ant Clustering Algorithm (PACA-DBSCAN).
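The overall pipeline described here, run DBSCAN to obtain the number of clusters and initial centres and then seed K-means with them, can be sketched in a self-contained way. This is a toy DBSCAN and Lloyd loop for illustration only, not the paper's region-selection step or the PDBSCAN/PACA-DBSCAN variants, and all names are invented.

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Very small DBSCAN: grow a cluster from each unvisited core point;
    points reached by no core point keep the noise label -1."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    neigh = [np.flatnonzero(D[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(neigh[i]) < min_pts:
            continue
        labels[i] = cid
        stack = list(neigh[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(neigh[j]) >= min_pts:   # core: keep expanding
                    stack.extend(neigh[j])
        cid += 1
    return labels

def dbscan_seeded_kmeans(X, eps, min_pts, n_iter=20):
    """Use DBSCAN's cluster count and centroids as k and the initial
    centres for a plain Lloyd k-means loop."""
    labels = dbscan(X, eps, min_pts)
    ids = sorted(set(labels) - {-1})           # ignore noise
    centers = np.array([X[labels == c].mean(0) for c in ids])
    for _ in range(n_iter):
        assign = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2),
                           axis=1)
        centers = np.array([X[assign == j].mean(0) for j in range(len(ids))])
    return centers, assign
```

Because the seeds already sit inside density-coherent regions, the subsequent K-means refinement starts from a far better position than random initialization would give.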
