Similar Literature
1.
2.
DIVCLUS-T is a divisive hierarchical clustering algorithm based on a monothetic bipartitional approach that allows the dendrogram of the hierarchy to be read as a decision tree. It is designed for either numerical or categorical data. Like the Ward agglomerative hierarchical clustering algorithm and the k-means partitioning algorithm, it is based on the minimization of the inertia criterion. However, unlike Ward and k-means, it provides a simple and natural interpretation of the clusters. The price DIVCLUS-T pays, by construction, in terms of inertia for this additional interpretability is studied by applying the three algorithms to six databases from the UCI Machine Learning repository.
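For reference, the inertia criterion shared by Ward, k-means and DIVCLUS-T is the sum of squared distances of the objects to their cluster centroid. A minimal sketch of that computation (the function name and NumPy-based layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def total_inertia(X, labels):
    """Within-cluster inertia: sum of squared distances from each point in X
    to the centroid of its assigned cluster."""
    inertia = 0.0
    for c in np.unique(labels):
        members = X[labels == c]
        centroid = members.mean(axis=0)
        inertia += float(((members - centroid) ** 2).sum())
    return inertia
```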

3.
A class-consistent k-means clustering algorithm (CCKM) and its hierarchical extension (Hierarchical CCKM) are presented for generating discriminative visual words for recognition problems. In addition to using the labels of the training data themselves, we associate a class label with each cluster center to enforce discriminability in the resulting visual words. Our algorithms encourage data points from the same class to be assigned to the same visual word, and those from different classes to be assigned to different visual words. More specifically, we introduce a class-consistency term in the clustering process which penalizes the assignment of data points from different classes to the same cluster. The optimization process is efficient and bounded by the complexity of k-means clustering. A very efficient and discriminative tree classifier can be learned for various recognition tasks via the Hierarchical CCKM. The effectiveness of the proposed algorithms is validated on two public face datasets and four benchmark action datasets.
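One simple way to realize such a class-consistency term is to add a fixed penalty to the usual squared distance whenever a point's class label differs from the class label associated with a cluster center. The sketch below is a simplified illustration of that idea, not the authors' exact formulation; the penalty weight `lam` and the per-center labels are assumptions:

```python
import numpy as np

def assign_with_class_consistency(X, y, centers, center_labels, lam=1.0):
    """Assign each point to the cluster minimizing squared distance plus a
    penalty lam paid when the point's class differs from the class label
    associated with that cluster center (simplified class-consistency term)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n, k)
    penalty = lam * (y[:, None] != center_labels[None, :])          # (n, k)
    return np.argmin(d2 + penalty, axis=1)
```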

4.
Image segmentation denotes a process of partitioning an image into distinct regions. A large variety of segmentation approaches for images has been developed. Among them, clustering methods have been extensively investigated and used. In this paper, a clustering-based approach using a hierarchical evolutionary algorithm (HEA) is proposed for medical image segmentation. The HEA can be viewed as a variant of conventional genetic algorithms. By means of a hierarchical structure in the chromosome, the proposed approach can automatically classify the image into appropriate classes and avoid the difficulty of searching for the proper number of classes. The experimental results indicate that the proposed approach produces more continuous and smoother segmentation results than four existing methods: competitive Hopfield neural networks (CHNN), dynamic thresholding, k-means, and fuzzy c-means.

5.
In this paper, we present a fast global k-means clustering algorithm that makes use of the cluster membership and geometrical information of a data point. This algorithm is referred to as MFGKM. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS. MCS determines more than one starting point for each step of cluster split, while the available fast and modified global k-means clustering algorithms select one starting point for each cluster split. Our proposed method MFGKM can obtain the lowest distortion, while MFGKM+MCS may give the lowest computing time. Compared to the modified global k-means clustering algorithm, our method MFGKM reduces the computing time and the number of distance calculations by factors of 3.78-5.55 and 21.13-31.41, respectively, with an average distortion reduction of 5,487 for the Statlog data set. Compared to the fast global k-means clustering algorithm, our method MFGKM+MCS reduces the computing time by a factor of 5.78-8.70 with an average distortion reduction of 30,564 on the same data set. The performance of our proposed methods is even more remarkable when a data set of higher dimension is divided into more clusters.
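MFGKM builds on the global k-means family, which grows the solution incrementally: with j-1 centers fixed, candidate starting points for the j-th center are scored and the best one is refined by running k-means. The sketch below shows the plain fast global k-means selection step rather than the paper's inequality-based variant; the use of scikit-learn and the dense candidate scoring are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def add_one_center(X, centers):
    """One incremental step of fast global k-means: choose the data point whose
    insertion guarantees the largest reduction of the clustering error, then
    refine all centers with k-means."""
    d_curr = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    d_cand = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)       # candidate-to-point distances
    gain = np.maximum(d_curr[None, :] - d_cand, 0.0).sum(axis=1)      # guaranteed error reduction
    init = np.vstack([centers, X[np.argmax(gain)]])
    km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(X)
    return km.cluster_centers_
```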

6.
In recent years, there have been numerous attempts to extend the k-means clustering protocol for a single database to a distributed, multiple-database setting while keeping the privacy of each data site. Current solutions for multiparty (whether two or more parties) k-means clustering, built on one or more secure two-party computation algorithms, are not equally contributory; in other words, each party does not contribute equally to the k-means clustering. This may lead to a perfidious attack in which a party who learns the outcome before the other parties lies to them about it. In this paper, we present an equally contributory multiparty k-means clustering protocol for vertically partitioned data, in which each party contributes equally to the k-means clustering. Our protocol is built on ElGamal's encryption scheme, Jakobsson and Juels's plaintext equivalence test protocol, and mix networks, and protects privacy in the sense that each iteration of k-means clustering can be performed without revealing the intermediate values.

7.
In the field of multimedia retrieval from video, text frame classification is essential for text detection, event detection, event boundary detection, etc. We propose a new text frame classification method that introduces a combination of wavelet and median-moment features with k-means clustering to select probable text blocks among 16 equally sized blocks of a video frame. The same feature combination is used with a new Max-Min clustering at the pixel level to choose probable dominant text pixels in the selected probable text blocks. For the probable text pixels, a so-called mutual-nearest-neighbor-based symmetry is explored with a four-quadrant formation centered at the centroid of the probable dominant text pixels to decide whether a block is a true text block. If a frame produces at least one true text block, it is considered a text frame; otherwise it is a non-text frame. Experimental results on different text and non-text datasets, including two public datasets and our own data, show that the proposed method gives promising results in terms of recall and precision at the block and frame levels. Further, we also show how existing text detection methods tend to misclassify non-text frames as text frames in terms of recall and precision at both the block and frame levels.

8.
By using a kernel function, data that are not easily separable in the original space can be clustered into homogeneous groups in the implicitly transformed high-dimensional feature space. Kernel k-means algorithms have recently been shown to perform better than conventional k-means algorithms in unsupervised classification. However, few reports have examined the benefits of using a kernel function and the relative merits of the various kernel clustering algorithms with regard to the data distribution. In this study, we reformulated four representative clustering algorithms based on a kernel function and evaluated their performances for various data sets. The results indicate that each kernel clustering algorithm gives markedly better performance than its conventional counterpart for almost all data sets. Of the kernel clustering algorithms studied in the present work, the kernel average linkage algorithm gives the most accurate clustering results.
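Kernel k-means never forms cluster means explicitly: the squared feature-space distance between a point and a cluster mean can be evaluated from the kernel (Gram) matrix alone, as K(i,i) - (2/|C|) sum_{j in C} K(i,j) + (1/|C|^2) sum_{j,l in C} K(j,l). A small sketch of that computation (the function layout is an assumption):

```python
import numpy as np

def kernel_distances(K, labels, k):
    """Squared feature-space distance of every point to every cluster mean,
    computed only from the kernel (Gram) matrix K and the current labels."""
    n = K.shape[0]
    D = np.zeros((n, k))
    for c in range(k):
        idx = np.where(labels == c)[0]
        m = len(idx)
        D[:, c] = (np.diag(K)
                   - 2.0 / m * K[:, idx].sum(axis=1)
                   + K[np.ix_(idx, idx)].sum() / (m * m))
    return D
```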

9.
A new hierarchical Walsh memory which can store and quickly recognize any number of patterns is proposed. A Walsh-function-based associative memory was found to be capable of storing and recognizing patterns in parallel via a purely software algorithmic technique (namely, without resorting to parallel hardware), while the memory itself only takes a single pattern space of computer memory, due to the Walsh encoding of each pattern. This type of distributed associative memory lends itself to high-speed pattern recognition and has been reported earlier in a single-memory version. In this paper, the single-memory concept is first extended to a parallel memory module and then to a tree-shaped hierarchy of these parallel modules that is capable of storing and recognizing any number of patterns for practical large-scale data applications exemplified by image and speech recognition. The memory hierarchy is built by successively applying k-means clustering to the training data set. In the proposed architecture, the clustered data subsets are each stored in a parallel memory module, and the module allocation is optimized using a genetic algorithm to realize a minimal implementation of the memory structure. The system can recognize all the training patterns with 100% accuracy and, further, can also generalize to similar data. In order to demonstrate its efficacy with large-scale real-world data, we stored and recognized over 500 faces while at the same time achieving much lower recognition time and storage space than template matching.
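The hierarchy above is obtained by recursively splitting the training set with k-means, so that each node of the tree holds one cluster of the data at that level. A bare-bones sketch of such a tree construction (the branching factor, the leaf-size stopping rule and the dict-based node layout are assumptions; the genetic-algorithm-based module allocation is not shown):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_kmeans_tree(X, branch=2, max_leaf=50):
    """Recursively split data with k-means into a tree of clusters; each node
    keeps its centroid so a query can be routed down to a small leaf subset."""
    node = {"centroid": X.mean(axis=0), "children": []}
    if len(X) <= max_leaf:
        node["points"] = X
        return node
    labels = KMeans(n_clusters=branch, n_init=10).fit_predict(X)
    for c in range(branch):
        node["children"].append(build_kmeans_tree(X[labels == c], branch, max_leaf))
    return node
```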

10.
Color quantization is an important operation with many applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general-purpose clustering algorithm, k-means has not received much attention in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, we investigate the performance of k-means as a color quantizer. We implement fast and exact variants of k-means with several initialization schemes and then compare the resulting quantizers to some of the most popular quantizers in the literature. Experiments on a diverse set of images demonstrate that an efficient implementation of k-means with an appropriate initialization strategy can in fact serve as a very effective color quantizer.
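Viewed as a clustering problem, color quantization clusters the image's pixels in RGB space and replaces every pixel with the centroid of its cluster, the centroids forming the palette. A minimal sketch using scikit-learn (the palette size and the single k-means++ run are illustrative choices, not the paper's variants):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize(image, n_colors=16):
    """Cluster the pixels of an (H, W, 3) RGB image with k-means and map every
    pixel to its cluster centroid, producing an image with n_colors colors."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=n_colors, init="k-means++", n_init=1).fit(pixels)
    quantized = km.cluster_centers_[km.labels_]
    return quantized.reshape(h, w, 3).astype(image.dtype)
```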

11.
Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-cluster distances between points. The straightforward approach involves examining all possible assignments of points to each of the clusters. This approach guarantees that the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local-search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global-search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on the purely classical solver "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
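A common way to cast such a clustering objective as a quadratic unconstrained binary optimization (QUBO) problem is to introduce one-hot binary variables x[i, c] (point i assigned to cluster c), pay the pairwise distance D[i, j] whenever two points share a cluster, and add a penalty term forcing each point to be assigned exactly once. The toy sketch below only builds the QUBO matrix; the penalty weight, the variable ordering and the absence of any annealer call are assumptions, and this is not necessarily the paper's exact mapping:

```python
import numpy as np

def clustering_qubo(D, k, penalty=None):
    """QUBO over binaries x[i, c] (flattened index i * k + c):
    minimize sum_c sum_{i<j} D[i, j] * x[i, c] * x[j, c]
             + penalty * sum_i (sum_c x[i, c] - 1)^2   (one-hot constraint)."""
    n = D.shape[0]
    if penalty is None:
        penalty = float(D.max()) * n             # heuristic: dominate the distance terms
    Q = np.zeros((n * k, n * k))
    for c in range(k):                           # same-cluster pairwise distance costs
        for i in range(n):
            for j in range(i + 1, n):
                Q[i * k + c, j * k + c] += D[i, j]
    for i in range(n):                           # expand (sum_c x - 1)^2 for binary x
        for c in range(k):
            Q[i * k + c, i * k + c] += -penalty          # x^2 - 2x = -x for x in {0,1}
            for c2 in range(c + 1, k):
                Q[i * k + c, i * k + c2] += 2 * penalty  # cross terms 2 * x_c * x_c2
    return Q
```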

12.
Rough k-means clustering describes uncertainty by assigning some objects to more than one cluster. A rough cluster quality index based on decision theory is applicable to the evaluation of rough clustering. In this paper we analyze rough k-means clustering with respect to the selection of the threshold, the value of risk for assigning an object, and the uncertainty of objects. According to this analysis, clusters presented as interval sets with lower and upper approximations in rough k-means clustering are not adequate to describe clusters. This paper proposes an interval set clustering based on decision theory. Lower and upper approximations in the proposed algorithm are hierarchical and constructed as outer-level and inner-level approximations. Uncertainty of objects in the outer-level upper approximation is described by the assignment of objects among different clusters. Accordingly, ambiguity of objects in the inner-level upper approximation is represented by local uniform factors of the objects. In addition, interval set clustering can be improved to obtain a satisfactory clustering result with the optimal number of clusters, as well as optimal values of the parameters, by taking advantage of the usefulness of the rough cluster quality index in the evaluation of clustering. The experimental results on synthetic and standard data demonstrate how to construct clusters with satisfactory lower and upper approximations in the proposed algorithm. The experiment with promotional campaign data from retail illustrates the usefulness of interval set clustering for improving rough k-means clustering results.
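For reference, the standard rough k-means assignment rule analyzed above places an object in the lower approximation of its nearest cluster only when no other centroid is almost as close (within a relative threshold); otherwise the object is placed in the upper approximations of all sufficiently close clusters. A small sketch of that rule (the threshold value and return format are assumptions, and the paper's two-level inner/outer approximations are not reproduced here):

```python
import numpy as np

def rough_assign(x, centers, epsilon=1.2):
    """Rough k-means assignment of a single object x: return the sets of
    cluster indices for its lower and upper approximations."""
    d = np.linalg.norm(centers - x, axis=1)
    nearest = int(np.argmin(d))
    close = {c for c in range(len(centers)) if d[c] <= epsilon * d[nearest]}
    if len(close) > 1:                 # ambiguous: boundary object
        return set(), close            # only upper approximations
    return {nearest}, {nearest}        # certain: lower and upper of nearest cluster
```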

13.
The volume of spatio-textual data is increasing drastically these days, which makes it more and more essential to process such large-scale spatio-textual datasets. Even though numerous works have studied answering various kinds of spatio-textual queries, methods for analyzing spatio-textual data have rarely been considered so far. Motivated by this, this paper proposes a k-means based clustering algorithm specialized for massive spatio-textual data. One of the strong points of the k-means algorithm lies in its efficiency and scalability, implying that it is appropriate for large-scale data. However, it is challenging to apply the normal k-means algorithm to spatio-textual data, since each spatio-textual object has non-numeric attributes, that is, a textual dimension, as well as numeric attributes, that is, a spatial dimension. We address this problem by using the expected distance between a random pair of objects rather than constructing an actual centroid for each cluster. Based on our experimental results, we show that the clustering quality of our algorithm is comparable to those of other k-partitioning algorithms that can process spatio-textual data, and that its efficiency is superior to that of those competitors.
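The difficulty with a textual dimension is that a set of keywords has no meaningful arithmetic mean, so a plain centroid cannot be formed. One natural combined object-to-object distance mixes Euclidean distance in the spatial dimension with Jaccard dissimilarity in the textual dimension; the sketch below only illustrates such a distance (the weight alpha and the Jaccard choice are assumptions, not the expected-distance construction of the paper):

```python
import numpy as np

def spatio_textual_distance(p, q, alpha=0.5):
    """Distance between two spatio-textual objects, each given as
    (location vector, set of keywords): a weighted sum of spatial Euclidean
    distance and Jaccard dissimilarity of the keyword sets."""
    (loc_p, kw_p), (loc_q, kw_q) = p, q
    spatial = float(np.linalg.norm(np.asarray(loc_p) - np.asarray(loc_q)))
    union = kw_p | kw_q
    if union:
        textual = 1.0 - len(kw_p & kw_q) / len(union)
    else:
        textual = 0.0
    return alpha * spatial + (1.0 - alpha) * textual
```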

14.
This paper proposes a new method to weight subspaces in feature groups and individual features for clustering high-dimensional data. In this method, the features of high-dimensional data are divided into feature groups based on their natural characteristics. Two types of weights are introduced into the clustering process to simultaneously identify the importance of feature groups and individual features in each cluster. A new optimization model is given to define the optimization process, and a new clustering algorithm, FG-k-means, is proposed to solve this model. The new algorithm extends k-means by adding two additional steps to automatically calculate the two types of subspace weights. A new data generation method is presented to generate high-dimensional data with clusters in subspaces of both feature groups and individual features. Experimental results on synthetic and real-life data show that the FG-k-means algorithm significantly outperformed four k-means-type algorithms, i.e., k-means, W-k-means, LAC and EWKM, in almost all experiments. The new algorithm is robust to the noise and missing values that commonly exist in high-dimensional data.
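In such algorithms the two weight types enter the distance computation multiplicatively: each feature's contribution to the object-to-centroid distance is scaled both by the weight of its feature group and by its own feature weight. A simplified sketch of that weighted distance (the exponents and normalization constraints used by FG-k-means are omitted, and all names are assumptions):

```python
import numpy as np

def grouped_weighted_distance(x, centroid, groups, group_w, feature_w):
    """Squared distance in which the contribution of feature v is scaled by the
    weight of its feature group (groups[v] is the group index of v) and by the
    individual feature weight feature_w[v]."""
    diff2 = (x - centroid) ** 2
    return float(np.sum(group_w[groups] * feature_w * diff2))
```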

15.
The fast evolution of digital video has brought many new multimedia applications and, as a consequence, has increased the amount of research into new technologies that aim at improving the effectiveness and efficiency of video acquisition, archiving, cataloging and indexing, as well as increasing the usability of stored videos. Among possible research areas, video summarization is an important topic that potentially enables faster browsing of large video collections as well as more efficient content indexing and access. Essentially, this research area consists of automatically generating a short summary of a video, which can be either a static summary or a dynamic summary. In this paper, we present VSUMM, a methodology for the production of static video summaries. The method is based on color feature extraction from video frames and the k-means clustering algorithm. As an additional contribution, we also develop a novel approach for the evaluation of static video summaries. In this evaluation methodology, video summaries are manually created by users. Then, several user-created summaries are compared both to our approach and to a number of different techniques in the literature. Experimental results show - with a confidence level of 98% - that the proposed solution produced static video summaries of superior quality relative to the approaches to which it was compared.
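The core of such a pipeline is to extract a color descriptor per frame, cluster the descriptors with k-means, and keep the frame closest to each cluster centroid as a key frame of the static summary. A stripped-down sketch of that idea (the histogram binning, the number of clusters and the use of scikit-learn are assumptions; VSUMM's frame sampling and filtering steps are omitted):

```python
import numpy as np
from sklearn.cluster import KMeans

def color_histogram(frame, bins=16):
    """Normalized per-channel color histogram of an (H, W, 3) frame."""
    hists = [np.histogram(frame[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def static_summary(frames, n_key=5):
    """Cluster frame histograms with k-means and return, for each cluster,
    the index of the frame nearest to the centroid."""
    H = np.vstack([color_histogram(f) for f in frames])
    km = KMeans(n_clusters=n_key, n_init=10).fit(H)
    keys = []
    for c in range(n_key):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(H[idx] - km.cluster_centers_[c], axis=1)
        keys.append(int(idx[np.argmin(d)]))
    return sorted(keys)
```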

16.
In the big data era, more and more data are collected from multiple views, each of which reflects a distinct perspective of the data. Many multi-view data are accompanied by incompatible views and high dimensionality, both of which bring challenges for multi-view clustering. This paper proposes a strategy of simultaneously weighting views and features to discriminate their importance. Each feature of the multi-view data is given bi-level weights to express its importance at the feature level and the view level, respectively. Furthermore, we implement the proposed weighting method in the classical k-means algorithm to conduct the multi-view clustering task. An efficient gradient-based optimization algorithm is embedded into the k-means algorithm to compute the bi-level weights automatically. Also, the convergence of the proposed weight-updating method is proved by theoretical analysis. In the experimental evaluation, synthetic datasets with varied noise and missing values are created to investigate the robustness of the proposed approach. Then, the proposed approach is also compared with five state-of-the-art algorithms on three real-world datasets. The experiments show that the proposed method compares very favourably against the other methods.

17.
The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and are inefficient for solving clustering problems in large datasets. Recently, incremental approaches have been developed to resolve the difficulties with the choice of starting points. The global k-means and the modified global k-means algorithms are based on such an approach. They iteratively add one cluster center at a time. Numerical experiments show that these algorithms considerably improve on the k-means algorithm. However, they require storing the whole affinity matrix or computing this matrix at each iteration. This makes both algorithms time-consuming and memory-demanding for clustering even moderately large datasets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points lying in different parts of the dataset. We exploit information gathered in previous iterations of the incremental algorithm to eliminate the need to compute or store the whole affinity matrix, and thereby to reduce the computational effort and memory usage. Results of numerical experiments on six standard datasets demonstrate that the new algorithm is more efficient than the global and the modified global k-means algorithms.

18.
Key frame extraction combining an improved ant colony algorithm with agglomerative clustering
Key frame extraction plays an important role in content-based video retrieval. To extract key frames effectively from different types of video, a key frame extraction algorithm combining an improved ant colony algorithm with agglomerative clustering is proposed. The method extracts color and edge feature vectors from every frame of the video, and the improved ant colony algorithm clusters these feature vectors in a self-organizing manner to obtain initial clusters. The initial clusters are then refined with an agglomerative algorithm to obtain the final clusters. For each cluster, the vector closest to the cluster center is extracted and its corresponding frame is taken as a key frame. Experimental results show that the key frames extracted with this algorithm not only express the main content of the video adequately, but are also extracted in an appropriate number according to changes in the video content.

19.
Functional verification has become the key bottleneck that delays time-to-market during the embedded system design process, and simulation-based verification is the mainstream practice in functional verification due to its flexibility and scalability. In practice, the success of simulation-based verification highly depends on the quality of the functional tests in use, which is usually evaluated by coverage metrics. Since test prioritization provides a way to simulate earlier the more important tests, namely those that noticeably improve the coverage metrics, we propose a test prioritization approach based on clustering to obtain a high coverage level earlier in the simulation process. The k-means algorithm, which is one of the most popular clustering algorithms and is usually used for test prioritization, has some shortcomings that affect the effectiveness of the prioritization. We therefore propose three enhanced k-means algorithms to overcome these shortcomings and improve the effectiveness of test prioritization. The functional tests in the simulation environment can then be ordered with the test prioritization based on the enhanced k-means algorithms. Finally, the more important tests, which noticeably improve the coverage metrics, can be selected and simulated early within the limited simulation time. Experimental results show that the enhanced k-means algorithms are more accurate and efficient than the standard k-means algorithm for test prioritization, especially the third enhanced k-means algorithm. In comparison with simulating all the tests randomly, the more important tests, selected by the test prioritization based on the third enhanced k-means algorithm, achieve almost the same coverage metrics in a shorter time, giving a 90% saving in simulation time.
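Once the tests have been clustered, a simple prioritization pass orders them by cycling over the clusters and, within each cluster, taking the test nearest its centroid first, so that diverse behaviors are exercised early. The sketch below shows that ordering step only (the feature vectors, the cluster count and the round-robin policy are illustrative assumptions, not the enhanced algorithms of the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def prioritize(test_features, n_clusters=8):
    """Order tests by cycling over k-means clusters; within each cluster,
    tests closest to the centroid come first, so early simulation covers
    diverse behaviors."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(test_features)
    queues = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(test_features[idx] - km.cluster_centers_[c], axis=1)
        queues.append(list(idx[np.argsort(d)]))
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(int(q.pop(0)))
    return order
```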

20.