Similar Documents
1.
Data fusion concerns the problem of merging information coming from independent sources. Also known as statistical matching, file grafting or microdata merging, it is a challenging problem for statisticians. The increasing growth of collected data makes combining different sources of information an attractive alternative to single-source data. The interest in data fusion derives, in certain cases, from the impossibility of obtaining specific information from a single source and from the cost reduction this operation entails, and, in all cases, from taking greater advantage of the information already collected. The GRAFT system is presented: a multipurpose data fusion system based on the k-nearest neighbor (k-nn) hot deck imputation method. The system aims to cope with many data fusion problems and domains. Since k-nn is a computationally demanding algorithm, the solutions envisaged to make this methodology usable in a wide range of real problems, together with their cost, are also presented.
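As a rough illustration of the underlying technique, the following is a minimal sketch of k-nn hot deck imputation for statistical matching, written with scikit-learn. The variable names (common_vars, donor_only_vars), the scaling, the choice of k and the mean-of-donors imputation are illustrative assumptions, not the actual GRAFT interface.

```python
# Minimal sketch of k-nn hot deck imputation for data fusion (statistical matching).
# Variable names and k are illustrative assumptions, not the GRAFT system's interface.
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def knn_hot_deck_fusion(recipient: pd.DataFrame, donor: pd.DataFrame,
                        common_vars: list, donor_only_vars: list, k: int = 5) -> pd.DataFrame:
    """Graft donor_only_vars onto the recipient file using the k nearest donors
    in the space of the variables shared by both files."""
    scaler = StandardScaler().fit(donor[common_vars])
    nn = NearestNeighbors(n_neighbors=k).fit(scaler.transform(donor[common_vars]))
    _, idx = nn.kneighbors(scaler.transform(recipient[common_vars]))
    fused = recipient.copy()
    for var in donor_only_vars:
        # Hot deck: impute from the donors themselves (here, the mean of the k donors;
        # drawing a single random donor is another common choice).
        fused[var] = donor[var].to_numpy()[idx].mean(axis=1)
    return fused
```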

2.
During the last decade, the most significant advances in the field of continuous speech recognition (CSR) have arisen from the use of hidden Markov models (HMM) for acoustic modeling. These models address one of the major issues for CSR: simultaneous modeling of temporal and frequency distortions in the speech signal. In the HMM, the temporal dimension is managed through an oriented state graph, each state accounting for the local frequency distortions through a probability density function. In this study, improvement of the HMM performance is expected from the introduction of a very effective non-parametric probability density function estimate: the k-nearest neighbors (k-nn) estimate. First, experiments on a short-term speech spectrum identification task are performed to compare the k-nn estimate with the widespread estimate based on mixtures of Gaussian functions. Then the adaptations implied by the integration of the k-nn estimate into an HMM-based recognition system are developed. An optimal training protocol is obtained based on the introduction of membership coefficients into the HMM parameters. The membership coefficients measure the degree of association between a reference acoustic vector and an HMM state. The training procedure uses the expectation-maximization (EM) algorithm applied to the membership coefficient estimation. Its convergence is shown according to the maximum likelihood criterion. This study leads to the development of a baseline k-nn/HMM recognition system which is evaluated on the TIMIT speech database. Further improvements of the k-nn/HMM system are finally sought through the introduction of temporal information into the representation space (delta coefficients) and the adaptation of the references (mainly, gender modeling and contextual modeling).
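For reference, below is a minimal sketch of the plain k-nn probability density estimate that the abstract contrasts with Gaussian mixtures, p(x) ≈ k / (N · V_k(x)). The HMM integration, membership coefficients and EM training are not reproduced, and the helper names are illustrative.

```python
# Sketch of the basic k-nearest-neighbour density estimate used in place of a
# Gaussian-mixture state density: p(x) ~= k / (N * V_k(x)), where V_k(x) is the
# volume of the smallest ball around x containing its k nearest reference vectors.
import numpy as np
from scipy.special import gamma
from sklearn.neighbors import NearestNeighbors

def knn_density(query: np.ndarray, references: np.ndarray, k: int = 10) -> np.ndarray:
    n, d = references.shape
    nn = NearestNeighbors(n_neighbors=k).fit(references)
    dist, _ = nn.kneighbors(query)               # shape (n_query, k)
    r_k = dist[:, -1]                            # radius to the k-th neighbour
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)
    volume = unit_ball * r_k ** d
    return k / (n * np.maximum(volume, 1e-300))  # guard against zero radius
```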

3.
We present the global k-means algorithm which is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N (with N being the size of the data set) executions of the k-means algorithm from suitable initial positions. We also propose modifications of the method to reduce the computational load without significantly affecting solution quality. The proposed clustering methods are tested on well-known data sets and they compare favorably to the k-means algorithm with random restarts.
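A minimal sketch of the incremental idea follows: at stage j, every data point is tried as the initial position of the j-th center while the previous centers are kept, and the best k-means refinement is retained. This is the unaccelerated version, requiring on the order of N k-means runs per added center; the paper's modifications to reduce the computational load are not shown.

```python
# Minimal sketch of the global k-means idea: grow the solution one centre at a
# time, trying every data point as the candidate position of the new centre and
# keeping the best k-means refinement.
import numpy as np
from sklearn.cluster import KMeans

def global_kmeans(X: np.ndarray, k: int) -> np.ndarray:
    centers = X.mean(axis=0, keepdims=True)           # optimal 1-means solution
    for j in range(2, k + 1):
        best_inertia, best_centers = np.inf, None
        for x in X:                                   # every point is a candidate start
            init = np.vstack([centers, x])
            km = KMeans(n_clusters=j, init=init, n_init=1).fit(X)
            if km.inertia_ < best_inertia:
                best_inertia, best_centers = km.inertia_, km.cluster_centers_
        centers = best_centers
    return centers
```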

4.
Conservation and land use planning in humid tropical lowland forests urgently need accurate remote sensing techniques to distinguish among floristically different forest types. We investigated the degree to which floristically and structurally defined Costa Rican lowland rain forest types can be accurately discriminated by a non-parametric k-nearest neighbors (k-nn) classifier or linear discriminant analysis. Pixel values of a Landsat Thematic Mapper (TM) image and a Shuttle Radar Topography Mission (SRTM) elevation model extracted from segments or from 5 × 5 pixel windows were employed in the classifications. A total of 104 field plots were classified into three floristic and one structural type of forest (regrowth forest). The three floristically defined forest types were formed through clustering the old-growth forest plots (n = 52) by their species-specific importance values. An error assessment of the image classification was conducted via cross-validation and error matrices, and overall percent accuracy and Kappa scores were used as measures of accuracy. Image classification of the four forest types did not adequately distinguish two old-growth forest classes, so they were merged into a single forest class. The resulting three forest classes were most accurately classified by the k-nn classifier using segmented image data (overall accuracy 91%). The second best method, with respect to accuracy, was the k-nn with 5 × 5 pixel window data (89% accuracy), followed by the canonical discriminant analysis using the 5 × 5 pixel window data (86%) and the segment data (82%). We conclude that the k-nn classifier can accurately distinguish floristically and structurally different rain forest types. The classification accuracies were higher for the k-nn classifier than for the canonical discriminant analysis, but the differences in Kappa scores were not statistically significant. The segmentation did not increase classification accuracy in this study.
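A hedged sketch of this kind of plot-level k-nn classification and accuracy assessment is given below. The feature matrix (for instance, mean TM band values plus SRTM elevation per plot or segment), the value of k and the leave-one-out scheme are assumptions, not the study's exact setup.

```python
# Sketch of a k-nn classification and accuracy assessment of forest-type plots.
# Feature set, k and cross-validation scheme are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict, LeaveOneOut
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def assess_knn(features: np.ndarray, forest_type: np.ndarray, k: int = 5):
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    pred = cross_val_predict(model, features, forest_type, cv=LeaveOneOut())
    # Overall percent accuracy and Cohen's Kappa, as in the error assessment above.
    return accuracy_score(forest_type, pred), cohen_kappa_score(forest_type, pred)
```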

5.
The k-nearest neighbour estimation method is one of the main tools used in multi-source forest inventories. It is a powerful non-parametric method for which estimates are easy to compute and relatively accurate. One downside of this method is that it lacks an uncertainty measure for predicted values and for areas of an arbitrary size. We present a method to estimate the prediction uncertainty based on the variogram model and derive the necessary formulas for the k-nn method. A data application is illustrated for multi-source forest inventory data, and the results are compared at pixel level to the conventional RMSE method. We find that the variogram model-based method, which is analytic, is competitive with the RMSE method.

6.
The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and are inefficient for solving clustering problems in large datasets. Recently, incremental approaches have been developed to resolve difficulties with the choice of starting points. The global k-means and the modified global k-means algorithms are based on such an approach. They iteratively add one cluster center at a time. Numerical experiments show that these algorithms considerably improve the k-means algorithm. However, they require storing the whole affinity matrix or computing this matrix at each iteration. This makes both algorithms time consuming and memory demanding for clustering even moderately large datasets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points lying in different parts of the dataset. We exploit information gathered in previous iterations of the incremental algorithm to eliminate the need to compute or store the whole affinity matrix and thereby to reduce computational effort and memory usage. Results of numerical experiments on six standard datasets demonstrate that the new algorithm is more efficient than the global and the modified global k-means algorithms.

7.
We show that for any data set in any metric space, it is possible to construct a hierarchical clustering with the guarantee that for every k, the induced k-clustering has cost at most eight times that of the optimal k-clustering. Here the cost of a clustering is taken to be the maximum radius of its clusters. Our algorithm is similar in simplicity and efficiency to popular agglomerative heuristics for hierarchical clustering, and we show that these heuristics have unbounded approximation factors.
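The construction is related to farthest-first traversal; a minimal sketch of that ordering is shown below. Taking the first k points of the ordering as centers is the classical 2-approximation for the k-center radius; the level-based refinement that turns the ordering into a proper hierarchy with the factor-8 guarantee is not reproduced here.

```python
# Sketch of farthest-first traversal: repeatedly pick the point farthest from
# the centres chosen so far. The first k entries of the ordering form a
# 2-approximate k-centre solution for every k.
import numpy as np

def farthest_first(X: np.ndarray) -> list:
    order = [0]                                    # arbitrary first centre
    dist = np.linalg.norm(X - X[0], axis=1)        # distance to nearest chosen centre
    for _ in range(1, len(X)):
        nxt = int(np.argmax(dist))                 # point farthest from current centres
        order.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return order
```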

8.
We present a multidisciplinary solution to the problems of anonymous microaggregation and clustering, illustrated with two applications, namely privacy protection in databases, and private retrieval of location-based information. Our solution is perturbative, is based on the same privacy criterion used in microdata k-anonymization, and provides anonymity through a substantial modification of the Lloyd algorithm, a celebrated quantization design algorithm, endowed with numerical optimization techniques. Our algorithm is particularly suited to the important problem of k-anonymous microaggregation of databases, with a small integer k representing the number of individual respondents indistinguishable from each other in the published database. Our algorithm also exhibits excellent performance in the problem of clustering or macroaggregation, where k may take on arbitrarily large values. We illustrate its applicability in this second, somewhat less common case, by means of an example of location-based services. Specifically, location-aware devices entrust a third party with accurate location information. This party then uses our algorithm to create distortion-optimized, size-constrained clusters, where k nearby devices share a common centroid location, which may be regarded as a distorted version of the original one. The centroid location is sent back to the devices, which use it when contacting untrusted location-based information providers, in lieu of the exact home location, to enforce k-anonymity. We compare the performance of our novel algorithm to the state-of-the-art microaggregation algorithm MDAV, on both synthetic and standardized real data, which encompass the cases of small and large values of k. The most promising aspect of our proposed algorithm is its capability to maintain the same k-anonymity constraint, while outperforming MDAV by a significant reduction in data distortion, in all the cases considered.

9.
In this paper, we present a modified filtering algorithm (MFA) by making use of center variations to speed up the clustering process. Our method first divides clusters into static and active groups. We use the information of cluster displacements to reject unlikely cluster centers for all nodes in the kd-tree. We reduce the computational complexity of the filtering algorithm (FA) by finding candidates for each node mainly from the set of active cluster centers. Two conditions for determining the set of candidate cluster centers for each node from active clusters are developed. Our approach is different from the major available algorithm, which passes no information from one stage of iteration to the next. Theoretical analysis shows that our method can reduce the computational complexity, in terms of the number of distance calculations, of the FA at each stage of iteration by a factor of FC/AC, where FC and AC are the numbers of total clusters and active clusters, respectively. Compared with the FA, our algorithm can effectively reduce the computing time and number of distance calculations. It is noted that our proposed algorithm can generate the same clusters as those produced by hard k-means clustering. The superiority of our method is more remarkable when a larger data set with higher dimension is used.

10.
Applying k-Means to minimize the sum of the intra-cluster variances is the most popular clustering approach. However, after a bad initialization, poor local optima can be easily obtained. To tackle the initialization problem of k-Means, we propose the MinMax k-Means algorithm, a method that assigns weights to the clusters relative to their variance and optimizes a weighted version of the k-Means objective. Weights are learned together with the cluster assignments, through an iterative procedure. The proposed weighting scheme limits the emergence of large variance clusters and allows high quality solutions to be systematically uncovered, irrespective of the initialization. Experiments verify the effectiveness of our approach and its robustness over bad initializations, as it compares favorably to both k-Means and other methods from the literature that consider the k-Means initialization problem.
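A minimal sketch of the weighted-objective idea, under assumed defaults (fixed exponent p, random initialization, no memory mechanism), might look as follows; it is not the authors' exact procedure.

```python
# Sketch of the MinMax k-Means idea: clusters receive weights that grow with
# their variance, and points are assigned so as to minimise the weighted
# objective, discouraging large-variance clusters.
import numpy as np

def minmax_kmeans(X, k, p=0.5, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distances (n, k)
        labels = np.argmin(weights ** p * d2, axis=1)                    # weighted assignment step
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
        variances = np.array([((X[labels == j] - centers[j]) ** 2).sum() for j in range(k)])
        w = np.maximum(variances, 1e-12) ** (1.0 / (1.0 - p))            # weights grow with variance
        weights = w / w.sum()
    return labels, centers, weights
```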

11.
Color quantization is an important operation with many applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, k-means has not received much attention in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, we investigate the performance of k-means as a color quantizer. We implement fast and exact variants of k-means with several initialization schemes and then compare the resulting quantizers to some of the most popular quantizers in the literature. Experiments on a diverse set of images demonstrate that an efficient implementation of k-means with an appropriate initialization strategy can in fact serve as a very effective color quantizer.
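As an illustration, a short sketch of k-means color quantization with scikit-learn is shown below; k-means++ initialization stands in for the initialization schemes studied in the paper, and the image path is a placeholder.

```python
# Sketch of k-means colour quantisation: cluster the RGB pixels and replace
# each pixel by its cluster centroid.
import numpy as np
from sklearn.cluster import KMeans
from PIL import Image

def quantize(path: str, n_colors: int = 16) -> Image.Image:
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    pixels = img.reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, init="k-means++", n_init=1).fit(pixels)
    quantized = km.cluster_centers_[km.labels_].reshape(img.shape)
    return Image.fromarray(quantized.astype(np.uint8))
```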

12.
In this paper, we present a fast global k-means clustering algorithm by making use of the cluster membership and geometrical information of a data point. This algorithm is referred to as MFGKM. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS. MCS determines more than one starting point for each step of cluster split, while the available fast and modified global k-means clustering algorithms select one starting point for each cluster split. Our proposed method MFGKM can obtain the least distortion, while MFGKM+MCS may give the least computing time. Compared to the modified global k-means clustering algorithm, our method MFGKM can reduce the computing time and number of distance calculations by a factor of 3.78-5.55 and 21.13-31.41, respectively, with an average distortion reduction of 5,487 for the Statlog data set. Compared to the fast global k-means clustering algorithm, our method MFGKM+MCS can reduce the computing time by a factor of 5.78-8.70 with an average reduction of distortion of 30,564 using the same data set. The performances of our proposed methods are more remarkable when a data set with higher dimension is divided into more clusters.

13.
In recent years, there have been numerous attempts to extend the k-means clustering protocol for a single database to a distributed multiple-database setting while keeping the privacy of each data site. Current solutions for (whether two or more) multiparty k-means clustering, built on one or more secure two-party computation algorithms, are not equally contributory; in other words, each party does not contribute equally to k-means clustering. This may lead to a perfidious attack where a party who learns the outcome prior to other parties tells a lie about the outcome to the other parties. In this paper, we present an equally contributory multiparty k-means clustering protocol for vertically partitioned data, in which each party contributes equally to k-means clustering. Our protocol is built on ElGamal's encryption scheme, Jakobsson and Juels's plaintext equivalence test protocol, and mix networks, and protects privacy in the sense that each iteration of k-means clustering can be performed without revealing the intermediate values.

14.
Extracting different clusters from given data is an appealing topic in swarm intelligence applications. This paper introduces two main data clustering approaches based on particle swarm optimization, namely single swarm and multiple cooperative swarms clustering. A stability analysis is next introduced to determine the model order of the underlying data using multiple cooperative swarms clustering. The proposed approach is assessed using different data sets and its performance is compared with that of k-means, k-harmonic means, fuzzy c-means and single swarm clustering techniques. The obtained results indicate that the proposed approach fairly outperforms the other clustering approaches in terms of different cluster validity measures.

15.
16.
The Fuzzy k-Means clustering model (FkM) is a powerful tool for classifying objects into a set of k homogeneous clusters by means of the membership degrees of an object in a cluster. In FkM, for each object, the sum of the membership degrees in the clusters must be equal to one. Such a constraint may cause meaningless results, especially when noise is present. To avoid this drawback, it is possible to relax the constraint, leading to the so-called Possibilistic k-Means clustering model (PkM). In particular, attention is paid to the case in which the empirical information is affected by imprecision or vagueness. This is handled by means of LR fuzzy numbers. An FkM model for LR fuzzy data is firstly developed and a PkM model for the same type of data is then proposed. The results of a simulation experiment and of two applications to real world fuzzy data confirm the validity of both models, while providing indications as to some advantages connected with the use of the possibilistic approach.
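For orientation, here is a minimal sketch of the standard fuzzy k-means updates for crisp data, the model the abstract builds on; the LR fuzzy-data distance and the possibilistic relaxation are not reproduced.

```python
# Sketch of standard fuzzy k-means (fuzzy c-means) updates for crisp data:
# memberships of each object sum to one across clusters, the constraint the
# possibilistic model later relaxes.
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to one per object
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        # u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    return U, centers
```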

17.
In this paper, the conventional k-modes-type algorithms for clustering categorical data are extended by representing the clusters of categorical data with k-populations instead of the hard-type centroids used in the conventional algorithms. Use of a population-based centroid representation makes it possible to preserve the uncertainty inherent in data sets as long as possible before actual decisions are made. The k-populations algorithm was found to give markedly better clustering results through various experiments.
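A sketch of the conventional k-modes baseline that the paper extends is given below; simple matching dissimilarity and per-attribute modes are the textbook choices, while the k-populations (distribution-based) centroid itself is not shown.

```python
# Sketch of plain k-modes for categorical data: objects are compared by simple
# matching dissimilarity and each cluster is summarised by its per-attribute mode.
import numpy as np

def k_modes(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)   # matching dissimilarity
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                for a in range(X.shape[1]):
                    vals, counts = np.unique(members[:, a], return_counts=True)
                    modes[j, a] = vals[counts.argmax()]            # most frequent category
    return labels, modes
```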

18.
It is evident that the usage of machine learning methods in disease diagnosis has been increasing gradually. In this study, diagnosis of heart disease, which is a very common and important disease, was conducted with such a machine learning system. In this system, a new weighting scheme based on the k-nearest neighbour (k-nn) method was utilized as a preprocessing step before the main classifier. An artificial immune recognition system (AIRS) with a fuzzy resource allocation mechanism was used as the classifier. The dataset used in our study was taken from the UCI Machine Learning Database. The classification accuracy obtained by our system was 87%, which is very promising compared with the other classification applications in the literature for this problem.

19.
We present a new dissimilarity, which combines connectivity and density information. Usually, connectivity and density are conceived as mutually exclusive concepts; however, we discuss a novel procedure to merge both information sources. Once we have calculated the new dissimilarity, we apply MDS in order to find a low-dimensional vector space representation. The new data representation can be used for clustering and data visualization, which is not pursued in this paper. Instead, we use clustering to estimate the gain from our approach consisting of dissimilarity + MDS. Hence, we analyze the quality of the partitions obtained by clustering high-dimensional data with various well-known clustering algorithms based on density, connectivity and message passing, as well as simple algorithms like k-means and Hierarchical Clustering (HC). The quality gap between the partitions found by k-means and HC alone compared to k-means and HC using our new low-dimensional vector space representation is remarkable. Moreover, our tests using high-dimensional gene expression and image data confirm these results and show a steady performance, which surpasses spectral clustering and other algorithms relevant to our work.
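A sketch of the surrounding pipeline is shown below; the connectivity-plus-density dissimilarity itself is specific to the paper, so it is left as a precomputed input D, and only the MDS embedding plus k-means step is illustrated.

```python
# Sketch of the dissimilarity + MDS pipeline: embed a precomputed dissimilarity
# matrix D with metric MDS and cluster the low-dimensional representation.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def embed_and_cluster(D: np.ndarray, n_components: int = 2, k: int = 3):
    mds = MDS(n_components=n_components, dissimilarity="precomputed", random_state=0)
    Y = mds.fit_transform(D)                      # low-dimensional vector representation
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Y)
    return Y, labels
```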

20.
To provide more sophisticated healthcare services, it is necessary to collect precise information on a patient. One impressive area of study for obtaining meaningful information is human activity recognition, which has proceeded through the use of supervised learning techniques in recent decades. Previous studies, however, have suffered from generating a training dataset and extending the number of activities to be recognized. In this paper, to find a new approach that avoids these problems, we propose unsupervised learning methods for human activity recognition, with sensor data collected from smartphone sensors, even when the number of activities is unknown. Experimental results show that the mixture of Gaussians exactly distinguishes those activities when the number of activities k is known, while hierarchical clustering or DBSCAN achieves above 90% accuracy by obtaining k based on the Caliński–Harabasz index, or by choosing appropriate values for ɛ and MinPts when k is unknown. We believe that the results of our approach provide a way of automatically selecting an appropriate value of k at which the accuracy is maximized for activity recognition, without the generation of training datasets by hand.
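A hedged sketch of the model-selection step is given below, assuming features have already been extracted from the smartphone sensor streams; the k range, eps and MinPts values are illustrative.

```python
# Sketch of selecting the number of activities k: fit clusterings over a range
# of k and keep the one maximising the Calinski-Harabasz index; DBSCAN with
# chosen eps/min_samples is the alternative that needs no k at all.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import DBSCAN
from sklearn.metrics import calinski_harabasz_score

def pick_k_by_ch(features: np.ndarray, k_range=range(2, 11)):
    best_k, best_score, best_labels = None, -np.inf, None
    for k in k_range:
        labels = GaussianMixture(n_components=k, random_state=0).fit_predict(features)
        score = calinski_harabasz_score(features, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

def dbscan_activities(features: np.ndarray, eps: float = 0.5, min_samples: int = 10):
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
```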
