Similar Documents
20 similar documents found.
1.
This paper proposes an unsupervised algorithm for learning a finite Dirichlet mixture model. An important part of the unsupervised learning problem is determining the number of clusters which best describes the data. We extend the minimum message length (MML) principle to determine the number of clusters in the case of Dirichlet mixtures. Parameter estimation is done by the expectation-maximization algorithm. The resulting method is validated on one-dimensional and multidimensional data. For the one-dimensional data, the experiments concern artificial and real SAR image histograms. The validation for multidimensional data involves synthetic data and two real applications: shadow detection in images and summarization of texture image databases for efficient retrieval. A comparison with results obtained for other selection criteria is provided.
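The MML-based selection described above scores each candidate number of clusters by the total cost of encoding the parameters plus the data given the model. As an illustration only, here is a minimal sketch using a BIC-like simplification of the MML penalty (the full criterion also involves Fisher information terms); the candidate scores and sample size in the usage example are hypothetical.

```python
import math

def message_length(log_likelihood, n_params, n_samples):
    # BIC-like simplification of an MML score: data-encoding cost plus a
    # parameter-encoding penalty. (Full MML also involves Fisher information.)
    return -log_likelihood + 0.5 * n_params * math.log(n_samples)

def select_num_clusters(candidates, n_samples):
    # candidates: {k: (log_likelihood, n_params)}; pick the k whose
    # (simplified) message length is shortest.
    return min(candidates,
               key=lambda k: message_length(*candidates[k], n_samples))
```

For example, with hypothetical fits `{1: (-1450.0, 2), 2: (-1320.0, 5), 3: (-1315.0, 8)}` on 500 samples, the two-component model wins: its likelihood gain over k=1 outweighs the penalty, while k=3 pays more in parameters than it gains in fit.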

2.
Positive vectors clustering using inverted Dirichlet finite mixture models
In this work we present an unsupervised algorithm for learning finite mixture models from multivariate positive data. This kind of data arises naturally in many applications, yet it has not been adequately addressed in the past. The mixture model is based on the inverted Dirichlet distribution, which offers a good representation and modeling of positive non-Gaussian data. The proposed approach for estimating the parameters of an inverted Dirichlet mixture is based on maximum likelihood (ML) using the Newton-Raphson method. We also develop an approach, based on the minimum message length (MML) criterion, to select the optimal number of clusters to represent the data using such a mixture. Experimental results are presented using artificial histograms and real data sets. The challenging problem of software module classification is also investigated within the proposed statistical framework.
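The Newton-Raphson update used for the ML estimation can be illustrated on a simpler one-parameter model; the inverted Dirichlet case iterates the same score/Hessian step on vector parameters. A hedged sketch for an exponential likelihood, chosen because its closed-form MLE n/Σx is available to check against:

```python
def newton_raphson_mle(data, lam=0.1, iters=30):
    # Newton-Raphson ascent on the exponential log-likelihood
    # l(lam) = n*log(lam) - lam*sum(x): the score is n/lam - sum(x)
    # and its derivative (the Hessian here) is -n/lam**2.
    n, s = len(data), sum(data)
    for _ in range(iters):
        score = n / lam - s
        hessian = -n / lam ** 2
        lam -= score / hessian
    return lam
```

The quadratic convergence of Newton-Raphson makes it far less sensitive to the starting point than fixed-step gradient updates, which is the robustness the abstract refers to.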

3.
This paper proposes an unsupervised algorithm for learning a finite mixture of scaled Dirichlet distributions. Parameter estimation is based on the maximum likelihood approach, and the minimum message length (MML) criterion is proposed for selecting the optimal number of components. This research is motivated by the flexibility issues of the Dirichlet distribution, the widely used model for multivariate proportional data, which have prompted a number of scholars to search for generalizations of the Dirichlet. By introducing the extra parameters of the scaled Dirichlet, several useful statistical models can be obtained. Experimental results are presented using both synthetic and real datasets. Moreover, challenging real-world applications are empirically investigated to evaluate the efficiency of our proposed statistical framework.

4.
This paper presents an online algorithm for mixture model-based clustering. Mixture modeling is the problem of identifying and modeling components in a given set of data. The online algorithm is based on unsupervised learning of finite Dirichlet mixtures and a stochastic approach for updating the estimates. For the selection of the number of clusters, we use the minimum message length (MML) approach. The proposed method is validated on synthetic data and on an application concerning the dynamic summarization of image databases.

5.
Mixture modeling is one of the most useful tools in machine learning and data mining applications. An important challenge when applying finite mixture models is the selection of the number of clusters which best describes the data. Recent developments have shown that this problem can be handled by applying non-parametric Bayesian techniques to mixture modeling. Another crucial preprocessing step in mixture learning is the selection of the most relevant features. The main approach in this paper to tackling these problems consists of storing the knowledge in a generalized Dirichlet mixture model by applying non-parametric Bayesian estimation and inference techniques. Specifically, we extend finite generalized Dirichlet mixture models to the infinite case, in which the number of components and relevant features do not need to be known a priori. This extension provides a natural representation of uncertainty regarding the challenging problem of model selection. We propose a Markov Chain Monte Carlo algorithm to learn the resulting infinite mixture. Through applications involving text and image categorization, we show that infinite mixture models offer more powerful and robust performance than classic finite mixtures for both clustering and feature selection.

6.
We develop a variational Bayesian learning framework for the infinite generalized Dirichlet mixture model (i.e. a weighted mixture of Dirichlet process priors based on the generalized inverted Dirichlet distribution), which has proven its capability to model complex multidimensional data. We also integrate a feature selection approach to highlight the features that are most informative, in order to construct an appropriate model in terms of clustering accuracy. Experiments on synthetic data, as well as real data generated from visual scenes and handwritten digits datasets, illustrate and validate the proposed approach.

7.
We describe approaches for positive data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). Inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel generation approaches that we consider, grounded in ideas from information theory, allow the incorporation of the data structure and its constraints. Inverted Dirichlet mixture models are learned within a principled Bayesian framework using both the Gibbs sampler and Metropolis-Hastings for parameter estimation, and the Bayes factor for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach uses priors over the model parameters, which we derive by showing that the inverted Dirichlet distribution belongs to the family of exponential distributions, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method with two challenging real-world applications, namely object detection and visual scene analysis and classification.

8.
To account for the temporal relevance of power-grid net-load data, this paper proposes a data-relevance Dirichlet process mixture model (DDPMM) to characterize the uncertainty of the net load. First, a Dirichlet process mixture model is fitted to the observed and forecast net-load data to obtain a mixture probability model. Then, a variational Bayesian inference method that accounts for data relevance is proposed, in which the posterior distribution is refined to solve the mixture probability model and obtain its optimal parameters. Finally, the marginal probability distribution of the forecast error corresponding to each net-load forecast value is derived, achieving the uncertainty characterization. The method is validated on net-load data from the Belgian power grid. The results show that, compared with the traditional Dirichlet process mixture model and the Gaussian mixture model (GMM), the proposed data-relevance Dirichlet process mixture model characterizes the uncertainty of the net load more effectively.
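The baseline this entry compares against, a Gaussian mixture model, can be fitted by plain EM. A minimal, pure-Python sketch for one-dimensional data with two components (deterministic initialization from the data range; this is the comparison baseline, not the paper's DDPMM):

```python
import math

def gmm_em_1d(data, k=2, iters=100):
    # Deterministic init: spread the component means across the data range.
    lo, hi = min(data), max(data)
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [w * math.exp(-(x - m) ** 2 / (2 * s * s))
                    / (s * math.sqrt(2 * math.pi))
                    for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(dens)
            resp.append([d / tot for d in dens])
        # M-step: re-estimate weights, means and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)  # guard against collapse
    return weights, mus, sigmas
```

On well-separated data the responsibilities quickly become near-hard assignments and the component means converge to the per-group sample means.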

9.
Clustering is the task of classifying patterns or observations into clusters or groups. Clustering in high-dimensional feature spaces generally poses several complications: the unknown shape of the data, which is typically non-Gaussian and follows different distributions; the unknown number of clusters in the case of unsupervised learning; and the existence of noisy, redundant, or uninformative features, which normally compromise modeling capability and speed. High-dimensional data clustering has therefore been a subject of extensive research in data mining, pattern recognition, image processing, computer vision, and other areas for several decades. However, most existing research tackles one or two of these problems at a time, which is unrealistic because the problems are connected and should be tackled simultaneously. Thus, in this paper, we propose two novel inference frameworks for unsupervised non-Gaussian feature selection in the context of finite asymmetric generalized Gaussian (AGG) mixture-based clustering. The choice of the AGG distribution is mainly due to its ability not only to approximate a large class of statistical distributions (e.g. impulsive, Laplacian, Gaussian and uniform distributions) but also to include asymmetry. In addition, the two frameworks simultaneously perform model parameter estimation as well as model complexity (i.e., both model and feature selection) determination in the same step. This is done by incorporating a minimum message length (MML) penalty in the model learning step and by fading out the redundant densities in the mixture using the rival penalized EM (RPEM) algorithm, for the first and second frameworks, respectively. Furthermore, for both algorithms, we tackle the problem of noisy and uninformative features by determining a set of relevant features for each data cluster. The efficiency of the proposed algorithms is validated by applying them to real challenging problems, namely action and facial expression recognition.

10.
The advent of mixture models has opened the possibility of flexible models which are practical to work with. Practitioners commonly assume that the data are generated from a Gaussian mixture. The inverted Dirichlet mixture has been shown to be a better alternative to the Gaussian mixture and to be of significant value in a variety of applications involving positive data. The inverted Dirichlet is nevertheless often undesirable, since it forces an assumption of positive correlation. Our focus here is to develop a Bayesian alternative to both the Gaussian and the inverted Dirichlet mixtures when dealing with positive data. The alternative that we propose is based on the generalized inverted Dirichlet distribution, which offers high flexibility and ease of use, as we show in this paper. Moreover, it has a more general covariance structure than the inverted Dirichlet. The proposed mixture model is subjected to a fully Bayesian analysis based on Markov Chain Monte Carlo (MCMC) simulation methods, namely Gibbs sampling and Metropolis-Hastings, used to compute the posterior distribution of the parameters, and on the Bayesian information criterion (BIC), used for model selection. The adoption of this purely Bayesian learning approach is motivated by the fact that Bayesian inference allows uncertainty to be dealt with in a unified and consistent manner. We evaluate our approach on the basis of two challenging applications concerning object classification and forgery detection.
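The Metropolis-Hastings machinery mentioned above reduces to a few lines in the random-walk case. A sketch targeting a stand-in log-posterior (a standard normal known only up to a constant; the actual posteriors in the paper are over generalized inverted Dirichlet parameters):

```python
import math, random

def metropolis_hastings(log_target, n_samples, x0=0.0, step=1.0, seed=0):
    # Random-walk MH: propose x' = x + N(0, step); accept with
    # probability min(1, target(x') / target(x)).
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Stand-in target: an unnormalized standard normal density.
def log_post(x):
    return -0.5 * x * x
```

Because only the ratio of target densities appears, the normalizing constant of the posterior never needs to be computed, which is what makes MCMC practical for these mixture posteriors.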

11.
In this paper, we present a fully Bayesian approach for generalized Dirichlet mixture estimation and selection. The estimation of the parameters is based on the Monte Carlo simulation technique of Gibbs sampling mixed with a Metropolis-Hastings step. We also obtain a posterior distribution which is conjugate to a generalized Dirichlet likelihood. For the selection of the number of clusters, we use the integrated likelihood. The performance of our Bayesian algorithm is tested and compared with the maximum likelihood approach through the classification of several synthetic and real data sets. The generalized Dirichlet mixture is also applied to the problem of IR eye modeling and introduced as a probabilistic kernel for Support Vector Machines.

12.
The multinomial distribution has been widely used to model count data. To increase clustering efficiency, we use an approximation to the Fisher scoring algorithm, which is more robust with regard to the choice of initial parameter values. We then use a novel approach, based on the minimum message length criterion, to estimate the optimal number of components. Moreover, we consider a generalization of the multinomial model obtained by introducing the Dirichlet as a prior, yielding the Dirichlet Compound Multinomial (DCM). Even though the DCM can address the burstiness phenomenon of count data, the presence of the Gamma function in its density usually leads to undesired complications. In this article, we use two alternative representations of the DCM distribution to perform clustering based on finite mixture models, where the mixture parameters are estimated using the minorization-maximization framework. To evaluate and compare the performance of our proposed models, we consider three challenging real-world applications that involve high-dimensional count vectors, namely sentiment analysis, facial expression recognition, and human action recognition. The results show that the proposed algorithms increase the clustering efficiency of their respective models remarkably, and the best results are achieved by the second parametrization of the DCM, which can accommodate over-dispersed count data.
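The Gamma functions in the DCM density flagged above as a complication are routinely handled in log-space. A hedged sketch of the DCM (Polya) log-pmf using `math.lgamma`, which avoids the overflow that evaluating the Gamma terms directly would cause:

```python
from math import lgamma, exp

def dcm_log_pmf(x, alpha):
    # log DCM(x | alpha): log multinomial coefficient plus the ratio of
    # Gamma terms, all accumulated in log-space to avoid overflow.
    n, a0 = sum(x), sum(alpha)
    res = lgamma(n + 1) - sum(lgamma(xj + 1) for xj in x)
    res += lgamma(a0) - lgamma(n + a0)
    res += sum(lgamma(xj + aj) - lgamma(aj) for xj, aj in zip(x, alpha))
    return res
```

With `alpha = (1, 1)` the DCM is uniform over the compositions of n, which gives a quick sanity check that the probabilities sum to one.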

13.
14.
This paper discusses the unsupervised learning problem for finite mixtures of Gamma distributions. An important part of this problem is determining the number of clusters which best describes a set of data. We apply the Minimum Message Length (MML) criterion to the unsupervised learning problem in the case of finite mixtures of Gamma distributions. The MML and other criteria in the literature are compared in terms of their ability to estimate the number of clusters in a data set. The comparison utilizes synthetic and RADARSAT SAR images. The performance of our method is also tested by contextual evaluations involving SAR image segmentation and change detection.

15.
Data clustering is a fundamental unsupervised learning task in several domains such as data mining, computer vision, information retrieval, and pattern recognition. In this paper, we propose and analyze a new clustering approach based on both hierarchical Dirichlet processes and the generalized Dirichlet distribution, which leads to an interesting statistical framework for data analysis and modelling. Our approach can be viewed as a hierarchical extension of the infinite generalized Dirichlet mixture model previously proposed in Bouguila and Ziou (IEEE Trans Neural Netw 21(1):107-122, 2010). The proposed clustering approach tackles the problem of modelling grouped data, where observations are organized into groups that are allowed to remain statistically linked by sharing mixture components. The resulting clustering model is learned using a principled variational Bayes inference algorithm that we have developed. Extensive experiments and simulations, based on two challenging applications, namely image categorization and web service intrusion detection, demonstrate our model's usefulness and merits.

16.
We propose a novel motion segmentation algorithm based on mixture of Dirichlet process (MDP) models. In contrast to previous approaches, we consider motion segmentation and its model selection with respect to the number of motion models as an inseparable problem. Our algorithm can simultaneously infer the number of motion models, estimate the cluster memberships of correspondences, and identify the outliers. The main idea is to use MDP models to fully exploit the geometric consistencies before making premature decisions about the number of motion models. To handle outliers, we incorporate RANSAC into the inference process of the MDP models. In the experiments, we compare the proposed algorithm with naive RANSAC, GPCA and Schindler's method on both synthetic data and real image data. The experimental results show that we can handle more motions and achieve satisfactory performance in the presence of various levels of noise and outliers.

17.
In this paper, we propose a Bayesian nonparametric approach for modeling and selection based on a mixture of Dirichlet processes with Dirichlet distributions, which can also be seen as an infinite Dirichlet mixture model. The proposed model uses a stick-breaking representation and is learned by a variational inference method. Due to the nature of the Bayesian nonparametric approach, the problems of overfitting and underfitting are prevented. Moreover, the obstacle of estimating the correct number of clusters is sidestepped by assuming an infinite number of clusters. Compared to other approximation techniques, such as Markov chain Monte Carlo (MCMC), which incur high computational cost and whose convergence is difficult to diagnose, the whole inference process in the proposed variational learning framework is analytically tractable with closed-form solutions. Additionally, the proposed infinite Dirichlet mixture model with variational learning requires only a modest amount of computational power, which makes it suitable for large-scale applications. The effectiveness of our model is experimentally investigated through both synthetic data sets and challenging real-life multimedia applications, namely image spam filtering and human action video categorization.
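The stick-breaking representation mentioned above builds the infinite mixture weights from a sequence of Beta draws; variational methods truncate it at a fixed level in practice. A minimal sketch of truncated stick-breaking with concentration parameter alpha:

```python
import random

def stick_breaking_weights(alpha, truncation, seed=0):
    # Break a unit-length stick: each v_i ~ Beta(1, alpha) takes a
    # fraction of what remains; the last component absorbs the rest
    # so that the truncated weights still sum to one.
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(truncation - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)
    return weights
```

Smaller alpha concentrates mass on the first few sticks (few effective clusters); larger alpha spreads it across many components.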

18.
Nonparametric Bayesian Image Segmentation
Image segmentation algorithms partition the set of pixels of an image into a specific number of different, spatially homogeneous groups. We propose a nonparametric Bayesian model for histogram clustering which automatically determines the number of segments when spatial smoothness constraints on the class assignments are enforced by a Markov Random Field. A Dirichlet process prior controls the level of resolution which corresponds to the number of clusters in data with a unique cluster structure. The resulting posterior is efficiently sampled by a variant of a conjugate-case sampling algorithm for Dirichlet process mixture models. Experimental results are provided for real-world gray value images, synthetic aperture radar images and magnetic resonance imaging data.

19.
Finite mixture models are among the most widely used probabilistic techniques for image segmentation. Although the best-known and most commonly used distribution when considering mixture models is the Gaussian, it is certainly not the best approximation for image segmentation and other related image processing problems. In this paper, we propose and investigate the use of several other mixture models, based on the Dirichlet, generalized Dirichlet and Beta-Liouville distributions, which offer more flexibility in data modeling for image segmentation. A maximum likelihood (ML) based algorithm is applied for estimating the resulting segmentation model's parameters. Spatial information is also employed for determining the number of regions in an image, and several color spaces are investigated and compared. The experimental results show that the proposed segmentation framework yields good overall performance on various color scenes, better than that of comparable techniques.

20.
Generalized Dirichlet distributions have a more flexible covariance structure than Dirichlet distributions, and the computation of the moments of a generalized Dirichlet distribution is still tractable. For situations in which Dirichlet distributions are inappropriate for data analysis, generalized Dirichlet distributions will generally be an applicable alternative. When the expected values and the covariance matrix of the random variables can be estimated from available data, this study introduces ways to estimate the parameters of a generalized Dirichlet distribution for analyzing compositional data. Under the assumption that the sample mean of every variable must be considered for parameter estimation, we present methods for choosing the statistics from a sample covariance matrix to construct a generalized Dirichlet distribution. Some rules for removing inappropriate statistics from a sample covariance matrix to speed up the estimation process are also established. An example based on Taiwan's car market is introduced to demonstrate the applicability of the parameter estimation methods.
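Since a generalized Dirichlet vector can be built from independent Beta variables, the moment-based estimation described above reduces, coordinate by coordinate, to matching a Beta distribution to a mean and a variance. A sketch of that single-Beta step (the paper's full procedure additionally selects which covariance statistics to use):

```python
def beta_from_moments(mean, var):
    # Method-of-moments for Beta(a, b): valid only when 0 < mean < 1
    # and var < mean * (1 - mean), otherwise the estimates are negative.
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common
```

For instance, a sample mean of 0.5 with variance 0.05 yields Beta(2, 2), whose mean and variance reproduce the inputs exactly.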
