Similar references
Found 20 similar references (search time: 15 ms)
1.
Classification systems based on linear discriminant analysis are employed in a variety of communications applications, in which the classes are most commonly characterized by known Gaussian PDFs. The performance of these classifiers is analyzed in this paper in terms of the conditional probability of misclassification. Easily computed lower and upper bounds on this error probability are presented and shown to provide corresponding bounds on the number of Monte Carlo trials required to obtain a desired level of accuracy. The error probability bounds yield an exact and easily computed expression for the error probability in the case where there are only two classes and a single hyperplane. In the special case where misclassification into a nominated class is independent of all other misclassifications, successively tighter upper and lower bounds can be computed at the expense of successively higher-order products of the individual misclassification probabilities. Finally, bounds are provided on the number of Monte Carlo trials required to improve, with suitably high confidence level, on the confidence interval formed by the error probability bounds.
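For the two-class, single-hyperplane case mentioned above, the error probability has a closed form that a Monte Carlo estimate can be checked against. A minimal sketch, assuming equal priors, unit-variance one-dimensional Gaussians, and a midpoint threshold (a simplification, not the paper's general setup):

```python
import math
import random

def exact_error(mu0, mu1, sigma):
    # For equal priors and a threshold at the midpoint, the error
    # probability is Q(|mu1 - mu0| / (2*sigma)), written via erfc.
    d = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * math.erfc(d / math.sqrt(2.0))

def monte_carlo_error(mu0, mu1, sigma, trials, seed=0):
    rng = random.Random(seed)
    t = (mu0 + mu1) / 2.0          # decision threshold (the "hyperplane" in 1-D)
    errors = 0
    for _ in range(trials):
        if rng.random() < 0.5:     # sample from class 0; error if x falls past t
            errors += rng.gauss(mu0, sigma) > t
        else:                      # sample from class 1; error if x falls before t
            errors += rng.gauss(mu1, sigma) <= t
    return errors / trials

p_exact = exact_error(0.0, 2.0, 1.0)
p_mc = monte_carlo_error(0.0, 2.0, 1.0, 200000)
```

With 200,000 trials the Monte Carlo estimate typically lands within a few thousandths of the exact value, illustrating why bounds on the required trial count matter.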

2.
3.
A k-dominating set for a graph G(V, E) is a set of vertices D ⊆ V such that every vertex v ∈ V \ D is adjacent to at least k vertices in D. The k-domination number of G, denoted by γ_k(G), is the cardinality of a smallest k-dominating set of G. Here we establish lower and upper bounds on γ_k(C_m × C_n) for k = 2. In some cases, these bounds agree, so that the exact 2-domination number is obtained.
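The definitions above can be checked directly by exhaustive search on a small torus product; a sketch (my own brute-force check, not the paper's method, feasible only for tiny m, n):

```python
from itertools import combinations, product

def torus_neighbors(v, m, n):
    # Neighbors of (i, j) in C_m x C_n: wrap around in both coordinates.
    i, j = v
    return {((i - 1) % m, j), ((i + 1) % m, j),
            (i, (j - 1) % n), (i, (j + 1) % n)}

def is_2_dominating(D, m, n):
    # Every vertex outside D must have at least 2 neighbors inside D.
    D = set(D)
    return all(len(torus_neighbors(v, m, n) & D) >= 2
               for v in product(range(m), range(n)) if v not in D)

def gamma2(m, n):
    # Smallest 2-dominating set, by trying sizes in increasing order.
    vertices = list(product(range(m), range(n)))
    for size in range(1, m * n + 1):
        for D in combinations(vertices, size):
            if is_2_dominating(D, m, n):
                return size
```

For C_3 × C_3 the diagonal {(0,0), (1,1), (2,2)} is 2-dominating, matching the general lower bound γ_k(G) ≥ kn/(Δ + k) = 2·9/6 = 3.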

4.
An improved Gaussian mixture algorithm
To address the slow initial background modeling of the Gaussian mixture model and the heavy shadows and frequent flicker in the detected moving targets, this paper proposes an improved Gaussian mixture algorithm that incorporates background subtraction. During initial modeling, the algorithm adaptively updates the mean and variance, so an accurate background model is built quickly; combined with background subtraction, it overcomes frequent flicker and suppresses shadows. Experimental results show that the algorithm outperforms the standard Gaussian mixture algorithm in initial modeling and moving-target detection, with strong stability and adaptability.

5.
This paper proposes a joint maximum likelihood and Bayesian methodology for estimating Gaussian mixture models. In Bayesian inference, the distributions of parameters are modeled, characterized by hyperparameters. In the case of Gaussian mixtures, the distributions of parameters are taken as Gaussian for the mean, Wishart for the covariance, and Dirichlet for the mixing probability. The learning task consists of estimating the hyperparameters characterizing these distributions. The integration in the parameter space is decoupled using an unsupervised variational methodology known as variational expectation-maximization (VEM). This paper introduces a hyperparameter initialization procedure for the training algorithm. In the first stage, distributions of parameters resulting from successive runs of the expectation-maximization algorithm are formed. Afterward, maximum-likelihood estimators are applied to find appropriate initial values for the hyperparameters. The proposed initialization provides faster convergence, more accurate hyperparameter estimates, and better generalization for the VEM training algorithm. The proposed methodology is applied in blind signal detection and in color image segmentation.

6.
In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds, as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed.

7.
The brain must deal with a massive flow of sensory information without receiving any prior information. Therefore, when creating cognitive models, it is important to acquire as much information as possible from the data itself. Moreover, the brain has to deal with an unknown number of components (concepts) contained in a dataset without any prior knowledge. Most of the algorithms in use today are not able to effectively replicate this strategy. We propose a novel approach based on neural modelling fields theory (NMF) to overcome this problem. The algorithm combines NMF and greedy Gaussian mixture models. The novelty lies in the combination of an information criterion with the merging algorithm. The performance of the algorithm was compared with other well-known algorithms and tested on both artificial and real-world datasets.

8.
The current major theme in contrast enhancement is to partition the input histogram into multiple sub-histograms before final equalization of each sub-histogram is performed. This paper presents a novel contrast enhancement method based on Gaussian mixture modeling of image histograms, which provides a sound theoretical underpinning of the partitioning process. Our method comprises five major steps. First, the number of Gaussian functions to be used in the model is determined using a cost function of input histogram partitioning. Then the parameters of a Gaussian mixture model are estimated to find the best fit to the input histogram under a threshold. A binary search strategy is then applied to find the intersection points between the Gaussian functions. The intersection points thus found are used to partition the input histogram into a new set of sub-histograms, on which the classical histogram equalization (HE) is performed. Finally, a brightness preservation operation is performed to adjust the histogram produced in the previous step into a final one. Based on three representative test images, the experimental results demonstrate the contrast enhancement advantage of the proposed method when compared to twelve state-of-the-art methods in the literature.
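The binary-search step described above can be sketched in isolation: between two component means, one density is falling while the other is rising, so the sign change of their difference locates the partition point. A minimal sketch (my own illustration of the idea, not the paper's implementation):

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def intersection(mu1, s1, mu2, s2, iters=60):
    # Bisection on f1 - f2 over the interval between the two means;
    # the root there is the histogram partition point.
    lo, hi = min(mu1, mu2), max(mu1, mu2)
    f = lambda x: gauss_pdf(x, mu1, s1) - gauss_pdf(x, mu2, s2)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For two equal-variance components the intersection is simply the midpoint of the means; with unequal variances a second intersection can exist outside the means, which is why the search is confined to the interval between them.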

9.
Bounds on the number of samples needed for neural learning
The relationship between the number of hidden nodes in a neural network, the complexity of a multiclass discrimination problem, and the number of samples needed for effective learning are discussed. Bounds for the number of samples needed for effective learning are given. It is shown that Ω(min(d, n)·M) boundary samples are required for successful classification of M clusters of samples using a two-hidden-layer neural network with d-dimensional inputs and n nodes in the first hidden layer.

10.
This paper studies the convergence of the Gaussian mixture (GM) implementation of the cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter. It is proved that, under linear Gaussian conditions, the GM-CBMeMBer filter converges uniformly to the true CBMeMBer filter as the number of Gaussian components grows. It is further proved that, under weakly nonlinear conditions, its extended Kalman (EK) approximation, the EK-GM-CBMeMBer filter, also converges uniformly to the true CBMeMBer filter provided the covariance of each Gaussian component is sufficiently small. The aim of this work is to establish theoretical convergence results for the GM implementation of the CBMeMBer filter, completing the theoretical study of CBMeMBer filtering for multi-target tracking.

11.
The Gaussian mixture probability hypothesis density (GM-PHD) filter is an algorithm for multi-target tracking in clutter. To address its loss of accuracy under missed detections and when targets are close together, this paper proposes an improved GM-PHD filter. Within the Gaussian mixture framework, the algorithm modifies the PHD recursion to avoid the loss of useful information caused by missed detections, and uses component weights to decide which Gaussian components are used to extract target states, preventing high-weight components from being merged together and thereby improving tracking performance when targets approach each other. Simulations show that the improved algorithm outperforms the traditional GM-PHD filter in both filtering accuracy and target-number estimation.

12.
In many practical applications, the performance of a learning algorithm is not determined by a single factor such as the complexity of the hypothesis space, the stability of the algorithm, or the quality of the data. This paper addresses the performance of the regularization algorithm associated with Gaussian kernels. The main purpose is to provide a framework for evaluating the generalization performance of the algorithm jointly in terms of hypothesis-space complexity, algorithmic stability, and data quality. New bounds on the generalization error of the algorithm, measured by the regularization error and the sample error, are established. It is shown that the regularization error decays polynomially under some conditions, and the new bounds draw simultaneously on the uniform stability of the algorithm, the covering number of the hypothesis space, and the data information. As an application, the results are applied to several special regularization algorithms, and some new results for these algorithms are deduced.

13.
14.
The generalized Gaussian mixture model (GGMM) provides a flexible and suitable tool for many computer vision and pattern recognition problems. However, the generalized Gaussian distribution is unbounded, whereas in many applications the observed data are digitized and have bounded support. This paper presents a new bounded generalized Gaussian mixture model (BGGMM), which includes the Gaussian mixture model (GMM), the Laplace mixture model (LMM), and the GGMM as special cases. The proposed extension of the generalized Gaussian distribution is flexible enough to fit different shapes of observed data, such as non-Gaussian and bounded-support data. To estimate the model parameters, an alternating approach is proposed that minimizes an upper bound on the negative log-likelihood of the data. The performance of the BGGMM is quantified with simulations and real data.
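The bounded-support idea can be illustrated with a plain truncated Gaussian (a deliberate simplification of the generalized distribution in the abstract): the density is renormalized so it integrates to one over the support [a, b].

```python
import math

def trunc_gauss_pdf(x, mu, sigma, a, b):
    # Gaussian density renormalized to the bounded support [a, b];
    # zero outside the support, rescaled by the mass inside it.
    if not a <= x <= b:
        return 0.0
    phi = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    cdf = lambda t: 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))
    return phi / (cdf(b) - cdf(a))

# crude midpoint-rule check that the truncated density integrates to ~1
a, b, n = -1.0, 2.0, 20000
h = (b - a) / n
total = sum(trunc_gauss_pdf(a + (i + 0.5) * h, 0.0, 1.0, a, b) * h for i in range(n))
```

Renormalization raises the density inside the support relative to the unbounded Gaussian, which is exactly what lets a bounded mixture fit digitized data without leaking probability mass outside the observable range.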

15.
A Gaussian mixture model for image wavelet coefficients
The statistical distribution of image wavelet coefficients is non-Gaussian and can be described by a Gaussian mixture model. This paper proposes a mixture model that adapts to each pixel: every coefficient is modeled as a sum of two zero-mean normal distributions with different variances. Wavelet coefficients are classified using a local Bayesian threshold, and the model parameters (the large and small variances and the associated probabilities) are estimated from the two classes of coefficients in a neighborhood window around the current coefficient. The model is applied to image denoising, where a Wiener filter is designed according to Bayesian posterior-mean estimation. Comparative experiments with three representative denoising algorithms demonstrate the effectiveness of this model-based filtering algorithm.
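A hard-classification version of this two-variance shrinkage scheme can be sketched as follows (a simplification of the abstract's Bayesian posterior-mean estimator: each coefficient is assigned to the small- or large-variance class by a magnitude threshold, then scaled by the classical Wiener gain; all parameter values are illustrative):

```python
def wiener_shrink(coeffs, var_small, var_large, noise_var, threshold):
    # Classify each wavelet coefficient as "small" or "large" by magnitude,
    # then apply the Wiener gain  var / (var + noise_var)  for its class.
    out = []
    for w in coeffs:
        var = var_large if abs(w) > threshold else var_small
        out.append(w * var / (var + noise_var))
    return out
```

Small coefficients (mostly noise) are shrunk almost to zero, while large coefficients (signal edges) pass nearly unchanged, which is the qualitative behavior the mixture prior produces.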

16.
An adaptive Gaussian mixture model for dynamic scenes
The Gaussian mixture model can fit the distribution of pixel color values and track complex scene changes, and algorithms based on it have become a standard background modeling approach for background subtraction in video sequences. This paper analyzes the theoretical framework of the GMM algorithm and identifies two directions for improvement: model parameter updating and background/foreground (BG/FG) classification. Building on a survey of existing algorithms, it analyzes learning-rate control, adaptation of the number of modes, algorithm evaluation, and algorithm initialization. These analyses provide ideas and directions for future research.

17.
An improved algorithm for Gaussian-mixture background modeling
To address the shortcomings of Gaussian-mixture background modeling, this paper proposes a moving-target detection algorithm that combines the Gaussian mixture background model with three-frame differencing. Three-frame differencing quickly detects changed regions, improving the sensitivity of the algorithm; a decision threshold on target presence reduces the computational load; and separate Gaussian mixture background update strategies for target regions and background regions speed up model convergence. Experimental results show that, compared with the plain Gaussian-mixture background method, the improved method is faster and more effective, and is suitable for real-time video surveillance systems.
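The three-frame-differencing component of such an algorithm can be sketched on its own (a minimal 1-D illustration with synthetic "frames" as intensity lists; the abstract's full method also maintains per-region Gaussian mixture backgrounds):

```python
def three_frame_motion(f1, f2, f3, thresh):
    # Pixelwise three-frame differencing: a pixel is flagged as moving
    # when its intensity changes across BOTH consecutive frame pairs,
    # which localizes motion in the middle frame f2.
    return [int(abs(b - a) > thresh and abs(c - b) > thresh)
            for a, b, c in zip(f1, f2, f3)]
```

Because both differences must exceed the threshold, a pixel that merely uncovered background (changed once) is not flagged, which is what makes three-frame differencing less prone to ghosting than simple two-frame differencing.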

18.
In long-range radar tracking, the measurement uncertainty region has a thin and curved shape in Cartesian space due to the fact that the measurement is accurate in range but inaccurate in angle. Such a shape reflects severe measurement nonlinearity, which can lead to inconsistency in tracking performance and significant tracking errors in traditional nonlinear filters, such as the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). In this paper, we propose a modified version of the Gaussian Mixture Measurement-Integrated Track Splitting (GMM-ITS) filter to deal with the nonlinearity of measurements in long-range radar tracking. Not only is the state probability density function (pdf) approximated by a set of Gaussian track components, but the likelihood function (LF) is approximated by several Gaussian measurement components. In this way, both the state pdf and LF in the proposed filter have more accurate approximations than in traditional filters, which approximate measurements using just one Gaussian distribution. Simulation experiments show that the proposed filter can successfully avoid the inconsistency problem and also obtain high tracking accuracy in both 2-D (with range-angle measurements) and 3-D (with range-direction-cosine measurements) long-range radar tracking.

19.
This paper performs Bayesian inference and prediction for a generalized autoregressive conditional heteroskedastic (GARCH) model whose innovations follow a mixture of two Gaussian distributions. The mixture GARCH model can capture the patterns usually exhibited by many financial time series, such as volatility clustering, large kurtosis, and extreme observations. A Griddy-Gibbs sampler implementation is proposed for parameter estimation and volatility prediction. Bayesian prediction of the Value at Risk is also addressed, providing point estimates and predictive intervals. The method is illustrated using the Swiss Market Index.
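The data-generating process behind such a model is easy to simulate; a sketch of a GARCH(1,1) recursion with two-component Gaussian-mixture innovations (all parameter values are illustrative, and the paper's Griddy-Gibbs estimation step is not shown):

```python
import random

def simulate_mixture_garch(n, omega, alpha, beta, p, s1, s2, seed=1):
    # GARCH(1,1): sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1},
    # with innovations z_t drawn from a zero-mean two-component mixture:
    # std s1 with probability p, std s2 otherwise.
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)   # start at the unconditional level
    eps = []
    for _ in range(n):
        z = rng.gauss(0, s1 if rng.random() < p else s2)
        e = (sigma2 ** 0.5) * z
        eps.append(e)
        sigma2 = omega + alpha * e * e + beta * sigma2
    return eps

series = simulate_mixture_garch(5000, 0.1, 0.1, 0.8, 0.9, 0.7, 2.0)
```

The occasional draw from the wide component (std 2.0) produces the heavy tails and extreme observations the abstract describes, while the GARCH recursion produces volatility clustering.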

20.
The expectation maximization algorithm has been classically used to find the maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance, mixture models. A key issue in such problems is the choice of the model complexity. The higher the number of components in the mixture, the higher will be the data likelihood, but also the higher will be the computational burden and data overfitting. In this work, we propose a clustering method based on the expectation maximization algorithm that adapts online the number of components of a finite Gaussian mixture model from multivariate data. Our method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and sequentially splits it incrementally during expectation maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations to achieve a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique both with synthetic data and real images, including experiments with images acquired from the iCub humanoid robot.
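The split-then-refine idea can be sketched in one dimension: fit a single component to all the data, split it into two components offset by one standard deviation, and let EM refine them (my own minimal illustration, not the paper's online multivariate algorithm):

```python
import math
import random

def em_1d(data, comps, iters=50):
    # comps: list of (weight, mean, var) tuples; one EM pass per iteration.
    for _ in range(iters):
        resp = []  # responsibilities of each component for each point
        for x in data:
            # unnormalized Gaussian densities (1/sqrt(2*pi) cancels out)
            ws = [w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(v)
                  for w, m, v in comps]
            s = sum(ws)
            resp.append([wi / s for wi in ws])
        new = []
        for k in range(len(comps)):
            nk = sum(r[k] for r in resp)
            mean = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mean) ** 2 for r, x in zip(resp, data)) / nk
            new.append((nk / len(data), mean, max(var, 1e-6)))
        comps = new
    return comps

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(300)] + [rng.gauss(10, 1) for _ in range(300)]
# single component covering all data, then split its mean by +/- one std
m = sum(data) / len(data)
v = sum((x - m) ** 2 for x in data) / len(data)
comps = em_1d(data, [(0.5, m - math.sqrt(v), v), (0.5, m + math.sqrt(v), v)])
means = sorted(c[1] for c in comps)
```

On the two-cluster data the split components separate and converge to the cluster means near 0 and 10; a split-based method would then test (e.g. via a likelihood criterion) whether the split is worth keeping.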


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号