Similar literature
20 similar documents retrieved.
1.
A novel non-parametric density estimator is developed based on geometric principles. A penalised centroidal Voronoi tessellation forms the basis of the estimator, which allows the data to self-organise in order to minimise estimate bias and variance. This approach is a marked departure from usual methods based on local averaging, and has the advantage of being naturally adaptive to local sample density (scale-invariance). The estimator does not require the introduction of a plug-in kernel, thus avoiding assumptions of symmetry and morphology. A numerical experiment is conducted to illustrate the behaviour of the estimator, and its characteristics are discussed.

2.
In computer video surveillance systems, the main goal is to detect moving targets in video captured by a fixed camera, and among the many detection methods, background subtraction is the most widely used. The key to background subtraction is background modelling; issues such as noise interference, the adaptivity of the detection method, and the correctness of the model must all be addressed during background modelling. To improve modelling accuracy, this paper proposes a non-parametric modelling technique, called adaptive kernel density estimation, which offers good adaptivity and robustness. It is a statistical model built as a non-parametric kernel density estimate of the probability density function of each pixel in the scene.
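To make the idea concrete, the following is a minimal sketch of a per-pixel kernel density background test in Python (NumPy); the bandwidth rule, threshold and array shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kde_foreground_mask(frame, background_samples, threshold=1e-3):
    """Per-pixel kernel density background test.

    frame:               (H, W) grayscale image, float values in [0, 255]
    background_samples:  (N, H, W) stack of N recent background frames
    threshold:           minimum background probability density; below it -> foreground
    """
    # Adaptive per-pixel bandwidth from the median absolute difference of
    # consecutive background samples (a common heuristic, assumed here).
    diffs = np.abs(np.diff(background_samples, axis=0))
    sigma = np.median(diffs, axis=0) / (0.68 * np.sqrt(2)) + 1e-6   # (H, W)

    # Gaussian kernel density estimate of the current pixel value under
    # the N stored background samples.
    z = (frame[None, :, :] - background_samples) / sigma[None, :, :]
    kernel = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * sigma[None, :, :])
    prob_background = kernel.mean(axis=0)            # (H, W)

    return prob_background < threshold               # True where foreground
```

In practice, pixels classified as background would be pushed back into `background_samples`, which is what keeps the model adaptive over time.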

3.
A version of Tråvén's [1] Gaussian clustering algorithm for normal mixture densities is studied. Unlike Tråvén's original algorithm, no constraints are imposed on the covariance structure of the mixture components. Simulations suggest that the modified algorithm is a very promising method of estimating arbitrary continuous d-dimensional densities. In particular, the simulations have shown that the algorithm is robust against assuming the initial number of mixture components to be too large. This work was supported in part by the State Committee for Scientific Research (KBN) under grant PB 0589/P3/94/06. It was completed while the second author was on leave to the Department of Statistics, Rice University, Houston, Texas.
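As a point of reference for the unconstrained-covariance setting described above, the sketch below fits a batch Gaussian mixture with full covariances using scikit-learn's GaussianMixture. This is the standard EM counterpart, not Tråvén's recursive algorithm, and the synthetic data and over-specified component count are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D data from two elongated, rotated Gaussians (illustrative only).
x1 = rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=500)
x2 = rng.multivariate_normal([5, 1], [[1.0, -0.6], [-0.6, 1.0]], size=500)
data = np.vstack([x1, x2])

# Deliberately over-specify the number of components; with full (unconstrained)
# covariances, redundant components tend to shrink to near-zero weights.
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(data)

density_at_origin = np.exp(gmm.score_samples(np.array([[0.0, 0.0]])))
print("estimated density at (0,0):", density_at_origin[0])
print("component weights:", np.round(gmm.weights_, 3))
```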

4.
The problem of bivariate density estimation is studied with the aim of finding the density function with the smallest number of local extreme values that is adequate for the given data, where adequacy is defined via the Kuiper metric. The taut-string algorithm, which provides adequate approximations with a small number of local extrema, is generalised to the analysis of two- and higher-dimensional data using Delaunay triangulation and diffusion filtering. The results build on one-dimensional equivalence relations between the taut-string algorithm and the method of solving the discrete total variation flow equation. The generalisation and some modifications are developed, and the performance for density estimation is shown.

5.
While most previous work on Bayesian fault diagnosis and control-loop diagnosis uses discretized evidence (a monitor reading being one example of evidence), discretizing continuous evidence can result in information loss. This paper proposes the use of kernel density estimation, a non-parametric technique for estimating the density functions of continuous random variables. Kernel density estimation requires the selection of a bandwidth parameter, used to specify the degree of smoothing, and a number of bandwidth selection techniques (optimal Gaussian, sample-point adaptive, and smoothed cross-validation) are discussed and compared. Because kernel density estimation is known to have reduced performance in high dimensions, this paper also discusses a number of existing preprocessing methods that can be used to reduce the dimensionality (grouping according to dependence, and independent component analysis). Bandwidth selection and dimensionality reduction techniques are tested on a simulation and an industrial process.
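Two of the bandwidth choices mentioned above can be illustrated compactly: the normal-reference ("optimal Gaussian") rule and a cross-validated bandwidth. The sketch below assumes one-dimensional evidence and an arbitrary search grid, and is not tied to the paper's diagnosis framework.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
evidence = rng.normal(loc=0.0, scale=1.5, size=(400, 1))   # continuous monitor readings

# "Optimal Gaussian" (normal-reference) bandwidth for 1-D data.
n, sigma = len(evidence), evidence.std(ddof=1)
h_gaussian = 1.06 * sigma * n ** (-1 / 5)

# Cross-validated bandwidth: maximise held-out log-likelihood over a grid.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.1, 2.0, 40)}, cv=5)
grid.fit(evidence)
h_cv = grid.best_params_["bandwidth"]

print(f"normal-reference bandwidth: {h_gaussian:.3f}, CV bandwidth: {h_cv:.3f}")
```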

6.
On-line control of nonlinear nonstationary processes using multivariate statistical methods has recently prompted considerable interest owing to its practical industrial importance. Indeed, basic process control methods do not allow monitoring of such processes. For this purpose, this study proposes a variable-window real-time monitoring system based on a fast block-adaptive Kernel Principal Component Analysis (KPCA) scheme. While previous adaptive KPCA models can handle only one observation at a time, in this study we propose a fast way to update or downdate the KPCA model when a block of data, rather than a single observation, becomes available. Using a variable window size procedure to determine the model size and adaptive chart parameters, this model is applied to monitor two simulated benchmark processes. A comparison of the performance of the adopted control strategy with various Principal Component Analysis (PCA) control models shows that the derived strategy is robust and yields better detection of disturbances.
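The sketch below is not the block-adaptive update itself; it only illustrates the kind of static KPCA monitoring statistic (a Hotelling-type T² on kernel principal component scores) on which such control charts are built. The kernel settings, the simulated fault magnitude and the 99th-percentile control limit are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(9)
normal_data = rng.standard_normal((500, 4))                            # in-control training block
new_block = rng.standard_normal((50, 4)) + np.array([0, 0, 2.5, 0])    # block with a simulated fault

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.2)
train_scores = kpca.fit_transform(normal_data)
score_var = train_scores.var(axis=0)                    # per-component score variance

def t2(block):
    scores = kpca.transform(block)
    return np.sum(scores ** 2 / score_var, axis=1)      # Hotelling-type T2 statistic

# Simple empirical control limit from the training block (assumed 99th percentile).
limit = np.percentile(t2(normal_data), 99)
print("alarms in new block:", int(np.sum(t2(new_block) > limit)), "of", len(new_block))
```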

7.
A conditional density function, which describes the relationship between response and explanatory variables, plays an important role in many analysis problems. In this paper, we propose a new kernel-based parametric method to estimate conditional density. An exponential function is employed to approximate the unknown density, and its parameters are computed from the given explanatory variable via a nonlinear mapping using kernel principal component analysis (KPCA). We develop a new kernel function, a variant of the polynomial kernel, to be used in KPCA. The proposed method is compared with the Nadaraya-Watson estimator through numerical simulation and practical data. Experimental results show that the proposed method outperforms the Nadaraya-Watson estimator in terms of revised mean integrated squared error (RMISE). The proposed method is therefore effective for estimating conditional densities.
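For readers unfamiliar with the baseline, a minimal Nadaraya-Watson-type conditional density estimator can be sketched as follows; the Gaussian kernels and fixed bandwidths are illustrative choices, not those used in the paper's comparison.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def nadaraya_watson_cond_density(x0, y_grid, X, Y, hx=0.3, hy=0.3):
    """Kernel (Nadaraya-Watson type) estimate of f(y | x = x0).

    X, Y:    1-D arrays of observed explanatory/response values
    y_grid:  points at which the conditional density is evaluated
    hx, hy:  bandwidths for the explanatory and response directions
    """
    wx = gaussian_kernel((x0 - X) / hx)                        # (n,) weights in x
    ky = gaussian_kernel((y_grid[:, None] - Y[None, :]) / hy)  # (m, n) kernels in y
    return (ky * wx[None, :]).sum(axis=1) / (hy * wx.sum())

# Illustrative use with synthetic data: Y depends nonlinearly on X.
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, 500)
Y = np.sin(X) + 0.2 * rng.normal(size=500)
y_grid = np.linspace(-2, 2, 200)
fy = nadaraya_watson_cond_density(1.0, y_grid, X, Y)
print("mode of f(y | x=1):", y_grid[np.argmax(fy)])   # should lie near sin(1) ~ 0.84
```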

8.
This paper is a continuation of the authors' earlier work [1], where a version of Tråvén's [2] Gaussian clustering neural network (a recursive counterpart of the EM algorithm) was investigated. A comparative simulation study of the Gaussian clustering algorithm [1], two versions of plug-in kernel estimators and a version of Friedman's projection pursuit algorithm is presented for two- and three-dimensional data. Simulations show that the projection pursuit algorithm is a good or very good estimator, provided, however, that the number of projections is suitably chosen. Although practically confined to estimating normal mixtures, the simulations confirm the general reliability of plug-in estimators, and show the same property of the Gaussian clustering algorithm. Indeed, the simulations confirm the earlier conjecture that this last estimator provides a way of effectively estimating arbitrary and highly structured continuous densities on R^d, at least for small d, either by using this estimator itself or, rather, by using it as a pilot estimator for a newly proposed plug-in estimator.

9.
A new multivariate density estimator suitable for pattern classifier design is proposed. The data are first transformed so that the pattern vector components with the most non-Gaussian structure are separated from the Gaussian components. Nonparametric density estimation is then used to capture the non-Gaussian structure of the data while parametric Gaussian conditional density estimation is applied to the rest of the components. Both simulated and real data sets are used to demonstrate the potential usefulness of the proposed approach.

10.
When analysing the movements of an animal, a common task is to generate a continuous probability density surface that characterises the spatial distribution of its locations, termed a home range. Traditional kernel density estimation (KDE), the Brownian bridges kernel method, and time-geographic density estimation are all commonly used for this purpose, although their applicability in some practical situations is limited. Other studies have argued that KDE is inappropriate for analysing moving objects, while the latter two methods are only suitable for tracking data collected at sufficiently frequent intervals that an object's movement pattern can be adequately represented by a space-time path created by connecting consecutive points. This research formulates and evaluates KDE using generalised movement trajectories approximated by Delaunay triangulation (KDE-DT) as a method for analysing infrequently sampled animal tracking data. In this approach, a DT is constructed from a point pattern of tracking data in order to approximate the network of movement trajectories for an animal. This network represents the generalised movement patterns of an animal rather than its specific, individual trajectories between locations. Then, kernel density estimates are calculated with distances measured using that network. First, this paper describes the method and then applies it to generate a probability density surface for a Florida panther from radio-tracking data collected three times per week. Second, the performance of the technique is evaluated in the context of delineating wildlife home ranges and core areas from simulated animal locational data. The results of the simulations suggest that KDE-DT produces more accurate home range estimates than traditional KDE, which was evaluated with the same data in a previous study. In addition to animal home range analysis, the technique may be useful for characterising a variety of spatial point patterns generated by objects that move through continuous space, such as pedestrians or ships.
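A rough sketch of the KDE-DT idea follows: triangulate the tracking fixes, measure distances along the triangulation's edges, and accumulate Gaussian kernels over those network distances. The bandwidth, the normalisation and the choice to evaluate the density only at the fixes themselves are simplifying assumptions, not the paper's specification.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

def kde_dt(points, h):
    """Kernel density weights at the tracking fixes themselves, with distances
    measured along the Delaunay-triangulation network rather than straight lines."""
    tri = Delaunay(points)
    n = len(points)
    graph = lil_matrix((n, n))
    for simplex in tri.simplices:                      # add every triangle edge
        for a, b in [(0, 1), (1, 2), (0, 2)]:
            i, j = simplex[a], simplex[b]
            d = np.linalg.norm(points[i] - points[j])
            graph[i, j] = graph[j, i] = d
    net_dist = shortest_path(graph.tocsr(), directed=False)   # network distances
    k = np.exp(-0.5 * (net_dist / h) ** 2)
    return k.sum(axis=1) / (n * 2 * np.pi * h ** 2)    # relative density at each fix

rng = np.random.default_rng(7)
fixes = np.vstack([rng.normal([0, 0], 0.5, (60, 2)),   # core area
                   rng.normal([4, 4], 1.0, (40, 2))])  # secondary area
density = kde_dt(fixes, h=1.0)
print("densest fix is at:", fixes[np.argmax(density)])
```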

11.
Kernel density estimation is a popular and widely used non-parametric method for data-driven density estimation. Its appeal lies in its simplicity and ease of implementation, as well as its strong asymptotic results regarding its convergence to the true data distribution. However, a major difficulty is the setting of the bandwidth, particularly in high dimensions and with a limited amount of data. An approximate Bayesian method is proposed, based on the Expectation-Propagation algorithm with a likelihood obtained from a leave-one-out cross-validation approach. The proposed method yields an iterative procedure to approximate the posterior distribution of the inverse bandwidth. The approximate posterior can be used to estimate the model evidence for selecting the structure of the bandwidth and to approach online learning. Extensive experimental validation shows that the proposed method is competitive in terms of performance with state-of-the-art plug-in methods.
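The leave-one-out cross-validation likelihood underlying the proposed posterior can be written down directly. The brute-force sketch below evaluates that objective on a bandwidth grid and is only meant to make the objective concrete; it does not reproduce the Expectation-Propagation machinery, and the data and grid are assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    n, d = data.shape
    sq = cdist(data, data, "sqeuclidean")
    k = np.exp(-0.5 * sq / h ** 2) / ((2 * np.pi) ** (d / 2) * h ** d)
    np.fill_diagonal(k, 0.0)                  # leave the i-th point out of its own estimate
    loo_density = k.sum(axis=1) / (n - 1)
    return np.log(loo_density + 1e-300).sum()

rng = np.random.default_rng(3)
data = rng.standard_normal((300, 2))
bandwidths = np.linspace(0.05, 1.5, 60)
scores = [loo_log_likelihood(data, h) for h in bandwidths]
print("LOO-CV bandwidth:", bandwidths[int(np.argmax(scores))])
```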

12.
In this paper we examine a new method for constructing confidence intervals for the difference of success probabilities to analyze dependent data from response adaptive designs with binary responses. Specifically we investigate the feasibility of the Jeffreys-Perks procedure for interval estimation. Simulation results are derived to demonstrate the performance of the Jeffreys-Perks procedure compared with the profile likelihood method. It is found that both asymptotic methods perform well for small sample sizes despite being approximate procedures.

13.
In this paper we propose a Gaussian-kernel-based online kernel density estimation which can be used for applications of online probability density estimation and online learning. Our approach generates a Gaussian mixture model of the observed data and allows online adaptation from positive as well as negative examples. The adaptation from negative examples is realized by a novel concept of unlearning in mixture models. Low complexity of the mixtures is maintained through a novel compression algorithm. In contrast to existing approaches, our approach does not require fine-tuning of parameters for a specific application, does not assume specific forms of the target distributions, and places no temporal constraints on the observed data. The strength of the proposed approach is demonstrated with examples of online estimation of complex distributions, an example of unlearning, and with interactive learning of basic visual concepts.
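The sketch below is not the authors' compression or unlearning scheme; it is a minimal online Gaussian-kernel accumulator that caps model complexity by a moment-preserving merge of the two closest components, just to illustrate the general flavour of online kernel density estimation with compression. Kernel width and component budget are assumptions.

```python
import numpy as np

class OnlineGaussianKDE:
    """Online 1-D density estimate: one Gaussian component per observation,
    with the two closest components merged whenever a budget is exceeded."""

    def __init__(self, kernel_sigma=0.3, max_components=30):
        self.sigma0 = kernel_sigma
        self.max_components = max_components
        self.w, self.mu, self.var = [], [], []      # weights, means, variances

    def update(self, x):
        self.w.append(1.0)
        self.mu.append(float(x))
        self.var.append(self.sigma0 ** 2)
        if len(self.w) > self.max_components:
            self._merge_closest_pair()

    def _merge_closest_pair(self):
        mu = np.array(self.mu)
        d = np.abs(mu[:, None] - mu[None, :])
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        wi, wj = self.w[i], self.w[j]
        w = wi + wj
        m = (wi * self.mu[i] + wj * self.mu[j]) / w
        # Moment-preserving merge of two Gaussian components into one.
        v = (wi * (self.var[i] + (self.mu[i] - m) ** 2)
             + wj * (self.var[j] + (self.mu[j] - m) ** 2)) / w
        for lst in (self.w, self.mu, self.var):
            del lst[max(i, j)], lst[min(i, j)]
        self.w.append(w)
        self.mu.append(m)
        self.var.append(v)

    def pdf(self, x):
        w = np.array(self.w); w = w / w.sum()
        mu, var = np.array(self.mu), np.array(self.var)
        return np.sum(w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var))

rng = np.random.default_rng(8)
model = OnlineGaussianKDE()
for x in np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)]):
    model.update(x)
print("p(-2) ~", round(model.pdf(-2.0), 3), " p(0) ~", round(model.pdf(0.0), 3))
```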

14.
Many practical problems involve density estimation from indirect observations; these are classified as indirect density estimation problems. For example, image deblurring and image reconstruction in emission tomography belong to this class. In this paper we propose an iterative approach to solve these problems. This approach has been successfully applied to emission tomography (Ma, 2008). The popular EM algorithm can also be used for indirect density estimation, but it requires that observations follow Poisson distributions. Our method does not involve such assumptions; rather, it is established simply from the Bayes conditional probability model and is termed the Iterative Bayes (IB) algorithm. Under certain regularity conditions, this algorithm converges to the positively constrained solution minimizing the Kullback-Leibler distance, an asymmetric measure involving both logarithmic and linear scales of dissimilarity between two probability distributions.
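A minimal sketch of an iterative update of this Bayes-conditional-probability type is shown below; it has the familiar multiplicative (Richardson-Lucy-like) form, and the kernel matrix, grid and iteration count are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def iterative_bayes_deconvolution(g, K, n_iter=200):
    """Recover a density f on a grid from indirect observations g ~ K @ f.

    g:  observed (blurred) density values on the observation grid
    K:  kernel matrix, K[j, i] = p(observe bin j | true bin i), columns sum to 1
    """
    f = np.full(K.shape[1], 1.0 / K.shape[1])      # flat starting density
    for _ in range(n_iter):
        predicted = K @ f                           # current fit to the observations
        ratio = g / np.maximum(predicted, 1e-12)
        f = f * (K.T @ ratio)                       # multiplicative Bayes-type update
        f /= f.sum()                                # keep it a probability vector
    return f

# Illustrative 1-D deblurring: a bimodal density blurred by a Gaussian kernel.
x = np.linspace(-4, 4, 120)
true = np.exp(-0.5 * (x + 1.5) ** 2 / 0.2) + np.exp(-0.5 * (x - 1.5) ** 2 / 0.2)
true /= true.sum()
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.8)
K /= K.sum(axis=0, keepdims=True)
g = K @ true
f_hat = iterative_bayes_deconvolution(g, K)
print("max abs error after deconvolution:", np.abs(f_hat - true).max())
```

The update stays positive by construction and, as in the abstract's description, can be read as repeatedly applying Bayes' rule to reassign the observed mass to the underlying bins.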

15.
Maite, Ana, Manuel. Neurocomputing, 2009, 72(16-18): 3556
A widely accepted magnetic resonance imaging (MRI) model states that the observed voxel intensity is a piecewise constant signal intensity function corresponding to the tissue spatial distribution, corrupted with multiplicative and additive noise. The multiplicative noise is assumed to be a smooth bias field, called the intensity inhomogeneity (IIH) field. Our approach to IIH correction is based on the definition of an energy function that incorporates smoothness constraints into the conventional classification error function of the IIH-corrected image. The IIH field estimation algorithm is a gradient descent of this energy function relative to the IIH field. We call it the adaptive field rule (AFR). We comment on the likeness of our approach to the self-organizing map (SOM) learning rule, on the basis of the neighborhood function that controls the influence of the neighborhood on each voxel's IIH estimation. We discuss the convergence properties of the algorithm. Experimental results show that AFR compares well with state-of-the-art algorithms. Moreover, the mean signal intensity corresponding to each class of tissue can be estimated from the image data by applying the gradient descent of the proposed energy function relative to the intensity class means. We test several variations of this gradient descent approach, which embody diverse assumptions about the available a priori information.

16.
A method is developed to track planar and near-planar objects by incorporating a model of the expected image template distortion, and fitting the sampling region to pre-trained examples with general regression. The approach does not assume a particular form of the underlying space, allows a natural handling of occluding objects, and permits dynamic changes of the scale and size of the sampled region. The implementation of the algorithm runs comfortably at video rate on modest hardware. Research supported by Grants GR/N03266 and GR/S97774 from the UK Engineering and Physical Science Research Council, and by a Mexican CONACYT scholarship to WWM.

17.
We describe a fast, data-driven bandwidth selection procedure for kernel conditional density estimation (KCDE). Specifically, we give a Monte Carlo dual-tree algorithm for efficient, error-controlled approximation of a cross-validated likelihood objective. While exact evaluation of this objective has an unscalable O(n^2) computational cost, our method is practical and shows speedup factors as high as 286,000 when applied to real multivariate datasets containing up to one million points. In absolute terms, computation times are reduced from months to minutes. This enables applications at much greater scale than previously possible. The core idea in our method is to first derive a standard deterministic dual-tree approximation, whose loose deterministic bounds we then replace with tight, probabilistic Monte Carlo bounds. The resulting Monte Carlo dual-tree algorithm exhibits strong error control and high speedup across a broad range of datasets several orders of magnitude greater in size than those reported in previous work. The cost of this high acceleration is the loss of the formal error guarantee of the deterministic dual-tree framework; however, our experiments show that error is still amply controlled by our Monte Carlo algorithm, and the many-order-of-magnitude speedups are worth this sacrifice in the large-data case, where cross-validated bandwidth selection for KCDE would otherwise be impractical.
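The cross-validated likelihood objective that the dual-tree method accelerates can be stated exactly in a few lines. The brute-force O(n^2) reference implementation below is the computation that becomes impractical at scale; the bandwidth grid and synthetic data are assumptions.

```python
import numpy as np

def kcde_loo_log_likelihood(X, Y, hx, hy):
    """Exact leave-one-out log-likelihood for kernel conditional density
    estimation of f(y | x); O(n^2) in the sample size."""
    dx = (X[:, None] - X[None, :]) / hx
    dy = (Y[:, None] - Y[None, :]) / hy
    kx = np.exp(-0.5 * dx ** 2)
    ky = np.exp(-0.5 * dy ** 2) / (np.sqrt(2 * np.pi) * hy)
    np.fill_diagonal(kx, 0.0)                        # leave-one-out
    f_loo = (kx * ky).sum(axis=1) / np.maximum(kx.sum(axis=1), 1e-300)
    return np.log(f_loo + 1e-300).sum()

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, 2000)
Y = X ** 2 + 0.1 * rng.normal(size=2000)
best = max(((hx, hy, kcde_loo_log_likelihood(X, Y, hx, hy))
            for hx in (0.05, 0.1, 0.2) for hy in (0.05, 0.1, 0.2)),
           key=lambda t: t[2])
print("selected (hx, hy):", best[:2])
```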

18.
The plug-in bandwidth selection method in nonparametric kernel hazard estimation is considered, and weak dependence in the sample data is assumed. A general result on the asymptotic optimality of the plug-in bandwidth is presented, which is valid for the hazard function as well as for the density and distribution functions. In a simulation study, this method is compared with the “leave more than one out” cross-validation criterion under dependence. Simulations show that smaller errors and much less sample variability can be achieved, and that a good pilot bandwidth can be selected by means of “leave one out” cross-validation. Finally, an application to an earthquake data set is made.
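As background, a kernel hazard estimate can be formed by Gaussian smoothing of the Nelson-Aalen increments. The sketch below uses a simple normal-reference pilot bandwidth as a stand-in for the plug-in selector studied in the paper, and the uncensored toy data are an assumption.

```python
import numpy as np

def kernel_hazard(t_grid, times, events, h):
    """Kernel-smoothed hazard estimate: Gaussian smoothing of the
    Nelson-Aalen increments d_i / Y(t_i) at the observed event times."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)                    # size of the risk set at each ordered time
    increments = events / at_risk                 # Nelson-Aalen jumps (0 at censored times)
    u = (t_grid[:, None] - times[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * h)
    return (k * increments[None, :]).sum(axis=1)

rng = np.random.default_rng(5)
times = rng.exponential(scale=2.0, size=500)       # true hazard is constant at 0.5
events = np.ones_like(times)                       # no censoring in this toy example
t_grid = np.linspace(0.5, 3.0, 50)

# Normal-reference ("rule-of-thumb") pilot bandwidth, an assumed stand-in
# for the plug-in selection discussed in the paper.
h = 1.06 * times.std(ddof=1) * len(times) ** (-1 / 5)
print(kernel_hazard(t_grid, times, events, h)[:5])   # values should hover around 0.5
```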

19.
A simplified adaptive scheme is suggested for the estimation of the state vector of linear systems driven by white process noise that is added to an unknown deterministic signal. The design approach is based on embedding the Kalman filter (KF) within a simplified adaptive control loop that is driven by the innovation process. The simplified adaptive loop is idle during steady-state phases that involve white driving noise only. However, when the deterministic signal is added to the driving noise signal, the simplified adaptive control loop enhances the KF gains and helps in reducing the resulting transients. The stability of the overall estimation scheme is established under strictly passive conditions of a related system. The suggested method is applied to the target acceleration estimation problem in a Theater Missile Defence scenario.

20.
In both nonparametric density estimation and regression, the so-called boundary effects, i.e. the increase in bias and variance due to one-sided data information, can be quite serious. For estimation performed on transformed variables this problem can easily be amplified and may substantially distort the final estimates, and consequently the conclusions. After a brief review of some existing methods, a new, straightforward and very simple boundary correction is proposed, applying local bandwidth variation at the boundaries. The statistical behaviour is discussed, and the performance for density and regression estimation is studied for small and moderate sample sizes. In a simulation study this method is shown to perform very well. Furthermore, it appears to be excellent for estimating the world income distribution, and Engel curves in economics.
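A generic illustration of the local-bandwidth idea is sketched below: the bandwidth is shrunk as the evaluation point approaches a known boundary at zero, so that little kernel mass spills outside the support. The specific shrinkage rule (at most one third of the distance to the boundary) is an assumption for illustration, not the authors' correction.

```python
import numpy as np

def kde_local_bandwidth(x_grid, data, h, boundary=0.0):
    """Gaussian KDE on [boundary, inf) with the bandwidth shrunk near the
    boundary so that little kernel mass spills past it (illustrative rule)."""
    # Local bandwidth: at most one third of the distance to the boundary,
    # which keeps roughly the whole +/-3 sigma kernel inside the support.
    h_local = np.minimum(h, np.maximum(x_grid - boundary, 1e-6) / 3.0)
    u = (x_grid[:, None] - data[None, :]) / h_local[:, None]
    k = np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * h_local[:, None])
    return k.mean(axis=1)

rng = np.random.default_rng(6)
data = rng.exponential(scale=1.0, size=2000)       # true density has a boundary at 0
x_grid = np.linspace(0.01, 4.0, 200)
h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)

fixed = (np.exp(-0.5 * ((x_grid[:, None] - data[None, :]) / h) ** 2)
         / (np.sqrt(2 * np.pi) * h)).mean(axis=1)
local = kde_local_bandwidth(x_grid, data, h)
print("near 0: fixed-h =", round(fixed[0], 3), " local-h =", round(local[0], 3),
      " true =", round(np.exp(-x_grid[0]), 3))
```

The fixed-bandwidth estimate loses roughly half its mass over the boundary and underestimates the density near zero, while the locally shrunk bandwidth trades some extra variance for a much smaller boundary bias.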
