Similar Articles
7 similar articles found.
1.
In this paper, we consider distributed maximum likelihood estimation (MLE) with dependent quantized data under the assumption that the structure of the joint probability density function (pdf) is known but contains unknown deterministic parameters. The parameters may include distinct vector parameters for the marginal pdfs as well as parameters that describe the dependence of observations across sensors. Since MLE with a single quantizer is sensitive to the choice of thresholds owing to the uncertainty of the pdf, we concentrate on MLE with multiple groups of quantizers (which can be determined using prior information or heuristic approaches) to guard against the risk of a poor or outlier quantizer. The asymptotic efficiency of the MLE scheme with multiple quantizers is proved under some regularity conditions, and the asymptotic variance is derived as the inverse of a weighted linear combination of the Fisher information matrices associated with the different quantizers, which demonstrates the robustness of the approach. As an illustrative example, we consider an estimation problem with a bivariate non-Gaussian pdf that has applications in distributed constant false alarm rate (CFAR) detection systems. Simulations show the robustness of the proposed MLE scheme, especially when the number of quantized measurements is small.
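A minimal sketch of the multiple-quantizer MLE idea (a toy simplification, not the authors' scheme): three sensor groups each report one-bit quantized observations with their own threshold, and a single pooled likelihood is maximized. It assumes i.i.d. scalar Gaussian data with known unit variance, so only the mean is estimated; all names and constants are illustrative.

```python
# Toy sketch: pooled MLE from one-bit quantized data across three
# quantizer groups, assuming i.i.d. N(theta, 1) observations.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 0.7
thresholds = [-1.0, 0.0, 1.5]           # one threshold per quantizer group
n_per_group = 200

# Each group reports only the indicator 1{x > tau} for its own threshold.
bits = [rng.normal(theta_true, 1.0, n_per_group) > tau for tau in thresholds]

def neg_log_lik(theta):
    nll = 0.0
    for tau, b in zip(thresholds, bits):
        p = norm.sf(tau - theta)        # P(x > tau) under N(theta, 1)
        k, n = b.sum(), b.size
        nll -= k * np.log(p) + (n - k) * np.log1p(-p)
    return nll

res = minimize_scalar(neg_log_lik, bounds=(-5, 5), method="bounded")
print("MLE of theta from three quantizer groups:", res.x)
```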

2.
The Cox model is the model of choice when analyzing right-censored and possibly left-truncated survival data. The present paper proposes a program that estimates the hazard function in a proportional hazards model and also handles more complex observation schemes involving general censored and left-truncated data. The hazard function estimator is defined non-parametrically as the function that maximizes a penalized likelihood, and the solution is approximated using splines. The smoothing parameter is chosen by approximate cross-validation, and confidence bands for the estimator are given. As an illustration, the age-specific incidence of dementia is estimated and one of its risk factors is studied.
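A hedged sketch of the penalized-likelihood idea for right-censored data. For brevity it swaps the paper's spline basis for a piecewise-constant hazard with a squared second-difference roughness penalty, and fixes the smoothing weight by hand instead of choosing it by approximate cross-validation.

```python
# Penalized-likelihood hazard estimation, piecewise-constant stand-in
# for the paper's spline approximation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
t_event = rng.exponential(2.0, n)            # true constant hazard 0.5
t_cens = rng.exponential(4.0, n)             # independent censoring
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)    # 1 = event observed

edges = np.linspace(0.0, time.max() + 1e-9, 21)   # 20 hazard bins
width = np.diff(edges)
bin_of = np.searchsorted(edges, time, side="right") - 1

def penalized_nll(log_h, kappa=50.0):
    h = np.exp(log_h)
    cum = np.concatenate([[0.0], np.cumsum(h * width)])   # H at bin edges
    H = cum[bin_of] + h[bin_of] * (time - edges[bin_of])  # H(t_i)
    loglik = np.sum(delta * log_h[bin_of]) - np.sum(H)
    penalty = kappa * np.sum(np.diff(log_h, 2) ** 2)      # roughness
    return -loglik + penalty

res = minimize(penalized_nll, np.zeros(20), method="L-BFGS-B")
print("estimated hazard per bin:", np.exp(res.x).round(3))
```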

3.
In this paper we derive an explicit expression for the log-likelihood function of a continuous-time autoregressive model. Using earlier results relating the autoregressive coefficients to a set of positive parameters called residual variance ratios, we then develop an iterative algorithm for computing the maximum likelihood estimator of the model, similar to the one used in the discrete-time case. A simple noniterative estimation method, which can be used to produce an initial estimate for the algorithm, is also proposed.
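As a hedged illustration, the simplest continuous-time AR model, CAR(1) (an Ornstein–Uhlenbeck process), admits a noniterative starting estimate of exactly the kind the abstract mentions: under regular sampling the exact discretization is AR(1) with coefficient exp(-alpha*dt), so a moment estimate of the lag-one autocorrelation yields an initial alpha. The sketch below is a toy version; the paper treats general CAR(p) models.

```python
# Noniterative initial estimate for a CAR(1) model from regular samples.
import numpy as np

rng = np.random.default_rng(2)
alpha_true, dt, n = 1.5, 0.1, 5000
phi = np.exp(-alpha_true * dt)                      # exact AR(1) coefficient
sd = np.sqrt((1.0 - phi**2) / (2.0 * alpha_true))   # exact innovation sd
x = np.zeros(n)
for t in range(1, n):                               # simulate sampled OU path
    x[t] = phi * x[t - 1] + rng.normal(0.0, sd)

phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
alpha_hat = -np.log(phi_hat) / dt                   # invert the discretization
print(f"initial estimate alpha = {alpha_hat:.3f} (true {alpha_true})")
```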

4.
This paper considers the identification of Hammerstein finite impulse response moving average (FIR-MA) systems using the maximum likelihood principle and a stochastic gradient method based on the key term separation technique. To improve the convergence rate, a maximum likelihood multi-innovation stochastic gradient algorithm is presented. Simulation results show that the proposed algorithms can effectively estimate the parameters of Hammerstein FIR-MA systems.
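A hedged toy sketch of stochastic-gradient identification with key term separation (a simplification; the paper's algorithm also handles the MA noise and uses multi-innovation updates). The leading FIR tap is fixed at 1 so the nonlinearity coefficients enter the key term linearly and the parameters are identifiable:

```python
# Toy Hammerstein FIR identification: y(t) = f(u(t)) + b1 f(u(t-1))
# + b2 f(u(t-2)) + v(t), with f(u) = c1*u + c2*u^2 and leading tap 1.
import numpy as np

rng = np.random.default_rng(3)
c_true, b_true = np.array([0.8, 0.5]), np.array([0.6, -0.3])
N = 20000
u = rng.uniform(-1, 1, N)

def f(uu, c):                                 # Hammerstein nonlinearity
    return c[0] * uu + c[1] * uu**2

def psi(uu):                                  # basis of f, for gradients
    return np.array([uu, uu**2])

y = np.array([f(u[t], c_true) + b_true[0] * f(u[t-1], c_true)
              + b_true[1] * f(u[t-2], c_true) + 0.05 * rng.normal()
              for t in range(2, N)])

c, b, mu = np.array([1.0, 0.0]), np.zeros(2), 0.02
for t in range(2, N):                         # stochastic gradient pass
    yhat = f(u[t], c) + b[0] * f(u[t-1], c) + b[1] * f(u[t-2], c)
    e = y[t - 2] - yhat                       # prediction error
    c = c + mu * e * (psi(u[t]) + b[0] * psi(u[t-1]) + b[1] * psi(u[t-2]))
    b = b + mu * e * np.array([f(u[t-1], c), f(u[t-2], c)])
print("c =", c.round(3), " b =", b.round(3))
```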

5.
In this paper, we propose a novel model to restore an image corrupted by blur and Cauchy noise. The model is composed of a data fidelity term and two regularization terms: total variation and high-order total variation. Total variation preserves edges well but suffers from staircase effects in smooth regions, whereas high-order total variation alleviates staircase effects. Moreover, we introduce a strategy for adaptively selecting the regularization parameters. We develop an efficient alternating minimization algorithm for solving the proposed model. Numerical examples suggest that the proposed method better preserves edges while reducing staircase effects.
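A hedged one-dimensional sketch of the variational idea (not the paper's alternating minimization solver): minimize a Cauchy-noise log fidelity plus first- and second-order total variation, here with a smoothed absolute value, plain gradient descent, and a median-filter initialization to tame outliers:

```python
# 1-D Cauchy-noise restoration with TV + high-order TV regularization.
import numpy as np

rng = np.random.default_rng(4)
n = 200
clean = np.concatenate([np.zeros(100), np.ones(100)])   # step "image"
gamma = 0.2
y = clean + gamma * rng.standard_cauchy(n)              # Cauchy noise

eps, lam1, lam2, step = 1e-2, 0.5, 0.1, 0.01

def grad(x):
    r = x - y
    g_fit = 2.0 * r / (gamma**2 + r**2)       # d/dx sum log(g^2 + r^2)
    d1, d2 = np.diff(x), np.diff(x, 2)
    w1 = d1 / np.sqrt(d1**2 + eps)            # smoothed sign of d1
    w2 = d2 / np.sqrt(d2**2 + eps)
    g_tv, g_tv2 = np.zeros(n), np.zeros(n)
    g_tv[:-1] -= w1; g_tv[1:] += w1                          # TV term
    g_tv2[:-2] += w2; g_tv2[1:-1] -= 2*w2; g_tv2[2:] += w2   # high-order TV
    return g_fit + lam1 * g_tv + lam2 * g_tv2

x = np.median(np.stack([y, np.roll(y, 1), np.roll(y, -1)]), axis=0)
for _ in range(3000):                         # plain gradient descent
    x = x - step * grad(x)
print("restoration RMSE:", np.sqrt(np.mean((x - clean)**2)).round(3))
```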

6.
When analysing the movements of an animal, a common task is to generate a continuous probability density surface that characterises the spatial distribution of its locations, termed a home range. Traditional kernel density estimation (KDE), the Brownian bridge kernel method, and time-geographic density estimation are all commonly used for this purpose, although their applicability in some practical situations is limited. Other studies have argued that KDE is inappropriate for analysing moving objects, while the latter two methods are only suitable for tracking data collected at intervals frequent enough that an object’s movement pattern can be adequately represented by a space–time path connecting consecutive points. This research formulates and evaluates KDE using generalised movement trajectories approximated by Delaunay triangulation (KDE-DT) as a method for analysing infrequently sampled animal tracking data. In this approach, a DT is constructed from a point pattern of tracking data in order to approximate the network of movement trajectories for an animal. This network represents the generalised movement patterns of an animal rather than its specific, individual trajectories between locations. Kernel density estimates are then calculated with distances measured using that network. First, this paper describes the method and applies it to generate a probability density surface for a Florida panther from radio-tracking data collected three times per week. Second, the performance of the technique is evaluated in the context of delineating wildlife home ranges and core areas from simulated animal locational data. The results of the simulations suggest that KDE-DT produces more accurate home range estimates than traditional KDE, which was evaluated with the same data in a previous study. In addition to animal home range analysis, the technique may be useful for characterising a variety of spatial point patterns generated by objects that move through continuous space, such as pedestrians or ships.
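A hedged toy version of the KDE-DT construction: build the Delaunay triangulation of the telemetry fixes, treat its edges (weighted by Euclidean length) as the generalised movement network, compute shortest-path distances along it, and kernel-smooth with those network distances instead of straight-line ones. Everything below is illustrative:

```python
# Network-distance KDE over a Delaunay triangulation of tracking fixes.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(5)
pts = rng.normal(size=(60, 2))                 # telemetry fixes
tri = Delaunay(pts)

# Graph whose edges are the triangulation edges, weighted by length.
n = len(pts)
W = lil_matrix((n, n))
for simplex in tri.simplices:
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        a, b = simplex[i], simplex[j]
        W[a, b] = W[b, a] = np.linalg.norm(pts[a] - pts[b])

D = dijkstra(W.tocsr(), directed=False)        # network distances

h = 1.0                                        # kernel bandwidth
density_at_fixes = np.exp(-0.5 * (D / h) ** 2).sum(axis=1) / n
print("relative density at first five fixes:", density_at_fixes[:5].round(3))
```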

7.
In many data stream mining applications, traditional density estimation methods such as kernel density estimation and reduced set density estimation cannot be applied to data streams because of their high computational burden, processing time, and memory requirements. To reduce the time and space complexity, this paper proposes a novel density estimation method for data streams, Dm-KDE, built on the proposed m-KDE algorithm, which designs a KDE estimator with a fixed number of kernel components for a dataset. In this method, Dm-KDE sequence entries are created by m-KDE instead of keeping all the kernels produced by other density estimation methods. To further reduce storage, Dm-KDE sequence entries can be merged according to their KL divergences. Finally, the probability density function over an arbitrary time window, or over the entire stream, can be estimated from the resulting model. In contrast to the state-of-the-art algorithm SOMKE, the distinctive advantage of Dm-KDE is that it achieves the same accuracy with a much smaller fixed number of kernel components, making it suitable for scenarios that demand fast on-line kernel density estimation over data streams. We compare Dm-KDE with SOMKE and M-kernel in terms of density estimation accuracy and running time on various stationary datasets, and we also apply Dm-KDE to evolving data streams. Experimental results illustrate the effectiveness of the proposed method.
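A hedged sketch in the spirit of a fixed-budget streaming KDE: keep at most K weighted Gaussian kernels, and whenever the budget is exceeded merge the pair with the smallest symmetrised KL divergence by moment matching. The merge rule and all names are stand-ins, not the paper's Dm-KDE procedure:

```python
# Fixed-budget streaming KDE with KL-guided merging of Gaussian kernels.
import numpy as np

K, h2 = 8, 0.05                 # kernel budget and initial kernel variance
w, mu, var = [], [], []         # weights, means, variances

def kl(m1, v1, m2, v2):         # KL(N1 || N2), 1-D closed form
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2)**2) / v2 - 1.0)

def insert(x):
    w.append(1.0); mu.append(x); var.append(h2)
    if len(w) > K:              # merge the most similar pair
        best, bi, bj = np.inf, 0, 1
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                d = (kl(mu[i], var[i], mu[j], var[j])
                     + kl(mu[j], var[j], mu[i], var[i]))
                if d < best:
                    best, bi, bj = d, i, j
        wi, wj = w[bi], w[bj]
        m = (wi * mu[bi] + wj * mu[bj]) / (wi + wj)      # moment matching
        v = (wi * (var[bi] + (mu[bi] - m)**2)
             + wj * (var[bj] + (mu[bj] - m)**2)) / (wi + wj)
        w[bi], mu[bi], var[bi] = wi + wj, m, v
        del w[bj], mu[bj], var[bj]

rng = np.random.default_rng(6)
for x in rng.normal(0, 1, 2000):        # the "stream"
    insert(x)

def pdf(x):                             # mixture estimate after the stream
    ws = np.array(w) / sum(w)
    return sum(wk * np.exp(-(x - m)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
               for wk, m, v in zip(ws, mu, var))
print("estimated density at 0:", round(pdf(0.0), 3))
```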
