Similar Articles
 20 similar articles found (search time: 9 ms)
1.
A useful class of partially nonstationary vector autoregressive moving average (VARMA) models is considered with regard to parameter estimation. An exact maximum likelihood (EML) approach is developed on the basis of a simple transformation applied to the error-correction representation of the models considered. The employed transformation is shown to provide a standard VARMA model with the important property that it is stationary. Parameter estimation can thus be carried out by applying standard EML methods to the stationary VARMA model obtained from the error-correction representation. This approach resolves at least two problems related to the current limited availability of EML estimation methods for partially nonstationary VARMA models. First, it resolves the apparent impossibility of computing the exact log-likelihood for such models using currently available methods. Second, it resolves the inadequacy of considering lagged endogenous variables as exogenous variables in the error-correction representation. Theoretical discussion is followed by an example using a popular data set. The example illustrates the feasibility of the EML estimation approach as well as some of its potential benefits in commonly encountered cases of practical interest. As in the case of stationary models, the proposed EML method provides estimated model structures that are more reliable and accurate than results produced by conditional methods.

2.
In fault diagnosis, intermittent failure models are an important tool for dealing adequately with realistic failure behavior. Current model-based diagnosis approaches account for the fact that a component cj may fail intermittently by introducing a parameter gj that expresses the probability that the component exhibits correct behavior. This component parameter gj, in conjunction with the a priori fault probability, is used in a Bayesian framework to compute the posterior fault candidate probabilities. Usually, information on gj is not known a priori. While proper estimation of gj can be critical to diagnostic accuracy, at present, only approximations have been proposed. We present a novel framework, coined Barinel, that computes estimations of the gj as an integral part of the posterior candidate probability computation using a maximum likelihood estimation approach. Barinel's diagnostic performance is evaluated on synthetic systems, on the Siemens software diagnosis benchmark, and on real-world programs. Our results show that our approach is superior to reasoning approaches based on classical persistent failure models, as well as to previously proposed intermittent failure models.

3.
Quasi-maximum likelihood estimation of the parameters of the level-threshold cointegration model
A level-threshold cointegration model is proposed for modelling nonstationary, nonlinear time series, and a quasi-maximum likelihood estimator of its parameters is given. Because the likelihood function is neither differentiable nor smooth in the threshold parameters and the cointegrating vector, classical maximum likelihood estimation cannot be applied directly. A genetic simulated annealing algorithm is therefore used first to estimate the threshold parameters and the cointegrating vector, after which the remaining parameters are computed by maximum likelihood. Simulation results show that the quasi-maximum likelihood estimator is effective and feasible and is not limited by the dimension of the model. In addition, a comparative analysis of the numerical results shows that genetic simulated annealing outperforms the classical genetic algorithm, simulated annealing, and random search.
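The annealing step can be illustrated with plain simulated annealing on a toy objective. This is a minimal sketch of the idea only, minimizing a smooth stand-in for the non-differentiable threshold likelihood; the function name and all tuning constants are illustrative, not the paper's genetic simulated annealing algorithm:

```python
import math
import random

def anneal(f, x0, steps=4000, step=0.5, t0=1.0, cooling=0.995, seed=1):
    """Minimise f by simulated annealing, tracking the best point seen."""
    rng = random.Random(seed)
    x, t = x0, t0
    best_x, best_f = x0, f(x0)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step)
        delta = f(cand) - f(x)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < best_f:
            best_x, best_f = x, f(x)
        t *= cooling  # geometric cooling schedule
    return best_x

# Toy objective standing in for the non-smooth threshold likelihood.
threshold_hat = anneal(lambda x: (x - 3.0) ** 2, x0=0.0)
```

In the paper's setting the objective would be the negative quasi-likelihood evaluated at candidate threshold parameters and cointegrating vectors, with the remaining parameters concentrated out by maximum likelihood.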

4.
Dirichlet distributions are natural choices to analyse data described by frequencies or proportions since they are the simplest known distributions for such data apart from the uniform distribution. They are often used whenever proportions are involved, for example, in text-mining, image analysis, biology or as a prior of a multinomial distribution in Bayesian statistics. As the Dirichlet distribution belongs to the exponential family, its parameters can be easily inferred by maximum likelihood. Parameter estimation is usually performed with the Newton-Raphson algorithm after an initialisation step using either the method of moments or Ronning's method. However, this initialisation can result in parameters that lie outside the admissible region. A simple and very efficient alternative based on a maximum likelihood approximation is presented. The advantages of the presented method compared to two other methods are demonstrated on synthetic data sets as well as for a practical biological problem: the clustering of protein sequences based on their amino acid compositions.
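The moments initialisation mentioned above is straightforward to sketch. Assuming Dirichlet samples generated as normalised Gamma variates, the concentration parameter is recovered from the mean and variance of the first coordinate; all names and constants here are illustrative:

```python
import random

def sample_dirichlet(alpha, n, seed=0):
    """Draw n Dirichlet(alpha) vectors via normalised Gamma variates."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        g = [rng.gammavariate(a, 1.0) for a in alpha]
        s = sum(g)
        out.append([v / s for v in g])
    return out

def dirichlet_moments_init(samples):
    """Method-of-moments starting values for the Dirichlet parameters:
    Var(X_1) = m1*(1-m1)/(a0+1) gives the concentration a0."""
    n = len(samples)
    d = len(samples[0])
    means = [sum(x[i] for x in samples) / n for i in range(d)]
    v1 = sum((x[0] - means[0]) ** 2 for x in samples) / (n - 1)
    a0 = means[0] * (1.0 - means[0]) / v1 - 1.0
    return [m * a0 for m in means]

alpha_hat = dirichlet_moments_init(sample_dirichlet([2.0, 3.0, 5.0], 20000))
```

With enough samples the starting values land close to the true parameters; with few samples (or near the boundary of the simplex) the estimates can fall outside the admissible region, which is the failure mode the abstract refers to.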

5.
Determining the intrinsic dimension of high-dimensional data is the key to successful dimension reduction. The maximum likelihood estimator (MLE) of intrinsic dimension is a relatively recent method that is simple to implement and gives good results when a suitable number of nearest neighbours is chosen, but it exhibits a clear bias when the neighbourhood size is too small or too large. The root cause is that it ignores the different contribution each point makes to the intrinsic dimension. By fully exploiting the distributional information in the data set, an improved MLE, the adaptive maximum likelihood estimator (AMLE), is proposed. Experiments show that, on both synthetic and real data sets, AMLE substantially improves on MLE in estimation accuracy and is far less sensitive to the choice of the number of nearest neighbours.
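The non-adaptive baseline that the abstract builds on is the Levina-Bickel MLE of intrinsic dimension, which can be sketched as follows; the brute-force neighbour search and the parameter choices are illustrative only:

```python
import math
import random

def mle_intrinsic_dimension(points, k=10):
    """Levina-Bickel MLE of intrinsic dimension, averaged over all points.
    For each point, T_j is the distance to its j-th nearest neighbour."""
    n = len(points)
    estimates = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        t = dists[:k]  # distances to the k nearest neighbours
        m = sum(math.log(t[k - 1] / t[j]) for j in range(k - 1)) / (k - 1)
        estimates.append(1.0 / m)
    return sum(estimates) / n

rng = random.Random(0)
# A 2-D manifold: uniform points in the unit square.
data = [(rng.random(), rng.random()) for _ in range(400)]
dim_hat = mle_intrinsic_dimension(data)
```

On this 2-D set the estimate lands near 2; the bias the abstract criticises appears when k is pushed very small or very large, which is what the proposed AMLE weighting is meant to fix.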

6.
The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is robust to both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation study shows the efficiency of the proposed algorithms.
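The weighting idea behind WMLE can be illustrated on the simplest possible case: a location estimate made robust to an additive outlier by iteratively reweighted least squares with Huber weights. This is a generic sketch of down-weighting large residuals, not the paper's state-space WMLE; the tuning constant 1.345 is the usual Huber default:

```python
def huber_weighted_mean(xs, c=1.345, iters=50):
    """Weighted-ML location estimate: IRLS with Huber weights
    w = min(1, c/|residual|), starting from the median."""
    mu = sorted(xs)[len(xs) // 2]
    for _ in range(iters):
        w = [1.0 if abs(x - mu) <= c else c / abs(x - mu) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [0.0, 0.1, -0.1, 0.2, 10.0]   # one additive outlier
robust_mu = huber_weighted_mean(data)
naive_mu = sum(data) / len(data)
```

The outlier drags the plain mean above 2, while the Huber-weighted estimate stays near the bulk of the data, which is exactly the effect data-driven weights buy in the WMLE.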

7.
In this paper, we consider the distributed maximum likelihood estimation (MLE) with dependent quantized data under the assumption that the structure of the joint probability density function (pdf) is known, but it contains unknown deterministic parameters. The parameters may include different vector parameters corresponding to marginal pdfs and parameters that describe the dependence of observations across sensors. Since MLE with a single quantizer is sensitive to the choice of thresholds due to the uncertainty of the pdf, we concentrate on MLE with multiple groups of quantizers (which can be determined by the use of prior information or some heuristic approaches) to guard against the risk of a poor/outlier quantizer. The asymptotic efficiency of the MLE scheme with multiple quantizers is proved under some regularity conditions, and the asymptotic variance is derived to be the inverse of a weighted linear combination of Fisher information matrices based on multiple different quantizers, which can be used to show the robustness of our approach. As an illustrative example, we consider an estimation problem with a bivariate non-Gaussian pdf that has applications in distributed constant false alarm rate (CFAR) detection systems. Simulations show the robustness of the proposed MLE scheme, especially when the number of quantized measurements is small.

8.
Maximum likelihood estimation has a rich history. It has been successfully applied to many problems including dynamical system identification. Different approaches have been proposed in the time and frequency domains. In this paper we discuss the relationship between these approaches and we establish conditions under which the different formulations are equivalent for finite length data. A key point in this context is how initial (and final) conditions are considered and how they are introduced in the likelihood function.

9.
When performing block-matching based motion estimation with the ML estimator, one tries to match blocks from the two images within a predefined search area. The estimated motion vector is that which maximizes a likelihood function, formulated according to the image formation model. Two new maximum likelihood motion estimation schemes for ultrasound images are presented. The new likelihood functions are based on the assumption that both images are contaminated by a Rayleigh distributed multiplicative noise. The new approach enables motion estimation in cases where a noiseless reference image is not available. Experimental results show a motion estimation improvement relative to other known ML estimation methods.
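For contrast with the Rayleigh model, under additive Gaussian noise the ML block-matching criterion reduces to minimising the sum of squared differences (SSD) over the search area. A minimal sketch with a synthetic frame pair, all names and sizes illustrative:

```python
import random

def ssd_block_match(prev, curr, top, left, size, radius):
    """Exhaustive block matching: the ML motion vector under additive
    Gaussian noise minimises the SSD between the reference block and
    every displaced block within the search radius."""
    best, best_ssd = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ssd = sum(
                (prev[top + i][left + j] - curr[top + dy + i][left + dx + j]) ** 2
                for i in range(size) for j in range(size)
            )
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

rng = random.Random(42)
img1 = [[rng.randrange(256) for _ in range(20)] for _ in range(20)]
# Second frame: the whole scene moved down 2 pixels and right 3 pixels.
img2 = [[img1[(y - 2) % 20][(x - 3) % 20] for x in range(20)] for y in range(20)]
motion = ssd_block_match(img1, img2, top=8, left=8, size=6, radius=4)
```

The paper replaces the SSD criterion with a likelihood built from the Rayleigh multiplicative noise model, so that matching also works when neither frame is a noiseless reference.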

10.
A numerical maximum likelihood (ML) estimation procedure is developed for the constrained parameters of multinomial distributions. The main difficulty involved in computing the likelihood function is the precise and fast determination of the multinomial coefficients. To this end, the coefficients are rewritten as a telescopic product. The presented method is applied to the ML estimation of the Zipf-Mandelbrot (ZM) distribution, which provides a true model in many real-life cases. The examples discussed arise from ecological and medical observations. Based on the estimates, the hypothesis that the data is ZM distributed is tested using a chi-square test. The computer code of the presented procedure is available from the author on request.
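The telescopic rewriting of the multinomial coefficient can be sketched directly, alongside a lgamma-based log version that avoids overflow for large counts; the function names are illustrative:

```python
import math

def multinomial_telescopic(counts):
    """Multinomial coefficient as a telescopic product of binomials:
    C(n; k1..km) = C(k1, k1) * C(k1+k2, k2) * ... * C(n, km)."""
    coeff, total = 1, 0
    for k in counts:
        total += k
        coeff *= math.comb(total, k)
    return coeff

def log_multinomial(counts):
    """Numerically stable log multinomial coefficient via lgamma:
    log n! - sum log k_i!."""
    n = sum(counts)
    return math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in counts)
```

For example, the counts (2, 3, 5) give 10!/(2!·3!·5!) = 2520 under either route; the telescopic form stays in exact integer arithmetic, while the log form is what a likelihood evaluation would actually use.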

11.
For identifying errors-in-variables models, two approaches are the time domain maximum likelihood (TML) method and the sample maximum likelihood (SML) method. Both methods give optimal estimation accuracy, but under different assumptions. In the TML method, an important assumption is that the noise-free input signal is modelled as a stationary process with rational spectrum. For SML, the noise-free input needs to be periodic. It is interesting to know which of these assumptions contains more information to boost the estimation performance. In this paper, the estimation accuracy of the two methods is analyzed statistically for both errors-in-variables (EIV) and output error models (OEM). Numerical comparisons between the two estimates are also made under different signal-to-noise ratios (SNRs). The results suggest that TML and SML have similar estimation accuracy at moderate or high SNR for EIV. For OEM identification, the two methods have the same accuracy at any SNR.

12.
包健  刘然 《计算机应用》2012,32(3):661-664
To address the weak generalisation ability of the structurally simple M-ary support vector machine (SVM) multi-class classification algorithm, an improved M-ary SVM algorithm combined with error-correcting coding theory is proposed. First, the original class labels are encoded as information bits; then, using error-correcting coding theory and the desired error-correcting capability, a code that is optimal to a certain degree is generated and used as the basis for training the classifiers; finally, misclassified bits in the output code at recognition time are corrected using error detection and correction. Experimental results show that the improved algorithm strengthens the performance of the standard M-ary SVM multi-class classifier while introducing as few redundant sub-classifiers as possible.
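The encode-then-correct idea can be sketched with a tiny hand-made codebook of minimum Hamming distance 3, so nearest-codeword decoding repairs any single sub-classifier error; the codebook and names are illustrative, not the paper's optimised code:

```python
# Four classes, six-bit codewords, minimum pairwise Hamming distance 3.
CODEBOOK = {
    0: (0, 0, 0, 0, 0, 0),
    1: (1, 1, 1, 0, 0, 0),
    2: (0, 0, 0, 1, 1, 1),
    3: (1, 1, 1, 1, 1, 1),
}

def decode(output_bits):
    """Assign the class whose codeword is nearest in Hamming distance.
    With minimum distance 3, any single flipped bit is corrected."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], output_bits))

# One binary sub-classifier mislabels a bit of class 1's codeword (1,1,1,0,0,0).
predicted = decode((1, 0, 1, 0, 0, 0))
```

In the full algorithm each bit position is produced by one binary SVM, so decoding trades a small number of redundant sub-classifiers for tolerance to their individual mistakes.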

13.
A central issue in dimension reduction is choosing a sensible number of dimensions to be retained. This work demonstrates the surprising result of the asymptotic consistency of the maximum likelihood criterion for determining the intrinsic dimension of a dataset in an isotropic version of probabilistic principal component analysis (PPCA). Numerical experiments on simulated and real datasets show that the maximum likelihood criterion can actually be used in practice and outperforms existing intrinsic dimension selection criteria in various situations. The paper also outlines the limits of the maximum likelihood criterion, leading to a recommendation of the AIC criterion in specific situations. A useful application of this work would be the automatic selection of intrinsic dimensions in mixtures of isotropic PPCA for classification.

14.
The identification of the spatially dependent parameters in Partial Differential Equations (PDEs) is important in both physics and control problems. A methodology is presented to identify spatially dependent parameters from spatio-temporal measurements. Local non-rational transfer functions are derived based on three local measurements allowing for a local estimate of the parameters. A sample Maximum Likelihood Estimator (SMLE) in the frequency domain is used, because it takes noise properties into account and allows for high accuracy consistent parameter estimation. Confidence bounds on the parameters are estimated based on the noise properties of the measurements. This method is successfully applied to the simulations of a finite difference model of a parabolic PDE with piecewise constant parameters.

15.
Recently, a nonparametric marginal structural model (NPMSM) approach to Causal Inference has been proposed [Neugebauer, R., van der Laan, M., 2006. Nonparametric causal effects based on marginal structural models. J. Statist. Plann. Inference (in press), 〈http://www.sciencedirect.com/science/journal/03783758〉] as an appealing practical alternative to the original parametric MSM (PMSM) approach introduced by Robins [Robins, J., 1998a. Marginal structural models. In: 1997 Proceedings of the American Statistical Association, American Statistical Association, Alexandria, VA, pp. 1-10]. The new MSM-based causal inference methodology generalizes the concept of causal effects: the proposed nonparametric causal effects are interpreted as summary measures of the causal effects defined with PMSMs. In addition, causal inference with NPMSM does not rely on the assumed correct specification of a parametric MSM but instead defines causal effects based on a user-specified working causal model which can be willingly misspecified. The NPMSM approach was developed for studies with point treatment data or with longitudinal data where the outcome is not time-dependent (typically collected at the end of data collection). In this paper, we generalize this approach to longitudinal studies where the outcome is time-dependent, i.e. collected throughout the span of the studies, and address the subsequent estimation inconsistency which could easily arise from a hasty generalization of the algorithm for maximum likelihood estimation. More generally, we provide an overview of the multiple causal effect representations which have been developed based on MSMs in longitudinal studies.

16.
This paper studies the linear dynamic errors-in-variables problem for filtered white noise excitations. First, a frequency domain Gaussian maximum likelihood (ML) estimator is constructed that can handle discrete-time as well as continuous-time models on (a) part(s) of the unit circle or imaginary axis. Next, the ML estimates are calculated via a computationally simple and numerically stable Gauss-Newton minimization scheme. Finally, the Cramér-Rao lower bound is derived.

17.
A new likelihood based AR approximation is given for ARMA models. The usual algorithms for the computation of the likelihood of an ARMA model require O(n) flops per function evaluation. Using our new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high level quantitative programming environments (QPEs) such as Mathematica, MatLab and R. In order to obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine efficient language. With our algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of your choice. The new algorithm is extended to obtain the MLE for the mean parameter. Simulation experiments which illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available [McLeod, A.I., Zhang, Y., 2007. Online supplements to “Faster ARMA Maximum Likelihood Estimation”, 〈http://www.stats.uwo.ca/faculty/aim/2007/faster/〉]. Based on these package implementations, it is expected that the interested researcher would be able to implement this algorithm in other QPEs.

18.
To further improve the measurement accuracy of the linear time-grating displacement sensor, a combined calibration method is proposed on the basis of a full error model covering the sensor's periodic error, Abbe error, and thermal expansion error. The individual errors of the linear time grating are corrected using Fourier harmonic analysis and the principle of linear thermal expansion, after which the accuracy reaches ±0.5 μm/m. Experiments show that the method solves the problem of separating the errors in linear measurement as well as the problem of continuous automatic computer sampling, and improves calibration efficiency, making wide use of the method in production practice feasible.
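The Fourier harmonic analysis step can be sketched for a synthetic periodic error: with uniform sampling over one period, the harmonic coefficients follow from discrete orthogonality, and subtracting the fitted harmonics removes the periodic component. All signal parameters here are illustrative, not the sensor's actual error spectrum:

```python
import math

def harmonic_coefficients(errors, max_harmonic):
    """Fourier coefficients of a periodic error sampled uniformly over one
    period: a_k (cosine) and b_k (sine) for k = 1..max_harmonic."""
    n = len(errors)
    coeffs = {}
    for k in range(1, max_harmonic + 1):
        a = 2.0 / n * sum(e * math.cos(k * 2 * math.pi * i / n)
                          for i, e in enumerate(errors))
        b = 2.0 / n * sum(e * math.sin(k * 2 * math.pi * i / n)
                          for i, e in enumerate(errors))
        coeffs[k] = (a, b)
    return coeffs

# Synthetic periodic error: first-harmonic sine plus second-harmonic cosine.
N = 64
theta = [2 * math.pi * i / N for i in range(N)]
err = [0.8 * math.sin(t) + 0.3 * math.cos(2 * t) for t in theta]
coeffs = harmonic_coefficients(err, max_harmonic=2)
# Compensated residual after subtracting the fitted harmonics.
residual = [
    e - sum(a * math.cos(k * t) + b * math.sin(k * t)
            for k, (a, b) in coeffs.items())
    for e, t in zip(err, theta)
]
```

Because the sampling is uniform over a full period, the recovered coefficients are exact up to floating-point error, and the compensated residual vanishes; on real sensor data, the residual would contain the non-periodic error terms (Abbe, thermal) handled by the other parts of the calibration.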

19.
In this paper we derive an explicit expression for the log likelihood function of a continuous-time autoregressive model. Then, using earlier results relating the autoregressive coefficients to the set of positive parameters called residual variances ratios, we develop an iterative algorithm for computing the maximum likelihood estimator of the model, similar to the one in the discrete-time case. A simple noniterative estimation method, which can be used to produce an initial estimate for the algorithm, is also proposed.

20.
In this paper we consider the beta regression model recently proposed by Ferrari and Cribari-Neto [2004. Beta regression for modeling rates and proportions. J. Appl. Statist. 31, 799-815], which is tailored to situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. We derive the second order biases of the maximum likelihood estimators and use them to define bias-adjusted estimators. As an alternative to the two analytically bias-corrected estimators discussed, we consider a bias correction mechanism based on the parametric bootstrap. The numerical evidence favors the bootstrap-based estimator and also one of the analytically corrected estimators. Several different strategies for interval estimation are also proposed. We present an empirical application.

