Similar Documents (20 results)
1.
A simulation study is performed to investigate the robustness of the maximum likelihood estimator of fixed effects from a linear mixed model when the error distribution is misspecified. Inference for the fixed effects under the assumption of independent, normally distributed errors with constant variance is shown to be robust when the errors are either non-Gaussian or heteroscedastic, except when the error variance depends on a covariate that enters the model in interaction with time. Inference is impaired when the errors are correlated. In the latter case, the model including a random slope in addition to the random intercept is more robust than the random-intercept model. The use of Cholesky residuals and conditional residuals to evaluate the fit of a linear mixed model is also discussed.
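The robustness claim above can be illustrated with a small Monte Carlo sketch: even when the error standard deviation grows with the covariate (heteroscedasticity), the ordinary least-squares slope remains an essentially unbiased estimate of the fixed effect. The settings below (sample size, noise pattern, true slope) are hypothetical and much simpler than the paper's mixed-model design.

```python
import random


def simulate_slope(n_reps=500, n=50, beta=2.0, seed=1):
    """Monte Carlo average of the OLS slope estimate when the error
    standard deviation increases with the covariate x, illustrating
    robustness of the fixed-effect point estimate to heteroscedasticity.
    Hypothetical settings, not the paper's simulation design."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_reps):
        x = [i / n for i in range(n)]
        # error s.d. grows with x: a simple heteroscedastic pattern
        y = [beta * xi + rng.gauss(0.0, 0.5 + xi) for xi in x]
        xbar = sum(x) / n
        ybar = sum(y) / n
        sxx = sum((xi - xbar) ** 2 for xi in x)
        sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
        estimates.append(sxy / sxx)
    return sum(estimates) / n_reps
```

Averaged over replications, the estimate stays close to the true slope despite the misspecified (non-constant) error variance; what heteroscedasticity breaks is the naive standard error, not the point estimate.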

2.
The use of residuals for detecting departures from the assumptions of the linear model with full-rank covariance, whether the design matrix is full rank or not, has long been recognized as an important diagnostic tool. Once it became feasible to compute different kinds of residual in a straightforward way, various methods focused on their underlying properties and their effectiveness. Recursive residuals are attractive in econometric applications, where there is a natural ordering of the observations through time. New formulations of the recursive residuals for models having uncorrelated errors with equal variances are given in terms of the observation vector or the usual least-squares residuals; these do not require the computation of least-squares parameter estimates, and their transformation matrices are expressed wholly in terms of the rows of the Theil Z matrix. Illustrations of these new formulations are given.
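For reference, the standardized recursive residuals themselves can be computed directly from successive least-squares fits: w_t = (y_t - x_t' b_{t-1}) / sqrt(1 + x_t' (X_{t-1}' X_{t-1})^{-1} x_t). The sketch below is this didactic direct form, not the Theil-Z-matrix formulation the abstract describes; it also demonstrates a classical identity, that the squared recursive residuals sum to the residual sum of squares of the full-sample fit.

```python
import numpy as np


def recursive_residuals(X, y):
    """Standardized recursive residuals computed naively by refitting
    least squares on each growing sample prefix (a didactic sketch,
    not the Theil-Z formulation of the abstract)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, k = X.shape
    w = []
    for t in range(k, n):
        Xt, yt = X[:t], y[:t]
        XtX_inv = np.linalg.inv(Xt.T @ Xt)
        b = XtX_inv @ Xt.T @ yt          # LS fit on first t observations
        x_new = X[t]
        num = y[t] - x_new @ b           # one-step prediction error
        den = np.sqrt(1.0 + x_new @ XtX_inv @ x_new)
        w.append(num / den)
    return np.array(w)
```

A useful correctness check: for any full-rank design, sum(w**2) equals the residual sum of squares of the least-squares fit on all n observations.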

3.
Portmanteau test statistics represent useful diagnostic tools for checking the adequacy of multivariate time series models. For stationary and partially non-stationary vector time series models, Duchesne and Roy [Duchesne, P., Roy, R., 2004. On consistent testing for serial correlation of unknown form in vector time series models. Journal of Multivariate Analysis 89, 148-180] and Duchesne [Duchesne, P., 2005a. Testing for serial correlation of unknown form in cointegrated time series models. Annals of the Institute of Statistical Mathematics 57, 575-595] have proposed kernel-based test statistics, obtained by comparing the spectral density of the errors under the null hypothesis of non-correlation with a kernel-based spectral density estimator; these test statistics are asymptotically standard normal under the null hypothesis of non-correlation in the error term of the model. Following the method of Chen and Deo [Chen, W.W., Deo, R.S., 2004a. Power transformations to induce normality and their applications. Journal of the Royal Statistical Society, Ser. B 66, 117-130], we determine an appropriate power transformation to improve the normal approximation in small samples. Additional corrections for the mean and variance of the distance measures intervening in these test statistics are obtained. An alternative procedure for estimating the finite-sample distribution of the test statistics is the bootstrap; we introduce bootstrap-based versions of the original spectral test statistics. In a Monte Carlo study, comparisons are made under various alternatives between the original spectral test statistics, the new corrected test statistics, the bootstrap-based versions, and the classical Hosking portmanteau test statistic.

4.
An approximate F-form of the Lagrange multiplier (LM) test for serial correlation in dynamic regression models is compared with three bootstrap tests. In one bootstrap procedure, residuals from restricted estimation under the null hypothesis are resampled. The other two bootstrap tests use residuals from unrestricted estimation under an alternative hypothesis. A fixed autocorrelation alternative is assumed in one of the two unrestricted bootstrap tests and the other is based upon a Pitman-type sequence of local alternatives. Monte Carlo experiments are used to estimate rejection probabilities under the null hypothesis and in the presence of serial correlation.
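The restricted-residual bootstrap idea can be sketched in a few lines: resampling the null-model residuals with replacement destroys any serial dependence, giving a reference distribution for a serial-correlation statistic. The sketch below uses the lag-1 residual autocorrelation as the statistic instead of the LM F-form, so it is a simplified stand-in for the procedures compared in the abstract.

```python
import random


def autocorr1(e):
    """Lag-1 autocorrelation of a residual series."""
    n = len(e)
    m = sum(e) / n
    num = sum((e[t] - m) * (e[t - 1] - m) for t in range(1, n))
    den = sum((x - m) ** 2 for x in e)
    return num / den


def restricted_bootstrap_pvalue(resid, n_boot=999, seed=7):
    """Bootstrap p-value for lag-1 serial correlation: resample the
    restricted (null-model) residuals i.i.d. with replacement, which
    breaks any serial dependence, and compare |r1| against its bootstrap
    distribution. A simplified sketch of the restricted-residuals scheme."""
    rng = random.Random(seed)
    r_obs = abs(autocorr1(resid))
    count = 0
    for _ in range(n_boot):
        e_star = [rng.choice(resid) for _ in resid]
        if abs(autocorr1(e_star)) >= r_obs:
            count += 1
    return (count + 1) / (n_boot + 1)
```

Strongly autocorrelated residuals yield a small p-value; exchangeable residuals do not.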

5.
A program package, RRRAP (Random Regression Residual Analysis Program), using SAS [1] and S-PLUS [2] is available for performing random regression residual analysis. The MIXED procedure from SAS is used for statistical inference. Both elementary-level and individual-level residuals are used. The S-PLUS programs provide: (1) a transformation to orthogonalize the elementary-level correlated residuals for standard regression residual analyses; and (2) several statistics and plots for checking model assumptions, assessing model fit and detecting outlying individuals. RRRAP starts with a SAS macro, RRRAPMAC, on the data, followed by an S-PLUS program, DoRRRAP, on a UNIX system.

6.
Generalized linear mixed models are popular for regressing a discrete response when there is clustering, e.g. in longitudinal studies or in hierarchical data structures. It is standard to assume that the random effects have a normal distribution. Recently, it has been examined whether wrongly assuming a normal distribution for the random effects matters for the estimation of the fixed-effects parameters. While it has been shown that misspecifying the distribution of the random effects has a minor effect in the context of linear mixed models, the conclusion for generalized linear mixed models is less clear. Some studies report a minor impact, while others report that the assumption of normality really matters, especially when the variance of the random effect is relatively high. Since it is unclear whether the normality assumption is truly satisfied in practice, it is important that generalized mixed models be available that relax the normality assumption. We propose replacing the normal distribution with a mixture of Gaussian distributions specified on a grid, where only the weights of the mixture components are estimated, using a penalized approach that ensures a smooth distribution for the random effects. The parameters of the model are estimated in a Bayesian context using MCMC techniques. The usefulness of the approach is illustrated on two longitudinal studies using R functions.
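The grid-mixture idea is easy to visualize: the random-effects density is a weighted sum of Gaussian kernels at fixed grid locations, and only the weights would be estimated (with a smoothness penalty) in the approach described. The sketch below just evaluates such a density for fixed, hand-picked weights; the grid, weights and bandwidth are illustrative, and the penalized MCMC estimation is not reproduced.

```python
import math


def grid_mixture_density(x, grid, weights, sd):
    """Density of a mixture of Gaussians placed on a fixed grid.
    In the abstract's approach only the weights are estimated
    (penalized, via MCMC); here everything is fixed for illustration."""
    norm = sd * math.sqrt(2 * math.pi)
    return sum(
        w * math.exp(-0.5 * ((x - g) / sd) ** 2) / norm
        for g, w in zip(grid, weights)
    )
```

As long as the weights sum to one, the mixture is a proper (and smooth) density for the random effects, which can be checked by numerical integration.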

7.
The applicability of the stochastic volatility (SV) model and the SV model with jumps to U.S. Treasury Bill yield data is investigated. The transformation of the continuous-time models into regression models is considered and their error terms are examined. The applicability of the continuous-time models to the real data is assessed by comparing some atypical properties of such error terms between the real data and data generated from the models. The empirical results indicate that the SV model and the SV model with jumps are not applicable to modeling the daily/weekly released U.S. T-Bill secondary-market yield data. Trends and a correlation structure are detected in the error terms of the transformed regression models for the daily/weekly released U.S. T-Bill yield data, whereas the error terms of the continuous-time models are supposed to be uncorrelated. These results suggest that alternative models are needed for such T-Bill yield data.

8.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of the one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random-effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
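The parametric-bootstrap idea can be sketched concretely: compute a weighted between-group statistic from the observed group means and variances, then repeatedly simulate group means and variances under the null (means from N(0, s_i^2/n_i), variances from s_i^2 * chi-square_{n_i-1}/(n_i-1)) and count how often the simulated statistic exceeds the observed one. This is a sketch of the PB idea, not necessarily the authors' exact statistic or implementation.

```python
import math
import random


def pb_anova_pvalue(samples, n_boot=2000, seed=3):
    """Parametric-bootstrap test of equal normal means under unequal
    variances (one-way ANOVA setup). A sketch of the PB approach."""
    rng = random.Random(seed)
    ns = [len(s) for s in samples]
    means = [sum(s) / n for s, n in zip(samples, ns)]
    variances = [sum((x - m) ** 2 for x in s) / (n - 1)
                 for s, m, n in zip(samples, means, ns)]

    def stat(ms, vs):
        # weighted between-group sum of squares, weights n_i / s_i^2
        w = [n / v for n, v in zip(ns, vs)]
        mw = sum(wi * mi for wi, mi in zip(w, ms)) / sum(w)
        return sum(wi * (mi - mw) ** 2 for wi, mi in zip(w, ms))

    t_obs = stat(means, variances)
    count = 0
    for _ in range(n_boot):
        ms = [rng.gauss(0.0, math.sqrt(v / n)) for v, n in zip(variances, ns)]
        vs = [v * sum(rng.gauss(0, 1) ** 2 for _ in range(n - 1)) / (n - 1)
              for v, n in zip(variances, ns)]
        count += stat(ms, vs) >= t_obs
    return (count + 1) / (n_boot + 1)
```

Because the null distribution is simulated with each group's own estimated variance, no homogeneity-of-variance assumption is needed.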

10.
Navigation Satellite Clock Error Prediction Based on Functional Network
High-precision prediction of on-board atomic clocks is a key technology for the long-term autonomous operation of a navigation satellite system. Some research shows that the performance of traditional atomic clock prediction models cannot meet the requirements of practical applications. In order to improve clock error prediction accuracy, we propose a model based on a functional network. According to the data characteristics of atomic clocks, the clock-error series is first fitted by a polynomial, and the residuals are then modeled by the functional network. Finally, using data from GPS satellites, five independent prediction tests were performed to verify the model. The simulation results show that, compared with the traditional models, the proposed model can fit and predict clock error more effectively.
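The first stage of the scheme, a low-order polynomial fit capturing clock offset, drift and aging, is straightforward; the residual series it returns is what the functional network would then model. Only the polynomial stage is sketched here (the functional network is not reproduced), and the degree and data are illustrative.

```python
import numpy as np


def detrend_clock_series(t, clock_err, degree=2):
    """Fit a low-order polynomial (offset, drift, aging) to a clock-error
    series and return the coefficients plus the residual series that a
    second-stage model (here, the paper's functional network) would fit."""
    t = np.asarray(t, float)
    y = np.asarray(clock_err, float)
    coeffs = np.polyfit(t, y, degree)
    residuals = y - np.polyval(coeffs, t)
    return coeffs, residuals
```

On a series that is exactly quadratic, the residuals vanish; on real clock data they carry the stochastic component left for the second-stage model.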

11.
Statistical inference on the ordering of the elements of a mean vector is an important issue in many applied problems, particularly in biostatistical applications. Some common ordering models include simple, tree and umbrella orderings. Many statistical methods have been developed to detect these orderings within the normal model, and outside the normal model using nonparametric methods. Estimates as well as confidence regions have also been developed for the mean vector under the constraints imposed by these ordering models. In order to distinguish between ordering models, multiple testing procedures are usually required to control the overall error rate of the sequence of tests. This paper shows how observed confidence levels allow for the exploration of very general ordering models without the need for specialized asymptotic theory or multiple testing methods. The proposed methods are applied to several examples.
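An observed confidence level for an ordering can be approximated very simply: bootstrap the data and record the proportion of resamples whose group means satisfy the ordering of interest. The sketch below does this for the simple ordering mu_1 <= mu_2 <= ... <= mu_k with a plain percentile bootstrap; the paper's calibrated/asymptotically refined versions are not reproduced.

```python
import random


def observed_confidence_simple_order(samples, n_boot=1000, seed=5):
    """Bootstrap estimate of the observed confidence level for the
    simple ordering of group means: the fraction of resamples whose
    group means are nondecreasing. A basic percentile-bootstrap sketch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        means = []
        for s in samples:
            res = [rng.choice(s) for _ in s]
            means.append(sum(res) / len(res))
        hits += all(means[i] <= means[i + 1] for i in range(len(means) - 1))
    return hits / n_boot
```

No multiple-testing correction is involved: the same set of resampled means can be scored against any collection of candidate orderings (tree, umbrella, ...) simultaneously.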

12.
Multivariate extensions of well-known linear mixed-effects models have been increasingly utilized in inference by multiple imputation in the analysis of multilevel incomplete data. The normality assumption for the underlying error terms and random effects plays a crucial role in simulating the posterior predictive distribution from which the multiple imputations are drawn. The plausibility of this normality assumption on the subject-specific random effects is assessed. Specifically, the performance of multiple imputation created under a multivariate linear mixed-effects model is investigated on a diverse set of incomplete data sets simulated under varying distributional characteristics. Under moderate amounts of missing data, the simulation study confirms that the underlying model leads to a well-calibrated procedure with negligible biases and actual coverage rates close to nominal rates in estimates of the regression coefficients. Estimation quality of the random-effect variance and association measures, however, is negatively affected by both the misspecification of the random-effect distribution and the number of incompletely observed variables. Some of the adverse impacts include lower coverage rates and increased biases.

13.
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distribution is a class of asymmetric, thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of the skew-normal/independent distribution as a robust alternative in the null-intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (the latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial.
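The base member of this family, the skew-normal, has a well-known stochastic representation that makes it easy to sample: X = delta*|Z1| + sqrt(1 - delta^2)*Z2 with delta = alpha/sqrt(1 + alpha^2) gives the standard skew-normal SN(alpha). The sketch below implements only this base case; the skew-t, skew-slash and skew-contaminated-normal members add a mixing (scale) variable on top of it, which is not reproduced here.

```python
import math
import random


def sample_skew_normal(n, alpha, seed=11):
    """Draw n values from the standard skew-normal SN(alpha) via the
    |Z1|/Z2 stochastic representation. Base case of the skew-normal/
    independent family discussed in the abstract."""
    rng = random.Random(seed)
    delta = alpha / math.sqrt(1.0 + alpha ** 2)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append(delta * abs(z1) + math.sqrt(1.0 - delta ** 2) * z2)
    return out
```

A quick sanity check uses the known mean of SN(alpha), delta*sqrt(2/pi): for alpha > 0 the distribution is shifted right of zero, reflecting its skewness.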

14.
Sparsity of a parameter vector in stochastic dynamic systems and precise reconstruction of its zero and nonzero elements appear in many areas, including systems and control [1,2,3,4], signal processing [5,6], statistics [7,8], and machine learning [9,10], since sparsity provides a way to discover a parsimonious model that leads to more reliable and robust prediction. Classical system identification is a well-developed field [11,12]. It usually characterizes the identification error between the estimates and the unknown parameters using different criteria, such as randomness of noises, frequency-domain sample data, and uncertainty bounds of the system, so that consistency, convergence rate, and asymptotic normality of the estimates can be established as the number of data points goes to infinity. However, this theory and these methods are ill-suited for sparse identification if the dimension of the unknown parameter vector is high....
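The standard computational tool for recovering the zero pattern of a sparse parameter vector is l1 penalization, whose building block is the soft-thresholding operator: entries are shrunk toward zero and small entries are set exactly to zero. The toy sketch below shows that operator in isolation (for an orthogonal design it is the whole lasso solution applied to the least-squares estimate); it is not a full sparse system-identification algorithm.

```python
def soft_threshold(v, lam):
    """Soft-thresholding operator of l1-penalized (lasso-type)
    estimation: shrinks each entry toward zero by lam and sets entries
    smaller than lam in magnitude exactly to zero, which is how the
    zero elements of a sparse parameter vector are recovered."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in v]
```

Large coordinates survive (shrunk by lam), small noisy coordinates are zeroed out, giving exact support recovery when the signal-to-threshold separation is adequate.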

15.
Near-infrared spectroscopy is used to classify apple samples of different varieties, and a new qualitative analysis method for apple near-infrared spectra based on the uncorrelated discriminant transformation is proposed. In the experiments, three feature-extraction methods (principal component analysis, Fisher discriminant analysis, and the uncorrelated discriminant transformation) were applied to the apple spectral data, and a K-nearest-neighbor classifier was used to build three apple classification models; leave-one-out cross-validation was used to validate the models. The results show that the model built with the uncorrelated discriminant transformation achieves a higher correct classification rate than the models built with principal component analysis or Fisher discriminant analysis.
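The classification stage described above is a plain K-nearest-neighbor vote on the extracted features. The sketch below implements that classifier by Euclidean distance; the feature-extraction step (PCA, Fisher discriminant analysis or the uncorrelated discriminant transformation) is not reproduced, and the toy feature vectors in the test are hypothetical.

```python
import math
from collections import Counter


def knn_classify(train_X, train_y, x, k=3):
    """k-nearest-neighbor classification by Euclidean distance:
    find the k closest training feature vectors and return the
    majority label among them."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

With leave-one-out cross-validation, each sample would be classified by a model trained on all remaining samples, exactly as in the validation scheme above.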

16.
Dai Huan, He Lei, Gu Xiaofeng. 《计算机工程》 (Computer Engineering), 2012, 38(24): 74-77
To reduce the effect of the ranging errors produced by received signal strength indication (RSSI) on localization accuracy, a centralized localization algorithm based on a statistically uncorrelated vector set is proposed. A coordinate transformation simplifies the computation of the doubly centered matrix, which is constructed using the statistically uncorrelated vector set and then used to compute the node coordinates. Simulation results show that, even under large ranging errors, the algorithm effectively suppresses ranging noise and improves localization accuracy, making it suitable for wireless sensor networks built from low-cost hardware.
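The doubly-centered-matrix step above is the core of classical multidimensional scaling: double-center the squared distance matrix, B = -0.5 * J D^2 J with centering matrix J = I - (1/n) 1 1', then recover coordinates (up to rotation and translation) from the leading eigenpairs of B. The sketch below is this textbook version, without the paper's uncorrelated-vector-set refinement or its coordinate-transformation speedup.

```python
import numpy as np


def classical_mds(D, dim=2):
    """Classical MDS: double-center the squared distance matrix and
    take the top eigenpairs as coordinates. Recovers the configuration
    up to rotation/translation when D is an exact Euclidean distance
    matrix of points in `dim` dimensions."""
    D = np.asarray(D, float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # doubly centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]       # leading eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

For noiseless distances the recovered configuration reproduces all pairwise distances exactly; with RSSI-based noisy distances the eigen-truncation acts as the denoising step the abstract refers to.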

17.
Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and Box-Cox transformations are able to improve the properties of the resulting distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
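The rank transformation the paper found most effective is simple to state: replace each response value by its rank in the sample, averaging ranks over ties, before aggregation and modeling. The sketch below is a plain, library-independent implementation of that transformation, not part of the SPO toolbox itself.

```python
def rank_transform(values):
    """Rank-transform a response vector, assigning average 1-based
    ranks to tied values, so that modeling operates on ranks rather
    than raw (possibly heavy-tailed) responses."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend the group of tied values
        avg = (i + j) / 2 + 1            # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks
```

Because ranks are invariant to any monotone rescaling of the responses, the transformed residual distribution tends to be more symmetric, which is the effect the paper documents.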

18.
A method for specifying a hidden random field (HRF) included in a hierarchical spatial model is proposed. In the hierarchical models of interest, the first stage describes, conditional on a realization of the HRF, a response variable observable on a continuous spatial domain; the second stage models the HRF, which reflects unobserved spatial heterogeneity. The question investigated is how the HRF can be modeled, i.e. specified. The method developed to address this question is based on residuals obtained when the base model, i.e. the hierarchical model in which the HRF is assumed constant, is fitted to data. It is shown that the residuals are linked with the HRF, and the link is used to specify the HRF. The method is applied to simulated data in order to assess its performance, and then to real data on radionuclide concentrations on Rongelap Island.

19.
This paper discusses the development and application of two alternative strategies, in the form of global and sequential local response surface (RS) techniques, for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS), whereas the local technique uses multiple first-order RS models, with each applied to a small subregion of the FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, the thickness and orientation angle of each ply, the diameter and length of the cylinder, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of the ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with the reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of the reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy, with the sequential local RS technique having a considerably better computational efficiency.
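The reliability index used as the design constraint above has a closed form in the simplest setting: for a linear limit state g = R - S with independent normal resistance R and load S, beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2). The paper's first-order reliability analysis handles 23 random variables and a nonlinear buckling limit state; this two-variable case is only meant to illustrate the quantity being constrained.

```python
import math


def reliability_index_linear(mean_R, sd_R, mean_S, sd_S):
    """First-order reliability index beta for the linear limit state
    g = R - S with independent normal R (resistance) and S (load).
    A textbook special case, not the paper's 23-variable FORM analysis."""
    return (mean_R - mean_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)
```

In an RBO formulation, the optimizer would adjust design variables (here, the moments of R) until beta meets or exceeds a target value.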


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)