Similar Documents
20 similar documents found.
1.
The effect of missing data in causal inference problems is widely recognized. In malaria drug efficacy studies, it is often difficult to distinguish between new and old infections after treatment, resulting in indeterminate outcomes. Methods that adjust for possible bias from missing data include a variety of imputation procedures (extreme case analysis, hot-deck, single and multiple imputation), weighting methods, and likelihood-based methods (data augmentation, EM procedures and their extensions). In this article, we focus our discussion on multiple imputation and two weighting procedures (the inverse probability weighted estimator and its doubly robust (DR) extension), comparing the methods' applicability to the efficient estimation of malaria treatment effects. Simulation studies indicate that DR estimators are generally preferable because they offer protection against misspecification of either the outcome model or the missingness model. We apply the methods to analyze malaria efficacy studies from Uganda.
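To make the doubly robust idea concrete, the following is a minimal sketch (not the authors' implementation) of an augmented inverse-probability-weighted estimator of a mean outcome under missingness at random; the simulated data, working models, and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 1))                      # baseline covariate
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)     # outcome of interest
p_obs = 1 / (1 + np.exp(-(0.5 + x[:, 0])))       # missingness depends on x (MAR)
r = rng.random(n) < p_obs                        # r = True if outcome observed

# Missingness model: estimated P(R = 1 | X)
pi_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
# Outcome model: fitted on the observed cases only, predicted for everyone
m_hat = LinearRegression().fit(x[r], y[r]).predict(x)

# Doubly robust estimate of E[Y]: consistent if either model is correct
y_filled = np.where(r, y, 0.0)                   # unobserved y never enters
dr = np.mean(m_hat + r * (y_filled - m_hat) / pi_hat)
print(f"DR estimate of E[Y]: {dr:.3f}  (truth = 1.0)")
```

The protection comes from the correction term: if the outcome model is right, the weighted residual has mean zero; if the missingness model is right, the weighting alone recovers the full-data mean.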

2.
Women tend to repeat reproductive outcomes, with a past history of an adverse outcome being associated with an approximately two-fold increase in subsequent risk. These observations support the need for statistical designs and analyses that address this clustering. Failure to do so may mask effects, result in inaccurate variance estimators, and produce biased or inefficient estimates of exposure effects. We review and evaluate basic analytic approaches for analysing reproductive outcomes, including ignoring reproductive history, treating it as a covariate, or avoiding the clustering problem by analysing only one pregnancy per woman, and contrast these with more modern approaches such as generalized estimating equations with robust standard errors and mixed models with various correlation structures. We illustrate the issues by analysing a sample from the Collaborative Perinatal Project dataset, demonstrating how the statistical model impacts summary statistics and inferences when assessing etiologic determinants of birth weight.
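As a hedged illustration of the clustered-data approaches contrasted above, the sketch below fits a GEE with an exchangeable working correlation (robust sandwich standard errors are the statsmodels default) to simulated woman-level clusters; the variable names and data-generating values are invented for the example, not taken from the Collaborative Perinatal Project.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
rows = []
for i in range(300):                          # 300 women, 1-3 pregnancies each
    u = rng.normal(scale=200)                 # woman-specific effect (grams)
    smoker = int(rng.random() < 0.3)
    for _ in range(rng.integers(1, 4)):
        bw = 3400 - 250 * smoker + u + rng.normal(scale=400)
        rows.append({"woman": i, "smoker": smoker, "bw": bw})
df = pd.DataFrame(rows)

# GEE with exchangeable working correlation; clustering handled explicitly
model = sm.GEE.from_formula("bw ~ smoker", groups="woman", data=df,
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```

A naive OLS fit to the same data would ignore the within-woman correlation and understate the standard error of the smoking coefficient.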

3.
Some ranking and selection (R&S) procedures for steady-state simulation require estimates of the asymptotic variance parameter of each system to guarantee a certain probability of correct selection. In this paper, we show that the performance of such R&S procedures depends highly on the quality of the variance estimates that are used. In fact, we study the performance of R&S procedures using three new variance estimators (overlapping area, overlapping Cramér–von Mises, and overlapping modified jackknifed Durbin–Watson estimators) that show better long-run performance than other estimators previously used in conjunction with R&S procedures for steady-state simulations.
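The overlapping estimators named above share one underlying device: reusing all overlapping subsequences of the output. As a rough illustration of that device (the classical overlapping batch means estimator, not the three new estimators of the paper), the sketch below estimates the asymptotic variance parameter of a toy AR(1) steady-state output.

```python
import numpy as np

def obm_variance(y, m):
    """Overlapping batch means estimate of sigma^2 = lim_n n * Var(ybar_n)."""
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    batch_means = (csum[m:] - csum[:-m]) / m     # all n - m + 1 overlapping batches
    scale = n * m / ((n - m + 1) * (n - m))
    return scale * np.sum((batch_means - y.mean()) ** 2)

# Toy steady-state output: AR(1); true variance parameter 1/(1 - phi)^2 ~ 11.1
rng = np.random.default_rng(2)
phi, n = 0.7, 100_000
y = np.empty(n)
y[0] = rng.normal()
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()
print(obm_variance(y, m=int(n ** 0.5)))
```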

4.
Determining good parameter estimates in ESTAR (exponential smooth transition autoregressive) models is known to be difficult. We show that the phenomenon of strongly biased estimators is a consequence of the so-called identification problem: the problem of properly distinguishing the transition function under extreme parameter combinations. This happens in particular for either very small or very large values of the error term variance. Furthermore, we introduce a new alternative model, the TSTAR model, which has properties similar to those of the ESTAR model but reduces the effects of the identification problem. We also derive a linearity test and a unit root test for this model.
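To see why the identification problem arises, consider the ESTAR transition function G(y; γ, c) = 1 − exp(−γ(y − c)²). The hedged sketch below (illustrative parameter values only) shows that for very small γ the function is nearly constant at 0 and for very large γ nearly constant at 1, so the data carry almost no information about γ at either extreme.

```python
import numpy as np

def estar_transition(y, gamma, c=0.0):
    """Exponential transition function of the ESTAR model."""
    return 1.0 - np.exp(-gamma * (y - c) ** 2)

y = np.linspace(-3, 3, 7)
for gamma in (0.01, 1.0, 100.0):
    # tiny gamma: G ~ 0 everywhere; huge gamma: G ~ 1 except at y = c,
    # so extreme gamma values are nearly unidentifiable from data
    print(f"gamma={gamma:7.2f}  G(y) = {np.round(estar_transition(y, gamma), 3)}")
```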

5.
Smooth non-parametric kernel density and regression estimators are studied when the data are strongly dependent. In particular, we derive central (and non-central) limit theorems for the kernel density estimator of a multivariate Gaussian process and of an infinite-order moving average of an independent identically distributed process, as well as the estimator's consistency for other types of data, such as non-linear functions of a Gaussian process. We find that the kernel density estimator at two different points, under certain conditions, is not only perfectly correlated but may converge to the same random variable. Central (and non-central) limit theorems for the non-parametric kernel regression estimator are also studied. One important and surprising finding is that the regression estimator's asymptotic variance does not depend on the point at which the regression function is estimated, and that its asymptotic properties are the same whether or not the regressors are strongly dependent. Finally, a Monte Carlo experiment is reported to assess the behaviour of the estimators in finite samples.
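For readers wanting to experiment, here is a minimal hedged sketch of the kernel density estimator applied to a highly persistent Gaussian AR(1) series; note that a true long-memory (strongly dependent) process would require a fractional model, so the AR(1) is only a convenient stand-in, and all parameter values are illustrative.

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate: (1/nh) * sum_i K((x - X_i)/h)."""
    u = (x_eval[:, None] - data[None, :]) / h
    return (np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)).mean(axis=1) / h

rng = np.random.default_rng(3)
phi, n = 0.95, 5000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):                       # marginal distribution is N(0, 1)
    x[t] = phi * x[t - 1] + np.sqrt(1 - phi ** 2) * rng.normal()

grid = np.array([-2.0, 0.0, 2.0])
print(kde(grid, x, h=0.3))   # compare with N(0,1) density: 0.054, 0.399, 0.054
```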

6.
Maximum quasi-likelihood estimation is investigated for the NEAR(2) model, an autoregressive time series model with marginal exponential distributions. In certain regions of the parameter space, simulations indicate that maximum quasi-likelihood estimators perform better than two-stage conditional least squares estimators in terms of the percentage of estimates falling in the parameter space. The problem of out-of-range estimates is shown to be caused by the lack of information in the data rather than by the characteristics of the method of estimation.

7.
A modification of the minimum Akaike information criterion (AIC) procedure (and of related procedures such as the Bayesian information criterion (BIC)) for order estimation in autoregressive moving-average (ARMA) models is introduced. This procedure has the advantage that consistency of the order estimators obtained via it can be established without restricting attention to only a finite number of models. The behaviour of these newly introduced order estimators is also analysed for the case when the data-generating process is not an ARMA process (transfer function/spectral density approximation). Furthermore, the behaviour of the order estimators obtained via minimization of BIC (or of related criteria) is investigated for a non-ARMA data-generating process.
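For context, the hedged sketch below shows the baseline criterion-minimization procedure that such modifications adjust: fitting AR(k) models over a candidate range and choosing the order that minimizes AIC or BIC (the modified criteria of the paper are not reproduced here, and the data-generating values are invented).

```python
import numpy as np

def ar_order_by_criterion(y, k_max, penalty="bic"):
    """Choose the AR order minimizing AIC or BIC via least squares fits."""
    n = len(y)
    best = (np.inf, 0)
    for k in range(1, k_max + 1):
        Y = y[k:]
        X = np.column_stack([y[k - j - 1:n - j - 1] for j in range(k)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        sig2 = np.mean((Y - X @ coef) ** 2)
        pen = k * (np.log(n) if penalty == "bic" else 2.0)
        best = min(best, (n * np.log(sig2) + pen, k))
    return best[1]

rng = np.random.default_rng(10)
e = rng.normal(size=3000)
y = np.empty(3000)
y[:2] = e[:2]
for t in range(2, 3000):                      # true process: AR(2)
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
print("selected order:", ar_order_by_criterion(y, k_max=8))
```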

8.
When a straight line is fitted to time series data, generalized least squares (GLS) estimators of the trend slope and intercept are attractive as they are unbiased and of minimum variance. However, computing GLS estimators is laborious as their form depends on the autocovariances of the regression errors. On the other hand, ordinary least squares (OLS) estimators are easy to compute and do not involve the error autocovariance structure. It has been known for 50 years that OLS and GLS estimators have the same asymptotic variance when the errors are second-order stationary. Hence, little precision is gained by using GLS estimators in stationary error settings. This article revisits this classical issue, deriving explicit expressions for the GLS estimators and their variances when the regression errors are drawn from an autoregressive process. These expressions are used to show that OLS methods are even more efficient than previously thought. Specifically, we show that the convergence rate of variance differences is one polynomial degree higher than that of least squares estimator variances. We also refine Grenander's (1954) variance ratio. An example is presented where our new rates cannot be improved upon. Simulations show that the results change little when the autoregressive parameters are estimated.
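As a hedged numerical companion to this comparison (a direct finite-sample computation, not the article's asymptotic derivations), the sketch below evaluates the exact variances of the OLS and GLS trend-slope estimators under AR(1) errors and shows how quickly their ratio approaches one; the AR parameter is illustrative.

```python
import numpy as np

def slope_variances(n, phi):
    """Exact Var(slope) for OLS and GLS in y_t = a + b*t + e_t, e_t AR(1)."""
    t = np.arange(1.0, n + 1)
    X = np.column_stack([np.ones(n), t])
    S = phi ** np.abs(np.subtract.outer(t, t))        # AR(1) correlation matrix
    XtX_inv = np.linalg.inv(X.T @ X)
    v_ols = (XtX_inv @ X.T @ S @ X @ XtX_inv)[1, 1]   # sandwich variance
    v_gls = np.linalg.inv(X.T @ np.linalg.solve(S, X))[1, 1]
    return v_ols, v_gls

for n in (25, 50, 100, 200):
    v_ols, v_gls = slope_variances(n, phi=0.6)
    print(f"n={n:4d}  Var(OLS)/Var(GLS) = {v_ols / v_gls:.6f}")
```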

9.
Environmental stress cracking (ESC) is one of the main phenomena that limit the use of polymers, being responsible for a significant number of premature failures in service. It occurs when mechanical loading is combined with a certain kind of chemical agent, causing surface cracks and, eventually, catastrophic failure. This issue is not widely reported in the literature, and the usual procedures for investigation involve conventional mechanical testing; alternative tools, such as the acoustic emission (AE) technique for monitoring the stages of fracture, are rarely used. In this work, injection-molded polystyrene specimens were subjected to stress cracking conditions, using two types of active fluids and different exposure temperatures (25, 40, and 55°C). The AE technique was applied simultaneously with the mechanical testing to monitor parameters such as the intensity of the hits and the energy released during deformation and fracture. The results showed that failure by ESC was related to the interaction of the fluid with the polymer and was strongly dependent on the exposure temperature. The acoustic emission technique proved very useful for differentiating the effects of the various exposure conditions and for explaining certain fracture characteristics observed by visual inspection and by scanning electron microscopy.

10.
Among the existing estimators of interactive effects (IEs) regressions, the common correlated effects (CCE) approach is probably the most popular. A major reason for this popularity is the generality of the approach. In fact, CCE is remarkably general in that it allows ample parameter heterogeneity without placing any restrictions on the true number of common factors. In the present paper, we show that this generality is not unique to CCE but that it is shared by a whole class of estimators. We characterize this class and show that it does not rely on the correct specification of the IEs. In spite of this, the estimators within the class are consistent and asymptotically normal under general conditions. This means that there is not just CCE but, in fact, many estimators to choose from, such as the fixed effects estimator.
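To convey the CCE idea in code, here is a hedged toy sketch (a mean-group variant with a homogeneous slope and simulated factors; all names and values are invented): the unobserved factors are proxied by cross-section averages of the dependent variable and the regressor, which are then projected out unit by unit.

```python
import numpy as np

rng = np.random.default_rng(9)
N, T, beta = 50, 100, 2.0
f = rng.normal(size=(T, 2))                       # unobserved common factors
lam_x, lam_y = rng.normal(size=(N, 2)), rng.normal(size=(N, 2))
x = f @ lam_x.T + rng.normal(size=(T, N))         # regressor loads on the factors
y = beta * x + f @ lam_y.T + rng.normal(size=(T, N))

# CCE: cross-section averages of y and x proxy the factor space
proxies = np.column_stack([np.ones(T), y.mean(axis=1), x.mean(axis=1)])
M = np.eye(T) - proxies @ np.linalg.pinv(proxies)         # annihilator matrix
b = [(x[:, i] @ M @ y[:, i]) / (x[:, i] @ M @ x[:, i]) for i in range(N)]
print(f"CCE mean-group estimate: {np.mean(b):.3f}  (truth = 2.0)")
```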

11.
Estimation of the effect of a binary exposure on an outcome in the presence of confounding is often carried out via outcome regression modelling. An alternative approach is to use propensity score methodology. The propensity score is the conditional probability of receiving the exposure given the observed covariates and can be used, under the assumption of no unmeasured confounders, to estimate the causal effect of the exposure. In this article, we provide a non-technical and intuitive discussion of propensity score methodology, motivating the use of the propensity score approach by analogy with randomised studies, and describe the four main ways in which this methodology can be implemented. We carefully describe the population parameters being estimated, an issue that is frequently overlooked in the medical literature. We illustrate these four methods using data from a study investigating the association between maternal choice to provide breast milk and the infant's subsequent neurodevelopment. We outline useful extensions of propensity score methodology and discuss directions for future research. Propensity score methods remain controversial, and there is no consensus as to when, if ever, they should be used in place of traditional outcome regression models. We therefore end with a discussion of the relative advantages and disadvantages of each.
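As a minimal hedged sketch of one of the four implementations (inverse probability of treatment weighting), the code below estimates an average treatment effect from simulated data; the covariates, exposure, and effect size are invented and unrelated to the breast-milk study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=(n, 2))                            # measured confounders
p = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
z = rng.random(n) < p                                  # binary exposure
y = 2.0 * z + x[:, 0] + x[:, 1] + rng.normal(size=n)   # true effect = 2.0

# Propensity score P(Z = 1 | X), then inverse-probability weighting
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
ate = np.mean(z * y / ps) - np.mean((1 - z) * y / (1 - ps))
print(f"IPW estimate of ATE: {ate:.3f}  (truth = 2.0)")
print(f"Confounded naive difference: {y[z].mean() - y[~z].mean():.3f}")
```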

12.
The robustness of sequential confidence intervals is studied by considering contamination, with probability ε, of the basic underlying distribution in a so-called gross errors model. Asymptotic theory is considered when d → 0, where the prescribed length of the interval is 2d, and simultaneously ε = ε(d) → 0. A general theorem, in a distribution-free setting, provides expressions for the asymptotic coverage probability and the asymptotic distribution of the stopping variable. The results depend on the rate of ε(d)/d as d → 0 and on the contaminating distribution. If the latter distribution is degenerate, it turns out that the influence functions of the two estimators used in the construction of the procedure appear in the expressions for the asymptotic coverage probability and the asymptotic distribution of the stopping variable, respectively. This shows how the sequential procedure inherits the robustness properties of the estimators concerned and how this inheritance is quantified. The general theorem is specialized to two procedures for the estimation of the mean of a symmetric distribution. Results of Monte Carlo studies indicate agreement between the asymptotic theory and the actual behaviour of the procedures.
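For orientation, the sketch below implements a classical fixed-width sequential interval of the Chow-Robbins type, the kind of procedure such robustness analyses concern, fed with a gross-error-contaminated sample; the stopping rule, contamination rate, and spike location are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def fixed_width_ci(stream, d, z=1.96, n_min=10):
    """Sample until z * s_n / sqrt(n) <= d; return stopping n and the interval."""
    xs = []
    for x in stream:
        xs.append(x)
        n = len(xs)
        if n >= n_min and z * np.std(xs, ddof=1) / np.sqrt(n) <= d:
            m = np.mean(xs)
            return n, (m - d, m + d)

rng = np.random.default_rng(8)
def contaminated(eps=0.05, spike=5.0):     # gross errors model: N(0,1) with spikes
    while True:
        yield spike if rng.random() < eps else rng.normal()

n_stop, ci = fixed_width_ci(contaminated(), d=0.1)
print(f"stopped at n = {n_stop}, interval = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Contamination inflates the variance estimate and shifts the sample mean, so both the stopping time and the coverage change, which is exactly the behaviour the asymptotic theory quantifies.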

13.
We consider robust serial correlation tests in autoregressive models with exogenous variables (ARX). Since the least squares estimators are not robust when outliers are present, a new family of estimators is introduced, called residual autocovariances for ARX (RA-ARX). They provide resistant estimators that are less sensitive to abnormal observations in the output variable of the dynamic model. Such 'bad' observations could be due to unexpected phenomena such as an economic crisis, or equipment failure in engineering, among others. We show that the new robust estimators are consistent, and we consider robust and powerful tests of serial correlation in ARX models based on these estimators. The new one-sided tests of serial correlation are obtained by extending Hong's (1996) approach to a framework resistant to outliers. They are based on a weighted sum of robust squared residual autocorrelations and on any robust and n^{1/2}-consistent estimator. Our approach generalizes Li's (1988) test statistic, which can be interpreted as a test using the truncated uniform kernel; however, many other kernels deliver higher power. This is confirmed in a simulation study, where we investigate the finite-sample properties of the new robust serial correlation tests in comparison with some commonly used robust and non-robust tests.
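As a rough, hedged sketch of the kernel-weighted portmanteau form (using ordinary rather than robust residual autocorrelations, and omitting the centering and scaling that give Hong-type statistics their normal null limit), the code below contrasts the Bartlett kernel with the truncated uniform kernel that yields a Li/Box-Pierce-type statistic.

```python
import numpy as np

def sample_acf(e, max_lag):
    e = e - e.mean()
    denom = np.sum(e ** 2)
    return np.array([np.sum(e[j:] * e[:-j]) / denom for j in range(1, max_lag + 1)])

def weighted_portmanteau(e, p, kernel="bartlett"):
    """Core statistic n * sum_j k(j/p)^2 * rho_hat(j)^2, j = 1..p."""
    rho = sample_acf(e, p)
    j = np.arange(1, p + 1)
    k = np.maximum(1 - j / (p + 1), 0.0) if kernel == "bartlett" else np.ones(p)
    return len(e) * np.sum((k * rho) ** 2)

rng = np.random.default_rng(5)
e = rng.normal(size=1000)
print(weighted_portmanteau(e, p=20))                 # small under the null
e_dep = e + 0.5 * np.concatenate(([0.0], e[:-1]))    # crude MA(1)-type dependence
print(weighted_portmanteau(e_dep, p=20))             # inflated under dependence
```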

14.
We consider the standard spectral estimators based on a sample from a class of strictly stationary nonlinear processes which includes, in particular, the bilinear and Volterra processes. It is shown that these estimators, under certain mild regularity conditions, are both consistent and asymptotically normal.

15.
We propose a consistent monitoring procedure for structural change in a cointegrating relationship. The procedure is inspired by Chu et al. (1996) in being based on parameter estimation over a pre-break 'calibration' period. We use three modified least squares estimators to obtain nuisance-parameter-free limiting distributions. We study the asymptotic and finite-sample properties of the procedures and finally apply the approach to monitor two fundamentals-driven US housing-price cointegrating relationships over the period 1976:Q1–2010:Q4, using the data of Anundsen (2015). Depending on the relationship considered and the estimation method used, a break point is detected as early as 2003:Q2, that is, well before US housing prices started to fall in 2007.
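A rough, hedged sketch of the monitoring logic, using a plain CUSUM-of-residuals detector in the spirit of Chu et al. (1996) rather than the modified least squares procedures of the paper; the boundary constant and data-generating values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)
T, T0, T_break = 300, 100, 200
x = np.cumsum(rng.normal(size=T))                    # I(1) regressor
beta = np.where(np.arange(T) < T_break, 1.0, 1.5)    # break in the relationship
y = beta * x + rng.normal(size=T)

# Calibration: estimate the cointegrating slope on the pre-monitoring window
b0 = np.polyfit(x[:T0], y[:T0], 1)[0]
resid = y - b0 * x
sigma = resid[:T0].std(ddof=1)

# Monitoring: flag a break when cumulated residuals cross a growing boundary
csum = np.abs(np.cumsum(resid[T0:]))
bound = 3.0 * sigma * np.sqrt(T0) * (1 + np.arange(1, T - T0 + 1) / T0)
hits = np.nonzero(csum > bound)[0]
print("first detection at t =", T0 + hits[0] + 1 if hits.size else "none detected")
```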

16.
A predictive model for the corrosion rate of steel in concrete is studied, focusing on the influence of the concrete carbonation exponent and cover thickness on the corrosion rate, and on the relationships among steel diameter, cover thickness, exposure time and corrosion rate. It is shown that (1) the corrosion rate increases as the concrete carbonation exponent increases; (2) for mild carbon steel of circular cross-section, the corrosion rate decreases as the bar diameter increases, and this diameter effect becomes more pronounced as the carbonation exponent grows; (3) the corrosion rate increases as the concrete cover thickness decreases; and (4) the corrosion rate increases markedly as the exposure time decreases. The results are discussed in comparison with earlier findings.

17.
Gross error detection is crucial for data reconciliation and parameter estimation, as gross errors can severely bias the estimates and the reconciled data. Robust estimators significantly reduce the effect of gross errors (or outliers) and yield less biased estimates. An important class of robust estimators is that of maximum-likelihood-type estimators, or M-estimators, commonly of two types: Huber estimators and Hampel estimators. The former significantly reduce the effect of large outliers, whereas the latter nullify it. These two estimators can be characterized through their influence functions, which quantify the effect of an observation on the estimated statistic; for an estimator to be robust, the influence function must be bounded and finite. For Hampel estimators the influence function becomes zero for large outliers, nullifying their effect, whereas Huber estimators do not reject large outliers and their influence function is merely bounded. We therefore consider the three-part redescending estimator of Hampel and compare its performance with a Huber estimator, the Fair function. A major advantage of redescending estimators is that outliers can be identified without any exploratory data analysis on the regression residuals: the outliers are simply the rejected observations. In this study, the redescending estimators are also tuned to the particular observed system data through an iterative procedure based on the Akaike information criterion (AIC). This approach is not easily afforded by Huber estimators, and this can have a significant impact on the estimation. The resulting approach is incorporated within an efficient non-linear programming algorithm. Finally, all of these features are demonstrated on a number of process and literature examples for data reconciliation.
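To make the influence-function contrast concrete, the hedged sketch below codes the ψ (influence-shaping) functions of the Fair estimator and of Hampel's three-part redescending estimator; the tuning constants are common illustrative defaults, not the AIC-tuned values described above.

```python
import numpy as np

def psi_fair(r, c=1.4):
    """Fair-function psi: bounded by c but never zero, so gross errors keep influence."""
    return r / (1.0 + np.abs(r) / c)

def psi_hampel(r, a=1.7, b=3.4, c=8.5):
    """Hampel three-part redescending psi: exactly zero beyond c (outlier rejected)."""
    s, x = np.sign(r), np.abs(r)
    return s * np.where(x <= a, x,
               np.where(x <= b, a,
               np.where(x <= c, a * (c - x) / (c - b), 0.0)))

r = np.array([0.5, 2.0, 5.0, 20.0])               # residuals from small to gross
print("Fair   psi:", np.round(psi_fair(r), 3))    # nonzero even at r = 20
print("Hampel psi:", np.round(psi_hampel(r), 3))  # zero at r = 20
```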

18.
Clinical research analyses must balance the desire to 'learn all that is learnable' from the database against the observation that sample-based data commonly lead to conclusions that are perfectly correct for the sample but wholly incorrect for the population from which the data were drawn. Investigators who defend exploratory analyses as reliable misuse important tools that have taken over three hundred years to develop. Statistical estimators in clinical trials function appropriately when they incorporate random data gathered in response to a fixed research question. Their predictive ability degrades rapidly when the selection of the research question is itself random, that is, left to the data. Operating like blind guides, these estimators mislead the medical community about what it would see in the population, based on sample observations. The result is a wavering research focus, leaping from one provocative but misleading finding to the next on the powerful waves of sampling error. A primary purpose of the prospective design is therefore to fix the research questions prospectively, thereby anchoring the analysis plan. Prospective statements of the research questions, and rejection of tempting data-based changes to the protocol, preserve the best estimates of effect sizes, standard errors, confidence intervals and p-values. Embracing these principles promotes the prosecution of a successful research program, that is, the construction and protection of a research environment that permits an objective assessment of the therapy or exposure being studied. If there is any fixed star in the research constellation, it is that sample-based research must be hypothesis-driven, and concordantly executed, to have real meaning for both the scientific community and the patient populations that we serve.

19.
Indirect estimators usually emerge from two-step optimization procedures. Each step in such a procedure may induce complexities in the asymptotic theory of the estimator. In this note, we are occupied with a simple example in which the estimator defined by inversion of the binding function has a 'discontinuous' limit theory even in cases where the auxiliary estimator does not. The example concerns estimation of the MA(1) parameter. The 'discontinuities' involve the dependence of the rate of convergence on the parameter, the non-continuity of the limit distribution with respect to the parameter, and the estimator's non-regularity. We also consider a more complex example in which the discontinuities arise from complexities induced at any step of the defining procedure. We present some Monte Carlo evidence on the quality of the approximations from the limit distributions.

20.
Partial compliance with assigned treatment regimes is common in drug trials and calls for a causal analysis of the effect of treatment actually received. As such observed exposure is no longer randomized, selection bias must be carefully accounted for. The framework of potential outcomes allows this by defining a subject-specific treatment-free reference outcome, which may be latent and is modelled in relation to the observed (treated) data. Causal parameters enter these structural models explicitly. In this paper we review recent progress in randomization-based inference for structural mean modelling, from the additive linear model to the structural generalized linear models. An arsenal of tools currently available for standard association regression has steadily been developed in the structural setting, providing many parallel features that aid randomization-based inference. We argue that measurement error on exposure is an important practical complication that has not yet been addressed. We show how standard additive linear structural mean models are robust against unbiased measurement error and how efficient, asymptotically unbiased inference can be drawn when the degree of measurement error bias is known. The impact of measurement error is illustrated in a blood pressure example, and finite-sample properties are verified by simulation. We end with a plea for wider and more careful use of this methodology and point to directions for further development.
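As a hedged sketch of randomization-based inference in the simplest additive linear structural mean model (closed-form G-estimation with a binary randomized arm; all data and names are simulated or invented): the causal parameter ψ is chosen so that the implied treatment-free outcome Y − ψZ has the same mean in both randomized arms.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
r = rng.integers(0, 2, n)                      # randomized assignment
u = rng.normal(size=n)                         # unmeasured confounder of compliance
z = (0.3 + 0.5 * r + 0.4 * u + rng.normal(size=n, scale=0.3) > 0.7).astype(float)
y = 1.5 * z + u + rng.normal(size=n)           # true causal effect psi = 1.5

# G-estimation: H(psi) = Y - psi*Z must be mean-independent of randomization,
# which here reduces to an instrumental-variable-type ratio
psi_hat = (y[r == 1].mean() - y[r == 0].mean()) / (z[r == 1].mean() - z[r == 0].mean())
print(f"G-estimate of psi: {psi_hat:.3f}  (truth = 1.5)")
```

A naive regression of y on z would be biased here because the unmeasured u drives both the exposure received and the outcome.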
