20 similar documents found; search took 31 ms
1.
Since 1996, south-eastern Australia has been experiencing a pertussis epidemic which has resulted in the deaths of several infants, including four from NSW in the 12 months to July 1997. All were less than six weeks of age and died from overwhelming cardiovascular compromise despite intensive care support. This excessive infant mortality from a preventable disease demonstrates the need for better pertussis immunity in the community and for erythromycin treatment of all suspected cases and family contacts, especially infants.
2.
Motivated by an example in nutritional epidemiology, we investigate some design and analysis aspects of linear measurement error models with missing surrogate data. The specific problem investigated consists of an initial large sample in which the response (a food frequency questionnaire, FFQ) is observed, followed by a smaller calibration study in which replicates of the error-prone predictor are observed (food records or recalls, FR). The difference between our analysis and most of the measurement error model literature is that, in our study, selection into the calibration study can depend on the value of the response. Rationale for this type of design is given. Two major problems are investigated. In the design of a calibration study, one has the option of larger sample sizes with fewer replicates or smaller sample sizes with more replicates. Somewhat surprisingly, neither strategy is uniformly preferable in cases of practical interest: the answer depends on the instrument used (recalls or records) and the parameters of interest. The second problem investigated is one of analysis. In the usual linear model with no missing data, method-of-moments estimates and normal-theory maximum likelihood estimates are approximately equivalent, with the former in most use because it can be calculated easily and explicitly; both estimates are valid without any distributional assumptions. In contrast, in the missing data problem under consideration, only the moments estimate is distribution-free, but the maximum likelihood estimate has at least 50% greater precision in practical situations when normality obtains. Implications for the design of nutritional calibration studies are discussed.
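The moments-based correction the abstract contrasts with maximum likelihood can be sketched as follows. This is a generic reliability-ratio correction under a classical additive error model with two replicates, not the paper's actual estimator, and all variable names and parameter values are illustrative.

```python
# Sketch of a method-of-moments attenuation correction, assuming two
# replicates W1, W2 of the error-prone predictor X (as with repeated food
# records) and a classical additive error model. Illustrative only.
import random
random.seed(0)

n = 20000
x  = [random.gauss(0.0, 1.0) for _ in range(n)]              # true intake
w1 = [xi + random.gauss(0.0, 0.8) for xi in x]               # replicate 1
w2 = [xi + random.gauss(0.0, 0.8) for xi in x]               # replicate 2
y  = [2.0 + 1.5 * xi + random.gauss(0.0, 0.5) for xi in x]   # response

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

wbar = [(a + b) / 2 for a, b in zip(w1, w2)]
naive = cov(wbar, y) / cov(wbar, wbar)       # attenuated (naive) slope

# var(W1 - W2) = 2 * sigma_u^2, and the mean of two replicates
# carries sigma_u^2 / 2 of measurement-error variance.
d = [a - b for a, b in zip(w1, w2)]
sigma_u2 = cov(d, d) / 2
lam = (cov(wbar, wbar) - sigma_u2 / 2) / cov(wbar, wbar)   # reliability ratio
corrected = naive / lam                                    # de-attenuated slope
```

With these simulated values the naive slope is pulled toward zero by the reliability ratio, and dividing it back out recovers the true slope of 1.5.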
3.
Statistical methodology is presented for the analysis of non-linear measurement error models. Our approach is to provide adjustments for the usual maximum likelihood estimators, their standard errors and associated significance tests in order to account for the presence of measurement error in some of the covariates. We illustrate the technique with a mixed effects Poisson regression model for recurrent event data, applied to a randomized clinical trial for the prevention of skin tumours.
4.
TR Ten Have, AR Kunselman, EP Pulkstenis, JR Landis. Biometrics, 1998, 54(1): 367-383
A shared parameter model with logistic link is presented for longitudinal binary response data to accommodate informative drop-out. The model consists of observed longitudinal and missing response components that share random effects parameters. To our knowledge, this is the first presentation of such a model for longitudinal binary response data. Comparisons are made to an approximate conditional logit model using a clinical trial dataset and simulations. The naive mixed effects logit model, which does not account for informative drop-out, is also compared. The simulation-based differences among the models with respect to coverage of confidence intervals, bias, and mean squared error (MSE) depend on at least two factors: whether an effect is a between- or within-subject effect, and the amount of between-subject variation as exhibited by the variance components of the random effects distributions. When the shared parameter model holds, the approximate conditional model provides confidence intervals with good coverage for within-cluster factors but not for between-cluster factors; the converse is true for the naive model. Under a different drop-out mechanism, when the probability of drop-out depends only on the current unobserved observation, all three models behave similarly by providing between-subject confidence intervals with good coverage and comparable MSE and bias but poor within-subject confidence intervals, MSE, and bias. The naive model does more poorly with respect to the within-subject effects than do the shared parameter and approximate conditional models. The data analysis, which entails a comparison of two pain relievers and a placebo with respect to pain relief, conforms to the simulation results based on the shared parameter model but not to those based on the outcome-driven drop-out process. This comparison between the data analysis and simulation results may provide evidence that the shared parameter model holds for the pain data.
5.
Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine
Minimal measurement error (reliability) during the collection of interval- and ratio-type data is critically important to sports medicine research. The main components of measurement error are systematic bias (e.g. general learning or fatigue effects on the tests) and random error due to biological or mechanical variation. Both error components should be meaningfully quantified for the sports physician to relate the described error to judgements regarding 'analytical goals' (the requirements of the measurement tool for effective practical use) rather than the statistical significance of any reliability indicators. Methods based on correlation coefficients and regression provide an indication of 'relative reliability'. Since these methods are highly influenced by the range of measured values, researchers should be cautious in: (i) concluding acceptable relative reliability even if a correlation is above 0.9; (ii) extrapolating the results of a test-retest correlation to a new sample of individuals involved in an experiment; and (iii) comparing test-retest correlations between different reliability studies. Methods used to describe 'absolute reliability' include the standard error of measurement (SEM), coefficient of variation (CV) and limits of agreement (LOA). These statistics are more appropriate for comparing reliability between different measurement tools in different studies. They can be used in multiple retest studies from ANOVA procedures, help predict the magnitude of a 'real' change in individual athletes and be employed to estimate statistical power for a repeated-measures experiment. These methods vary considerably in the way they are calculated and their use also assumes the presence (CV) or absence (SEM) of heteroscedasticity. Most methods of calculating SEM and CV represent approximately 68% of the error that is actually present in the repeated measurements for the 'average' individual in the sample. LOA represent the test-retest differences for 95% of a population. The associated Bland-Altman plot shows the measurement error schematically and helps to identify the presence of heteroscedasticity. If there is evidence of heteroscedasticity or non-normality, one should logarithmically transform the data and quote the bias and random error as ratios. This allows simple comparisons of reliability across different measurement tools. It is recommended that sports clinicians and researchers should cite and interpret a number of statistical methods for assessing reliability. We encourage the inclusion of the LOA method, especially the exploration of heteroscedasticity that is inherent in this analysis. We also stress the importance of relating the results of any reliability statistic to 'analytical goals' in sports medicine.
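The three 'absolute reliability' statistics discussed above can be illustrated on a small test-retest data set. The data values below are invented; the SEM-from-difference-scores relation (SEM = SD of differences divided by the square root of 2) and the 1.96 multiplier for the 95% limits of agreement are standard.

```python
# Minimal sketch of the absolute-reliability statistics (SEM, CV, LOA)
# computed from hypothetical test-retest data; values are illustrative only.
import math
import statistics

test1 = [52.1, 48.3, 60.5, 55.2, 49.9, 58.7, 53.4, 50.8]   # trial 1
test2 = [53.0, 47.5, 61.8, 54.1, 50.6, 59.9, 52.2, 51.5]   # trial 2

diffs = [b - a for a, b in zip(test1, test2)]
bias  = statistics.mean(diffs)      # systematic bias (learning/fatigue effect)
sd_d  = statistics.stdev(diffs)     # random error component

sem = sd_d / math.sqrt(2)                       # standard error of measurement
grand_mean = statistics.mean(test1 + test2)
cv  = 100 * sem / grand_mean                    # coefficient of variation, %

loa_lower = bias - 1.96 * sd_d                  # 95% limits of agreement
loa_upper = bias + 1.96 * sd_d
```

Plotting each subject's difference against their mean (the Bland-Altman plot mentioned above) would then show whether the spread of the differences grows with the measured value, i.e. whether heteroscedasticity is present.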
6.
Computer-aided rolling-technology design is presented in the paper. The program is based on the solution of a set of non-linear equations that describe the process. These include continuity equations and power-balance equations for continuous rolling; reverse rolling is described by constant-rolling-force equations and power-balance equations for both the main drive and the reel drives. Typical results of calculations for hot and cold rolling processes are presented.
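One of the constraints such a program must satisfy, the continuity (constant volume flow) condition, can be sketched in a few lines: in a continuous train the product of thickness and speed is constant, so stand exit speeds follow from the thickness schedule. The gauges and entry speed below are illustrative, not from the paper.

```python
# Continuity condition for a continuous rolling train: h * v = const,
# so each stand's exit speed follows from its exit gauge.
# All numerical values are hypothetical.
entry_thickness, entry_speed = 25.0, 1.2          # mm, m/s
exit_thicknesses = [18.0, 12.5, 8.0, 5.0, 3.2]    # exit gauge per stand, mm

flow = entry_thickness * entry_speed              # constant volume flow, mm*m/s
speeds = [flow / h for h in exit_thicknesses]     # exit speed per stand, m/s
```

The full design problem described in the abstract couples equations like this with power-balance equations into one non-linear system solved simultaneously.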
7.
It is commonly thought that structural equation modeling corrects estimated relationships among latent variables for the biasing effects of measurement error. The purpose of this article is to review the manner in which structural equation models control for measurement error and to demonstrate the conditions in which structural equation models do and do not correct for unreliability. Generalizability theory is used to demonstrate that there are multiple sources of error in most measurement systems and that applications of structural equation modeling rarely account for more than a single source of error. As a result, the parameter estimates in a structural equation model may be severely biased by unassessed sources of measurement error. Recommendations for modeling multiple sources of error in structural equation models are provided. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
8.
Describes how an apparent contradiction between the methods of coding dummy variables proposed by J. Cohen (see record 1969-06106-001) and those by J. Overall and D. Spiegel (see record 1970-01534-001) led to the discovery of a general formula for such coding, based on demonstrating a theoretical connection between multiple comparisons and dummy-variable multiple regression. Examples are given for various cases of orthogonal and nonorthogonal designs, which explicitly include assumptions about sample size.
9.
Werts C. E.; Rock D. A.; Linn R. L.; Joreskog K. G. Psychological Bulletin, 1976, 83(6): 1007
A maximum likelihood procedure for testing the equality of sets of variances, covariances, correlations, and regression weights between and/or within populations is demonstrated. The procedure is an application of K. G. Joreskog's (see record 1972-09999-001) general factor-analytic model for simultaneous factor analysis in several populations.
10.
Unreliability of measures produces bias in regression coefficients. Such measurement error is particularly problematic with the use of product terms in multiple regression because the reliability of a product term is generally quite low relative to that of its component parts. The use of confirmatory factor analysis as a means of dealing with the problem of unreliability was explored in a simulation study. The design compared traditional regression analysis (which ignores measurement error) with approaches based on latent variable structural equation models that used maximum-likelihood and weighted least squares estimation criteria. The results showed that the latent variable approach coupled with maximum-likelihood estimation methods did a satisfactory job of interaction analysis in the presence of measurement error in terms of Type I and Type II errors.
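The claim that a product term is less reliable than its components can be illustrated numerically. The formula below is a widely quoted classical approximation for the reliability of the product of two standardized, mean-centered variables (often attributed to Bohrnstedt and Marwell); it is an assumption here, used only to show the attenuation, and the input values are hypothetical.

```python
# Hedged sketch: approximate reliability of a product term XZ for
# standardized X and Z with correlation rho_xz and component reliabilities
# rel_x, rel_z. The formula is a classical approximation, not from the paper.
def product_reliability(rel_x, rel_z, rho_xz):
    return (rho_xz**2 + rel_x * rel_z) / (rho_xz**2 + 1.0)

# Two respectable scales (reliability 0.80) with a modest correlation of 0.30:
rel_xz = product_reliability(0.80, 0.80, 0.30)
```

Even with both components at 0.80, the interaction term's reliability drops to about 0.67, which is why the simulation study above finds that ignoring measurement error is most damaging for product terms.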
11.
In many studies included in meta-analyses, the independent variable measure, the dependent variable measure, or both, have been artificially dichotomized, attenuating the correlation from its true value and resulting in (a) a downward distortion in the mean correlation and (b) an upward distortion in the apparent real variation of correlations across studies. We present (a) exact corrections for this distortion for the case in which only one of the variables has been dichotomized and (b) methods for making approximate corrections when both variables have been artificially dichotomized. These approximate corrections are shown to be quite accurate for most research data. Methods for weighting the resulting corrected correlations in meta-analysis are presented. These corrections make it possible for meta-analysis to yield approximately unbiased estimates of mean population correlations and their standard deviations despite the initial distortion in the correlations from individual studies.
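The single-variable exact correction described above is the classical biserial correction: dichotomizing a normal variable at a cut point c, leaving a proportion p in the upper group, attenuates r by the factor phi(c)/sqrt(p*q), and dividing the observed r by that factor undoes the distortion. A minimal sketch, assuming underlying bivariate normality (the illustrative numbers are not from the paper):

```python
# Correction of r for artificial dichotomization of one variable
# (classical biserial correction; assumes bivariate normality).
import math

def normal_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def undichotomize(r_obs, p):
    """Correct r_obs when one variable was split at proportion p above the cut."""
    q = 1.0 - p
    # Find the cut point c with P(Z > c) = p by bisection on the normal CDF.
    lo, hi = -8.0, 8.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:  # survival function at mid
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2
    attenuation = normal_pdf(c) / math.sqrt(p * q)   # always < 1
    return r_obs / attenuation

# A median split (p = 0.5) of one variable: observed r = 0.30.
r_corrected = undichotomize(0.30, 0.50)
```

For a median split the attenuation factor is phi(0)/0.5 ≈ 0.798, so an observed r of 0.30 corrects upward to about 0.376, which is the direction of distortion the abstract describes.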
12.
13.
Repeated measures data often occur in practice. This has led to considerable progress in the development of methods for inference in models for such data. In this paper, projection methods are proposed for examining goodness-of-fit in regression models for repeated measures. Rao's (1959, Biometrika 46, 49-58) F-test for testing a postulated mean structure using an independent, identically normally distributed random sample is extended to a broad class of models including both fixed and random effects. The paper also shows how projection methods may be utilized for checking multivariate normality. In addition, application of projection to test the adequacy of extremely unbalanced models is considered. Two examples are given to demonstrate the underlying techniques.
14.
15.
16.
In the analysis of binary data from clustered and longitudinal studies, random-effects models have recently been developed to accommodate two-level problems such as subjects nested within clusters or repeated classifications within subjects. Unfortunately, these models cannot be applied to three-level problems that occur frequently in practice. For example, multicenter longitudinal clinical trials involve repeated assessments within individuals, and individuals are nested within study centers. This combination of clustered and longitudinal data represents the classic three-level problem in biometry. Similarly, in prevention studies, various educational programs designed to minimize risk-taking behavior (e.g., smoking prevention and cessation) may be compared, where randomization to the various design conditions is at the level of the school and the intervention is performed at the level of the classroom. Previous statistical approaches to the three-level problem for binary response data have either ignored one level of nesting, treated it as a fixed effect, or used first- and second-order Taylor series expansions of the logarithm of the conditional likelihood to linearize these models and estimate model parameters using more conventional procedures for measurement data. Recent studies indicate that these approximate solutions exhibit considerable bias and provide little advantage over traditional logistic regression analysis ignoring the hierarchical structure. In this paper, we generalize earlier results for two-level random-effects probit and logistic regression models to the three-level case. Parameter estimation is based on full-information maximum marginal likelihood estimation (MMLE) using numerical quadrature to approximate the multiple random effects. The model is illustrated using data from 135 classrooms in 28 schools on the effects of two smoking cessation interventions.
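The core numerical step in such models, integrating a random effect out of the cluster likelihood by quadrature, can be sketched for the simplest two-level random-intercept case. This is not the authors' MMLE implementation: it uses a plain grid over the normal density rather than Gauss-Hermite nodes, drops covariates, and all parameter values are illustrative.

```python
# Illustrative sketch: marginal likelihood of one cluster's binary responses
# under a random-intercept logit model, integrating the N(0, sigma^2)
# intercept u out numerically on a grid. Not the paper's quadrature scheme.
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def marginal_likelihood(y_cluster, beta0, sigma, n_points=201, width=6.0):
    """P(y_1..y_n for one cluster), grid integration over u in
    [-width*sigma, +width*sigma]."""
    total = 0.0
    step = 2 * width * sigma / (n_points - 1)
    for k in range(n_points):
        u = -width * sigma + k * step
        dens = math.exp(-u * u / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        p = sigmoid(beta0 + u)            # same intercept shared within cluster
        lik = 1.0
        for y in y_cluster:
            lik *= p if y == 1 else 1.0 - p
        total += lik * dens * step
    return total

# One cluster with responses (1, 0, 1), intercept 0, random-effect SD 1:
p_cluster = marginal_likelihood([1, 0, 1], beta0=0.0, sigma=1.0)
```

The three-level model in the paper nests a second such integral (e.g. classroom within school) inside the first, which is why efficient quadrature matters there.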
17.
MacMahon et al. present a meta-analysis of the effect of blood pressure on coronary heart disease, as well as new methods for estimation in measurement error models for the case when a replicate or second measurement is made of the fallible predictor. The correction for attenuation used by these authors is compared to others already existing in the literature, as well as to a new instrumental variable method. The assumptions justifying the various methods are examined and their efficiencies are studied via simulation. Compared to the methods we discuss, the method of MacMahon et al. may have bias in some circumstances because it does not take into account: (i) possible correlations among the predictors within a study; (ii) possible bias in the second measurement; or (iii) possibly differing marginal distributions of the predictors or measurement errors across studies. A unifying asymptotic theory using estimating equations is also presented.
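An instrumental-variable correction of the general kind compared in the paper can be sketched as follows: with two error-prone readings W1, W2 of true blood pressure X whose errors are independent, W2 serves as an instrument for W1, and cov(W2, Y)/cov(W2, W1) is consistent for the true slope. This is a generic textbook estimator, not necessarily the paper's; all values are simulated for illustration.

```python
# Instrumental-variable de-attenuation with a replicate measurement.
# Simulated data; parameter values are illustrative.
import random
random.seed(1)

n = 20000
x  = [random.gauss(120.0, 15.0) for _ in range(n)]        # true SBP, mmHg
w1 = [xi + random.gauss(0.0, 10.0) for xi in x]           # first reading
w2 = [xi + random.gauss(0.0, 10.0) for xi in x]           # replicate reading
y  = [0.02 * xi + random.gauss(0.0, 1.0) for xi in x]     # outcome score

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

naive_slope = cov(w1, y) / cov(w1, w1)   # attenuated by measurement error
iv_slope    = cov(w2, y) / cov(w2, w1)   # replicate used as instrument
```

Note that assumption (ii) flagged in the abstract matters here: if the second reading carries a systematic bias correlated with the error in the first, the instrument is invalid and the IV estimate is biased too.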
18.
To achieve continuous control of the steel strip surface, in particular its roughness (Ra), and thereby improve the strip's surface properties, the metallurgical research centre CRM developed a surface roughness sensor (SRM), which was industrialized at AMEPA. The SRM measures roughness parameters over the entire strip on various industrial lines. The chosen measurement method is based on the triangulation principle: a very fine line is projected onto the surface, and the surface relief is determined by analysing the deformation of that line. The method was validated by comparison against reference measurements with a mechanical stylus, with SRM and stylus agreeing to within +/-10%. SRMs have been installed on a variety of lines and products: continuous annealing lines, stretching lines (oiled surfaces), galvanizing lines (highly reflective surfaces) and roll shops (highly reflective, rough surfaces). Coated, uncoated, stochastic (e.g. EDT) and deterministic (e.g. EBT) surfaces are all covered. The result is an on-line sensor for topography measurement. The sensor recently entered routine production use at several European companies and on other lines; four sensors have been installed on a continuous galvanizing line and a continuous annealing line at a Chinese company. This helps in understanding how different production parameters affect the strip surface and may enable direct process control.
19.
20.
For diseases with a genetic component, logistic regression models are presented that incorporate family history in a quantitative way. In the largest model, every type of relative has its own regression coefficient. The other two models are submodels, which incorporate family history either as the number of cases in the family minus its expectation or as a weighted number of cases in the family minus its expectation. For various genetic effects, namely polygenic and autosomal dominant effects, the performance of these simple logistic models is studied. First, the predictive values of the logistic and true genetic models are computed and compared. Secondly, a simulation study is carried out to investigate the effects of estimation of the parameters in a small data set. Thirdly, the logistic models are fitted to a data set of Von Willebrand Factor responses of target individuals and their families; in these models, family history has a significant effect. The conclusion is that for the genetic effects considered the logistic models perform well.
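The 'observed minus expected' family history covariate used by the middle submodel can be sketched in a few lines; the prevalence, counts, and regression coefficients below are purely hypothetical, chosen only to show how the score enters a logistic linear predictor.

```python
# Hedged sketch of the 'cases minus expectation' family history covariate
# and its use in a logistic risk model. All numbers are illustrative.
import math

def family_history_score(n_affected, n_relatives, prevalence):
    # observed number of affected relatives minus its expectation
    # under the population prevalence
    return n_affected - n_relatives * prevalence

score = family_history_score(2, 6, 0.10)   # 2 cases among 6 relatives

beta0, beta_fh = -2.0, 0.5                 # hypothetical coefficients
risk = 1 / (1 + math.exp(-(beta0 + beta_fh * score)))
```

The weighted submodel in the abstract differs only in that each relative's contribution to the count is weighted (e.g. by degree of relatedness) before the expectation is subtracted.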