Similar Articles
Found 20 similar articles (search time: 309 ms)
1.
Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for synthesizing correlations, weighted-covariance GLS (W-COV GLS), was compared with univariate weighting with untransformed correlations (univariate r) and univariate weighting with Fisher's z-transformed correlations (univariate z). These 3 methods were crossed with listwise and pairwise deletion. Univariate z and W-COV GLS performed similarly, with W-COV GLS providing slightly better estimation of parameters and more correct model rejection rates. Missing not at random data produced high levels of relative bias in correlation and model parameter estimates and higher incorrect SEM model rejection rates. Pairwise deletion resulted in inflated standard errors for all synthesis methods and higher incorrect rejection rates for the SEM model with univariate weighting procedures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
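The univariate-z synthesis described above can be sketched in a few lines: each study's correlation is transformed to Fisher's z, weighted by n - 3 (the inverse of the variance of z), averaged, and back-transformed. This is a minimal illustration of the general technique, not the W-COV GLS procedure from the article; the function name is ours.

```python
import numpy as np

def pool_correlations_fisher_z(rs, ns):
    """Pool study correlations via Fisher's z transform,
    weighting each study by n - 3 (inverse variance of z)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)                   # Fisher's z transform
    w = ns - 3.0                          # inverse-variance weights
    z_bar = np.sum(w * zs) / np.sum(w)
    se_z = 1.0 / np.sqrt(np.sum(w))      # SE of the pooled z
    return np.tanh(z_bar), se_z          # back-transform to r

# Example: three studies reporting correlations of the same effect
r_pooled, se = pool_correlations_fisher_z([0.30, 0.25, 0.35], [100, 150, 80])
```

Note that the standard error applies on the z scale; a confidence interval is usually built for z and then back-transformed endpoint by endpoint.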

2.
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or expected information are consistent. The situation changes with incomplete data. When the data are missing at random (MAR), standard errors based on expected information are not consistent, and observed information should be used. A lesser-known fact is that in the presence of nonnormality, the estimated information matrix also enters the robust computations (both standard errors and the test statistic). Thus, with MAR nonnormal data, the use of the expected information matrix can potentially lead to incorrect robust computations. This article summarizes the results of 2 simulation studies that investigated the effect of using observed versus expected information estimates of standard errors and test statistics with normal and nonnormal incomplete data. Observed information is preferred across all conditions. Recommendations to researchers and software developers are outlined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
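The observed information matrix is simply the negative Hessian of the log-likelihood evaluated at the estimates. A generic sketch, assuming a normal model with a (mu, log sigma) parameterization purely for illustration: the Hessian is approximated by central differences and inverted to obtain standard errors. For the normal mean, the resulting SE should match the familiar sigma-hat / sqrt(n).

```python
import numpy as np

def loglik_normal(params, x):
    """Normal log-likelihood, parameterized as (mu, log_sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return np.sum(-0.5 * np.log(2 * np.pi) - log_sigma
                  - 0.5 * ((x - mu) / sigma) ** 2)

def observed_info_se(loglik, params, x, eps=1e-5):
    """SEs from the observed information matrix: the negative
    numerical Hessian of the log-likelihood at the estimates."""
    k = len(params)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            p = np.array(params, float)
            def f(di, dj):
                q = p.copy()
                q[i] += di
                q[j] += dj
                return loglik(q, x)
            # central-difference approximation to d2 l / dp_i dp_j
            H[i, j] = (f(eps, eps) - f(eps, -eps)
                       - f(-eps, eps) + f(-eps, -eps)) / (4 * eps ** 2)
    return np.sqrt(np.diag(np.linalg.inv(-H)))

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=500)
mle = np.array([x.mean(), np.log(x.std())])   # closed-form normal MLE
se_mu, se_log_sigma = observed_info_se(loglik_normal, mle, x)
# se_mu approximates sigma_hat / sqrt(n)
```

With complete normal data, observed and expected information agree at the MLE; the article's point is that with MAR incomplete data they diverge, and only the observed version (computed from the actually observed log-likelihood, as above) gives consistent standard errors.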

3.
Average change in list recall was evaluated as a function of missing data treatment (Study 1) and dropout status (Study 2) over ages 70 to 105 in Asset and Health Dynamics of the Oldest-Old data. In Study 1 the authors compared results of full-information maximum likelihood (FIML) and the multiple imputation (MI) missing-data treatments with and without independent predictors of missingness. Results showed declines in all treatments, but declines were larger for FIML and MI treatments when predictors were included in the treatment of missing data, indicating that attrition bias was reduced. In Study 2, models that included dropout status had better fits and reduced random variance compared with models without dropout status. The authors conclude that change estimates are most accurate when independent predictors of missingness are included in the treatment of missing data with either MI or FIML and when dropout effects are modeled. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
The past decade has seen a noticeable shift in missing data handling techniques that assume a missing at random (MAR) mechanism, where the propensity for missing data on an outcome is related to other analysis variables. Although MAR is often reasonable, there are situations where this assumption is unlikely to hold, leading to biased parameter estimates. One such example is a longitudinal study of substance use where participants with the highest frequency of use also have the highest likelihood of attrition, even after controlling for other correlates of missingness. There is a large body of literature on missing not at random (MNAR) analysis models for longitudinal data, particularly in the field of biostatistics. Because these methods allow for a relationship between the outcome variable and the propensity for missing data, they require a weaker assumption about the missing data mechanism. This article describes 2 classic MNAR modeling approaches for longitudinal data: the selection model and the pattern mixture model. To date, these models have been slow to migrate to the social sciences, in part because they required complicated custom computer programs. These models are now quite easy to estimate in popular structural equation modeling programs, particularly Mplus. The purpose of this article is to describe these MNAR modeling frameworks and to illustrate their application on a real data set. Despite their potential advantages, MNAR-based analyses are not without problems and also rely on untestable assumptions. This article offers practical advice for implementing and choosing among different longitudinal models. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
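The pattern-mixture idea can be shown in its most stripped-down form: estimate the overall mean as a mixture of pattern-specific means, where the dropouts' unobserved mean is identified only by assumption. The delta offset below is a sensitivity parameter, not something estimable from the data; the function and its inputs are illustrative, not the article's models.

```python
import numpy as np

def pattern_mixture_mean(y_completers, n_dropouts, delta):
    """Pattern-mixture estimate of an outcome mean.
    The dropouts' unobserved mean is identified only by assumption:
    here, completer mean + delta (an untestable MNAR sensitivity
    parameter; delta = 0 reduces to the MAR-like answer)."""
    n_c = len(y_completers)
    pi_d = n_dropouts / (n_c + n_dropouts)   # dropout proportion
    mu_c = np.mean(y_completers)
    mu_d = mu_c + delta                      # assumed dropout mean
    return (1 - pi_d) * mu_c + pi_d * mu_d

# Sensitivity analysis: sweep delta over plausible values
y_obs = [3.1, 2.8, 3.5, 3.0, 2.9]
estimates = {d: pattern_mixture_mean(y_obs, n_dropouts=5, delta=d)
             for d in (0.0, 0.5, 1.0)}
```

Reporting the estimate across a range of delta values, rather than a single number, is the usual way to acknowledge that the MNAR assumption cannot be tested.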

5.
A 2-step approach for obtaining internal consistency reliability estimates with item-level missing data is outlined. In the 1st step, a covariance matrix and mean vector are obtained using the expectation maximization (EM) algorithm. In the 2nd step, reliability analyses are carried out in the usual fashion using the EM covariance matrix as input. A Monte Carlo simulation examined the impact of 6 variables (scale length, response categories, item correlations, sample size, missing data, and missing data technique) on 3 different outcomes: estimation bias, mean errors, and confidence interval coverage. The 2-step approach using EM consistently yielded the most accurate reliability estimates and produced coverage rates close to the advertised 95% rate. An easy method of implementing the procedure is outlined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
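The second step above is straightforward once a covariance matrix is in hand: Cronbach's alpha depends on the data only through the item covariance matrix. A minimal sketch (the EM step itself is omitted; any k x k covariance matrix, EM-estimated or otherwise, works as input):

```python
import numpy as np

def cronbach_alpha_from_cov(S):
    """Cronbach's alpha computed from an item covariance matrix S.
    In the 2-step approach, S would be the EM-estimated covariance
    matrix obtained in step 1."""
    S = np.asarray(S, float)
    k = S.shape[0]
    return (k / (k - 1)) * (1 - np.trace(S) / S.sum())

# 3 items, unit variances, all inter-item covariances 0.5
S = np.full((3, 3), 0.5)
np.fill_diagonal(S, 1.0)
alpha = cronbach_alpha_from_cov(S)   # (3/2) * (1 - 3/6) = 0.75
```

Because alpha is a function of the covariance matrix alone, plugging in the EM estimate makes use of all observed item responses, unlike listwise deletion.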

6.
As with other statistical methods, missing data often create major problems for the estimation of structural equation models (SEMs). Conventional methods such as listwise or pairwise deletion generally do a poor job of using all the available information. However, structural equation modelers are fortunate that many programs for estimating SEMs now have maximum likelihood methods for handling missing data in an optimal fashion. In addition to maximum likelihood, this article also discusses multiple imputation. This method has statistical properties that are almost as good as those for maximum likelihood and can be applied to a much wider array of models and estimation methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This paper considers five methods of analysis of longitudinal assessment of health-related quality of life (QOL) in two clinical trials of cancer therapy. The primary difference in the two trials is the proportion of participants who experience disease progression or death during the period of QOL assessments. The sensitivity of parameter estimates and hypothesis tests to the potential bias arising from the assumptions of missing completely at random (MCAR), missing at random (MAR), and non-ignorable mechanisms is examined. The methods include complete case analysis (MCAR), mixed-effects models (MAR), a joint mixed-effects and survival model and a pattern-mixture model. Complete case analysis overestimated QOL in both trials. In the adjuvant breast cancer trial, with 15 per cent disease progression, estimates were consistent across the remaining four methods. In the advanced non-small-cell lung cancer trial, with 35 per cent mortality, estimates were sensitive to the missing data assumptions and methods of analysis.

8.
Traditional approaches to missing data (e.g., listwise deletion) can lead to less than optimal results in terms of bias, statistical power, or both. This article introduces the 3 articles in the special section of Psychological Methods, which consider multiple imputation and maximum-likelihood methods, new approaches to missing data that can often yield improved results. Computer software is now available to implement these new methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, rc, has been recommended, and a standard formula based on asymptotic results for estimating its standard error is also available. In the present study, the bootstrap standard-error estimate is proposed as an alternative. Monte Carlo simulation studies involving both normal and nonnormal data were conducted to examine the empirical performance of the proposed procedure under different levels of ρ, selection ratio, sample size, and truncation types. Results indicated that, with normal data, the bootstrap standard-error estimate is more accurate than the traditional estimate, particularly with small sample size. With nonnormal data, performance of both estimates depends critically on the distribution type. Furthermore, the bootstrap bias-corrected and accelerated interval consistently provided the most accurate coverage probability for ρ. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
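A sketch of the two ingredients, assuming the common Thorndike Case II correction for direct range restriction on the predictor (the article does not specify which correction formula it uses, so treat this as one standard choice). The bootstrap SE resamples (x, y) pairs, corrects each replicate, and takes the standard deviation.

```python
import numpy as np

def correct_range_restriction(r, u):
    """Thorndike Case II correction: r is the restricted-sample
    correlation; u = SD(unrestricted) / SD(restricted) of the predictor."""
    return u * r / np.sqrt(1 + r ** 2 * (u ** 2 - 1))

def bootstrap_se_rc(x, y, u, n_boot=2000, seed=0):
    """Bootstrap SE of the corrected correlation: resample pairs,
    correct each replicate, return the SD of the replicates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        r_b = np.corrcoef(x[idx], y[idx])[0, 1]
        reps[b] = correct_range_restriction(r_b, u)
    return reps.std(ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(size=120)
y = 0.5 * x + rng.normal(scale=0.8, size=120)
se_rc = bootstrap_se_rc(x, y, u=1.5, n_boot=500)
```

The bias-corrected and accelerated (BCa) interval the article recommends would be built from the same replicate distribution, with additional bias and acceleration adjustments to the percentile endpoints.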

10.
The performance of parameter estimates and standard errors in estimating F. Samejima's graded response model was examined across 324 conditions. Full information maximum likelihood (FIML) was compared with a 3-stage estimator for categorical item factor analysis (CIFA) when the unweighted least squares method was used in CIFA's third stage. CIFA is much faster in estimating multidimensional models, particularly with correlated dimensions. Overall, CIFA yields slightly more accurate parameter estimates, and FIML yields slightly more accurate standard errors. Yet, across most conditions, differences between methods are negligible. FIML is the best choice in small sample sizes (200 observations). CIFA is the best choice in larger samples (on computational grounds). Both methods failed in a number of conditions, most of which involved 200 observations, few indicators per dimension, highly skewed items, or low factor loadings. These conditions are to be avoided in applications. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model, nonnormal continuous measures, and nonlinear relationships among observed and/or latent variables. When the objective of a SEMM analysis is the identification of latent classes, these conditions should be considered as alternative hypotheses and results should be interpreted cautiously. However, armed with greater knowledge about the estimation of SEMMs in practice, researchers can exploit the flexibility of the model to gain a fuller understanding of the phenomenon under study. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Unreliability of measures produces bias in regression coefficients. Such measurement error is particularly problematic with the use of product terms in multiple regression because the reliability of a product term is generally quite low relative to that of its components. The use of confirmatory factor analysis as a means of dealing with the problem of unreliability was explored in a simulation study. The design compared traditional regression analysis (which ignores measurement error) with approaches based on latent variable structural equation models that used maximum-likelihood and weighted least squares estimation criteria. The results showed that the latent variable approach coupled with maximum-likelihood estimation methods did a satisfactory job of interaction analysis in the presence of measurement error in terms of Type I and Type II errors. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
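The attenuation bias that motivates this line of work is easy to demonstrate: when a predictor is measured with error, the OLS slope shrinks toward beta times the predictor's reliability. A minimal simulation sketch (sample size, reliability, and noise levels are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 200_000, 1.0
rel = 0.6                                    # reliability of the observed predictor

xi = rng.normal(size=n)                      # true (latent) predictor, variance 1
err = rng.normal(scale=np.sqrt((1 - rel) / rel), size=n)
x_obs = xi + err                             # observed predictor; var(xi)/var(x_obs) = rel

y = beta * xi + rng.normal(scale=0.5, size=n)
slope_obs = np.polyfit(x_obs, y, 1)[0]       # attenuated toward beta * rel = 0.6
```

For a product term, the same logic applies but the product's reliability is roughly the product of its components' reliabilities (under independence), which is why interaction coefficients are hit especially hard.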

13.
OBJECTIVE: To develop new approaches for evaluating results obtained from simulation studies used to determine sampling strategies for efficient estimation of population pharmacokinetic parameters. METHODS: One-compartment kinetics with intravenous bolus injection was assumed and the simulated data (one observation made on each experimental unit [human subject or animal]) were analyzed using NONMEM. Several approaches were used to judge the efficiency of parameter estimation. These included: (1) individual and joint confidence intervals (CIs) coverage for parameter estimates that were computed in a manner that would reveal the influence of bias and standard error (SE) on interval estimates; (2) percent prediction error (%PE) approach; (3) the incidence of high pair-wise correlations; and (4) a design number approach. The design number (phi) is a new statistic that provides a composite measure of accuracy and precision (using SE). RESULTS: The %PE approach is useful only in examining the efficiency of estimation of a parameter considered independently. The joint CI coverage approach permitted assessment of the accuracy and reliability of all model parameter estimates. The phi approach is an efficient method of achieving an accurate estimate of parameter(s) with good precision. Both the phi for individual parameter estimation and the overall phi for the estimation of model parameters led to optimal experimental design. CONCLUSIONS: Application of these approaches to the analyses of the results of the study was found useful in determining the best sampling design (from a series of two-sampling-time designs within a study) for efficient estimation of population pharmacokinetic parameters.

14.
Although researchers in clinical psychology routinely gather data in which many individuals respond at multiple times, there is not a standard way to analyze such data. A new approach for the analysis of such data is described. It is proposed that a person's current standing on a variable is caused by 3 sources of variance: a term that does not change (trait), a term that changes (state), and a random term (error). It is shown how structural equation modeling can be used to estimate such a model. An extended example is presented in which the correlations between variables are quite different at the trait, state, and error levels. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
16.
The simplex and common-factor models of drug use were compared using maximum-likelihood estimation of latent variable structural models in two samples: a sample of 226 high school students, using ratio-scale measures of current drug use, and a sample of 310 industrial workers and 811 college students, using ordinal-scale measures of current drug use. Latent variables of alcohol, marihuana, enhancer hard drugs, and dampener hard drugs were specified in a series of structural models. Contrary to previous findings with cumulative drug-use data, the common-factor model provided a more acceptable representation of the observed current-use data than did the simplex model in both samples. In addition, the similarity of results across both of these samples supports recent contentions by Huba and Bentler (1982) that quantitatively measured variables are not necessarily superior to qualitative, ordinal indicators in latent variable models of drug use. (49 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Interactions between (multiple indicator) latent variables are rarely used because of implementation complexity and competing strategies. Based on 4 simulation studies, the traditional constrained approach performed more poorly than did 3 new approaches: unconstrained, generalized appended product indicator, and quasi-maximum-likelihood (QML). The authors' new unconstrained approach was easiest to apply. All 4 approaches were relatively unbiased for normally distributed indicators, but the constrained and QML approaches were more biased for nonnormal data; the size and direction of the bias varied with the distribution but not with the sample size. QML had more power, but this advantage was qualified by consistently higher Type I error rates. The authors also compared general strategies for defining product indicators to represent the latent interaction factor. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The use of multiple imputation for the analysis of missing data.   Total citations: 1 (self-citations: 0, citations by others: 1)
This article provides a comprehensive review of multiple imputation (MI), a technique for analyzing data sets with missing values. Formally, MI is the process of replacing each missing data point with a set of m > 1 plausible values to generate m complete data sets. These complete data sets are then analyzed by standard statistical software, and the results combined, to give parameter estimates and standard errors that take into account the uncertainty due to the missing data values. This article introduces the idea behind MI, discusses the advantages of MI over existing techniques for addressing missing data, describes how to do MI for real problems, reviews the software available to implement MI, and discusses the results of a simulation study aimed at finding out how assumptions regarding the imputation model affect the parameter estimates provided by MI. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
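The combining step described above is Rubin's rules: average the m point estimates, and add the within-imputation variance to the between-imputation variance (inflated by 1 + 1/m) to get a total variance that reflects missing-data uncertainty. A minimal sketch:

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Combine m completed-data analyses via Rubin's rules.
    Returns (pooled estimate, total variance, pooled SE)."""
    q = np.asarray(estimates, float)   # m point estimates
    u = np.asarray(variances, float)   # m squared SEs
    m = len(q)
    q_bar = q.mean()                   # pooled point estimate
    w = u.mean()                       # within-imputation variance
    b = q.var(ddof=1)                  # between-imputation variance
    t = w + (1 + 1 / m) * b            # total variance
    return q_bar, t, np.sqrt(t)

# Hypothetical example: m = 5 imputations of one regression coefficient
est, tot_var, se = rubin_pool([0.52, 0.48, 0.55, 0.50, 0.45],
                              [0.010, 0.011, 0.009, 0.010, 0.012])
```

The total variance always exceeds the average complete-data variance whenever the imputations disagree, which is exactly how MI propagates the missing-data uncertainty that single imputation ignores.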

19.
Whereas measures of explained variance in a regression and an equation of a recursive structural equation model can be simply summarized by a standard R2 measure, this is not possible in nonrecursive models in which there are reciprocal interdependencies among variables. This article provides a general approach to defining variance explained in latent dependent variables of nonrecursive linear structural equation models. A new method of its estimation, easily implemented in EQS or LISREL and available in EQS 6, is described and illustrated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
A simple form of non-ignorable missing data mechanisms based on two parameters is used to characterize the amount of missing data and the severity of non-randomness in clinical trials. Based on the formulation, the effect of non-randomly missing data on simple analyses which ignore the missing data is studied for binary and normally distributed response variables. In general, the effect of the non-randomly missing data on the bias and the power increases with the severity of non-randomness. The bias can be positive or negative and the power can be less than or greater than when the data are missing at random. The results of the analysis, ignoring the missing data, can be seriously flawed if the non-randomness is severe, even when only a small proportion of the sample is missing. The problem is more pronounced in the case of normally distributed response variables with unequal variances.
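A two-parameter mechanism of this kind can be sketched as P(missing | y) = logistic(a + b*y), where a controls the amount of missingness and b the severity of non-randomness (b = 0 reduces to missing completely at random). The simulation below, with illustrative parameter values of our choosing, shows the complete-case bias the abstract describes:

```python
import numpy as np

def simulate_cc_bias(a, b, n=100_000, seed=1):
    """Simulate a non-ignorable mechanism P(miss | y) = logistic(a + b*y)
    and return (true mean, complete-case mean). a controls the amount
    of missingness, b the severity of non-randomness (b = 0 is MCAR)."""
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, 1.0, size=n)
    p_miss = 1 / (1 + np.exp(-(a + b * y)))  # logistic missingness probability
    observed = rng.random(n) > p_miss        # keep each unit with prob 1 - p_miss
    return y.mean(), y[observed].mean()

true_mean, cc_mean = simulate_cc_bias(a=-1.0, b=1.5)
# With b > 0, large values of y are more often missing, so the
# complete-case mean underestimates the true mean.
```

Flipping the sign of b flips the direction of the bias, matching the abstract's point that the bias can be positive or negative depending on the mechanism.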


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号