Similar Documents
20 similar documents found
1.
[Correction Notice: An erratum for this article was reported in Vol 75(1) of Journal of Applied Psychology (see record 2008-10492-001). An error exists in Figure 2 and the accompanying text of the article. The corrected information is included in the erratum.] The problem of assessing fit of structural equation models is reviewed, and two sampling studies are reported that examine the effects of sample size, estimation method, and model misspecification on fit indices. In the first study, the behavior of indices in a known-population confirmatory factor analysis model is considered. In the second study, the same problem in an empirical data set is examined by looking at antecedents and consequences of work motivation. The findings across the two studies suggest that (a) as might be expected, sample size is an important determinant in assessing model fit; (b) estimator-specific, as opposed to estimator-general, fit indices provide more accurate indications of model fit; and (c) the studied fit indices are differentially sensitive to model misspecification. Some recommendations for the use of structural equation model fit indices are given. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Combined significance tests (combined p values) and tests of the weighted mean effect size are used to combine information across studies in meta-analysis. A combined significance test (Stouffer test) is compared with a test based on the weighted mean effect size as tests of the same null hypothesis. The tests are compared analytically in the case in which the within-group variances are known and compared through large-sample theory in the more usual case in which the variances are unknown. Generalizations suggested are then explored through a simulation study. This work demonstrates that the test based on the average effect size is usually more powerful than the Stouffer test unless there is a substantial negative correlation between within-study sample size and effect size. Thus, the test based on the average effect size is generally preferable, and there is little reason to also calculate the Stouffer test. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
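As an illustration of the two combination tests being compared, the following Python sketch computes both a Stouffer combined-significance statistic and a test of the inverse-variance weighted mean effect size from the same set of simulated study summaries. The study data, sample sizes, and variance approximation are illustrative assumptions, not the article's simulation design.

```python
# Hedged sketch: Stouffer combined-significance test vs. a test of the
# inverse-variance weighted mean effect size on the same simulated studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, delta = 12, 0.3                            # number of studies, true standardized effect
n = rng.integers(20, 200, size=k)             # per-group sample size in each study

# Simulate each study's standardized mean difference and its large-sample variance
d = rng.normal(delta, np.sqrt(2.0 / n), size=k)
v = 2.0 / n + d**2 / (4.0 * n)                # common large-sample Var(d), equal group sizes

# Stouffer test: convert each study's one-sided p value to a z score and combine
z_i = d / np.sqrt(v)
p_i = stats.norm.sf(z_i)
stouffer_z = stats.norm.isf(p_i).sum() / np.sqrt(k)

# Test of the weighted mean effect size (inverse-variance weights)
w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)
mean_es_z = d_bar / np.sqrt(1.0 / np.sum(w))

print(f"Stouffer Z = {stouffer_z:.2f}, weighted-mean-effect Z = {mean_es_z:.2f}")
```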

3.
Missing effect-size estimates pose a particularly difficult problem in meta-analysis. Rather than discarding studies with missing effect-size estimates or setting missing effect-size estimates equal to 0, the meta-analyst can supplement effect-size procedures with vote-counting procedures if the studies report the direction of results or the statistical significance of results. By combining effect-size and vote-counting procedures, the meta-analyst can obtain a less biased estimate of the population effect size and a narrower confidence interval for the population effect size. This article describes 3 vote-counting procedures for estimating the population correlation coefficient in studies with missing sample correlations. Easy-to-use tables, based on equal sample sizes, are presented for the 3 procedures. More complicated vote-counting procedures also are given for unequal sample sizes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
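To make the vote-counting idea concrete, here is a minimal Python sketch that recovers a population correlation from nothing but the count of studies reporting a positive sample correlation, assuming equal sample sizes and a large-sample normal approximation for r. It illustrates the principle only; the article's tabled procedures are not reproduced here.

```python
# Hedged sketch of a vote-counting estimate of a population correlation from the
# directions of study results only (equal n assumed, large-sample approximation).
import numpy as np
from scipy import stats, optimize

def vote_count_rho(num_positive, num_studies, n):
    """Solve for rho such that P(r > 0) under rho matches the observed proportion."""
    p_hat = num_positive / num_studies
    if p_hat in (0.0, 1.0):                          # estimate lies on the boundary
        return np.sign(p_hat - 0.5) * 1.0
    # Large-sample approximation: r ~ Normal(rho, (1 - rho^2)^2 / n)
    def gap(rho):
        return stats.norm.cdf(np.sqrt(n) * rho / (1.0 - rho**2)) - p_hat
    return optimize.brentq(gap, -0.999, 0.999)

# Example: 14 of 20 studies (each with n = 30 pairs) reported a positive correlation
print(round(vote_count_rho(14, 20, 30), 3))
```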

4.
Reports an error in "Influence of sample size, estimation method, and model specification on goodness-of-fit assessments in structural equation models" by Terence J. La Du and J. S. Tanaka (Journal of Applied Psychology, 1989[Aug], Vol 74[4], 625-635). Figure 2 (p. 631) summarizes Katzell's work motivation model and indicates where the trivial misspecification (dashed line) and nontrivial misspecification (starred line) occurred in our model specification condition. The error is in the latter. The starred line should be from Operations and Resources to Extrinsic Rewards and not from Rewards for Performance to Equity. Our findings are not changed by this error, because we were using Katzell's model and accompanying data base to conduct a sampling study on goodness-of-fit indices and not testing his model. Hence, any of the paths were candidates for the nontrivial misspecification condition. (The following abstract of the original article appeared in record 1989-38703-001.) The problem of assessing fit of structural equation models is reviewed, and two sampling studies are reported that examine the effects of sample size, estimation method, and model misspecification on fit indices. In the first study, the behavior of indices in a known-population confirmatory factor analysis model is considered. In the second study, the same problem in an empirical data set is examined by looking at antecedents and consequences of work motivation. The findings across the two studies suggest that (a) as might be expected, sample size is an important determinant in assessing model fit; (b) estimator-specific, as opposed to estimator-general, fit indices provide more accurate indications of model fit; and (c) the studied fit indices are differentially sensitive to model misspecification. Some recommendations for the use of structural equation model fit indices are given. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
When the distribution of the response variable is skewed, the population median may be a more meaningful measure of centrality than the population mean, and when the population distribution of the response variable has heavy tails, the sample median may be a more efficient estimator of centrality than the sample mean. The authors propose a confidence interval for a general linear function of population medians. Linear functions have many important special cases including pairwise comparisons, main effects, interaction effects, simple main effects, curvature, and slope. The confidence interval can be used to test 2-sided directional hypotheses and finite interval hypotheses. Sample size formulas are given for both interval estimation and hypothesis testing problems. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
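The following Python sketch shows the general shape of such an interval: a linear contrast of sample medians with a standard error built from per-group median standard errors. A bootstrap standard error stands in for the article's analytic variance estimate, and the groups, contrast coefficients, and confidence level are illustrative assumptions.

```python
# Hedged sketch: approximate CI for a linear function (contrast) of group medians,
# using bootstrap standard errors of each sample median in place of an analytic SE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.lognormal(mean=m, sigma=0.8, size=40) for m in (0.0, 0.2, 0.5)]  # skewed data
coef = np.array([1.0, -2.0, 1.0])        # e.g., a curvature contrast across 3 ordered groups

def median_se_boot(x, reps=2000, rng=rng):
    """Bootstrap standard error of the sample median."""
    boot = np.median(rng.choice(x, size=(reps, x.size), replace=True), axis=1)
    return boot.std(ddof=1)

est = sum(c * np.median(g) for c, g in zip(coef, groups))
se = np.sqrt(sum((c * median_se_boot(g))**2 for c, g in zip(coef, groups)))
z = stats.norm.ppf(0.975)
print(f"contrast = {est:.3f}, 95% CI = ({est - z*se:.3f}, {est + z*se:.3f})")
```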

6.
Xenon-enhanced computed tomography (Xe-CT) is a technique for the noninvasive measurement of regional pulmonary ventilation from the washin and/or washout time constants of radiodense stable xenon gas, determined from serial computed tomography scans. Although the measurement itself is straightforward, there is a need for methods for the estimation of variability and confidence intervals so that the statistical significance of the information obtained may be evaluated, particularly since obtaining repeated measurements is often not practical. We present a Monte Carlo (MC) approach to determine the 95% confidence interval (CI) for any given measurement. This MC method was characterized in terms of its unbiasedness and coverage of the CI. In addition, 10 identical Xe-CT ventilation runs were performed in an anesthetized dog, and the time constant was determined for several regions of varying size in each run. The 95% CI, estimated from these repeated measurements as the mean ± 2 × SE, compared favorably with the CI obtained by the MC approach. Finally, a simulation was performed to compare the performance of three imaging protocols in estimating model parameters.
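A minimal Python sketch of the Monte Carlo idea follows: fit a washout curve to one region's time series, then repeatedly refit after adding noise at the estimated residual level and take percentile bounds on the time constant. A single-exponential model, Gaussian noise, and the synthetic scan times and densities below are all assumptions for illustration.

```python
# Hedged sketch of a Monte Carlo confidence interval for a washout time constant.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.arange(0.0, 60.0, 5.0)                         # scan times (s), synthetic
true_tau, baseline, amp = 18.0, 40.0, 25.0
y = baseline + amp * np.exp(-t / true_tau) + rng.normal(0.0, 1.5, t.size)  # regional density

def washout(t, baseline, amp, tau):
    return baseline + amp * np.exp(-t / tau)

popt, _ = curve_fit(washout, t, y, p0=(y[-1], y[0] - y[-1], 20.0))
resid_sd = np.std(y - washout(t, *popt), ddof=3)

# Monte Carlo: refit after adding noise at the residual SD to the fitted curve
taus = []
for _ in range(1000):
    y_sim = washout(t, *popt) + rng.normal(0.0, resid_sd, t.size)
    p_sim, _ = curve_fit(washout, t, y_sim, p0=popt)
    taus.append(p_sim[2])

lo, hi = np.percentile(taus, [2.5, 97.5])
print(f"tau = {popt[2]:.1f} s, 95% CI ({lo:.1f}, {hi:.1f}) s")
```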

7.
Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no wider than desired with some specified degree of certainty (e.g., 99% certain the 95% CI will be no wider than ω). The rationale of the AIPE approach to SS planning is given, as is a discussion of the analytic approach to CI formation for the population standardized mean difference. Tables with values of necessary SS are provided. The freely available Methods for the Behavioral, Educational, and Social Sciences (K. Kelley, 2006a) R (R Development Core Team, 2006) software package easily implements the methods discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
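The core AIPE logic can be sketched in a few lines of Python: search for the smallest per-group n whose expected 95% CI for the standardized mean difference is no wider than a target ω. The sketch uses a large-sample normal approximation to the variance of d; the article (and the MBESS implementation it cites) works with the exact noncentral-t interval, so the numbers here are only indicative.

```python
# Hedged sketch of AIPE-style sample size planning for the standardized mean difference.
from math import sqrt
from scipy import stats

def n_for_ci_width(d_planning, omega, conf=0.95):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    n = 4
    while True:
        var_d = 2.0 / n + d_planning**2 / (4.0 * n)   # approx. Var(d), equal group sizes n
        if 2 * z * sqrt(var_d) <= omega:              # expected full CI width vs. target
            return n
        n += 1

print(n_for_ci_width(d_planning=0.5, omega=0.25))     # per-group n for a CI no wider than .25
```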

8.
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
Confidence regions (CR) for heritability (h²) and fraction of variance accounted for by permanent environmental effects (c²) from Method R estimates were obtained from simulated data using a univariate, repeated measures, full animal model, with 50% subsampling. Bootstrapping techniques were explored to assess the optimum number of subsamples needed to compute Method R estimates of h² and c² with properties similar to those of exact estimators. One thousand estimates of each parameter set were used to obtain 90, 95, and 99% CR in four data sets including 2,500 animals with four measurements each. Two approaches were explored to assess CR accuracy: a parametric approach assuming bivariate normality of h² and c² and a nonparametric approach based on the sum of squared rank deviations. Accuracy of CR was assessed by the average loss of confidence (LOSS) by number of estimates sampled (NUMEST). For NUMEST = 5, bootstrap estimates of h² and c² were within 10⁻³ of the asymptotic ones. The same degree of convergence in the estimates of SE was achieved with NUMEST = 20. Correlation between estimates of h² and c² ranged from -.83 to -.98. At NUMEST < 10, the nonparametric CR were more accurate than parametric CR. However, with the parametric CR, LOSS approached zero at rate NUMEST⁻¹. This rate was an order of magnitude larger for the nonparametric CR. These results suggested that when the computational burden of estimating genetic parameters limits the number of Method R estimates that can be obtained to, say, 10 or 20, reliable CR can still be obtained by processing Method R estimates through bootstrapping techniques.

10.
63 16–65 yr olds exhibiting unipolar depression were assigned to 1 of 4 conditions (i.e., class, individual tutoring, minimal contact, or delayed treatment control) with regard to a course of treatment for coping with depression to investigate the efficacy of a psychoeducational approach in treating unipolar depression. The course addressed specific target behaviors (i.e., social skills, thinking, pleasant activities, relaxation) and more general components hypothesized to be critical to successful cognitive-behavioral therapy for depression. Ss in the immediate-treatment conditions were assessed pre- and posttreatment and at 1- and 6-mo follow-up sessions; the delayed-treatment group was assessed prior to and following an 8-wk waiting period. Results indicate clinical improvement by all of the active treatment conditions, as compared to the delayed-treatment condition. Differences between active-treatment conditions were small, and some differences between high and low responders to treatment were found. (33 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
53 patients clinically suspected of having a cerebellopontine angle (CPA) lesion were examined by computed tomography (CT) with a 160 × 160 matrix EMI scanner. 17 cases (32%) had tumour-positive CT, of which 12 were neurinomas and 1 meningioma. 1 CT suggestive of a CPA lesion was false positive and 1 unoperated case is probably a false negative CT. Three of the eleven verified neurinomas (27%) were of the medial type originating in the angle cistern. One neurinoma protruding 1 cm into the cistern showed no contrast enhancement. 2 CT scans (3.8%) were unsatisfactory due to movements and the large size of the head. CT is valuable for the investigation of CPA pathology and the diagnostic efficiency compares favourably to other neuroradiological procedures.

12.
A researcher imposing directional decisions on nondirectional tests will overestimate power, underestimate sample size, and ignore the risk of Type III error (getting the direction wrong) if traditional calculations (those applying to nondirectional decisions) are used. Usually trivial with the z test, the errors might be important where α is large and effect size is small or with tests using other distributions. One can avoid the errors by using calculations that apply to directional decisions or by using a directional two-tailed test at the outset, a conceptually simpler solution. With a revised concept of power, this article shows calculations for the test; explains how to find its power, Type III error risk, and sample size in statistical tables for traditional tests; compares it to conventional one- and two-tailed tests and to one- and two-sided confidence intervals; and concludes that when a significance test is planned it is the best choice for most purposes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
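The quantities involved are easy to compute for the one-sample z test, as the following Python sketch shows: the power of a directional two-tailed test is the probability of rejecting in the correct direction, and the Type III risk is the probability of rejecting in the wrong direction. The effect size and sample size are illustrative; note how small the Type III risk is here, consistent with the abstract's remark that the errors are usually trivial for the z test.

```python
# Hedged sketch: power and Type III error risk of a directional two-tailed z test.
from math import sqrt
from scipy import stats

def directional_two_tailed(delta, n, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha / 2)
    shift = delta * sqrt(n)                       # standardized effect times sqrt(n)
    power = stats.norm.cdf(shift - z_crit)        # correct-direction rejection
    type_iii = stats.norm.cdf(-shift - z_crit)    # wrong-direction rejection
    return power, type_iii

power, type_iii = directional_two_tailed(delta=0.2, n=100, alpha=0.05)
print(f"power = {power:.3f}, Type III risk = {type_iii:.5f}")
```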

13.
This article develops equations for determining the asymptotic confidence limits for the difference between 2 squared multiple correlation coefficients. The present procedure uses the delta method described by I. Olkin and J. D. Finn (1995) but does not require the variance-covariance matrix and the partial derivatives for all the zero-order correlations that enter into the expression for the difference, as does their procedure. This simplified approach can lead to an extreme reduction in the calculations required, as well as a reduction in the mathematical complexity of the solution. This approach also demonstrates clearly that in some cases, it may be inappropriate to use the asymptotic confidence limits in tests of significance. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
This article proposes a standard, easy-to-interpret effect size estimate for one-sample research. The proportion index (π) shows the hit rate on a scale on which .50 is always the null value regardless of the number of equally likely choices. The index π is useful in the design of one-sample research because it can guide the best choice of number of response alternatives. Significance tests and confidence limits are readily computed. For meta-analyses of one-sample studies, tests of heterogeneity of a set of πs and contrasts among them are described. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
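The defining property, a hit rate rescaled so that chance performance maps to .50 for any number k of equally likely alternatives, can be illustrated with the transformation below. The exact form is my reading of the index and should be treated as an assumption rather than a quotation from the article; the chance-to-.50 property is verified in the small Python check.

```python
# Hedged sketch of the proportion-index idea: rescale a raw hit rate P under k
# equally likely choices so that chance performance (P = 1/k) always maps to .50.
def proportion_index(p_hit, k):
    """Assumed form of the index; chance (1/k) -> .50, perfect (1.0) -> 1.0."""
    return p_hit * (k - 1) / (1 + p_hit * (k - 2))

for k in (2, 4, 10):
    chance = 1.0 / k
    print(k, round(proportion_index(chance, k), 2), round(proportion_index(0.6, k), 3))
```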

15.
Recent theory and methodology for inferences concerning the interclass correlation coefficient are reviewed, focusing on the case of a single individual in one class and a variable number of individuals in the other. Topics discussed include point and interval estimation, as well as significance-testing, with emphasis on application to data arising from family studies.

16.
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. The authors describe a nonparametric bootstrap methodology that can provide improved Type I error control. In addition, the authors indicate how researchers can set robust confidence intervals around a robust effect size parameter estimate. In an online supplement, the authors use several examples to illustrate the application of an SAS program to implement these statistical methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
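For the two-group case, the kind of trimmed-means heteroscedastic statistic described above can be sketched as a Yuen-type test: 20% trimmed means, winsorized variances, and Welch-style approximate degrees of freedom. This is only a minimal Python illustration of the building block; the article's procedures (including the bootstrap refinement) go well beyond it, and the simulated heavy-tailed data are assumptions.

```python
# Hedged sketch of a Yuen-type robust comparison of two independent groups.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

def yuen(x, y, trim=0.2):
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    g_x, g_y = int(trim * x.size), int(trim * y.size)
    h_x, h_y = x.size - 2 * g_x, y.size - 2 * g_y            # counts left after trimming
    tm_x, tm_y = stats.trim_mean(x, trim), stats.trim_mean(y, trim)
    wv_x = np.var(np.asarray(winsorize(x, (trim, trim))), ddof=1)   # winsorized variances
    wv_y = np.var(np.asarray(winsorize(y, (trim, trim))), ddof=1)
    d_x = (x.size - 1) * wv_x / (h_x * (h_x - 1))
    d_y = (y.size - 1) * wv_y / (h_y * (h_y - 1))
    t = (tm_x - tm_y) / np.sqrt(d_x + d_y)
    df = (d_x + d_y) ** 2 / (d_x**2 / (h_x - 1) + d_y**2 / (h_y - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=30)            # heavy-tailed group
y = rng.standard_t(df=3, size=40) + 0.8      # shifted, heavy-tailed group
print(yuen(x, y))
```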

17.
In this reply, the authors point out that the simulations reported by S. M. Kanne, D. A. Balota, D. H. Spieler, and M. E. Faust (1998) did not incorporate mechanisms proposed to explain set size effects in J. D. Cohen, K. Dunbar, and J. L. McClelland (1990). The authors report a new simulation that incorporates these mechanisms and more accurately simulates S. M. Kanne et al.'s empirical data. The authors then point to other factors that could be explored in a more complete test of their model. The use of feed-forward rather than recurrent inhibition is discussed as a potentially important limitation of their original model, and recent work addressing this issue is described. The authors also discuss possible differences between word reading and color naming in the Stroop task. Although such differences may exist, the authors retain their earlier view that such differences do not reflect a dichotomy between automatic and controlled processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The limitations of applying generally accepted approaches to statistical data processing, based on a Bernoulli scheme, to the perception of low-level signals are shown when the specific nature of the phenomenon (the human ability to perceive is not constant, is uncontrolled, and manifests itself only for a short time) is not taken into account. A new approach to the statistical processing of measurement results is also proposed. It is based on analysis of the sequence of correct answers in a series and on the use of sequential statistical procedures (criteria) that take maximal account of the specificity of perception and the features of the measurement. An estimate of the Type I error of the proposed method is given.

19.
Responds to H. C. Kraemer and G. Andrews's (see record 1982-11171-001) proposal of a nonparametric effect size D as an alternative to a parametric effect size proposed by G. V. Glass (1976). Here examples are given that illustrate that the measure D depends on the form of the underlying bivariate distribution of the pre- and posttreatment responses. This can result in a bias for conclusions based on D. An alternative nonparametric measure of effect size is proposed that avoids this difficulty. (2 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The criterion-related validation strategy is not well suited for small businesses because it requires sample sizes larger than those that are available. On the basis of an integration and extension of three past synthetic validity models, we developed an alternative criterion-related model that requires smaller sample sizes. The method we developed borrows substantially from these former models by (a) deriving worker-oriented job elements via the Position Analysis Questionnaire, (b) assessing test–job element relationships in a direct, empirical fashion that is feasible in a local setting, and (c) providing an overall summary statistic in the form of a Pearson correlation that expresses the degree of relation between a battery of tests and a system of performance evaluations. This integrated approach to synthetic validation deviates from these former models, however, by altering the order of validation and aggregation. Sampling theory and matrix algebra are used to show that the order of validation and aggregation need not be fixed, and field data taken from 83 employees of a small chemical company are used to illustrate how the process can be applied in an actual setting. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
