20 similar documents were found.
3.
Meta-analysis of correlation coefficients: A Monte Carlo comparison of fixed- and random-effects methods.
The efficacy of the Hedges and colleagues, Rosenthal-Rubin, and Hunter-Schmidt methods for combining correlation coefficients was tested for cases in which population effect sizes were both fixed and variable. After a brief tutorial on these meta-analytic methods, the author presents 2 Monte Carlo simulations that compare these methods for cases in which the number of studies in the meta-analysis and the average sample size of studies were varied. In the fixed case the methods produced comparable estimates of the average effect size; however, the Hunter-Schmidt method failed to control the Type I error rate for the associated significance tests. In the variable case, for both the Hedges and colleagues and Hunter-Schmidt methods, Type I error rates were not controlled for meta-analyses including 15 or fewer studies and the probability of detecting small effects was less than .3. Some practical recommendations are made about the use of meta-analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
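A minimal sketch (not the author's simulation code) of the kind of Monte Carlo this abstract describes: draw k sample correlations from a fixed population correlation, combine them once with a Fisher-z fixed-effects scheme in the Hedges/Rosenthal-Rubin style and once with a bare-bones sample-size-weighted Hunter-Schmidt average, and tabulate Type I error rates under a null of zero. The Hunter-Schmidt significance test used here is a simplified stand-in, and the study counts, sample sizes, and replication numbers are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def simulate_correlations(rho, k, n):
    # Draw k sample correlations, each from a bivariate normal sample of size n.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rs = np.empty(k)
    for i in range(k):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        rs[i] = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    return rs

def fisher_z_fixed(rs, n):
    # Fixed-effects combination on the Fisher-z scale (Hedges / Rosenthal-Rubin style).
    z = np.arctanh(rs)
    w = np.full_like(z, n - 3)              # inverse sampling variance of z is n - 3
    zbar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    return np.tanh(zbar), zbar / se         # back-transformed mean r, z statistic

def hunter_schmidt(rs):
    # Bare-bones Hunter-Schmidt: mean r (equal n, so weights cancel) and a simple z test.
    rbar = np.mean(rs)
    se = np.sqrt(np.var(rs, ddof=1) / len(rs))
    return rbar, rbar / se

k, n, reps = 15, 50, 2000                   # illustrative meta-analysis sizes
hits_fz = hits_hs = 0
for _ in range(reps):                       # fixed null case: rho = 0 in every study
    rs = simulate_correlations(0.0, k, n)
    hits_fz += abs(fisher_z_fixed(rs, n)[1]) > 1.96
    hits_hs += abs(hunter_schmidt(rs)[1]) > 1.96
print("Fisher-z fixed-effects Type I rate:", hits_fz / reps)
print("Hunter-Schmidt Type I rate:        ", hits_hs / reps)

Comparing the two printed rejection rates against the nominal .05 is the same kind of Type I error check the abstract reports, here for one arbitrary cell of the design.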
6.
C. J. Krauskopf (1991) is to be commended for calling attention to the fact that pattern analysis is subject not only to Type I errors but also to Type II errors, which were not even mentioned by A. B. Silverstein (1982). There are, however, a number of points on which the authors still differ. Most notably, Krauskopf's recommendation not only fails to solve the multiple-comparisons problem, it exacerbates that problem. Other possibilities are considered, including the possibility that the assumption on which pattern analysis is based, clinical meaningfulness, is itself an error. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
7.
Several alternative procedures have been advocated for analyzing nonorthogonal ANOVA data. Two in particular, J. E. Overall and D. K. Spiegel's (see record 1970-01534-001) Methods 1 and 2, have been the focus of controversy. A Monte Carlo study was undertaken to explore the relative sensitivity and error rates of these 2 methods, in addition to M. I. Appelbaum and E. M. Cramer's (see record 1974-28956-001) procedure. Results of 2,250 3 × 3 ANOVAs conducted with each method and involving 3 underlying groups of population effects supported 3 hypotheses raised in the study: (a) Method 2 was more powerful than Method 1 in the absence of interaction; (b) Method 2 was biased upwards in the presence of interaction; and (c) Methods 1 and 2 both had Type I error rates close to those expected in the absence of interaction. In addition, it was found that in the absence of interaction, the Appelbaum and Cramer procedure was more powerful than Method 2 but slightly increased the Type I error rate. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
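A small sketch, under the common (but here assumed) mapping of Overall and Spiegel's Method 1 to Type III sums of squares and Method 2 to Type II sums of squares: one unbalanced 3 × 3 data set is generated with a main effect of factor A and no interaction, and both decompositions are run with statsmodels. The cell sizes, effect size, and coding are illustrative, not the study's parameters.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Unequal cell sizes make the 3 x 3 design nonorthogonal.
rows = []
for a in range(3):
    for b in range(3):
        n_cell = int(rng.integers(5, 15))
        mu = 0.5 * a                    # main effect of A only, no interaction
        for _ in range(n_cell):
            rows.append({"A": f"a{a}", "B": f"b{b}", "y": mu + rng.normal()})
df = pd.DataFrame(rows)

# Sum-to-zero coding so "each effect adjusted for all others" is a sensible decomposition.
model = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(anova_lm(model, typ=2))           # ~ Method 2: main effects adjusted for each other
print(anova_lm(model, typ=3))           # ~ Method 1: each effect adjusted for all others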
8.
Explored the use of transformations to improve power in within-subject designs in which multiple observations are collected for each subject in each condition, such as reaction time (RT) and psychophysiological experiments. Often, the multiple measures within a treatment are simply averaged to yield a single number, but other transformations have been proposed. Monte Carlo simulations were used to investigate the influence of those transformations on the probabilities of Type I and Type II errors. With normally distributed data, Z and range correction transformations led to substantial increases in power over simple averages. With highly skewed distributions, the optimal transformation depended on several variables, but Z and range correction performed well across conditions. Correction for outliers was useful in increasing power, and trimming was more effective than eliminating all points beyond a criterion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
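A hedged sketch of the kind of comparison this abstract reports: many trials per subject per condition are reduced to one score by a plain mean, a trimmed mean, or a z-based outlier exclusion, and the resulting power of a paired t test is tallied over Monte Carlo replications. The skewed trial distribution, the 5 ms effect, the 10% trimming proportion, and the |z| < 3 cutoff are illustrative choices, not the article's.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def summarize(trials, method):
    # Collapse one subject's trials in one condition to a single score.
    if method == "mean":
        return trials.mean()
    if method == "trimmed":
        return stats.trim_mean(trials, proportiontocut=0.10)
    if method == "z_excluded":
        z = (trials - trials.mean()) / trials.std(ddof=1)
        return trials[np.abs(z) < 3].mean()
    raise ValueError(method)

n_subjects, n_trials, effect = 20, 40, 5.0          # 5 ms condition effect
p_values = {m: [] for m in ("mean", "trimmed", "z_excluded")}
for _ in range(500):                                 # Monte Carlo replications
    # Skewed RT-like data: lognormal noise around a 500 ms baseline.
    base = 500 + rng.lognormal(3.0, 1.0, size=(n_subjects, n_trials))
    cond = 500 + effect + rng.lognormal(3.0, 1.0, size=(n_subjects, n_trials))
    for m in p_values:
        a = np.array([summarize(row, m) for row in base])
        b = np.array([summarize(row, m) for row in cond])
        p_values[m].append(stats.ttest_rel(a, b).pvalue)

for m, ps in p_values.items():
    print(m, "power at alpha = .05:", np.mean(np.array(ps) < 0.05))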
9.
Monotonic hypotheses are predictions about the ordering of group population means. A journal survey revealed that the problem was very common and that there was little uniformity among researchers regarding the statistical test to use. Most of the approaches in the literature to detect both monotonic trend and nonmonotonicity were compared under varying population conditions in a Monte Carlo simulation. The results suggested that only rarely will sample means order the same as the corresponding population means, leaving the approaches most researchers used with far too little power. Trend tests had far greater power; the one recommended is the familiar linear trend test. However, used alone this test does not detect the presence of any instances of nonmonotonicity. Therefore, it should be used in combination with a technique that can detect such inversions, preferably the Sidák-corrected reversal test conducted with a very high α (.50). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
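A minimal sketch, with assumed details, of the combined strategy the abstract recommends: a linear trend contrast across the ordered groups, paired with a reversal check on adjacent groups at a Sidak-corrected version of the liberal alpha of .50. The specific reversal test below (one-sided Welch t tests on adjacent pairs) is an illustrative stand-in rather than the article's exact procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Four ordered groups whose population means increase monotonically.
groups = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (0.0, 0.2, 0.4, 0.6)]

# Linear trend contrast: centered, equally spaced coefficients.
k = len(groups)
c = np.arange(k) - (k - 1) / 2.0
means = np.array([g.mean() for g in groups])
vars_ = np.array([g.var(ddof=1) for g in groups])
ns = np.array([len(g) for g in groups])
trend = c @ means
se = np.sqrt(np.sum(c**2 * vars_ / ns))
df = ns.sum() - k                              # rough pooled df, good enough for a sketch
p_trend = 2 * stats.t.sf(abs(trend / se), df)
print("linear trend p =", p_trend)

# Reversal check: test each adjacent decrease at a Sidak-corrected liberal alpha of .50.
alpha, m = 0.50, k - 1
alpha_sidak = 1 - (1 - alpha) ** (1 / m)
for i in range(m):
    t, p_two = stats.ttest_ind(groups[i], groups[i + 1], equal_var=False)
    p_reversal = p_two / 2 if t > 0 else 1 - p_two / 2   # one-sided: group i above group i+1
    if p_reversal < alpha_sidak:
        print(f"possible reversal between groups {i} and {i + 1}")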
11.
Compares component and common factor analysis using 3 levels of population factor pattern loadings (.40, .60, .80) for each of the 3 levels of variables (9, 18, 36). Common factor analysis was significantly more accurate than components in reproducing the population pattern in each of the conditions examined. The differences decreased as the number of variables and the size of the population pattern loadings increased. The common factor analysis loadings were unbiased, had a smaller standard error than component loadings, and presented no boundary problems. Component loadings were significantly and systematically inflated even with 36 variables and loadings of .80. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
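A hedged sketch of the comparison being made: generate data from a known single common factor with uniform population loadings, then recover the loadings with principal components and with common factor analysis and compare both to the population value. The one-factor setup, the .60 loading, nine variables, and the sample size are illustrative; the study itself crossed three loading levels with three numbers of variables.

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(4)

p, loading, n = 9, 0.60, 300
lam = np.full(p, loading)                     # population pattern loadings
psi = 1.0 - lam**2                            # unique variances
f = rng.normal(size=(n, 1))                   # common factor scores
e = rng.normal(size=(n, p)) * np.sqrt(psi)    # unique parts
x = f @ lam[None, :] + e                      # observed variables, roughly unit variance

# Component "loadings": correlations of each variable with the first principal component.
comp_scores = PCA(n_components=1).fit_transform(x)[:, 0]
pc_loadings = np.array([np.corrcoef(x[:, j], comp_scores)[0, 1] for j in range(p)])

# Common factor analysis loadings (variables have unit variance, so these are comparable).
fa_loadings = FactorAnalysis(n_components=1).fit(x).components_[0]

print("population loading:", loading)
print("mean |PC loading| :", np.mean(np.abs(pc_loadings)))
print("mean |FA loading| :", np.mean(np.abs(fa_loadings)))

Absolute values are compared because the recovered factor's sign is arbitrary; the inflation the abstract describes shows up as the component loadings sitting above the population value while the factor loadings sit near it.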
12.
Research in validity generalization has generated renewed interest in the sampling error of the Pearson correlation coefficient. The standard estimator for the sampling variance of the correlation was derived under assumptions that do not consider the presence of measurement error or range restriction in the data. The accuracy of the estimator in attenuated or restricted data has not been studied. This article presents the results of computer simulations that examined the accuracy of the sampling variance estimator in data containing measurement error. Sample sizes of n = 25, n = 60, and n = 100 are used, with the reliability ranging from .10 to 1.00 and the population correlation ranging from .10 to .90. Results demonstrated that the estimator has a slight negative bias but may be sufficiently accurate for practical applications if the sample size is at least 60. In samples of this size, the presence of measurement error does not add greatly to the inaccuracy of the estimator. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
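A minimal sketch, with assumed details, of the kind of check described: compare the usual estimator of the sampling variance of r, (1 - r^2)^2 / (n - 1), against the empirical variance of observed correlations when both variables carry measurement error of a given reliability. The single reliability, correlation, and sample size below correspond to one cell of the kind of grid the article crossed.

import numpy as np

rng = np.random.default_rng(5)

def observed_r(rho, reliability, n):
    # One sample correlation between error-contaminated versions of x and y.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    true = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    err_sd = np.sqrt((1.0 - reliability) / reliability)   # reliability = 1 / (1 + error variance)
    obs = true + rng.normal(scale=err_sd, size=true.shape)
    return np.corrcoef(obs[:, 0], obs[:, 1])[0, 1]

rho, reliability, n, reps = 0.50, 0.70, 60, 5000
rs = np.array([observed_r(rho, reliability, n) for _ in range(reps)])

empirical_var = rs.var(ddof=1)
estimated_var = np.mean((1.0 - rs**2) ** 2 / (n - 1))
print("empirical sampling variance:", empirical_var)
print("mean of the usual estimator:", estimated_var)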
13.
A series of Monte Carlo computer simulations was conducted to investigate (a) the likelihood that meta-analysis will detect true differences in effect sizes rather than attributing differences to methodological artifact and (b) the likelihood that meta-analysis will suggest the presence of moderator variables when in fact differences in effect sizes are due to methodological artifact. The simulations varied the magnitude of the true population differences between correlations, the number of studies included in the meta-analysis, and the average sample size. Simulations were run both correcting and not correcting for measurement error. The power of 3 indices (the Schmidt-Hunter ratio of expected to observed variance, the Callender-Osburn procedure, and a chi-square test) to detect true differences was investigated. Results show that small true differences were not detected regardless of sample size and the number of studies and that moderate true differences were not detected with small numbers of studies or small sample sizes. Hence, there is a need for caution in attributing observed variation across studies to artifact. (9 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
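A hedged sketch of two of the indices discussed: the ratio of the sampling-error variance expected from artifacts alone to the observed variance of correlations (the Schmidt-Hunter style variance ratio), and a chi-square homogeneity statistic computed on the Fisher-z scale as a stand-in for the article's chi-square test. The z-scale Q, the example correlations, and the sample sizes are assumptions for illustration, not details taken from the abstract.

import numpy as np
from scipy import stats

def variance_ratio_and_q(rs, ns):
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    rbar = np.sum(ns * rs) / np.sum(ns)                         # n-weighted mean correlation
    observed_var = np.sum(ns * (rs - rbar) ** 2) / np.sum(ns)   # observed variance of r
    expected_var = (1.0 - rbar**2) ** 2 / (np.mean(ns) - 1.0)   # sampling error alone
    ratio = expected_var / observed_var                         # values near 1 suggest artifact alone

    z = np.arctanh(rs)                                          # Fisher-z homogeneity statistic
    zbar = np.sum((ns - 3) * z) / np.sum(ns - 3)
    q = np.sum((ns - 3) * (z - zbar) ** 2)                      # ~ chi-square with k - 1 df
    p = stats.chi2.sf(q, df=len(rs) - 1)
    return ratio, q, p

# Example: six hypothetical studies whose correlations differ more than sampling error implies.
ratio, q, p = variance_ratio_and_q(rs=[0.10, 0.15, 0.35, 0.40, 0.20, 0.45],
                                   ns=[80, 120, 100, 90, 150, 110])
print(f"expected/observed variance ratio = {ratio:.2f}, Q = {q:.2f}, p = {p:.4f}")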
15.
Monte Carlo simulations of protein folding. II. Application to protein A, ROP, and crambin
The hierarchy of lattice Monte Carlo models described in the accompanying paper (Kolinski, A., Skolnick, J. Monte Carlo simulations of protein folding. I. Lattice model and interaction scheme. Proteins 18:338-352, 1994) is applied to the simulation of protein folding and the prediction of 3-dimensional structure. Using sequence information alone, three proteins have been successfully folded: the B domain of staphylococcal protein A, a 120-residue, monomeric version of the ROP dimer, and crambin. Starting from a random expanded conformation, the model proteins fold along relatively well-defined folding pathways. These involve a collection of early intermediates, which are followed by the final (and rate-determining) transition from compact intermediates closely resembling the molten globule state to the native-like state. The predicted structures are rather unique, with native-like packing of the side chains. The accuracy of the predicted native conformations is better than that obtained in previous folding simulations. The best (but by no means atypical) folds of protein A have a coordinate rms of 2.25 Å from the native Cα trace, and the best coordinate rms for crambin is 3.18 Å. For the ROP monomer, the lowest coordinate rms from the equivalent Cα positions of the ROP dimer is 3.65 Å. Thus, for two simple helical proteins and a small alpha/beta protein, the ability to predict protein structure from sequence has been demonstrated.
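The lattice model and interaction scheme are defined in the companion paper, so nothing here reproduces them; the sketch below only illustrates the generic Metropolis acceptance step that lattice Monte Carlo folding simulations of this kind rest on, with a toy scalar "energy" standing in for a lattice conformation. The temperature, cooling schedule, and move proposal are placeholders, not the authors' parameters.

import math
import random

def metropolis_accept(delta_energy, temperature):
    # Accept a proposed conformational move with the Metropolis criterion.
    if delta_energy <= 0:
        return True                                   # downhill moves are always accepted
    return random.random() < math.exp(-delta_energy / temperature)

# Toy usage: anneal a scalar "energy" downward while allowing occasional uphill moves.
random.seed(0)
energy, temperature = 10.0, 2.0
for step in range(1000):
    proposed_change = random.uniform(-1.0, 1.0)       # stand-in for a lattice move's energy change
    if metropolis_accept(proposed_change, temperature):
        energy += proposed_change
    temperature = max(0.1, temperature * 0.999)       # slow cooling schedule
print("final energy:", round(energy, 2))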