Similar Documents
20 similar documents found.
1.
Suggests that by the logic of their derivations, the formulas recently proposed by N. Schmitt et al. (see record 1978-07042-001) for estimating cross-validated multiple correlation most closely approximate a measure of generalized predictive accuracy distinct from any traditional correlation statistic. Interpreted correlationally, these formulas have a negative bias that can become appreciable in parameter regions not sampled by their Monte Carlo tests. It is concluded that although less biased and more cogently derived alternatives are available, one of the formulas proposed by Schmitt et al. works reasonably well in practice. (5 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
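The Schmitt et al. formulas themselves are not reproduced in this record. As a minimal sketch of what a "cross-validated multiple correlation" measures, the following Monte Carlo (the sample size, number of predictors, and population model are all illustrative assumptions) fits least-squares weights on a calibration sample and correlates the frozen predictions with outcomes in an independent validation sample; the cross-validated R is visibly shrunken relative to the in-sample R:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, rho = 60, 4, 0.5          # calibration size, predictors, signal strength
reps = 2000
r_fit, r_cv = [], []

for _ in range(reps):
    # One true population model: y = X @ beta + noise.
    beta = np.full(k, rho / np.sqrt(k))
    X = rng.standard_normal((n, k))
    y = X @ beta + rng.standard_normal(n)

    # Fit weights on the calibration sample (least squares).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    r_fit.append(np.corrcoef(X @ w, y)[0, 1])

    # Apply the frozen weights to an independent validation sample.
    Xv = rng.standard_normal((n, k))
    yv = Xv @ beta + rng.standard_normal(n)
    r_cv.append(np.corrcoef(Xv @ w, yv)[0, 1])

print(f"mean in-sample R:       {np.mean(r_fit):.3f}")
print(f"mean cross-validated R: {np.mean(r_cv):.3f}")   # visibly shrunken
```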

2.
Formula estimation of the predictive precision of a multiple regression equation is frequently presented as an alternative to actual cross-validation where appropriate, and a particular formula developed by M. W. Browne (see record 1978-00130-001) and evaluated by P. Cattin (see record 1980-31576-001) is cited as most useful in personnel psychology. Identical formulae in two influential personnel psychology texts contain one incorrectly specified term and rest on an incorrect assumption about the calculation of another term, suggesting a shared misunderstanding of Browne's formula. Use of the incorrect formula will produce positively biased estimates of the squared population cross-validated multiple correlation. These discrepancies are examined, their practical implications are discussed, and a correct presentation of Browne's formula is given. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
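For reference, Browne's formula as it is usually cited in this literature (following Browne, 1975, and Cattin, 1980; the exact form should be checked against those sources) estimates the squared population cross-validity in the random-predictor case as

```latex
\hat{\rho}_c^{\,2} \;=\;
  \frac{(N - k - 3)\,\hat{\rho}^{\,4} + \hat{\rho}^{\,2}}
       {(N - 2k - 2)\,\hat{\rho}^{\,2} + k},
\qquad
\hat{\rho}^{\,2} \;=\; 1 - \frac{N - 1}{N - k - 1}\,\bigl(1 - R^{2}\bigr),
```

where N is the sample size and k the number of predictors. Note that the plug-in ρ̂² is itself a shrunken estimate of the squared population multiple correlation, such as the Wherry-type adjustment shown, rather than the raw sample R².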

3.
Presents improved procedures for approximating confidence intervals for ρ² (the squared population multiple correlation) and ρc² (the squared population cross-validated correlation) in both fixed and random predictor models. These approximations require neither point estimates nor variance estimates and are analytically shown to be precise enough for most practical prediction purposes. An application of confidence intervals in regression model development is also given. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
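The article's analytic interval approximations are not reproduced here. As a generic, swapped-in illustration of interval estimation for a squared multiple correlation, the sketch below computes a percentile case bootstrap for R² under a random-predictor model (the sample size, coefficients, and the bootstrap approach itself are this sketch's assumptions, not the paper's procedure):

```python
import numpy as np

def r2(X, y):
    """Squared multiple correlation of y with OLS predictions from X."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ w, y)[0, 1] ** 2

rng = np.random.default_rng(1)
n, k = 100, 3
X = rng.standard_normal((n, k))
y = X @ np.array([0.4, 0.3, 0.2]) + rng.standard_normal(n)

# Percentile bootstrap over whole cases (random-predictor model).
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(r2(X[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"sample R^2 = {r2(X, y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```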

4.
In a two-factor design, interactions are typically analyzed by analysis of variance (ANOVA). Bobko (1986) has suggested an alternative ordinal-interaction technique that might avoid spurious main effects and show more power than the classical ANOVA. In this study I (a) compared the classical and ordinal techniques in terms of Type I error rate and power under normally distributed homogeneous and heterogeneous populations, and (b) determined the effect of a population non-null main effect on the ordinal technique's Type I error rate. The ordinal technique showed a substantial power superiority over the classical technique under variance homogeneity, although its power was capped below 100%. Its Type I error rate under variance heterogeneity, however, was not stable and was susceptible to non-null main effects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
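Bobko's ordinal-interaction technique is not reproduced here. As a sketch of the classical side of such a simulation, the following estimates the Type I error rate of the ordinary ANOVA interaction F test in a balanced 2 × 2 design with heterogeneous cell variances (all cell sizes and variance patterns are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def interaction_p(cells):
    """Classical ANOVA interaction F test for a balanced 2x2 design.
    `cells` is a (2, 2, n) array of observations."""
    n = cells.shape[2]
    m = cells.mean(axis=2)                     # cell means (2x2)
    grand = m.mean()
    rowm, colm = m.mean(axis=1), m.mean(axis=0)
    ss_ab = n * ((m - rowm[:, None] - colm[None, :] + grand) ** 2).sum()
    ms_w = cells.var(axis=2, ddof=1).mean()    # pooled within-cell MS
    F = ss_ab / ms_w                           # df = 1 and 4 * (n - 1)
    return stats.f.sf(F, 1, 4 * (n - 1))

n, reps, alpha = 10, 5000, 0.05
sd = np.array([[1.0, 1.0], [1.0, 3.0]])       # heterogeneous cell SDs
rejections = 0
for _ in range(reps):
    # Null case: every cell mean is 0, so any rejection is a Type I error.
    cells = rng.standard_normal((2, 2, n)) * sd[:, :, None]
    rejections += interaction_p(cells) < alpha
print(f"empirical Type I error rate: {rejections / reps:.3f} (nominal {alpha})")
```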

5.
Research in validity generalization has generated renewed interest in the sampling error of the Pearson correlation coefficient. The standard estimator for the sampling variance of the correlation was derived under assumptions that do not consider the presence of measurement error or range restriction in the data. The accuracy of the estimator in attenuated or restricted data has not been studied. This article presents the results of computer simulations that examined the accuracy of the sampling variance estimator in data containing measurement error. Sample sizes of n = 25, n = 60, and n = 100 were used, with reliability ranging from .10 to 1.00 and the population correlation ranging from .10 to .90. Results demonstrated that the estimator has a slight negative bias but may be sufficiently accurate for practical applications if the sample size is at least 60. In samples of this size, the presence of measurement error does not add greatly to the inaccuracy of the estimator. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
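The estimator under study is the standard large-sample expression for the sampling variance of r, and measurement error attenuates the population correlation through the familiar reliability product; as usually written,

```latex
\widehat{\operatorname{Var}}(r) \;=\; \frac{\bigl(1 - r^{2}\bigr)^{2}}{n - 1},
\qquad
\rho_{xy}^{\text{obs}} \;=\; \rho_{xy}\,\sqrt{r_{xx}\, r_{yy}},
```

where r_xx and r_yy are the reliabilities of the two measures.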

6.
The effectiveness of 3 estimators of treatment magnitude was compared numerically for samples from normal and exponential distributions. The estimators were compared for J. Cohen's (1969) definitions of small, medium, and large population treatment effects. It was found that omega squared was a more accurate estimator, while eta squared had the smallest sampling variability. (17 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
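For reference, two of the compared estimators, in their usual one-way ANOVA forms for k groups, are

```latex
\hat{\eta}^{2} \;=\; \frac{SS_{\text{between}}}{SS_{\text{total}}},
\qquad
\hat{\omega}^{2} \;=\;
  \frac{SS_{\text{between}} - (k - 1)\, MS_{\text{within}}}
       {SS_{\text{total}} + MS_{\text{within}}},
```

where ω̂² subtracts the chance-expected between-groups variability and is therefore the less positively biased of the two.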

7.
Monotonic hypotheses are predictions about the ordering of group population means. A journal survey revealed that such hypotheses were very common and that there was little uniformity among researchers regarding the statistical test to use. Most of the approaches in the literature for detecting both monotonic trend and nonmonotonicity were compared under varying population conditions in a Monte Carlo simulation. The results suggested that only rarely will sample means fall in the same order as the corresponding population means, leaving the approaches most researchers used with far too little power. Trend tests had far greater power; the one recommended is the familiar linear trend test. However, used alone this test does not detect instances of nonmonotonicity. Therefore, it should be used in combination with a technique that can detect such inversions, preferably the Šidák-corrected reversal test conducted with a very high α (.50). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
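The recommended linear trend test is the familiar contrast on ordered group means. A minimal sketch, assuming equally spaced groups (the group means, sizes, and one-sided direction below are illustrative; the companion Šidák-corrected reversal test is not sketched):

```python
import numpy as np
from scipy import stats

def linear_trend_test(groups):
    """One-sided linear trend contrast across ordered groups (equal spacing)."""
    k = len(groups)
    c = np.arange(k) - (k - 1) / 2            # centered linear coefficients
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df = int(sum(len(g) - 1 for g in groups))
    ms_w = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df
    L = c @ means                             # contrast value
    se = np.sqrt(ms_w * np.sum(c**2 / ns))
    t = L / se
    return t, stats.t.sf(t, df)               # one-sided: increasing trend

rng = np.random.default_rng(3)
groups = [rng.normal(mu, 1.0, 20) for mu in (0.0, 0.2, 0.4, 0.6)]
t, p = linear_trend_test(groups)
print(f"t = {t:.2f}, one-sided p = {p:.4f}")
```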

8.
Several alternative procedures have been advocated for analyzing nonorthogonal ANOVA data. Two in particular, J. E. Overall and D. K. Spiegel's (see record 1970-01534-001) Methods 1 and 2, have been the focus of controversy. A Monte Carlo study was undertaken to explore the relative sensitivity and error rates of these 2 methods, in addition to M. I. Appelbaum and E. M. Cramer's (see record 1974-28956-001) procedure. Results of 2,250 3 × 3 ANOVAs conducted with each method and involving 3 underlying groups of population effects supported 3 hypotheses raised in the study: (a) Method 2 was more powerful than Method 1 in the absence of interaction; (b) Method 2 was biased upwards in the presence of interaction; and (c) Methods 1 and 2 both had Type I error rates close to those expected in the absence of interaction. In addition, it was found that in the absence of interaction, the Appelbaum and Cramer procedure was more powerful than Method 2 but slightly increased the Type I error rate. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
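Overall and Spiegel's Methods 1 and 2 are commonly identified with fully simultaneous ("unique", Type III-style) and hierarchical (Type II-style) sums of squares, respectively; treating that mapping as an assumption, the sketch below shows how the two adjustments diverge on unbalanced (nonorthogonal) data, using statsmodels with an illustrative data-generating model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Unbalanced two-way layout: unequal cell counts make the factor sums of
# squares depend on which adjustment method is chosen.
rng = np.random.default_rng(4)
rows = []
for a in (0, 1, 2):
    for b in (0, 1, 2):
        for _ in range(rng.integers(3, 12)):   # unequal cell sizes
            rows.append((a, b, 0.5 * a + rng.standard_normal()))
df = pd.DataFrame(rows, columns=["a", "b", "y"])

fit = smf.ols("y ~ C(a, Sum) * C(b, Sum)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # hierarchical ("experimental") adjustment
print(sm.stats.anova_lm(fit, typ=3))  # fully simultaneous ("unique") adjustment
```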

9.
In a Monte Carlo study, the number of response categories, number of items, covariance among items, and item "error" were varied to simulate scores following classical true score assumptions. Despite considerable literature examining the optimal number of response categories, this variable accounted for very little variance in the correlation of fallible composite scale scores and known "true" scores. In no situation did correlations substantially increase with the use of more than 5 response categories. The effects of the 4 variables were largely additive. The relative importance of the variables differed, however, according to whether an internal consistency or a stability estimate was used as the dependent variable. Results are discussed in terms of possible trade-offs for applied researchers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
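A minimal sketch of this kind of simulation, assuming a classical true-score model with equally weighted items and equal-probability category cuts (these are this sketch's choices, not necessarily the study's design):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(n_cats, n_items=10, n_persons=2000, error_sd=1.0):
    """Correlation of a coarsely categorized composite with the true score."""
    true = rng.standard_normal(n_persons)
    # Each item = true score + independent error (classical true-score model).
    items = true[:, None] + rng.normal(0, error_sd, (n_persons, n_items))
    # Discretize each item into n_cats equal-probability categories.
    cuts = np.quantile(items, np.linspace(0, 1, n_cats + 1)[1:-1])
    scored = np.digitize(items, cuts)
    composite = scored.sum(axis=1)
    return np.corrcoef(composite, true)[0, 1]

for c in (2, 3, 5, 7, 9, 25):
    print(f"{c:>2} categories: r(composite, true) = {simulate(c):.3f}")
```

With parameters like these, the correlation plateaus at around 5 categories, consistent with the abstract's conclusion.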

10.
The efficacy of the Hedges and colleagues, Rosenthal-Rubin, and Hunter-Schmidt methods for combining correlation coefficients was tested for cases in which population effect sizes were both fixed and variable. After a brief tutorial on these meta-analytic methods, the author presents 2 Monte Carlo simulations that compare the methods as the number of studies in the meta-analysis and the average sample size of studies were varied. In the fixed case the methods produced comparable estimates of the average effect size; however, the Hunter-Schmidt method failed to control the Type I error rate for the associated significance tests. In the variable case, both the Hedges and colleagues and the Hunter-Schmidt methods failed to control Type I error rates for meta-analyses of 15 or fewer studies, and the probability of detecting small effects was less than .3. Some practical recommendations are made about the use of meta-analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
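The core contrast between the weighting schemes can be sketched directly. Assuming the usual textbook forms (Fisher-z averaging with n − 3 weights for the Hedges/Rosenthal-Rubin style, and sample-size weighting of raw correlations for the Hunter-Schmidt style), a minimal comparison on made-up study values:

```python
import numpy as np

def fisher_z_mean(rs, ns):
    """Fixed-effect average via Fisher's z with weights n_i - 3."""
    z = np.arctanh(rs)
    w = ns - 3
    return np.tanh(np.sum(w * z) / np.sum(w))

def hunter_schmidt_mean(rs, ns):
    """Sample-size-weighted mean of the raw correlations."""
    return np.sum(ns * rs) / np.sum(ns)

rs = np.array([0.22, 0.31, 0.18, 0.40, 0.27])   # hypothetical study correlations
ns = np.array([50, 120, 80, 40, 200])           # hypothetical study sample sizes
print(f"Fisher-z weighted: {fisher_z_mean(rs, ns):.3f}")
print(f"Hunter-Schmidt:    {hunter_schmidt_mean(rs, ns):.3f}")
```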

11.
In this paper we address the mapping of multiple quantitative trait loci (QTLs) in line crosses for which the genetic data are highly incomplete. Such complicated situations occur, for instance, when dominant markers are used or when unequally informative markers are used in experiments with outbred populations. We describe a general and flexible Monte Carlo expectation-maximization (Monte Carlo EM) algorithm for fitting multiple-QTL models to such data. Implementation of this algorithm is straightforward in standard statistical software, but computation may take considerable time. The method may be generalized to cope with more complex models for animal and human pedigrees. A practical example is presented, in which a three-QTL model is adopted in an outbreeding situation with dominant markers. The example concerns the linkage between randomly amplified polymorphic DNA (RAPD) markers and QTLs for partial resistance to Fusarium oxysporum in lily.
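The paper's multiple-QTL model is not reproduced here. As a toy illustration of the Monte Carlo EM idea, the sketch below fits a two-component normal mixture, replacing the closed-form E-step with sampled latent labels (every modeling choice in it is this sketch's assumption):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data from a two-component normal mixture (unit variances).
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 350)])

mu, pi = np.array([-1.0, 1.0]), 0.5            # initial parameter guesses
for it in range(30):
    # Monte Carlo E-step: sample latent labels from their current posterior
    # instead of using the (here available) closed-form responsibilities.
    lik1 = pi * np.exp(-0.5 * (x - mu[1]) ** 2)
    lik0 = (1 - pi) * np.exp(-0.5 * (x - mu[0]) ** 2)
    post1 = lik1 / (lik0 + lik1)
    z = rng.random((50, x.size)) < post1       # 50 MC draws per observation
    # M-step on the MC-completed data: average over the sampled labels.
    pi = z.mean()
    mu[1] = (z * x).sum() / z.sum()
    mu[0] = ((~z) * x).sum() / (~z).sum()
print(f"estimated means {mu.round(2)}, mixing weight {pi:.2f}")
```

The pattern is the same as in genuinely intractable settings: when the E-step expectation has no closed form, it is approximated by averaging over simulated completions of the missing data.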

12.
Validity generalization methods require accurate estimates of the sampling variance in the correlation coefficient when the range of variation in the data is restricted. This article presents the results of computer simulations examining the accuracy of the sampling variance estimator under sample range restrictions. Range restriction is assumed to occur by direct selection on the predictor. Sample sizes of 25, 60, and 100 are used, with the selection ratio ranging from .10 to 1.0 and the population correlation ranging from .10 to .90. The estimator is found to have a slight negative bias in unrestricted data. In restricted data, the bias is substantial in sample sizes of 60 or less. In all sample sizes, the negative bias increases as the selection ratio becomes smaller. Implications of the results for studies of validity generalization are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
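A minimal sketch of such a simulation, assuming bivariate-normal data with direct truncation on the predictor (the selection ratio, ρ, and n below are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
rho, n, sr, reps = 0.50, 60, 0.30, 5000        # sr: keep the top 30% on x
cut = norm.ppf(1 - sr)                         # truncation point for selection

rs, vhat = [], []
for _ in range(reps):
    # Accumulate applicants until n pass direct selection on the predictor x.
    xs, ys = [], []
    while len(xs) < n:
        x = rng.standard_normal(8 * n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(8 * n)
        keep = x > cut
        xs.extend(x[keep]); ys.extend(y[keep])
    x, y = np.array(xs[:n]), np.array(ys[:n])
    r = np.corrcoef(x, y)[0, 1]
    rs.append(r)
    vhat.append((1 - r**2) ** 2 / (n - 1))     # the standard estimator

print(f"actual Var(r) across samples: {np.var(rs, ddof=1):.5f}")
print(f"mean of the estimator:        {np.mean(vhat):.5f}")
```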

13.
14.
15.
16.
Three inferential morphometric methods, Euclidean distance matrix analysis (EDMA), Bookstein's edge-matching method (EMM), and the Procrustes method, were applied to facial landmark data. A Monte Carlo simulation was conducted with three sample sizes, ranging from n = 10 to 50, to assess Type I error rates and the power of the tests to detect group differences for two- and three-dimensional representations of forms. Type I error rates for EMM were at or below nominal levels in both two and three dimensions. Procrustes in 3D and EDMA in both 2D and 3D produced inflated Type I error rates in all conditions but approached acceptable levels with moderate cell sizes; Procrustes maintained error rates below the nominal levels in 2D. The power of EMM was high compared with the other methods in both 2D and 3D, but conflicting EMM decisions were obtained depending on which pair (2D) or triad (3D) of landmarks was selected as reference points. EDMA and Procrustes were more powerful for 2D data than for 3D data. Interpretation of these results must take into account that the data used in this simulation were selected because they represent real data that might have been collected during a study or experiment. These data violated assumptions central to the methods: variances about landmarks were unequal, errors were correlated, and landmark locations were correlated. The results therefore may not generalize to all conditions, such as cases with no violations of assumptions. This simulation demonstrates, however, limitations of each procedure that should be considered when making inferences about shape comparisons.
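Of the three methods, ordinary Procrustes superimposition has a compact closed-form core (translate, scale, rotate via SVD). A minimal numpy sketch of that alignment step only; it is not an implementation of the paper's significance tests, and the landmark data are synthetic:

```python
import numpy as np

def procrustes_align(A, B):
    """Align landmark configuration B to A: center, scale to unit size, rotate.
    A and B are (n_landmarks, dim) arrays; returns aligned B and the residual."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)                  # unit centroid size
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt                                 # optimal rotation (may reflect;
    return B @ R, np.linalg.norm(A - B @ R)    # constrain det(R)=+1 if needed)

rng = np.random.default_rng(8)
A = rng.standard_normal((6, 2))                # 6 landmarks in 2D
th = 0.7
rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
B = (A @ rot.T) * 2.5 + 1.0 + rng.normal(0, 0.01, A.shape)  # moved copy of A
_, resid = procrustes_align(A, B)
print(f"Procrustes residual after alignment: {resid:.4f}")  # near zero
```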

17.
18.
It is well known that dose calibrator response per unit exposure rate depends significantly on source energy. However, investigation of 137Cs, 192Ir, and 226Ra brachytherapy sources by empirical, analytical, and Monte Carlo techniques shows that source filtration also significantly affects the calibrator-reading-to-exposure-rate conversion factor. The results demonstrate that, for each clinically used filtration thickness, an exposure-calibrated standard source is required to establish the response of the well chamber. An interesting consequence of this analysis is that the Sievert point-dose algorithm for clinical sources overestimates the dose on the order of 3% at distances of approximately 3.5 cm from the source.
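The Sievert-style calculation integrates over the source's active length, attenuating each ray by the filter thickness it traverses obliquely. A minimal numerical sketch of that integral, with all physical constants chosen for illustration rather than taken from any source's data:

```python
import numpy as np
from scipy.integrate import quad

def sievert_relative(h, L=1.5, t=0.05, mu=20.0):
    """Relative exposure from a filtered line source at perpendicular distance
    h (cm) from its center, via the classic Sievert integral. L: active length
    (cm), t: filter thickness (cm), mu: filter attenuation coefficient (1/cm).
    All values are illustrative assumptions."""
    theta = np.arctan((L / 2) / h)             # half-angle subtended by source
    val, _ = quad(lambda th: np.exp(-mu * t / np.cos(th)), -theta, theta)
    return val / (L * h)                       # omits source-strength constant

# Compare against an unfiltered point-geometry inverse-square falloff with the
# same perpendicular filtration; the ratio approaches 1 as h grows.
for h in (1.0, 2.0, 3.5, 5.0):
    line = sievert_relative(h)
    point = np.exp(-20.0 * 0.05) / h**2
    print(f"h = {h:3.1f} cm  line/point ratio = {line / point:.3f}")
```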

19.
Explored the use of transformations to improve power in within-subject designs in which multiple observations are collected for each subject in each condition, as in reaction time (RT) and psychophysiological experiments. Often the multiple measures within a treatment are simply averaged to yield a single number, but other transformations have been proposed. Monte Carlo simulations were used to investigate the influence of these transformations on the probabilities of Type I and Type II errors. With normally distributed data, Z and range-correction transformations led to substantial increases in power over simple averages. With highly skewed distributions, the optimal transformation depended on several variables, but Z and range correction performed well across conditions. Correcting for outliers was useful in increasing power, and trimming was more effective than eliminating all points beyond a criterion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
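Illustrative implementations of two of the named transformations, not the article's exact procedures (the trim percentages and simulated RT distributions are this sketch's assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)

def condition_score(rts, method="mean"):
    """Collapse one subject's trial RTs in a condition to a single score."""
    if method == "trim":                       # trim the extreme 10% per tail
        lo, hi = np.percentile(rts, [10, 90])
        rts = rts[(rts >= lo) & (rts <= hi)]
    return rts.mean()

def z_within_subject(cond_a, cond_b):
    """Z transformation: standardize a subject's trials across conditions
    before comparing condition means."""
    pooled = np.concatenate([cond_a, cond_b])
    m, s = pooled.mean(), pooled.std(ddof=1)
    return (cond_a - m) / s, (cond_b - m) / s

# One subject's simulated (right-skewed) reaction times in two conditions.
a = rng.lognormal(mean=6.0, sigma=0.4, size=40)    # roughly 400-500 ms
b = rng.lognormal(mean=6.1, sigma=0.4, size=40)
za, zb = z_within_subject(a, b)
print(f"raw means:     {condition_score(a):.1f} vs {condition_score(b):.1f} ms")
print(f"trimmed means: {condition_score(a, 'trim'):.1f} vs "
      f"{condition_score(b, 'trim'):.1f} ms")
print(f"z-score means: {za.mean():.3f} vs {zb.mean():.3f}")
```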

20.