Similar Articles
20 similar articles found.
1.
Suggests that by the logic of their derivations, the formulas recently proposed by N. Schmitt et al. (see record 1978-07042-001) for estimating cross-validated multiple correlation most closely approximate a measure of generalized predictive accuracy distinct from any traditional correlation statistic. Interpreted correlationally, these formulas have a negative bias that can become appreciable in parameter regions not sampled by their Monte Carlo tests. It is concluded that although less biased and more cogently derived alternatives are available, one of the formulas proposed by Schmitt et al. works reasonably well in practice. (5 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
To determine the stability of regression equations, researchers have typically employed a cross-validation design in which weights are developed on an estimation subset of the sample and then applied to the members of a holdout sample. The present study used a Monte Carlo simulation to ascertain the accuracy with which the shrinkage in R² could be estimated by 3 formulas developed for this purpose. Results indicate that R. B. Darlington's (see record 1968-08053-001) and F. M. Lord's (1950) and G. E. Nicholson's (1960) formulas yielded mean estimates approximately equal to actual cross-validation values, but with smaller standard errors. Although the Wherry estimate is a good estimate of the population multiple correlation, it overestimates population cross-validity. It is advised that the researcher estimate weights on the total sample to maximize the stability of the regression equation and then estimate the shrinkage in R² to be expected in a new sample with either the Lord-Nicholson or Darlington estimation formula. (17 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
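A minimal sketch of the two kinds of formula-based shrinkage estimate contrasted above, assuming the usual notation (R² from the total sample, n cases, k predictors). The function names and example values are illustrative, and the Lord-Nicholson expression is given in its commonly cited form rather than quoted from the article:

# Hedged sketch: formula-based shrinkage estimates for a fitted regression.
# The Wherry-type adjustment targets the population multiple correlation;
# the Lord-Nicholson form (as commonly cited) targets population cross-validity.

def wherry_adjusted_r2(r2, n, k):
    """Wherry-type adjusted R^2: 1 - (1 - R^2)(n - 1)/(n - k - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def lord_nicholson_cross_validity(r2, n, k):
    """Commonly cited Lord-Nicholson estimate of squared cross-validity:
    1 - (1 - R^2)(n + k + 1)/(n - k - 1). Treat the constants as an assumption."""
    return 1.0 - (1.0 - r2) * (n + k + 1) / (n - k - 1)

# Example: R^2 = .40 from n = 60 cases and k = 5 predictors.
print(wherry_adjusted_r2(0.40, 60, 5))              # ~0.344
print(lord_nicholson_cross_validity(0.40, 60, 5))   # ~0.267

As the abstract notes, the cross-validity estimate is noticeably smaller than the adjusted R², reflecting the extra shrinkage expected when sample weights are applied to new data.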

3.
Formula estimation of the predictive precision of a multiple regression equation is frequently presented as an alternative to actual cross-validation where appropriate, and a particular formula developed by M. W. Browne (see record 1978-00130-001) and evaluated by P. Cattin (see record 1980-31576-001) is cited as most useful in personnel psychology. One incorrectly specified term and an incorrect assumption regarding calculation of another term contained in identical formulae common to two influential personnel psychology texts suggest a shared misunderstanding of Browne's formula. Use of the incorrect formula will produce positively biased estimates of the squared population cross-validated multiple correlation. These discrepancies are examined, their practical implications are discussed, and correct presentation of Browne's formula is given. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
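A hedged sketch of the Browne-type estimator referred to above, in the form commonly reproduced from Browne (1975) and Cattin (1980). The exact expression and the choice of input should be checked against the article itself; the constants and the example values below are assumptions:

def browne_cross_validity(r2_adj, n, k):
    """Commonly cited Browne-type estimator of squared cross-validity.
    r2_adj is intended to be an adjusted estimate of the squared population
    multiple correlation (e.g., a Wherry-adjusted R^2), not the raw sample R^2;
    the article warns that misspecified terms yield positively biased estimates.
    Assumed form: ((n - k - 3)*rho^4 + rho^2) / ((n - 2k - 2)*rho^2 + k)."""
    rho2 = max(r2_adj, 0.0)
    return ((n - k - 3) * rho2**2 + rho2) / ((n - 2*k - 2) * rho2 + k)

# Illustrative values: adjusted R^2 of .34 with n = 60 cases and k = 5 predictors.
print(browne_cross_validity(0.34, 60, 5))   # ~0.30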

4.
One of the main objectives in meta-analysis is to estimate the overall effect size by calculating a confidence interval (CI). The usual procedure consists of assuming a standard normal distribution and a sampling variance defined as the inverse of the sum of the estimated weights of the effect sizes. But this procedure does not take into account the uncertainty due to the fact that the heterogeneity variance (τ²) and the within-study variances have to be estimated, leading to CIs that are too narrow, with the consequence that the actual coverage probability is smaller than the nominal confidence level. In this article, the performances of 3 alternatives to the standard CI procedure are examined under a random-effects model and 8 different τ² estimators to estimate the weights: the t distribution CI, the weighted variance CI (with an improved variance), and the quantile approximation method (recently proposed). The results of a Monte Carlo simulation showed that the weighted variance CI outperformed the other methods regardless of the τ² estimator, the value of τ², the number of studies, and the sample size. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
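A minimal sketch of the comparison described above, assuming the DerSimonian-Laird τ² estimator (one of several possible choices) and a Hartung-style weighted-variance CI with a t critical value. Function names and the example data are illustrative assumptions, not the article's code:

import numpy as np
from scipy import stats

def random_effects_cis(y, v, alpha=0.05):
    """y: observed effect sizes; v: within-study variances.
    Returns the standard Wald CI and a weighted-variance (Hartung-type) CI."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w_fixed = 1.0 / v
    q = np.sum(w_fixed * (y - np.average(y, weights=w_fixed)) ** 2)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)            # DerSimonian-Laird estimate
    w = 1.0 / (v + tau2)
    mu = np.average(y, weights=w)
    se_wald = np.sqrt(1.0 / np.sum(w))            # standard normal-theory procedure
    z = stats.norm.ppf(1 - alpha / 2)
    wald_ci = (mu - z * se_wald, mu + z * se_wald)
    # Weighted-variance ("improved variance") estimate with a t(k-1) quantile.
    var_wv = np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w))
    t = stats.t.ppf(1 - alpha / 2, k - 1)
    wv_ci = (mu - t * np.sqrt(var_wv), mu + t * np.sqrt(var_wv))
    return wald_ci, wv_ci

# Illustrative data: 6 study effects and their within-study variances.
print(random_effects_cis([0.2, 0.5, 0.1, 0.4, 0.6, 0.3],
                         [0.04, 0.05, 0.03, 0.06, 0.05, 0.04]))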

5.
[Correction Notice: An erratum for this article was reported in Vol 13(1) of Psychological Methods (see record 2008-02525-006). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's rs. An error in the author's C++ code affected all simulation results for Spearman's rs (but none of the results for gamma-family indices).] This research focused on confidence intervals (CIs) for 10 measures of monotonic association between ordinal variables. Standard errors (SEs) were also reviewed because more than 1 formula was available per index. For 5 indices, an element of the formula used to compute an SE is given that is apparently new. CIs computed with different SEs were compared in simulations with small samples (N = 25, 50, 75, or 100) for variables with 4 or 5 categories. With N > 25, many CIs performed well. Performance was best for consistent CIs due to N. Cliff and colleagues (N. Cliff, 1996; N. Cliff & V. Charlin, 1991; J. D. Long & N. Cliff, 1997). CIs for Spearman's rank correlation were also examined: Parameter coverage was erratic and sometimes egregiously underestimated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to report a confidence interval for the population value of the effect size. Standardized linear contrasts of means are useful measures of effect size in a wide variety of research applications. New confidence intervals for standardized linear contrasts of means are developed and may be applied to between-subjects designs, within-subjects designs, or mixed designs. The proposed confidence interval methods are easy to compute, do not require equal population variances, and perform better than the currently available methods when the population variances are not equal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
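The article develops CIs for standardized contrasts; as a simpler illustration of the unequal-variance idea, the sketch below builds an approximate CI for an unstandardized linear contrast of means using a Satterthwaite degrees-of-freedom correction. It is not the article's method; the function name and example data are assumptions:

import numpy as np
from scipy import stats

def contrast_ci(groups, c, alpha=0.05):
    """Approximate CI for an (unstandardized) linear contrast of means
    without assuming equal variances, using a Satterthwaite df.
    groups: list of 1-D samples; c: contrast coefficients that sum to 0."""
    means = np.array([np.mean(g) for g in groups])
    vars_ = np.array([np.var(g, ddof=1) for g in groups])
    ns = np.array([len(g) for g in groups])
    c = np.asarray(c, float)
    est = np.dot(c, means)
    se2_terms = c**2 * vars_ / ns
    se = np.sqrt(se2_terms.sum())
    df = se2_terms.sum()**2 / np.sum(se2_terms**2 / (ns - 1))  # Satterthwaite df
    t = stats.t.ppf(1 - alpha / 2, df)
    return est - t * se, est + t * se

# Illustrative use: compare group 1 with the average of groups 2 and 3.
rng = np.random.default_rng(0)
g1, g2, g3 = rng.normal(0, 1, 20), rng.normal(0.5, 2, 25), rng.normal(0.7, 3, 30)
print(contrast_ci([g1, g2, g3], [1, -0.5, -0.5]))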

7.
Thirteen methods for computing binomial confidence intervals are compared based on their coverage properties, widths, and errors relative to exact limits. The use of the standard textbook method, x/n ± 1.96√[(x/n)(1−x/n)/n], or its continuity-corrected version, is strongly discouraged. A commonly cited rule of thumb stating that alternatives to exact methods may be used when the estimated proportion p is such that np and n(1−p) both exceed 5 does not ensure adequate accuracy. Score limits are easily calculated from closed-form solutions to quadratic equations and can be used at all times. Based on coverage functions, the continuity-corrected score method is recommended over exact methods. Its conservative nature should be kept in mind, as should the wider fluctuation of actual coverage that accompanies omission of the continuity correction.
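A minimal sketch contrasting the discouraged textbook (Wald) interval with the score (Wilson) interval obtained from the closed-form quadratic solution; the continuity-corrected variants discussed above are omitted for brevity, and the example counts are illustrative:

import math

def wald_ci(x, n, z=1.96):
    """Discouraged textbook interval: x/n +/- z*sqrt((x/n)(1-x/n)/n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_score_ci(x, n, z=1.96):
    """Score (Wilson) interval: the roots of (p_hat - p)^2 = z^2 p(1-p)/n,
    available in closed form and usable at all sample sizes."""
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

print(wald_ci(3, 20))          # can extend below 0 and undercover for small x
print(wilson_score_ci(3, 20))  # stays inside [0, 1]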

8.
Kaufman et al. compute the 'excess risk' of a disease in the presence of an exposure as the product of the incidence rate of the disease in the source population, the complement of the aetiologic fraction and the relative risk minus one. Methods for calculating confidence intervals for this quantity are derived when (as in case-control studies) the relative risk is estimated by the odds ratio, firstly from multiple logistic regression analysis and secondly without adjustment for covariates. For the latter an innovative approach based on confidence bounds for the two exposure parameters is suggested. The performance of these systems of confidence intervals is assessed by simulation for the former and by exact enumeration of the distributions involved in the latter. Illustrative examples from a study of agranulocytosis and indomethacin are presented.

9.
An experiment to assess the efficacy of a particular treatment or process often produces dichotomous responses, either favourable or unfavourable. When we administer the treatment on two occasions to the same subjects, we often use McNemar's test to investigate the hypothesis of no difference in the proportions on the two occasions, that is, the hypothesis of marginal homogeneity. A disadvantage in using McNemar's statistic is that we estimate the variance of the sample difference under the restriction that the marginal proportions are equal. A competitor to McNemar's statistic is a Wald statistic that uses an unrestricted estimator of the variance. Because the Wald statistic tends to reject too often in small samples, we investigate an adjusted form that is useful for constructing confidence intervals. Quesenberry and Hurst, and Goodman, discussed methods of construction that we adapt to construct confidence intervals for differences in correlated proportions. We empirically compare the coverage probabilities and average interval lengths of the competing methods through simulation and give recommendations based on the simulation results.
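A minimal sketch of a Wald-type interval for the difference in correlated proportions using the unrestricted variance estimate mentioned above. The optional cell adjustment is one common small-sample fix, not necessarily the adjusted form studied in the article, and the example counts are illustrative:

import math

def paired_diff_ci(b, c, n, z=1.96, adjust=0.0):
    """Wald-type CI for the difference in correlated proportions from a
    paired 2x2 table. b, c: discordant cell counts; n: total pairs.
    adjust: count added to b and c (e.g., 0.5) as an optional adjustment."""
    b, c, n = b + adjust, c + adjust, n + 2 * adjust
    d = (b - c) / n
    var = (b + c - (b - c) ** 2 / n) / n ** 2   # unrestricted variance estimate
    half = z * math.sqrt(var)
    return d - half, d + half

print(paired_diff_ci(12, 5, 80))               # plain Wald interval
print(paired_diff_ci(12, 5, 80, adjust=0.5))   # with a small-sample adjustment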

10.
Notes that for those making choices among selection strategies, training programs, or other treatments, it can be more important to understand the impact of the choice on individuals identified as the best or poorest rather than on the average. As there are no readily available techniques for making such comparisons, an approach that develops confidence intervals for quantile differences is illustrated, based on the recently developed bootstrap principle of nonparametric inference. (14 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
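A minimal sketch of a percentile bootstrap CI for a quantile difference between two groups, in the spirit of the approach described above; the function name, quantile, and simulated data are illustrative assumptions:

import numpy as np

def bootstrap_quantile_diff_ci(x, y, q=0.9, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the difference between the q-th quantiles
    of two independent groups (e.g., the top decile under two treatments)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        bx = rng.choice(x, size=len(x), replace=True)
        by = rng.choice(y, size=len(y), replace=True)
        diffs[i] = np.quantile(bx, q) - np.quantile(by, q)
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(0)
treated, control = rng.normal(0.4, 1, 120), rng.normal(0.0, 1, 120)
print(bootstrap_quantile_diff_ci(treated, control, q=0.9))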

11.
Reports an error in Confidence intervals for gamma-family measures of ordinal association by Carol M. Woods (Psychological Methods, 2007[Jun], Vol 12[2], 185-204). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's rs. An error in the author's C++ code affected all simulation results for Spearman's rs (but none of the results for gamma-family indices). (The following abstract of the original article appeared in record 2007-07830-005.) This research focused on confidence intervals (CIs) for 10 measures of monotonic association between ordinal variables. Standard errors (SEs) were also reviewed because more than 1 formula was available per index. For 5 indices, an element of the formula used to compute an SE is given that is apparently new. CIs computed with different SEs were compared in simulations with small samples (N = 25, 50, 75, or 100) for variables with 4 or 5 categories. With N > 25, many CIs performed well. Performance was best for consistent CIs due to N. Cliff and colleagues (N. Cliff, 1996; N. Cliff & V. Charlin, 1991; J. D. Long & N. Cliff, 1997). CIs for Spearman's rank correlation were also examined: Parameter coverage was erratic and sometimes egregiously underestimated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
RATIONALE AND OBJECTIVES: The authors performed this study to address two practical questions. First, how large does the sample size need to be for confidence intervals (CIs) based on the usual asymptotic methods to be appropriate? Second, when the sample size is smaller than this threshold, what alternative method of CI construction should be used? MATERIALS AND METHODS: The authors performed a Monte Carlo simulation study where 95% CIs were constructed for the receiver operating characteristic (ROC) area and for the difference between two ROC areas for rating and continuous test results--for ROC areas of moderate and high accuracy--by using both parametric and nonparametric estimation methods. Alternative methods evaluated included several bootstrap CIs and CIs with the Student t distribution. RESULTS: For the difference between two ROC areas, CIs based on the asymptotic theory provided adequate coverage even when the sample size was very small (20 patients). In contrast, for a single ROC area, the asymptotic methods do not provide adequate CI coverage for small samples; for ROC areas of high accuracy, the sample size must be large (more than 200 patients) for the asymptotic methods to be applicable. The recommended alternative (bootstrap percentile, bootstrap t, or bootstrap bias-corrected accelerated method) depends on the estimation approach, format of the test results, and ROC area. CONCLUSION: Currently, there is not a single best alternative for constructing CIs for a single ROC area for small samples.
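A minimal sketch of one of the alternatives listed above, a percentile bootstrap CI for a single nonparametric ROC area (estimated via the Mann-Whitney statistic); the functions and simulated scores are illustrative assumptions, not the study's code:

import numpy as np

def auc(pos, neg):
    """Nonparametric ROC area: probability that a diseased score exceeds a
    non-diseased score, with ties counted as 1/2 (Mann-Whitney estimate)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def bootstrap_auc_ci(pos, neg, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for a single ROC area, resampling diseased
    and non-diseased cases separately."""
    rng = np.random.default_rng(seed)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        bp = rng.choice(pos, size=len(pos), replace=True)
        bn = rng.choice(neg, size=len(neg), replace=True)
        boot[i] = auc(bp, bn)
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(0)
diseased, healthy = rng.normal(1.2, 1, 30), rng.normal(0.0, 1, 30)
print(auc(diseased, healthy), bootstrap_auc_ci(diseased, healthy))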

13.
Weights derived from an admissions committee's assessment of 170 applicants to a graduate industrial relations program using 5 models (linear, multiplicative, dummy variable, unit weighting, and multiple hurdles) were cross-validated on 112 additional applicants. Predictions of all models were significantly related to the committee's admissions decisions in the cross-validation group. The accuracy of predictions was about the same for all models; however, except for GPA and Graduate Record Examination scores, the variables that received weight varied somewhat from model to model. A substantial amount of the decision variance was unaccounted for by any model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Since the publication of Loftus and Masson’s (1994) method for computing confidence intervals (CIs) in repeated-measures (RM) designs, there has been uncertainty about how to apply it to particular effects in complex factorial designs. Masson and Loftus (2003) proposed that RM CIs for factorial designs be based on number of observations rather than number of participants. However, determining the correct number of observations for a particular effect can be complicated, given the variety of effects occurring in factorial designs. In this paper the authors define a general “number of observations” principle, explain why it obtains, and provide step-by-step instructions for constructing CIs for various effect types. The authors illustrate these procedures with numerical examples. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
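A minimal sketch of the basic Loftus and Masson (1994) interval for a one-way repeated-measures design, which the factorial "number of observations" procedure generalizes; the factorial rules themselves are not reproduced here, and the function name and example data are illustrative assumptions:

import numpy as np
from scipy import stats

def loftus_masson_ci(data, alpha=0.05):
    """Within-subject CIs in the spirit of Loftus & Masson (1994) for a
    one-way repeated-measures design. data: participants x conditions array.
    Uses the subject-by-condition interaction mean square as the error term."""
    data = np.asarray(data, float)
    n, j = data.shape
    grand = data.mean()
    subj = data.mean(axis=1, keepdims=True)
    cond = data.mean(axis=0, keepdims=True)
    resid = data - subj - cond + grand
    ms_error = (resid ** 2).sum() / ((n - 1) * (j - 1))
    half = stats.t.ppf(1 - alpha / 2, (n - 1) * (j - 1)) * np.sqrt(ms_error / n)
    return cond.ravel(), half   # condition means and common CI half-width

# Illustrative 8-participant, 3-condition data set with large subject effects.
rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 2, size=(8, 1))
scores = subject_effect + np.array([0.0, 0.5, 1.0]) + rng.normal(0, 0.5, size=(8, 3))
means, half = loftus_masson_ci(scores)
print(means, "+/-", half)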

15.
A Monte Carlo simulation assessed the relative power of 2 techniques that are commonly used to test for moderating effects. 500 samples were drawn from simulation-based populations for each of 81 conditions in a design that varied sample size, the reliabilities of 2 predictor variables (1 of which was the moderator variable), and the magnitude of the moderating effect. The null hypothesis of no interaction effect was tested by using moderated multiple regression (MMR). Each sample was then successively polychotomized into 2, 3, 4, 6, and 8 subgroups, and the equality of the subgroup-based correlation coefficients (SCC) was tested. Results show MMR to be more powerful than the SCC strategy for virtually all of the 81 conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
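A minimal sketch of the MMR test referred to above: regress the outcome on the predictor, the moderator, and their product, and test the product-term coefficient. This is a generic illustration, not the simulation code, and the subgroup-correlation comparison is omitted; names and data are assumptions:

import numpy as np
from scipy import stats

def mmr_interaction_test(x, z, y):
    """Moderated multiple regression: test the x*z product term.
    Returns (b_interaction, t statistic, two-sided p value)."""
    x, z, y = map(np.asarray, (x, z, y))
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = len(y) - X.shape[1]
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stat = beta[3] / np.sqrt(cov[3, 3])
    return beta[3], t_stat, 2 * stats.t.sf(abs(t_stat), df)

# Illustrative data with a small true interaction effect.
rng = np.random.default_rng(0)
x, z = rng.normal(size=300), rng.normal(size=300)
y = 0.3 * x + 0.2 * z + 0.15 * x * z + rng.normal(size=300)
print(mmr_interaction_test(x, z, y))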

16.
Describes how an apparent contradiction between the methods of coding dummy variables proposed by J. Cohen (see record 1969-06106-001) and those by J. Overall and D. Spiegel (see record 1970-01534-001) led to the discovery of a general formula for such coding, based on demonstrating a theoretical connection between multiple comparison and dummy multiple regression. Examples are given for various cases of orthogonal and nonorthogonal designs, which explicitly include assumptions about sample size. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
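For readers unfamiliar with the coding schemes being reconciled above, here is a small sketch of two standard ones (dummy/indicator coding and effect coding); it illustrates the building blocks, not the general formula derived in the article, and the names and example groups are assumptions:

import numpy as np

def dummy_codes(levels, reference=0):
    """Dummy (indicator) coding: one 0/1 column per non-reference level."""
    levels = np.asarray(levels)
    cats = [c for c in np.unique(levels) if c != reference]
    return np.column_stack([(levels == c).astype(float) for c in cats])

def effect_codes(levels, reference=0):
    """Effect (deviation) coding: like dummy coding, but the reference level
    is scored -1 in every column, so coefficients contrast each group mean
    with the unweighted grand mean."""
    X = dummy_codes(levels, reference)
    X[np.asarray(levels) == reference] = -1.0
    return X

groups = np.array([0, 0, 1, 1, 2, 2])   # three groups, two observations each
print(dummy_codes(groups))
print(effect_codes(groups))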

17.
Although cost-effectiveness analysis is not new, it is only recently that economic analysis has been conducted alongside clinical trials. Whereas in the past economic analysts most often used sensitivity analysis to examine the implications of uncertainty for their results, the existence of patient-level data on costs and effects opens up the possibility of statistical analysis of uncertainty. Unfortunately, ratio statistics can cause problems for standard statistical methods of confidence interval estimation. The recent health economics literature contains a number of suggestions for estimating confidence limits for ratios. In this paper, we begin by reviewing the different methods of confidence interval estimation with a view to providing guidance concerning the most appropriate method. We go on to argue that the focus on confidence interval estimation for cost-effectiveness ratios in the recent literature has been concerned more with problems of estimation than with problems of decision-making. We argue that decision-makers are most likely to be interested in one-sided tests of hypothesis and that confidence surfaces are better suited to such tests than confidence intervals. This approach is consistent with decision-making on the cost-effectiveness plane and with the cost-effectiveness acceptability curve approach to presenting uncertainty due to sampling variation in stochastic cost-effectiveness analyses.
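A minimal sketch of the cost-effectiveness acceptability curve idea mentioned at the end of the abstract, using a nonparametric bootstrap of incremental net benefit; this is a generic illustration under assumed data and thresholds, not the article's confidence-surface method:

import numpy as np

def ceac(cost_t, eff_t, cost_c, eff_c, wtp_grid, n_boot=2000, seed=1):
    """Cost-effectiveness acceptability curve: for each willingness-to-pay
    value, the proportion of bootstrap replicates with positive incremental
    net benefit (wtp * delta_effect - delta_cost)."""
    rng = np.random.default_rng(seed)
    nb = np.empty((n_boot, len(wtp_grid)))
    for i in range(n_boot):
        it = rng.integers(0, len(cost_t), len(cost_t))
        ic = rng.integers(0, len(cost_c), len(cost_c))
        d_eff = eff_t[it].mean() - eff_c[ic].mean()
        d_cost = cost_t[it].mean() - cost_c[ic].mean()
        nb[i] = np.asarray(wtp_grid) * d_eff - d_cost
    return (nb > 0).mean(axis=0)

# Illustrative patient-level costs and effects for treatment and control arms.
rng = np.random.default_rng(0)
cost_t, eff_t = rng.gamma(2, 500, 100), rng.normal(0.8, 0.3, 100)
cost_c, eff_c = rng.gamma(2, 400, 100), rng.normal(0.7, 0.3, 100)
print(ceac(cost_t, eff_t, cost_c, eff_c, wtp_grid=[0, 1000, 5000, 20000]))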

18.
19.
A confidence interval (CI) for a population predictor weight for use with N. Cliff's (1994) method of ordinal multiple regression (OMR) is presented. The OMR CI is based on an estimated standard error of a weight derived from a fixed-effects model. A simulation was performed to examine the sampling properties of the OMR CI. The results show that the OMR CI had good Type I error rate and coverage. The OMR CI had lower power than the least-squares multiple regression (LSMR) CI when predictors were not correlated but had higher power when predictor correlations were moderate to high. In addition to discussing the simulation results, it is pointed out that the OMR CI can have superior sampling properties when the fixed-effects assumptions are violated. The OMR CI is recommended when a researcher wants to consider only ordinal information in multivariate prediction, when predictor correlations are moderate to high, and when the assumptions of fixed-effects LSMR are violated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.