Similar Documents
20 similar documents found.
1.
Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no wider than desired with some specified degree of certainty (e.g., 99% certain the 95% CI will be no wider than ω). The rationale of the AIPE approach to SS planning is given, as is a discussion of the analytic approach to CI formation for the population standardized mean difference. Tables with values of necessary SS are provided. The freely available Methods for the Behavioral, Educational, and Social Sciences (K. Kelley, 2006a) R (R Development Core Team, 2006) software package easily implements the methods discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
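The expected-width criterion described in this abstract can be sketched numerically. The following is a minimal illustration, not the article's exact method: it uses a large-sample normal approximation to the standard error of Cohen's d (the published procedure inverts the noncentral t distribution), and the function names are invented for this sketch.

```python
import math
from statistics import NormalDist

def smd_ci_width(d, n, conf=0.95):
    """Approximate full width of the CI for a standardized mean
    difference d with n subjects per group (large-sample SE)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    se = math.sqrt(2.0 / n + d ** 2 / (4.0 * n))
    return 2 * z * se

def aipe_n(d, omega, conf=0.95):
    """Smallest per-group n whose expected CI width is <= omega."""
    n = 4
    while smd_ci_width(d, n, conf) > omega:
        n += 1
    return n
```

For d = 0.5 and a target full width ω = 0.30, this sketch returns a per-group n of a few hundred; the assurance modification in the abstract would then inflate n further so the *obtained* (not just expected) width meets the target with, say, 99% certainty.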

2.
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

3.
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to report a confidence interval for the population value of the effect size. Standardized linear contrasts of means are useful measures of effect size in a wide variety of research applications. New confidence intervals for standardized linear contrasts of means are developed and may be applied to between-subjects designs, within-subjects designs, or mixed designs. The proposed confidence interval methods are easy to compute, do not require equal population variances, and perform better than the currently available methods when the population variances are not equal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
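The heteroscedasticity-robust idea in this abstract can be sketched for an unstandardized contrast of means. This is a simplified version: it uses a normal critical value, whereas a Welch-type procedure of the kind described would use a t critical value with Satterthwaite degrees of freedom; the function name and arguments are illustrative.

```python
import math
from statistics import NormalDist

def contrast_ci(means, sds, ns, coefs, conf=0.95):
    """CI for psi = sum(c_j * mu_j) without assuming equal variances.
    Each group contributes its own variance s_j^2 / n_j, weighted by
    the squared contrast coefficient."""
    est = sum(c * m for c, m in zip(coefs, means))
    var = sum(c ** 2 * s ** 2 / n for c, s, n in zip(coefs, sds, ns))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * math.sqrt(var)
    return est - half, est + half
```

Because each group's variance enters separately, unequal population variances do not distort the interval the way a pooled-variance interval can.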

4.
L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of sample size requirements for a standardized difference between 2 means in between-subjects designs. Sample size formulas are derived here for general standardized linear contrasts of k ≥ 2 means for both between-subjects designs and within-subjects designs. Special sample size formulas also are derived for the standardizer proposed by G. V. Glass (1976). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
When the distribution of the response variable is skewed, the population median may be a more meaningful measure of centrality than the population mean, and when the population distribution of the response variable has heavy tails, the sample median may be a more efficient estimator of centrality than the sample mean. The authors propose a confidence interval for a general linear function of population medians. Linear functions have many important special cases including pairwise comparisons, main effects, interaction effects, simple main effects, curvature, and slope. The confidence interval can be used to test 2-sided directional hypotheses and finite interval hypotheses. Sample size formulas are given for both interval estimation and hypothesis testing problems. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This article presents a generalization of the Score method of constructing confidence intervals for the population proportion (E. B. Wilson, 1927) to the case of the population mean of a rating scale item. A simulation study was conducted to assess the properties of the Score confidence interval in relation to the traditional Wald (A. Wald, 1943) confidence interval under a variety of conditions, including sample size, number of response options, extremeness of the population mean, and kurtosis of the response distribution. The results of the simulation study indicated that the Score interval usually outperformed the Wald interval, suggesting that the Score interval is a viable method of constructing confidence intervals for the population mean of a rating scale item. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
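For reference, the Wilson (1927) score interval for a proportion, which the article generalizes to rating-scale means, is a standard closed-form result and can be coded directly (variable names here are illustrative):

```python
import math
from statistics import NormalDist

def wilson_interval(successes, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half
```

Unlike the Wald interval p̂ ± z√(p̂(1−p̂)/n), the score interval never extends outside [0, 1] and maintains closer-to-nominal coverage when p̂ is extreme, which is the behavior the simulation study in the abstract examines for rating-scale means.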

8.
[Correction Notice: An erratum for this article was reported in Vol 12(4) of Psychological Methods (see record 2007-18729-004). The sentence describing Equation 1 is incorrect. The corrected sentence is presented in the erratum.] The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is greatest (small population reliability and small sample size). Taking into account the variability of sample alpha with an interval estimator may lead to retaining reliable tests that would be otherwise rejected. Here, the authors performed simulation studies to investigate the behavior of asymptotically distribution-free (ADF) versus normal-theory interval estimators of coefficient alpha under varied conditions. Normal-theory intervals were found to be less accurate when item skewness >1 or excess kurtosis >1. For sample sizes over 100 observations, ADF intervals are preferable, regardless of item skewness and kurtosis. A formula for computing ADF confidence intervals for coefficient alpha for tests of any size is provided, along with its implementation as an SAS macro. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
PURPOSE: Many radiotherapy treatment plans involve some level of standardization (e.g., in terms of beam ballistics, collimator settings, and wedge angles), which is determined primarily by tumor site and stage. If patient-to-patient variations in the size and shape of relevant anatomical structures for a given treatment site are adequately sampled, then it would seem possible to develop a general method for automatically mapping individual patient anatomy to a corresponding set of treatment variables. A medical expert system approach to standardized treatment planning was developed that should lead to improved planning efficiency and consistency. METHODS AND MATERIALS: The expert system was designed to specify treatment variables for new patients based upon a set of templates (a database of treatment plans for previous patients) and a similarity metric for determining the goodness of fit between the relevant anatomy of new patients and patients in the database. A set of artificial neural networks was used to optimize the treatment variables to the individual patient. A simplified example, a four-field box technique for prostate treatments based upon a single external contour, was used to test the viability of the approach. RESULTS: For a group of new prostate patients, treatment variables specified by the expert system were compared to treatment variables chosen by the dosimetrists. Performance criteria included dose uniformity within the target region and dose to surrounding critical organs. For this standardized prostate technique, a database consisting of approximately 75 patient records was required for the expert system performance to approach that of the dosimetrists. 
CONCLUSIONS: An expert system approach to standardized treatment planning has the potential to improve the overall efficiency of the planning process by reducing the number of iterations required to generate an optimized dose distribution; to function most effectively, it should be closely integrated with a dosimetry-based treatment planning system.

10.
The main goal of regression analysis (multiple, logistic, Cox) is to assess the relationship of one or more exposure variables to a response variable, in the presence of confounding and interaction. The confidence interval for the regression coefficient of the exposure variable, obtained through the use of a computer statistical package, quantifies these relationships for models without interaction. Relationships between variables that present interactions are represented by two or more terms, and the corresponding confidence intervals can be calculated 'manually' from the covariance matrix. This paper suggests an easy procedure for obtaining confidence intervals from any statistical package. This procedure is applicable for effect-modifying variables that are continuous as well as categorical.
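The 'manual' covariance-matrix computation this abstract refers to is the standard variance formula for a linear combination of coefficients. For a model with exposure X, modifier M, and interaction X·M, the effect of X at M = m is b1 + m·b3, with variance var(b1) + m²·var(b3) + 2m·cov(b1, b3). A sketch with invented example numbers:

```python
import math
from statistics import NormalDist

def exposure_ci(b1, b3, var_b1, var_b3, cov_b1b3, m, conf=0.95):
    """CI for the exposure effect b1 + m*b3 at modifier value m,
    computed from the coefficient covariance matrix."""
    est = b1 + m * b3
    var = var_b1 + m ** 2 * var_b3 + 2 * m * cov_b1b3
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * math.sqrt(var)
    return est - half, est + half
```

The inputs (b1, b3 and their variances/covariance) are exactly what any regression package reports, which is why the procedure works regardless of software.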

11.
One of the most frequently cited reasons for conducting a meta-analysis is the increase in statistical power that it affords a reviewer. This article demonstrates that fixed-effects meta-analysis increases statistical power by reducing the standard error of the weighted average effect size (T̄) and, in so doing, shrinking the confidence interval around T̄. Small confidence intervals make it more likely for reviewers to detect nonzero population effects, thereby increasing statistical power. Smaller confidence intervals also represent increased precision of the estimated population effect size. Computational examples are provided for 3 effect-size indices: d (standardized mean difference), Pearson's r, and odds ratios. Random-effects meta-analyses also may show increased statistical power and a smaller standard error of the weighted average effect size. However, the authors demonstrate that increasing the number of studies in a random-effects meta-analysis does not always increase statistical power. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
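The shrinking standard error described in this abstract falls directly out of inverse-variance weighting. A minimal fixed-effects sketch (function name is illustrative):

```python
import math
from statistics import NormalDist

def fixed_effects_meta(effects, variances, conf=0.95):
    """Inverse-variance weighted average effect size with its
    fixed-effects confidence interval."""
    weights = [1.0 / v for v in variances]
    t_bar = sum(w * t for w, t in zip(weights, effects)) / sum(weights)
    # SE of the weighted average: each study adds weight, so the SE
    # (and the CI width) can only shrink as studies accumulate.
    se = math.sqrt(1.0 / sum(weights))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return t_bar, (t_bar - z * se, t_bar + z * se)
```

Under a random-effects model the between-study variance is added to each study's variance, which is why, as the abstract notes, adding studies does not always increase power there.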

12.
In a meta-analysis of a set of clinical trials, a crucial but problematic component is providing an estimate and confidence interval for the overall treatment effect θ. Since in the presence of heterogeneity a fixed-effect approach yields an artificially narrow confidence interval for θ, the random-effects method of DerSimonian and Laird, which incorporates a moment estimator of the between-trial variance component σ²B, has been advocated. With the additional distributional assumptions of normality, a confidence interval for θ may be obtained. However, this method does not provide a confidence interval for σ²B, nor a confidence interval for θ which takes account of the fact that σ²B has to be estimated from the data. We show how a likelihood-based method can be used to overcome these problems, and use profile likelihoods to construct likelihood-based confidence intervals. This approach yields an appropriately widened confidence interval compared with the standard random-effects method. Examples of application to a published meta-analysis and a multicentre clinical trial are discussed. It is concluded that likelihood-based methods are preferred to the standard method in undertaking random-effects meta-analysis when the value of σ²B has an important effect on the overall estimated treatment effect.
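The DerSimonian–Laird moment estimator referenced above has a simple closed form based on Cochran's Q statistic. A sketch (function name is illustrative):

```python
def dersimonian_laird_tau2(effects, variances):
    """Moment estimator of the between-trial variance component:
    tau2 = max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w))),
    where Q is Cochran's heterogeneity statistic."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    theta_fe = sum(wi * t for wi, t in zip(w, effects)) / sw
    q = sum(wi * (t - theta_fe) ** 2 for wi, t in zip(w, effects))
    k = len(effects)
    denom = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (k - 1)) / denom)
```

Plugging this single point estimate into the random-effects weights treats it as known, which is precisely the shortcoming the profile-likelihood intervals in the abstract address.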

13.
In contrast to the standard use of regression, in which an individual's score on the dependent variable is unknown, neuropsychologists are often interested in comparing a predicted score with a known obtained score. Existing inferential methods use the standard error for a new case (sN+1) to provide confidence limits on a predicted score and hence are tailored to the standard usage. However, sN+1 can be used to test whether the discrepancy between a patient's predicted and obtained scores was drawn from the distribution of discrepancies in a control population. This method simultaneously provides a point estimate of the percentage of the control population that would exhibit a larger discrepancy. A method for obtaining confidence limits on this percentage is also developed. These methods can be used with existing regression equations and are particularly useful when the sample used to generate a regression equation is modest in size. Monte Carlo simulations confirm the validity of the methods, and computer programs that implement them are described and made available. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
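The standardized discrepancy at the core of this method can be sketched as follows. This is only the first step (the article's full procedure refers the statistic to a t distribution to obtain the abnormality percentage and its confidence limits); s_e, x̄, and SSx are the standard error of estimate, predictor mean, and predictor sum of squares from the control sample, and the function name is invented here.

```python
import math

def discrepancy_z(obtained, predicted, s_e, n, x, x_mean, ss_x):
    """Discrepancy between a patient's obtained and predicted scores,
    standardized by the standard error for a new case (sN+1):
    sN+1 = s_e * sqrt(1 + 1/n + (x - x_mean)^2 / SSx)."""
    s_new = s_e * math.sqrt(1 + 1.0 / n + (x - x_mean) ** 2 / ss_x)
    return (obtained - predicted) / s_new
```

Note that sN+1 grows as the patient's predictor value x moves away from the control mean, so the same raw discrepancy is less abnormal at extreme predictor values.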

14.
We propose a new procedure for constructing a confidence interval about the kappa statistic in the case of two raters and a dichotomous outcome. The procedure is based on a chi-square goodness-of-fit test as applied to a model frequently used for clustered binary data. The procedure provides coverage levels that are accurate in samples of smaller size than those required for other procedures. The procedure also has use for significance-testing and the planning of corresponding sample size requirements.
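For context, the point estimate the proposed interval covers is Cohen's kappa, which for two raters and a dichotomous outcome is chance-corrected agreement computed from the 2 x 2 table:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table
    [[both_pos, r1_pos_r2_neg], [r1_neg_r2_pos, both_neg]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_obs = (a + d) / n                      # observed agreement
    p_exp = ((a + b) * (a + c)               # agreement expected
             + (c + d) * (b + d)) / n ** 2   # by chance from margins
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is 1 under perfect agreement and 0 when agreement is no better than chance; the article's contribution is an interval for this quantity with accurate small-sample coverage.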

15.
Missing effect-size estimates pose a particularly difficult problem in meta-analysis. Rather than discarding studies with missing effect-size estimates or setting missing effect-size estimates equal to 0, the meta-analyst can supplement effect-size procedures with vote-counting procedures if the studies report the direction of results or the statistical significance of results. By combining effect-size and vote-counting procedures, the meta-analyst can obtain a less biased estimate of the population effect size and a narrower confidence interval for the population effect size. This article describes 3 vote-counting procedures for estimating the population correlation coefficient in studies with missing sample correlations. Easy-to-use tables, based on equal sample sizes, are presented for the 3 procedures. More complicated vote-counting procedures also are given for unequal sample sizes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Suppose the number of 2 x 2 tables is large relative to the average table size, and the observations within a given table are dependent, as occurs in longitudinal or family-based case-control studies. We consider fitting regression models to the odds ratios using table-level covariates. The focus is on methods to obtain valid inferences for the regression parameters beta when the dependence structure is unknown. In this setting, Liang (1985, Biometrika 72, 678-682) has shown that inference based on the noncentral hypergeometric likelihood is sensitive to misspecification of the dependence structure. In contrast, estimating functions based on the Mantel-Haenszel method yield consistent estimators of beta. We show here that, under the estimating function approach, Wald's confidence interval for beta performs well in multiplicative regression models but unfortunately has poor coverage probabilities when an additive regression model is adopted. As an alternative to Wald inference, we present a Mantel-Haenszel quasi-likelihood function based on integrating the Mantel-Haenszel estimating function. A simulation study demonstrates that, in medium-sized samples, the Mantel-Haenszel quasi-likelihood approach yields better inferences than other methods under an additive regression model and inferences comparable to Wald's method under a multiplicative model. We illustrate the use of this quasi-likelihood method in a study of the familial risk of schizophrenia.

17.
Regression Model for Daily Maximum Stream Temperature
An empirical model is developed to predict daily maximum stream temperatures for the summer period. The model is created using a stepwise linear regression procedure to select significant predictors. The predictive model includes a prediction confidence interval to quantify the uncertainty. The methodology is applied to the Truckee River in California and Nevada. The stepwise procedure selects daily maximum air temperature and average daily flow as the variables to predict maximum daily stream temperature at Reno, Nev. The model is shown to work in a predictive mode by validation using three years of historical data. Using the uncertainty quantification, the amount of required additional flow to meet a target stream temperature with a desired level of confidence is determined.

18.
This article presents confidence interval methods for improving on the standard F tests in the balanced, completely between-subjects, fixed-effects analysis of variance. Exact confidence intervals for omnibus effect size measures, such as ω² and the root-mean-square standardized effect, provide all the information in the traditional hypothesis test and more. They allow one to test simultaneously whether overall effects are (a) zero (the traditional test), (b) trivial (do not exceed some small value), or (c) nontrivial (definitely exceed some minimal level). For situations in which single-degree-of-freedom contrasts are of primary interest, exact confidence interval methods for contrast effect size measures such as the contrast correlation are also provided. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
BACKGROUND: Hyperthyroidism affects many organ systems, but the effects are usually considered reversible. The long-term effects of hyperthyroidism on mortality are not known. METHODS: We conducted a population-based study of mortality in a cohort of 7209 subjects with hyperthyroidism who were treated with radioactive iodine in Birmingham, United Kingdom, between 1950 and 1989. The vital status of the subjects was determined on March 1, 1996, and causes of death were ascertained for those who had died. The data on the causes of death were compared with data on age-specific mortality in England and Wales. The standardized mortality ratio was used as a measure of relative risk, and the effect of covariates on mortality was assessed by regression analysis. RESULTS: During 105,028 person-years of follow-up, 3611 subjects died; the expected number of deaths was 3186 (standardized mortality ratio, 1.1; 95 percent confidence interval, 1.1 to 1.2; P<0.001). The risk was increased for deaths due to thyroid disease (106 excess deaths; standardized mortality ratio, 24.8; 95 percent confidence interval, 20.4 to 29.9), cardiovascular disease (240 excess deaths; standardized mortality ratio, 1.2; 95 percent confidence interval, 1.2 to 1.3), and cerebrovascular disease (159 excess deaths; standardized mortality ratio, 1.4; 95 percent confidence interval, 1.2 to 1.5), as well as fracture of the femur (26 excess deaths; standardized mortality ratio, 2.9; 95 percent confidence interval, 2.0 to 3.9). The excess mortality was most evident in the first year after radioiodine therapy and declined thereafter. CONCLUSIONS: Among patients with hyperthyroidism treated with radioiodine, mortality from all causes and mortality due to cardiovascular and cerebrovascular disease and fracture are increased.
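The standardized mortality ratios quoted above are observed over expected deaths; a common approximate interval treats the observed count as Poisson and works on the log scale. A sketch reproducing the all-cause figure from the abstract (O = 3611, E = 3186), under that Poisson assumption:

```python
import math
from statistics import NormalDist

def smr_ci(observed, expected, conf=0.95):
    """Standardized mortality ratio with an approximate log-scale
    (Poisson) confidence interval: SMR * exp(+/- z / sqrt(O))."""
    smr = observed / expected
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z / math.sqrt(observed)
    return smr, (smr * math.exp(-half), smr * math.exp(half))
```

With the abstract's counts this gives an SMR of about 1.13 with a CI of roughly 1.10 to 1.17, consistent with the reported 1.1 (95 percent CI, 1.1 to 1.2).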

20.
A 2-step approach for obtaining internal consistency reliability estimates with item-level missing data is outlined. In the 1st step, a covariance matrix and mean vector are obtained using the expectation maximization (EM) algorithm. In the 2nd step, reliability analyses are carried out in the usual fashion using the EM covariance matrix as input. A Monte Carlo simulation examined the impact of 6 variables (scale length, response categories, item correlations, sample size, missing data, and missing data technique) on 3 different outcomes: estimation bias, mean errors, and confidence interval coverage. The 2-step approach using EM consistently yielded the most accurate reliability estimates and produced coverage rates close to the advertised 95% rate. An easy method of implementing the procedure is outlined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
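The 2nd step of this approach, computing coefficient alpha from a covariance matrix (here, the EM-estimated matrix), uses the standard formula α = k/(k−1) · (1 − trace(S)/sum(S)):

```python
def alpha_from_cov(cov):
    """Cronbach's alpha from a k x k item covariance matrix,
    e.g., the covariance matrix returned by the EM algorithm."""
    k = len(cov)
    trace = sum(cov[i][i] for i in range(k))      # sum of item variances
    total = sum(sum(row) for row in cov)          # variance of the sum score
    return k / (k - 1) * (1 - trace / total)
```

Because alpha depends on the data only through this matrix, swapping in the EM-estimated covariance matrix handles item-level missingness without discarding any cases.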
