Similar Documents
20 similar documents found
1.
The test for regression slope homogeneity across groups (e.g., sex, race, and treatments) is used in such varied settings as the analysis of covariance, the study of aptitude by treatment interactions, and bias detection in differential prediction research. The accuracy of this test requires the seldom-considered assumption of equality of within-group error variances. This research studies the effect of violating that assumption on the power of the F test for regression slope equality and finds that the test may be substantially affected when sample sizes are equal and severely affected when sample sizes are unequal. Alternative procedures based on R. A. Alexander's (see record 1994-39680-001) normalized-t approximation, G. S. James's (1951) second-order approximation, the Welch-Aspin approximation, and the chi-square test are described and evaluated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
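For illustration, the following Python sketch (hypothetical two-group data, not the authors' simulation design or their robust alternatives) runs the standard F test of slope homogeneity by comparing a model with a group-by-predictor interaction against a common-slope model; the unequal within-group error variances built into the data are the violated assumption studied above.

```python
# Standard F test of slope equality via a group-by-predictor interaction.
# Hypothetical data; the normalized-t, James, Welch-Aspin, and chi-square
# alternatives mentioned in the abstract are not implemented here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n1, n2 = 40, 20
df = pd.DataFrame({
    "x": rng.normal(size=n1 + n2),
    "g": ["a"] * n1 + ["b"] * n2,
})
# Unequal within-group error variances (the seldom-checked assumption).
df["y"] = 0.5 * df["x"] + np.where(df["g"] == "a", 1.0, 3.0) * rng.normal(size=n1 + n2)

full = smf.ols("y ~ x * C(g)", data=df).fit()      # slopes free to differ by group
reduced = smf.ols("y ~ x + C(g)", data=df).fit()   # single common slope
f_stat, p_value, _ = full.compare_f_test(reduced)  # F test of slope homogeneity
print(f_stat, p_value)
```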

2.
If Y is a continuous, ordinal measure of latent variable θ and Y has a normal distribution with equal variances in several groups, then t tests and one-way analyses of variance on Y can be used to test hypotheses about population mean differences on θ in the corresponding groups. If X and Y are continuous, ordinal measures of latent variables θ and φ, and if X and Y have a bivariate normal distribution, then a test of the null hypothesis that the population correlation between X and Y is zero is also a test of the hypothesis that θ and φ are independent. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
In a comparison of 2 treatments, if outcome scores are denoted by X in 1 condition and by Y in the other, stochastic equality is defined as P(X > Y) = P(X < Y). Tests of stochastic equality can be affected by characteristics of the distributions being compared, such as heterogeneity of variance. Thus, various robust tests of stochastic equality have been proposed and are evaluated here using a Monte Carlo study with sample sizes ranging from 10 to 30. Three robust tests are identified that perform well in Type I error rates and power except when extremely skewed data co-occur with very small n. When tests of stochastic equality might be preferred to tests of means is also considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
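As a rough sketch of the quantity being tested, the snippet below (hypothetical data, not the study's simulation conditions) estimates P(X > Y) + 0.5·P(X = Y) from all pairwise comparisons and applies the Brunner-Munzel test, one robust test of stochastic equality; the specific robust tests recommended in the study may differ.

```python
# Estimate the probability-of-superiority measure and run the Brunner-Munzel
# test of stochastic equality on hypothetical two-condition data with unequal
# variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=15)   # outcome scores, condition 1 (hypothetical)
y = rng.normal(0.3, 2.0, size=25)   # condition 2, larger variance (hypothetical)

p_hat = np.mean(x[:, None] > y[None, :]) + 0.5 * np.mean(x[:, None] == y[None, :])
stat, p_value = stats.brunnermunzel(x, y)
print(p_hat, stat, p_value)
```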

4.
The behavior of L. V. Hedges's (see record 1983-00213-001) Q test for the fixed-effects meta-analytic model was investigated for small and unequal study sample sizes paired with larger numbers of studies, nonnormal score distributions, and unequal variances. The results of a Monte Carlo study indicate that the hypothesis of equal effect sizes tends to be rejected less than expected if smaller study sample sizes are paired with larger numbers of studies; pairing smaller variances with larger sample sizes (or vice versa) leads to this hypothesis being rejected more than expected. The power of the Q test is also less than expected when small study sample sizes are paired with larger numbers of studies. These findings suggest conditions for which the Q test should be used cautiously. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
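For reference, a minimal sketch of the fixed-effects Q homogeneity statistic with hypothetical effect sizes and variances; the Monte Carlo conditions described above are not reproduced.

```python
# Fixed-effects homogeneity test: Q = sum_i w_i (d_i - d_bar)^2 with w_i = 1/v_i,
# referred to a chi-square distribution with k - 1 degrees of freedom.
# Effect sizes and sampling variances below are hypothetical.
import numpy as np
from scipy import stats

d = np.array([0.20, 0.45, 0.10, 0.60])   # study effect sizes
v = np.array([0.05, 0.08, 0.04, 0.12])   # their sampling variances

w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)        # inverse-variance weighted mean effect
Q = np.sum(w * (d - d_bar) ** 2)
p = stats.chi2.sf(Q, df=len(d) - 1)
print(Q, p)
```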

5.
Let Y be a continuous, ordinal measure of a latent variable Θ. In general, for factorial designs, an analysis of variance of the observed variable Y cannot be used to draw inferences about main effects and interactions on the latent variable Θ even when the standard normality and equality of variance assumptions hold. If Y is a continuous, ordinal measure of a latent variable Θ; X1, …, Xn are continuous, ordinal measures of latent variables Φ1, …, Φn; and the observed measures have a multivariate normal distribution, then a multiple regression analysis of the observed criterion measure Y and predictors X1, …, Xn can be used to test hypotheses about multivariate associations among the latent variables. Furthermore, the predicted values Y′ are unbiased estimates of quantities that are monotonically related to predicted values on the latent criterion variable Θ. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Suggests that under certain conditions, comparisons of majority and minority group regression lines for purposes of assessing test bias can be viewed as comparisons of conditional bivariate distributions. Under conditions of trivariate normality, findings should reveal parallel regression lines except for a special case. One implication is that even when the test is a parallel form of the criterion, lines with equal slopes but unequal intercepts should be found. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
A Monte Carlo simulation assessed the relative power of 2 techniques that are commonly used to test for moderating effects. 500 samples were drawn from simulation-based populations for each of 81 conditions in a design that varied sample size, the reliabilities of 2 predictor variables (1 of which was the moderator variable), and the magnitude of the moderating effect. The null hypothesis of no interaction effect was tested by using moderated multiple regression (MMR). Each sample was then successively polychotomized into 2, 3, 4, 6, and 8 subgroups, and the equality of the subgroup-based correlation coefficients (SCC) was tested. Results show MMR to be more powerful than the SCC strategy for virtually all of the 81 conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
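The moderated multiple regression (MMR) test referred to above amounts to testing a product term; a minimal sketch with hypothetical variables follows (the simulation's reliability and effect-size manipulations, and the SCC comparison, are not reproduced).

```python
# MMR: test the x-by-z interaction coefficient in y ~ x + z + x*z.
# Hypothetical data and effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({"x": rng.normal(size=n), "z": rng.normal(size=n)})
df["y"] = 0.4 * df["x"] + 0.3 * df["z"] + 0.25 * df["x"] * df["z"] + rng.normal(size=n)

fit = smf.ols("y ~ x * z", data=df).fit()
print(fit.params["x:z"], fit.pvalues["x:z"])   # the moderating (interaction) effect
```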

8.
In neuropsychological single-case studies, a patient is compared with a small control sample. Methods of testing for a deficit on Task X, or a significant difference between Tasks X and Y, either treat the control sample statistics as parameters (using z and zD) or use modified t tests. Monte Carlo simulations demonstrated that if z is used to test for a deficit, the Type I error rate is high for small control samples, whereas control of the error rate is essentially perfect for a modified t test. Simulations on tests for differences revealed that error rates were very high for zD. A new method of testing for a difference (the revised standardized difference test) achieved good control of the error rate, even with very small sample sizes. A computer program that implements this new test (and applies criteria to test for classical and strong dissociations) is made available. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
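A minimal sketch of a modified t test for a deficit (Crawford-Howell style), using hypothetical scores; the revised standardized difference test for X-Y discrepancies introduced in the article is not implemented here.

```python
# Modified t test comparing one patient with a small control sample:
# t = (patient - control mean) / (control SD * sqrt((n + 1) / n)), df = n - 1.
# Scores are hypothetical.
import numpy as np
from scipy import stats

controls = np.array([98.0, 102.0, 105.0, 95.0, 100.0, 101.0])
patient = 78.0

n = len(controls)
t = (patient - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
p_two_tailed = 2 * stats.t.sf(abs(t), df=n - 1)
print(t, p_two_tailed)
```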

9.
In some measurement studies, the researcher seeks to compare the internal consistency reliability coefficients (α1 and α2) of 2 rating scales or 2 observational techniques. In planning such studies, the investigator must determine the sample size (e.g., the number of participants or raters) that should be used if the power of the test of the null hypothesis is to be adequate. In this article, tables are derived that enable the researcher to determine what sample sizes are required to attain a specified power against a given alternative to the hypothesis of equality of 2 values of Cronbach's alpha coefficient. The tables cover situations in which either independent or dependent samples of participants or raters are used to estimate the reliability coefficients. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The nutritional quality of rice protein was compared with that of whole egg protein by slope-ratio assay. Diets containing each food at four protein levels (4, 6, 10, and 15%), plus a protein-free diet, were given to male weanling rats of the Sprague-Dawley strain for 21 days. The slopes of the regression lines of body-weight change (Y, in g/21 days) on nitrogen intake (X, in g/21 days) for the whole-egg and rice groups, including the zero-protein group (values excluding it in parentheses), were, respectively, Y = 27.39X - 12.26 (Y = 24.41X - 1.86) and Y = 13.86X - 8.06 (Y = 12.54X + 0.50). Assuming a potency of 100 for egg protein, the relative potency of rice estimated from body-weight gain against nitrogen intake was 51 (51). The corresponding values for rice calculated from body-water gain and nitrogen retention against nitrogen intake were, respectively, 51 (47) and 46 (44). These values were compared with the relative nutritive value (RNV) of several varieties of conventional rice and high-protein rice.
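The reported relative potencies follow directly from the ratio of the fitted slopes (test protein relative to the egg standard); a quick check of the arithmetic:

```python
# Slope-ratio assay: relative potency = 100 * slope(rice) / slope(egg).
# Values in parentheses in the abstract exclude the zero-protein group.
slope_egg, slope_egg_excl = 27.39, 24.41
slope_rice, slope_rice_excl = 13.86, 12.54

print(round(100 * slope_rice / slope_egg))            # ~51 (including zero-protein group)
print(round(100 * slope_rice_excl / slope_egg_excl))  # ~51 (excluding zero-protein group)
```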

11.
Modification of DNA with aging has been proposed as a mechanism of cellular senescence. To test this hypothesis, we measured fluorescence of the DNA-ethidium bromide (EB) complex in human peripheral lymphocytes. Lymphocytes were incubated in a medium containing phytohemagglutinin (PHA-P), and EB was bound to lymphoblast DNA. Healthy adults in three age groups were examined: 40-49 (N = 14), 50-59 (N = 16), and 60-69 (N = 8) years. In addition, we studied lymphocytes from 17 patients (47-74 years old) with probable Alzheimer's disease. An age-related linear decrease in fluorescence intensity was found in healthy controls (r = 0.135, p < 0.05) and in Alzheimer's disease patients (r = 0.443, p < 0.10). The regression equations are Y = -0.0405X + 7.164 (healthy controls) and Y = -0.121X + 11.258 (Alzheimer's disease patients), where X is age and Y is fluorescence. These results indicate that analysis of DNA-EB fluorescence in lymphocytes may be useful in the study of changes associated with aging and in supporting the clinical diagnosis of Alzheimer's disease.

12.
One of the assumptions underlying the F test of parallelism of 2 or more regression lines is that the within-group residual variances are homogeneous. In the present study, a 2-group Monte Carlo investigation examined the effect of violating this assumption for F, a large-sample chi-square approximation (U?), and an approximate F test (F*). In terms of Type I error probabilities, the standard F test performed acceptably well as long as sample sizes were equal. This was not true when sample sizes were unequal, with F* proving to be clearly superior. The pattern of results parallels exactly what is known about the robustness of the F test when testing for mean differences in the presence of unequal variances. (9 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Investigated the performance of 5 methods for determining the number of components to retain—J. L. Horn's (see record 1965-13273-001) parallel analysis, W. F. Velicer's (see record 1977-00166-001) minimum average partial (MAP), R. B. Cattell's (see PA, Vol 41:969) scree test, M. S. Bartlett's (1950) chi-square test, and H. F. Kaiser's (see record 1960-06772-001) eigenvalue greater than 1 rule—across 7 systematically varied conditions (sample size, number of variables, number of components, component saturation, equal or unequal numbers of variables for each component, and the presence or absence of unique and complex variables). Five sample correlation matrices were generated at each of 2 sample sizes from the 48 known population correlation matrices representing 6 levels of component pattern complexity. Results indicate that the performance of the parallel analysis and MAP methods was generally the best across all situations; the scree test was generally accurate but variable; and Bartlett's chi-square test was less accurate and more variable than the scree test. Kaiser's method tended to severely overestimate the number of components. (65 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
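A minimal sketch of the best-performing method above, Horn's parallel analysis, on hypothetical data: retain components whose observed eigenvalues exceed the mean eigenvalues obtained from random data of the same dimensions. The study's 48 population correlation matrices are not reproduced.

```python
# Horn's parallel analysis on a hypothetical data set with two real components.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 10
latent = rng.normal(size=(n, 2))
data = latent @ rng.normal(size=(2, p)) + rng.normal(size=(n, p))

obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
rand_eigs = np.mean(
    [np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)), rowvar=False))[::-1]
     for _ in range(100)],
    axis=0,
)

keep = 0
for obs, rand in zip(obs_eigs, rand_eigs):
    if obs > rand:      # observed eigenvalue beats the random-data benchmark
        keep += 1
    else:
        break
print(keep)             # suggested number of components to retain
```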

14.
This article presents methods for sample size and power calculations for studies involving linear regression. These approaches are applicable to clinical trials designed to detect a regression slope of a given magnitude or to studies that test whether the slopes or intercepts of two independent regression lines differ by a given amount. The investigator may either specify the values of the independent (x) variable(s) of the regression line(s) or determine them observationally when the study is performed. In the latter case, the investigator must estimate the standard deviation(s) of the independent variable(s). This study gives examples using this method for both experimental and observational study designs. Cohen's method of power calculations for multiple linear regression models is also discussed and contrasted with the methods of this study. We have posted a computer program to perform these and other sample size calculations on the Internet (see http://www.mc.vanderbilt.edu/prevmed/psintro+ ++.htm). This program can determine the sample size needed to detect a specified alternative hypothesis with the required power, the power with which a specific alternative hypothesis can be detected with a given sample size, or the specific alternative hypotheses that can be detected with a given power and sample size. Context-specific help messages available on request make the use of this software largely self-explanatory.
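As a rough counterpart, the sketch below uses a common normal-approximation formula for the sample size needed to detect a regression slope beta1, given the predictor's standard deviation and the residual standard deviation; this is an assumption for illustration and not necessarily the exact computation implemented in the program cited above.

```python
# Approximate n to detect slope beta1 with two-sided alpha and given power,
# where sigma_x is the SD of the independent variable and sigma_e the residual SD.
# The formula is a normal approximation assumed for illustration only.
import math
from scipy import stats

def n_for_slope(beta1, sigma_x, sigma_e, alpha=0.05, power=0.80):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(((z_a + z_b) * sigma_e / (beta1 * sigma_x)) ** 2)

print(n_for_slope(beta1=0.5, sigma_x=1.0, sigma_e=2.0))   # hypothetical inputs
```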

15.
I. Olkin and J. D. Finn (1995) presented 2 methods for comparing squared multiple correlation coefficients for 2 independent samples. In 1 method, the researcher constructs a confidence interval for the difference between 2 population squared coefficients; in the 2nd method, a Fisher-type transformation of the sample squared correlation coefficient is used to obtain a test statistic. Both methods are based on asymptotic theory and use approximations to the sampling variance. The approximations are incorrect when the population multiple correlation coefficient is zero. The 2 procedures were examined for equal and unequal population multiple correlation coefficients in combination with equal and unequal sample sizes. As expected, the procedures were inaccurate when the population multiple correlation coefficients were zero or very small and, in some conditions, were inaccurate when sample sizes and coefficients were unequal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
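For illustration only, a sketch of the confidence-interval approach using the commonly cited large-sample approximation var(R²) ≈ 4R²(1 − R²)²/n; this approximation is an assumption here, may not match Olkin and Finn's exact expressions, and, as the abstract notes, is unreliable when the population coefficient is zero or very small.

```python
# Large-sample CI for the difference of two independent squared multiple
# correlations; the variance approximation is assumed for illustration.
import math
from scipy import stats

def r2_diff_ci(r2_1, n1, r2_2, n2, conf=0.95):
    var = 4 * r2_1 * (1 - r2_1) ** 2 / n1 + 4 * r2_2 * (1 - r2_2) ** 2 / n2
    z = stats.norm.ppf(0.5 + conf / 2)
    d = r2_1 - r2_2
    return d - z * math.sqrt(var), d + z * math.sqrt(var)

print(r2_diff_ci(0.40, 120, 0.25, 150))   # hypothetical sample values
```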

16.
Used 3 coalition games to test the minimum resource, minimum power, and bargaining theories against each other and against equal excess theory; 144 male undergraduates were Ss. In Game X, all winning coalitions had the same payoffs but players had different resources; in Game Y, winning coalitions had different payoffs and players had different resources. The characteristic functions of the games (payoffs for the coalitions) were the same for Games Y and Z, and the resource distributions were the same for Games X and Z. Coalition behavior was virtually the same in Games Y and Z, but coalition behavior in these games differed from that in Game X. Thus, when there were differences in both coalition payoffs and individual resources, the payoffs rather than the resources tended to influence coalition behavior. Coalition behavior in Games Y and Z was best accounted for by equal excess theory, coalition behavior in Game X by bargaining theory. (24 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Monte Carlo computer simulations were used to investigate the performance of three χ² test statistics in confirmatory factor analysis (CFA). Normal theory maximum likelihood χ² (ML), Browne's asymptotic distribution free χ² (ADF), and the Satorra-Bentler rescaled χ² (SB) were examined under varying conditions of sample size, model specification, and multivariate distribution. For properly specified models, ML and SB showed no evidence of bias under normal distributions across all sample sizes, whereas ADF was biased at all but the largest sample sizes. ML was increasingly overestimated with increasing nonnormality, but both SB (at all sample sizes) and ADF (only at large sample sizes) showed no evidence of bias. For misspecified models, ML was again inflated with increasing nonnormality, but both SB and ADF were underestimated with increasing nonnormality. It appears that the power of the SB and ADF test statistics to detect a model misspecification is attenuated given nonnormally distributed data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The role of changes in preload in maintaining stable hemodynamics during coronary obstruction was assessed in the presence of myocardial ischemia due to occlusions of the left anterior descending (LAD) and left circumflex (LCX) coronary arteries. Changes in preload (mean left atrial pressure) to maintain a constant stroke volume after coronary occlusion were examined in 18 anesthetized dogs (LAD occlusion in 9 dogs, LCX occlusion in 9 dogs). The level of ischemia was assessed sonomicrometrically. Ventricular function curves relating left atrial pressure to stroke volume were assessed during a control state and after 1 min of coronary occlusion. The extent of preload reserve after coronary occlusion was examined on the ventricular function curves and was defined as the change in mean left atrial pressure required to maintain stroke volume at the level of the control state under conditions of regional ischemia. Ischemic size was determined by a stereo-angiogram after the animals were sacrificed. The extent of preload reserve (Y) was linearly related to the ischemic size (X) in both LAD (Y = 0.90 + 0.16X, r = 0.76, p < 0.001) and LCX (Y = -1.79 + 0.19X, r = 0.79, p < 0.001) occlusions. The slopes of the regression lines in LAD and LCX occlusions were the same. The X intercepts of these lines were -5.6% and 9.4% of the left ventricular weight in LAD and LCX ischemia (p < 0.001), respectively. Thus, the presence of systolic wall motion abnormalities due to coronary occlusion can be compensated for hemodynamically by changes in the preload reserve. (ABSTRACT TRUNCATED AT 250 WORDS)

19.
Validated 10 pencil-and-paper tests against telephone operator proficiency measured in specially developed job simulations. Job analysis information plus patterns of validity coefficients for a nationwide sample (N = 1,091) working in 3 different telephone operator jobs indicated that a number of behavioral dimensions were common to all 3 jobs. Data, therefore, were combined across jobs and analyzed separately for Black, Spanish-surnamed, and White operators. A composite of the 4 maximally predictive tests was significantly predictive of a composite criterion for all ethnic groups, but less so for the Spanish-surnamed. Ethnic regression-line slopes and intercepts differed significantly. The common regression equation generally did not underpredict minority operator proficiency, and a composite test cutoff considered fair for minority and nonminority applicants is recommended. (21 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
This paper presents a sample size formula for testing the equality of κ (≥ 2) survival distributions using the Tarone-Ware class of test statistics in the presence of non-proportional hazards, time-dependent losses, non-compliance, and drop-in. This method extends the derivation by Lakatos of a sample size formula for comparing two survival distributions. A sample size formula is also presented for the stratified logrank test. We describe how one can use these generalized formulae to calculate sample sizes and assess power in complex multi-arm clinical trials.
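For contrast with the generalized method described above, the much simpler Schoenfeld events formula for a two-arm logrank test under proportional hazards (ignoring losses, non-compliance, and drop-in) can be sketched as follows; it is not the Lakatos-style calculation the paper extends, and the hazard ratio and design values are hypothetical.

```python
# Schoenfeld approximation: required number of events for a two-arm logrank test.
# This ignores the complications (non-proportional hazards, time-dependent losses,
# non-compliance, drop-in) handled by the paper's method.
import math
from scipy import stats

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.90, alloc=0.5):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil((z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2))

print(schoenfeld_events(hazard_ratio=0.70))   # events needed for a hypothetical HR of 0.70
```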
