Similar Articles
20 similar articles found (search time: 15 ms)
1.
Comments on the article by N. Cliff and J. Caruso (see record 1998-10231-002) which proposed reliable component analysis as an alternative to principal-components analysis for situations in which the reliabilities of the variables are known. The present author clarifies that it is the sum of the reliabilities of the components that remains invariant under rotation in reliable component analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Two methods of differential weighting including reliable component analysis (RCA) were compared with equal weighting in calculating factor scores for the Wechsler Adult Intelligence Scale-Third Edition (WAIS—III; D. Wechsler, 1997). Differentially weighted scores were highly replicable across samples. Equal weighting provided scores that were most reliable, followed by RCA weighting. Equally weighted scores were highly intercorrelated, whereas differentially weighted scores were uncorrelated. Equally weighted scores were more confounded with g than differentially weighted scores. RCA score differences were substantially more reliable than those of equal weighting. The 2 most reliable, orthogonal components that can be formed from the WAIS—III subtests are measures of Gf and Gc, not verbal and performance IQs, but the 4-factor model resembled that suggested in the WAIS—III manual. The use of RCA to define the factor scores results in scores with several attractive properties. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
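The RCA weighting used in studies like this one can be sketched as a generalized eigenproblem: replace the unit diagonal of the subtest correlation matrix with the subtest reliabilities and maximize the resulting reliability ratio. This is a minimal illustration of the general idea; the correlation matrix and reliabilities below are hypothetical, not WAIS—III values.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical correlation matrix for 3 subtests and their reliabilities.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
rel = np.array([0.90, 0.85, 0.80])

# "Reliable" covariance: replace the unit diagonal with the reliabilities.
R_star = R.copy()
np.fill_diagonal(R_star, rel)

# Maximizing w'R*w / w'Rw over weight vectors w is a generalized symmetric
# eigenproblem; the eigenvalues are the reliabilities of the resulting
# uncorrelated components.
lams, W = eigh(R_star, R)
order = np.argsort(lams)[::-1]
lams, W = lams[order], W[:, order]

print("component reliabilities:", np.round(lams, 3))
print("sum of reliabilities:", round(lams.sum(), 3))  # invariant under rotation
```

Note that the sum of the component reliabilities (the eigenvalue sum) does not depend on how the components are rotated, which is the point clarified in the comment on Cliff and Caruso above.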

3.
Factor scores may be estimated by assigning each variable (in standard score form) a weight of unity with the sign of the loading, or a weight equal to the factor loadings of the variables. In an empirical comparison based on a factorization of a battery of 104 personality measures, for six factors the correlations between factor scores estimated from unit weights and from factor-loading weights were all .9 or higher. This result could be expected from consideration of the behavior of correlation between weighted composites. "It may be concluded, then, that in most instances there is little gained by the use of fractional weights." (PsycINFO Database Record (c) 2010 APA, all rights reserved)
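The closing conclusion reflects a general property of weighted composites: the correlation between a unit-weighted and a loading-weighted sum of the same positively correlated variables is usually very high. A minimal sketch with purely hypothetical loadings:

```python
import numpy as np

# Hypothetical loadings of 5 measures on one factor, and the correlation
# matrix implied by a one-factor model (illustrative values only).
load = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
R = np.outer(load, load)
np.fill_diagonal(R, 1.0)

unit = np.sign(load)  # unit weights carrying the sign of each loading

def composite_corr(a, b, R):
    """Correlation between two weighted composites of the same variables."""
    return a @ R @ b / np.sqrt((a @ R @ a) * (b @ R @ b))

r = composite_corr(unit, load, R)
print(f"corr(unit-weighted, loading-weighted) = {r:.3f}")
```

Even with loadings ranging from .4 to .8, the two composites correlate well above .9, consistent with the empirical result reported above.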

4.
Wechsler Intelligence Scale for Children—Third Edition index score differences are generally interpreted cautiously, if at all, primarily because of their poor reliability. On the basis of prior analyses with the Wechsler Adult Intelligence Scale—Third Edition (J. C. Caruso & N. Cliff, 1999), it was hypothesized that differences between scores defined by reliable component analysis would have higher reliability than those defined by traditional equal weighting. Differences between the reliable component scores showed substantially higher reliability than equally weighted score differences. The differences between reliable component scores were also substantially more reliable than those derived from the weighted scores suggested by K. C. H. Parker and L. Atkinson (1994). Using the weights provided in this article will allow researchers and practitioners to compute the RCA scores and have the assurance of high reliability with its attractive consequences. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
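The reliability advantage for RCA differences follows from the classical formula for the reliability of a difference between two standardized scores: when the two scores are highly intercorrelated (as equally weighted index scores are), their difference is unreliable, whereas orthogonal scores keep the reliability of the parts. The numbers below are illustrative, not WISC-III values.

```python
# Classical reliability of a difference between two standardized scores with
# reliabilities r11, r22 and intercorrelation r12.
def diff_reliability(r11, r22, r12):
    return (0.5 * (r11 + r22) - r12) / (1 - r12)

# Highly correlated equally weighted scores yield unreliable differences...
print(diff_reliability(0.90, 0.90, 0.80))  # 0.5
# ...while orthogonal (RCA-style) scores keep the reliability of the parts.
print(diff_reliability(0.85, 0.85, 0.00))  # 0.85
```

Two scores each with reliability .90 but correlated .80 produce a difference with reliability of only .50; two orthogonal scores with reliability .85 produce a difference with that same .85 reliability.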

5.
Reliable component analysis (RCA) was conducted on the Stanford-Binet: Fourth Edition subtests for 2- to 6-year-olds using the standardization sample. Scores were derived through RCA to assess the Verbal Comprehension and Non-Verbal Reasoning factors suggested for children in this age range. The scores derived through RCA had greater discriminant validity than did equally weighted scores, whose high intercorrelations preclude effective discrimination or incremental validity. The difference scores derived through RCA were compared with equally weighted difference scores in terms of reliability and three types of standard error. Differences between RCA scores were more reliable than were equally weighted differences. The more reliable differences resulted in more precise confidence intervals and more powerful significance tests. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Compared student, peer (or colleague), and self ratings in terms of item statistics, convergent and discriminant validity, and relation to student learning. Ratings from the 3 sources (involving 263 students and 14 instructors) were similar in range and distribution, although colleagues tended to give the most favorable ratings, students the least favorable. Individual student and colleague reliabilities were also similar; composite student reliabilities were considerably higher than composite colleague reliabilities, only partly because of differing sample sizes. Student and self ratings and rankings were quite good in terms of convergent and discriminant validity, but no student, peer, or self rating was significantly related to residualized student achievement. (32 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The cumulative percentage frequencies are presented for differences among reliable component analysis (RCA) scores for the verbal comprehension, perceptual organization, freedom from distractibility, and processing speed constructs assessed by the Wechsler Intelligence Scale for Children—Third Edition (WISC-III) for the standardization sample and a learning disabled sample. Using RCA scores to form differences has several advantages over traditional equally weighted scores for the WISC-III. J. C. Caruso and N. Cliff (2000) presented tables to assess the statistical significance of differences among the RCA scores for the WISC-III. It is important, however, to use a dual approach in interpreting difference scores; both the statistical significance of a difference and the frequency with which it occurred in a relevant comparison group should be determined. This article contains the information necessary for practitioners to use the recommended dual approach to interpreting RCA difference scores for the WISC-III. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The purpose of this study was to investigate the g loadings and specific effects of the core and diagnostic composite scores from the Differential Abilities Scales, Second Edition (DAS-II; Elliott, 2007a). Scores from a subset of the DAS-II standardization sample for ages 3:6 to 17:11 were submitted to principal factor analysis. Four composites, Nonverbal Reasoning Ability, Verbal Ability, Spatial Ability, and Working Memory, appear to be primarily measures of the general factor across most age levels. In contrast, Processing Speed appears to primarily be a measure of a specific ability or specific abilities across age levels. A secondary analysis revealed that averaging subtest g loadings produced downwardly biased values for composites, that the Spearman (1927) formula for determining the g loading of a composite produced upwardly biased values, and that averaging subtest specific effects produced upwardly biased values for composites. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
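Why averaging subtest g loadings understates a composite's g loading can be seen directly from composite algebra: the g loading of an equally weighted sum is the summed loadings divided by the composite's standard deviation, which pools shared variance. A minimal sketch under a one-factor model with hypothetical loadings (not DAS-II parameters):

```python
import numpy as np

# Illustrative subtest g loadings and the correlation matrix implied by a
# one-factor model (hypothetical values, not DAS-II estimates).
g = np.array([0.7, 0.6, 0.5])
R = np.outer(g, g)
np.fill_diagonal(R, 1.0)

# g loading of the equally weighted composite of the three subtests:
# sum of loadings over the standard deviation of the summed score.
composite_g = g.sum() / np.sqrt(R.sum())

print("mean of subtest g loadings:", round(g.mean(), 3))
print("composite g loading:", round(composite_g, 3))  # exceeds the mean
```

Here the composite's g loading clearly exceeds the simple mean of the subtest loadings, illustrating the downward bias of averaging that the secondary analysis reports.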

9.
In two previous studies on general and violent recidivism (Walters & Heilbrun, 2010; Walters, Knight, Grann, & Dahle, 2008), the summed composite antisocial facet of the Psychopathy Checklist displayed incremental validity relative to the other 3 facets (interpersonal, affective, lifestyle), whereas the other 3 facets generally failed to demonstrate incremental validity relative to the antisocial facet. Because summed composite scores do not account for ordinal item distributions, the 6 Walters et al. (2008) samples were reanalyzed with factor score composites derived from a 4-factor confirmatory factor analysis. The results, however, showed little change from what had been obtained earlier with summed composite scores. Two additional samples not previously included in any incremental validity analyses of the Psychopathy Checklist evidenced a 3-factor structure, with the lifestyle and antisocial facets merged into a single factor. This single factor displayed incremental validity relative to the interpersonal and affective facets, but the reverse was not true regardless of whether summed composite scores or factor score composites were used. A comparison of zero-order correlations from all 8 samples revealed that the antisocial summed composite score predicted significantly better than the summed composite scores for the other 3 facets and that a superordinate factor failed to improve on the performance of either the antisocial summed composite score or the antisocial factor score composite. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

10.
Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference condition and does not use other empirical primitives such as statements of judged probabilities. The preference condition is confirmed by most of the experimental findings in the literature. The implied properties of the belief component suggest that, besides the often-studied ambiguity aversion (a motivational factor reflecting a general aversion to unknown probabilities), perceptual and cognitive limitations play a role: It is harder to distinguish among various levels of likelihood, and to process them differently, when probabilities are unknown than when they are known. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
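To make "nonadditive decision weights" concrete, one common parametric form is the inverse-S probability weighting function of Tversky and Kahneman (1992), which overweights small probabilities and underweights large ones. This is only an illustration of weight distortion under known probabilities; the article's risk-attitude/belief decomposition is more general and is not reproduced here.

```python
# Tversky-Kahneman (1992) probability weighting function; gamma = 0.61 is
# their estimate for gains. Shown only to illustrate nonadditive weighting.
def w(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.5, 0.99):
    print(p, round(w(p), 3))
```

Note that w(.01) exceeds .01 while w(.99) falls short of .99, so the weights of complementary events need not sum to 1: exactly the nonadditivity that the belief/risk-attitude decomposition above analyzes.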

11.
Data pertaining to the value of measures of foreman performance were subjected to factor analysis. 20 criterion variables, 9 ratings, and 11 objective measures were used with 102 foremen in one plant and 104 in another. "Four meaningful dimensions were identified by factor analyzing the measures separately for each plant. Relevance weights for the dimensions were derived from superintendents' relevance rankings of the 20 variables." (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
In the last 2 decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that one can classify indicators into 2 categories: effect (reflective) indicators and causal (formative) indicators. We argue that the dichotomous view is too simple. Instead, there are effect indicators and 3 types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the “Three Cs”). Causal indicators have conceptual unity, and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variables. Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects, and composites are a matter of convenience. The failure to distinguish the Three Cs has led to confusion and questions, such as, Are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

13.
Compares component and common factor analysis using 3 levels of population factor pattern loadings (.40, .60, .80) for each of the 3 levels of variables (9, 18, 36). Common factor analysis was significantly more accurate than components in reproducing the population pattern in each of the conditions examined. The differences decreased as the number of variables and the size of the population pattern loadings increased. The common factor analysis loadings were unbiased, had a smaller standard error than component loadings, and presented no boundary problems. Component loadings were significantly and systematically inflated even with 36 variables and loadings of .80. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
15.
35 male and 36 female professional employees (average age 36.2 yrs) in a community mental health center completed the Adjective Check List twice, separated by a 1-yr interval. After each administration, separate factor analyses were computed. All scales had highly significant test–retest reliabilities. Five factors emerged in each analysis, 2 of which accounted for about 55% of the common variance. Repetition of factor analysis at 2 different times resulted in a more stable factor structure than did the usual method of single-time analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Although within-person comparisons allow direct assessments of change, some of the observed change may reflect effects associated with prior test experience rather than the processes of primary interest. One method that might allow retest effects to be distinguished from other influences of change involves comparing the pattern of results in a longitudinal study with those in a study with a very short retest interval. Three short-term retest studies with moderately large samples of adults are used to provide this type of reference information about the magnitude of change, test-retest correlations, reliabilities of change, and correlations of the change in different cognitive variables with each other, and with other types of variables. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Standard procedures for estimating factor scores for the Wechsler Adult Intelligence Scale—Revised (WAIS—R; D. Wechsler, 1981) involve equally weighted sums of the subtests that load most highly on the factor being estimated. We argue that factor scores derived in this manner lack discriminant validity; they are strongly biased toward g (the first unrotated factor) and away from the other 2 unrotated factors. If regression-like weights are applied to all of the WAIS—R subtests and the products are summed, the resulting differentially weighted factors give results that show similar convergent validity and much greater discriminant validity with respect to the original factors. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
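The "regression-like weights applied to all subtests" can be sketched with Thurstone's regression method for factor score estimation: the weight matrix is the inverse of the observed correlation matrix times the loading matrix, so every subtest contributes (positively or negatively) to every factor score. The loadings below are hypothetical, not WAIS—R estimates.

```python
import numpy as np

# Hypothetical loadings of 4 subtests on 2 orthogonal factors, and the
# correlation matrix implied by that two-factor model.
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.2, 0.7],
                [0.1, 0.8]])
R = Lam @ Lam.T
np.fill_diagonal(R, 1.0)

# Thurstone regression weights: B = R^{-1} Lambda. Factor score estimates
# are then Z @ B for standardized subtest scores Z.
B = np.linalg.solve(R, Lam)
print(np.round(B, 3))
```

Unlike equally weighted sums over a few high-loading subtests, these weights borrow information from all subtests, which is what yields the improved discriminant validity reported above.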

18.
Defines need for uniqueness as a positive striving for abnormality relative to other people. Recent research regarding situational determinants of uniqueness motivation is described, and a dispositional individual-differences measure of need for uniqueness is presented. The Uniqueness Scale was developed with construct validity as a guide for item selection. The internal reliabilities, item-remainder coefficients, test–retest reliabilities, cross-validation information, factor analysis, and discriminant validation data are presented, and all meet the normal psychometric criteria expected of an individual-differences measure. Additionally, 8 separate validational studies, conducted with a total of 1,523 US and Israeli college students, are presented. (28 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
"To develop a disguised but objective personality inventory, a factor analysis was performed on scores based on 400 examinees' tendencies to accept or reject 13 lists of proverbs constructed to cover 13 areas. The three test factors which emerged… were: Conventional Mores, Hostility, and Fear of Failure. Using 200 new examinees, scales were constructed by item analysis to measure each. In subsequent samples, the three scales were found to have corrected split-half reliabilities ranging from .45 to .83 and intercorrelations ranging from - .12 to .54. The reliabilities and intercorrelations among the scales were higher when the groups were more heterogeneous in background. The reliabilities and intercorrelations among the scales suggest that three separate behavioral tendencies are being assessed." (PsycINFO Database Record (c) 2010 APA, all rights reserved)
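The "corrected split-half reliabilities" reported above use the Spearman-Brown prophecy formula, which projects the correlation between two test halves to the reliability of the full-length test. The half-test correlations below (.29 and .71) are hypothetical values chosen only to show how the correction would produce the reported endpoints.

```python
# Spearman-Brown correction: reliability of the full test from the
# correlation between its two halves.
def spearman_brown(r_half):
    return 2 * r_half / (1 + r_half)

print(round(spearman_brown(0.29), 2))  # 0.45, the low end reported above
print(round(spearman_brown(0.71), 2))  # 0.83, the high end
```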

20.
12 examination scores from 6 courses given as part of the Naval Pre-Flight Training Program were factored into two interpretable criterion factors. The intercorrelations of 10 predictor (standard test) variables were then added to the correlation matrix, and loadings for them on the criterion factors were obtained. Using the factor loadings of the predictors as validity coefficients, regression weights were found for them on each of the two criterion factors, using a modified Doolittle method. "The principal advantage… derived from the initial factor analysis of the criterion variables is that the obtained criterion factors may be isolated free from the influence of the variance of the predictor variables." (PsycINFO Database Record (c) 2010 APA, all rights reserved)
