Similar Literature
A total of 20 similar records were found.
1.
Comments (see records 2001-10055-002, 2001-10055-003, 2001-10055-004, and 2001-10055-005) on the R. D. Roberts, M. Zeidner, and G. Matthews (see record 2001-10055-001) article on the measurement of emotional intelligence (EI) made various pertinent observations that confirm the growing interest in this topic. This rejoinder finds general agreement on some key issues: learning from the history of ability testing, developing more sophisticated structural models of ability, studying emotional abilities across the life span, and establishing predictive and construct validity. However, scoring methods for tests of EI remain problematic. This rejoinder acknowledges recent improvements in convergence between different scoring methods but discusses further difficulties related to (a) neglect of group differences in normative social behaviors, (b) segregation of separate domains of knowledge linked to cognitive and emotional intelligences, (c) potential confounding of competence with learned skills and cultural factors, and (d) lack of specification of adaptive functions of EI. Empirical studies have not yet established that the Multi-Factor Emotional Intelligence Scale and related tests assess a broad EI factor of real-world significance. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The authors have claimed that emotional intelligence (EI) meets traditional standards for an intelligence (J. D. Mayer, D. R. Caruso, & P. Salovey, 1999). R. D. Roberts, M. Zeidner, and G. Matthews (see record 2001-10055-001) questioned whether that claim was warranted. The central issue raised by Roberts et al. concerning Mayer et al. (1999) is whether there are correct answers to questions on tests purporting to measure EI as a set of abilities. To address this issue (and others), the present authors briefly restate their view of intelligence, emotion, and EI. They then present arguments for the reasonableness of measuring EI as an ability, indicate that correct answers exist, and summarize recent data suggesting that such measures are, indeed, reliable. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
After reviewing recent studies involving the selection of items for interest scales, in which scales with higher validity (and fewer items) generally had lower reliability, the author presents the original odd-even reliabilities and recently collected test-retest reliabilities (over an average 18-year interval) for 15 scales of the Strong VIB. The test-retest reliabilities were all lower than the odd-even reliabilities, and the shrinkage was greatest for those scales with the lowest original reliabilities. It is concluded that, for prediction in the distant future, scale reliability is important. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
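For reference, the odd-even coefficients discussed above are split-half reliabilities, conventionally stepped up with the Spearman-Brown formula (a standard psychometric identity, not a formula reproduced from this article); test-retest coefficients taken over an 18-year interval additionally absorb real change in interests, which is one reason they come out lower:

```latex
r_{\text{odd-even}} = \frac{2\, r_{\text{half}}}{1 + r_{\text{half}}}
```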

4.
The study was designed to determine whether the reliability and validity of interpretations based on 3 frequently used psychological tests—Rorschach, TAT, MMPI—increased as a function of number of tests employed. 30 clinical psychologists completed personality questionnaires describing 5 Ss on the basis of identifying data alone, each test individually, pairs of tests, and all 3 combined. Reliability and validity did not increase as a function of number of tests, nor were there any differences between tests or pairs of tests. The validity scores for test data ranged from 66% to 73%, with a mean of 68%. The reliability scores ranged from 56% to 72%, with a mean of 64%. (18 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
If one asks a representative group of Americans over 18 about the use of intelligence tests in student selection for school or college or to aid in job promotion selection "he finds that many of them are against the use of intelligence tests. High school students in the U. S. are even more strongly opposed to the use of intelligence tests." Critical attitudes toward tests involve the following issues: Inaccessibility of test data. Invasion of privacy. Rigidity in use of test scores. Types of talent selected by tests. Fairness of tests to minority groups. Among the personal and social characteristics of the critics are: Some people are distinctly hostile to any self examination. People subscribing strongly to aristocratic or equalitarian viewpoints of society may oppose testing. People who have done poorly on tests may have wounded self-esteem leading to test opposition. The punishing effects tests may have had on an individual's life chances may lead to resentment against tests. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In this study, a task using forced-choice lexical familiarity judgments of irregular versus archaic words (a newly developed measure called the Lexical Orthographic Familiarity Test; LOFT) was compared to a standardized oral word-reading measure (the Wechsler Test of Adult Reading; WTAR) in a group of 35 aphasic adults and a comparison group of 125 community-dwelling, non-brain-damaged adults. Relative to the comparison group, the aphasic adults had significantly lower scores on the WTAR but not the LOFT. Although both the WTAR and LOFT were significantly correlated with education in the non-brain-damaged group, only the LOFT was correlated with education and also with the Barona full scale IQ index in the aphasic group. Lastly, WTAR performance showed a significantly greater relationship to the severity of language disorder in the aphasic group than did the LOFT. These results have both theoretical and clinical implications for the assessment of language-disordered adults, as they indicate that patients with aphasia may retain aspects of verbally mediated intelligence, and that the LOFT may provide a better estimate of premorbid functioning in aphasia than other currently available measures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This study investigated the relative validities of a battery of "creativity tests" and an IQ test for predicting several indices of achievement in high school science. Criteria included grade-point average in science courses, percentile rank on the STEP Science Achievement Test, teacher rating of overall scientific potential, number of high school science courses taken, and a measure of involvement with science. Results indicated that the creativity tests did have considerable predictive validity against each criterion for each sex and that the criterion variance accounted for by the creativity tests is to a substantial degree independent of IQ. Contrary to findings of other investigators, teachers did not discriminate against highly creative pupils in their ratings. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
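A minimal sketch of how a claim like "validity independent of IQ" can be checked, assuming simple column vectors of scores (the variable names and simulated data below are illustrative only, not the study's data): compute the partial correlation of the creativity score with a science criterion after controlling for IQ.

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation of x and y after residualizing both on `control`."""
    def residualize(v):
        design = np.column_stack([np.ones_like(control), control])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return np.corrcoef(residualize(x), residualize(y))[0, 1]

# Illustrative simulated scores (hypothetical names, not the study's data)
rng = np.random.default_rng(0)
n = 300
iq = rng.normal(100, 15, n)
creativity = 0.3 * (iq - 100) / 15 + rng.normal(size=n)
science_gpa = 0.4 * (iq - 100) / 15 + 0.3 * creativity + rng.normal(size=n)

print(np.corrcoef(creativity, science_gpa)[0, 1])   # zero-order validity
print(partial_corr(creativity, science_gpa, iq))    # validity with IQ partialed out
```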

8.
Reviews the book, What intelligence tests miss: The psychology of rational thought by Keith E. Stanovich (see record 2008-06992-000). Speed of processing seems to reign in the world of the mind. Although a person’s speed of processing may in part dictate who amongst us performs well on intelligence tests, this speed may not necessarily guarantee good decisions, personal contentment, and the meeting of goals in real life. Stanovich’s book is a scholarly, yet captivating, survey of research on rational thought and action, and what it means to be a truly industrious thinker. The book is divided into 13 chapters. Chapters 1 and 2 attempt to persuade the reader that measured intelligence is different from rationality—measured intelligence is essentially about raw speed of processing while rationality is about sophisticated problem solving. Chapter 3 elucidates the theoretical models, including the reflective, algorithmic, and autonomous minds, which help account for the distinction between measured intelligence and rationality. Chapters 4 and 5 flesh out the differences between intelligence and rationality. Chapters 6 through 9 expose the strategies the cognitive miser (a metaphor to guide and describe key ideas about human thinking) employs to cut corners in thinking. Chapters 10 and 11 focus on both the positive thinking strategies that should be taught in school and the contaminated forms of thinking that impede us from weighing and sifting through the information we encounter in the world and then evaluating it effectively. Chapters 12 and 13 provide a useful review of the forms of thinking that lead to irrational beliefs and actions, and conclude with the social benefits of better thinking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
States, in this comment on the article by R. D. Roberts, M. Zeidner, and G. Matthews (see record 2001-10055-001), that there is now sufficient work in the literature on emotional intelligence to suggest that this construct or series of constructs deserves serious attention, but several questions remain as to adequate construct validation as well as to the emergence and development of these constructs. There is a need to conduct convergent and divergent validity studies on a midlife sample that is likely to show the optimal level of differentiation of the new constructs. The reference domain of cognitive intelligence should be constructed in a multiple-construct manner, and the validation procedure should use confirmatory factor analysis and P. S. Dwyer's (1937) extension method. Once properly validated, there is a need to study the emergence, age differences, and age changes in the level and structure of emotional intelligence. A paradigm that investigates the invariance of factor structure across age and uses the model of differentiation-dedifferentiation would be useful for this purpose. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Reports an error in the original article by Richard R. Reilly, Sheldon Zedeck, and Mary L. Tenopyr (Journal of Applied Psychology, 1979, Vol. 64, No. 3, pp. 262-274). In the Results section of Experiment 2 several results were reported incorrectly. The corrected results are provided. (The following abstract of this article originally appeared in record 1980-26872-001.) Problems relating to performance, accidents, and turnover in outdoor telephone craft jobs stimulated 2 experiments aimed at developing and validating a physical test battery. Based on job analysis results, a battery of 9 measures was administered to a sample of 128 Ss (83 males and 45 females) in Exp I. A 2-test battery (dynamic arm strength and reaction time), valid for predicting job task performance and turnover, was selected. Regression equations for males and females were not significantly different. Exp II included a sample of 210 Ss (132 males and 78 females). A 3-test battery consisting of a body density measure, a balance test, and a static strength test was selected based on relationships with training performance. No significant differences were found in the regression equations for males compared to females. The Exp II battery was also significantly related to field performance, training completion, and accidents and was valid for the Exp I criteria. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
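A comparison of "regression equations for males and females" is typically run as a test of group-by-predictor interactions (a Chow-type test of coefficient equality). The sketch below uses simulated data and hypothetical variable names, and statsmodels for the nested-model F test; it is an illustration of the general procedure, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; variable names are illustrative only
rng = np.random.default_rng(1)
n = 210
df = pd.DataFrame({
    "arm_strength": rng.normal(50, 10, n),
    "reaction_time": rng.normal(300, 40, n),
    "male": rng.integers(0, 2, n),
})
df["performance"] = 0.4 * df["arm_strength"] - 0.01 * df["reaction_time"] + rng.normal(0, 5, n)

# Restricted model: a single equation for everyone
restricted = smf.ols("performance ~ arm_strength + reaction_time", data=df).fit()
# Full model: intercept and slopes allowed to differ by sex
full = smf.ols("performance ~ (arm_strength + reaction_time) * male", data=df).fit()

f_stat, p_value, df_diff = full.compare_f_test(restricted)
print(f_stat, p_value)  # non-significant p => no evidence the equations differ by sex
```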

11.
"Construct validation was introduced in order to specify types of research required in developing tests for which the conventional views on validation are inappropriate. Personality tests, and some tests of ability, are interpreted in terms of attributes for which there is no adequate criterion. This paper indicates what sorts of evidence can substantiate such an interpretation, and how such evidence is to be interpreted." 60 references. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

12.
This article advances a simple conception of test validity: A test is valid for measuring an attribute if (a) the attribute exists and (b) variations in the attribute causally produce variation in the measurement outcomes. This conception is shown to diverge from current validity theory in several respects. In particular, the emphasis in the proposed conception is on ontology, reference, and causality, whereas current validity theory focuses on epistemology, meaning, and correlation. It is argued that the proposed conception is not only simpler but also theoretically superior to the position taken in the existing literature. Further, it has clear theoretical and practical implications for validation research. Most important, validation research must not be directed at the relation between the measured attribute and other attributes but at the processes that convey the effect of the measured attribute on the test scores. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Reports an error in "Inferences from personnel tests and their validity" by C. H. Lawshe (Journal of Applied Psychology, 1985[Feb], Vol 70[1], 237-238). On page 238, line 4, the word "each" appears and should be "such." The sentence will, therefore, refer "to the use of such cognitive processes as inductive and deductive reasoning and such characteristics of temperament as emotional stability and self-esteem." (The following abstract of the original article appeared in record 1985-16032-001.) Contends that despite clear definitions in standard sources, psychologists persistently refer to the validity of tests instead of the validity of inferences from test scores. This persistence leads to references to "kinds of validity" when, in fact, there are "kinds of validity analysis strategies" whereby data are collected or generated to determine or defend the extent, degree, or strength of the inference or inferences that can be made from a set of test scores. It is concluded that content validity analysis strategies are appropriate only when the job behavior under scrutiny falls at the observation end of the continuum; when such behavior approaches the abstract end of the continuum, a construct validity analysis strategy is indicated. (5 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

14.
15.
"The intent of this paper has been to emphasize the directive role of theory in the construction of psychological tests." The several methodological issues arising from the use of theory in test construction are illustrated through a critical examination of the Taylor Anxiety Scale. "Our conclusion was that the A scale has only a tenuous, theoretical and empirical coordination to the Hullian construct of drive." 31 references. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

16.
The aim of this study was to examine the reliability and validity of a French version of the Revised Children's Manifest Anxiety Scale (RCMAS). A sample of 2,666 school-age French-Canadian children completed the questionnaire. With regard to factor structure, the 5-factor model found in U.S. normative samples was confirmed. The internal consistency of the 5 scales and of the 2 global scales was good to excellent. Test-retest reliabilities after a 6-month period were also similar to those of the original version. Finally, the concurrent validity, assessed by a correlation with the State-Trait Anxiety Inventory for Children, was also found to be good. Results of the present study show that the French version of the RCMAS is a good instrument to assess anxiety in children. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
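For readers who want to reproduce the internal-consistency piece of such an analysis, here is a minimal Cronbach's alpha sketch using the generic coefficient-alpha formula; the item matrix below is a simulated placeholder, not RCMAS data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (n_respondents, n_items) array; returns coefficient alpha."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder binary responses standing in for one anxiety subscale
rng = np.random.default_rng(2)
trait = rng.normal(size=500)
items = (trait[:, None] + rng.normal(size=(500, 10)) > 0).astype(int)
print(cronbach_alpha(items))
```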

17.
As more and more subtests are added to the short form, its validity as determined by McNemar's formula approaches unity, whereas the upper limit to its validity as determined by the corrected formula is the reliability of the Full Scale. This difference seems to correspond to Kaufman's distinction between using the short form as part of the Full Scale and using it as a replacement. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
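As a hedged illustration only (a general psychometric bound, not necessarily the corrected formula the abstract refers to): an uncorrected part-whole correlation between a short form S and the Full Scale F drifts toward 1.0 as S absorbs more of F's subtests, whereas a correlation between two measures cannot exceed the square root of the product of their reliabilities, so a corrected estimate is capped near the Full Scale's reliability:

```latex
r_{SF} \;\le\; \sqrt{r_{SS}\, r_{FF}} \;\longrightarrow\; r_{FF} \quad \text{as } r_{SS} \to r_{FF}
```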

18.
Which is better for assessing personality—structured or projective devices? "Attitude toward Home & Parents and Attitude toward Law & Justice of 79 prison inmates were each measured by a sentence completion test and a structured attitude test. As examined through a multitrait-multimethod matrix, these tests were found to validate each other quite satisfactorily. Insofar as the two measurement approaches differed at all in the efficacy with which they differentiated crime groups among the prisoners, the structured tests were slightly the better." (PsycINFO Database Record (c) 2010 APA, all rights reserved)
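A minimal sketch of the multitrait-multimethod logic referred to above, with simulated scores and illustrative variable names (two traits crossed with two methods): convergent validities are the same-trait, different-method correlations, and discriminant evidence requires the different-trait correlations to be lower.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 79  # sample size echoed from the abstract; the scores themselves are simulated
home_attitude = rng.normal(size=n)
law_attitude = rng.normal(size=n)

scores = pd.DataFrame({
    "home_sentence_completion": home_attitude + rng.normal(scale=0.7, size=n),
    "home_structured":          home_attitude + rng.normal(scale=0.7, size=n),
    "law_sentence_completion":  law_attitude + rng.normal(scale=0.7, size=n),
    "law_structured":           law_attitude + rng.normal(scale=0.7, size=n),
})

mtmm = scores.corr()
print(mtmm.round(2))
# Convergent validities (same trait, different method) -- expected to be high:
print(mtmm.loc["home_sentence_completion", "home_structured"],
      mtmm.loc["law_sentence_completion", "law_structured"])
# Discriminant check (different traits) -- expected to be lower:
print(mtmm.loc["home_sentence_completion", "law_sentence_completion"])
```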

19.
The authors articulate 5 basic principles for enhancing incremental validity, both among elements within a test and between tests, during test construction: (a) careful, precise articulation of each element or facet within the content domain; (b) reliable measurement of each facet through use of multiple, alternate-form items; (c) examination of incremental validity at the facet level rather than the broad construct level; (d) use of items that represent single facets rather than combinations of facets; and (e) empirical examination of whether there is a broad construct or a combination of separate constructs. Using these principles, the authors offer specific suggestions for modifications in 3 classic test construction approaches: (a) criterion keying, (b) inductive test construction, and (c) deductive test construction. Implementation of these suggestions is likely to provide theoretical clarification and improved prediction. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
In this study, the authors examined whether video-based situational judgment tests (SJTs) have higher predictive validity than written SJTs (keeping verbal content constant). The samples consisted of 1,159 students who completed a video-based version of an SJT and 1,750 students who completed the same SJT in a written format. The study was conducted in a high-stakes testing context. The video-based version of an interpersonally oriented SJT had a lower correlation with cognitive ability than did the written version. It also had higher predictive and incremental validity for predicting interpersonally oriented criteria than did the written version. In this high-stakes context, applicants also reacted relatively favorably to the SJTs, although there was no significant difference in face validity between the formats. These findings suggest that SJT format changes should be made with caution and that validation evidence is needed when changes are proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
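The incremental-validity claim corresponds to a nested-model comparison: regress the interpersonal criterion on cognitive ability alone, then add the SJT score and test the gain in R². The sketch below uses simulated data with illustrative names and effect sizes, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1159  # sample size echoed from the abstract; scores are simulated
cognitive = rng.normal(size=n)
video_sjt = 0.25 * cognitive + rng.normal(size=n)           # modest correlation with ability
criterion = 0.20 * cognitive + 0.40 * video_sjt + rng.normal(size=n)

base = sm.OLS(criterion, sm.add_constant(cognitive)).fit()
full = sm.OLS(criterion, sm.add_constant(np.column_stack([cognitive, video_sjt]))).fit()

delta_r2 = full.rsquared - base.rsquared
f_stat, p_value, _ = full.compare_f_test(base)
print(delta_r2, f_stat, p_value)  # significant gain => incremental validity beyond cognitive ability
```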
