Similar articles (20 results)
1.
The purpose of this study was to test competing theories regarding the relationship between true halo (actual dimensional correlations) and halo rater error (effects of raters' general impressions on specific ratee qualities) at both the individual and group level of analysis. Consistent with the prevailing general impression model of halo rater error, results at both the individual and group levels of analysis indicated a null (vs. positive or negative) true halo-halo rater error relationship. Results support the ideas that (a) the influence of raters' general impressions is homogeneous across rating dimensions despite wide variability in levels of true halo; (b) in assigning dimensional ratings, raters rely both on recalled observations of actual ratee behaviors and on general impressions of ratees; and (c) these 2 processes occur independently of one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
3.
4.
Correlated performance ratings obtained at various decision points in the career of 121 airline stewardesses with ability and motive scores on the Project TALENT test battery to illustrate a method of assessing trait evaluations of employees or potential employees. Although the trait validity was low, training and on-the-job performance ratings of stewardesses did reflect knowledge of etiquette and typical high school behavior characterized by sociability. The method also illuminated aspects of the interviewer ratings and termination data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Criticizes an article by F. Landy et al (see record 1981-00274-001), which assumed that a general factor present in the intercorrelations of ratings of performance was generated by halo rating errors. A number of decision rules were used to generate an analytic procedure for extracting halo errors, correcting ratings, and interpreting factors present in the corrected correlations. A number of the decision rules, as well as the initial assumption, seem to be rules of thumb or to depend on implicit theories of ratings and not on empirical data. Reanalysis of the rating data to allow both general 2nd- and 1st-order factors to be expressed in terms of item loadings recovered the structure present in the correlation of the original ratings as well as the psychological meanings of the 1st-order factors. General factors in rating data resemble general factors in measures of human ability. It is argued that removing general factors as if they were halo rather than true score may eliminate more of the variance from rating data than is justifiable. (8 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
69 unacquainted undergraduates worked in 3-man groups under relevant (mathematical tasks) and irrelevant (socializing) acquaintance conditions. The Ss rated one another on scales that defined several cognitive skills. They were also rated on these same scales by Os, dependent on visual information, and unacquainted with Ss or the nature of the tasks being performed. As hypothesized, Ss under the relevant acquaintance condition achieved consistently good validity for all 3 cognitive areas with the best validity for ratings of math ability. Validity under the irrelevant acquaintance condition was nil on all scales. Os achieved significant validity (at lower levels than Ss) for ratings under the relevant acquaintance condition. Levels of inter- and intrarater reliability were not associated with levels of validity under the various rating conditions. (16 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Examines the practice of statistically controlling for halo effects in performance ratings. It is argued that there are 2 major problems with the partial correlation approach to removing halo from ratings data. First, the correct use of the technique depends on the validity of specific causal assumptions regarding the rating process that have not received empirical evaluation to date. The 2nd problem concerns analytic procedures employed in previous tests of the partialing approach. Reanalysis of data reported by F. Landy et al (see record 1981-00274-001) indicates that previous conclusions regarding the effectiveness of partialing may have been artifacts of the way the data were analyzed. It is felt that criticisms of the partial correlation approach to halo reduction are sufficient to suggest suspension of its use in any nonresearch context until necessary additional research is performed. (13 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
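For readers unfamiliar with the partialing approach discussed in this abstract, a minimal sketch on simulated data (not the authors' procedure or the Landy et al data) shows how a dimension intercorrelation shrinks when an overall rating, acting as a proxy for the rater's general impression, is partialed out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulate ratings in which two dimensions share a "general impression" (halo)
# component, and an overall rating serves as a proxy for that impression.
general = rng.normal(size=n)                  # rater's general impression of each ratee
dim1 = 0.8 * general + rng.normal(size=n)     # dimension ratings contaminated by halo
dim2 = 0.8 * general + rng.normal(size=n)
overall = general + 0.3 * rng.normal(size=n)  # overall rating tracking the impression

def partial_corr(x, y, z):
    """Correlation of x and y after partialing out z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

print(np.corrcoef(dim1, dim2)[0, 1])      # inflated by the shared halo component
print(partial_corr(dim1, dim2, overall))  # much smaller once overall is partialed out
```

Note that this sketch also illustrates the abstract's first criticism: the correction only behaves this way if the overall rating really does reflect a causal general-impression factor, which is exactly the untested assumption at issue.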

8.
Tested whether a possible source of difficulty in materially reducing illusory halo in job performance ratings is raters' beliefs that rating categories are conceptually similar and hence covary, thereby inflating observed correlation matrices. 11 graduate business administration students evaluated the conceptual similarities among job dimensions within 3 jobs. The previously observed interdimension correlation matrices were successfully predicted by Ss' mean conceptual similarity scores. However, the conceptual similarity scores were inferior to the normative true score matrix as predictors of the observed correlation matrix obtained by W. C. Borman (see record 1980-26801-001). It is suggested that conceptual similarities among job dimensions represent one potentially recalcitrant source of illusory halo in performance ratings, particularly when ratings are based on encoded observations that have decayed in memory. (24 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of job performance used in the literature; ratings of overall job performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), mean interrater reliability of supervisory ratings of overall job performance was found to be .52. In all cases, interrater reliability is lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for biases from measurement error leads to biased research results. These findings have important implications for both research and practice. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
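The bias described here can be illustrated with the standard correction for attenuation. In this sketch the .52 interrater reliability comes from the abstract; the observed validity of .30 and the intrarater estimate of .80 are hypothetical values chosen for illustration:

```python
import math

def correct_for_attenuation(r_observed, criterion_reliability):
    """Disattenuate an observed predictor-criterion correlation for
    unreliability in the criterion (here, ratings of job performance)."""
    return r_observed / math.sqrt(criterion_reliability)

r_obs = 0.30  # hypothetical observed validity of some predictor

# Correcting with the mean interrater reliability of .52 reported in the abstract
# yields a larger corrected validity than correcting with a higher intrarater
# figure (e.g., .80), so substituting intrarater reliability understates the
# correction and biases downstream research results.
print(round(correct_for_attenuation(r_obs, 0.52), 3))  # 0.416
print(round(correct_for_attenuation(r_obs, 0.80), 3))  # 0.335
```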

10.
Investigated the effects (over time) of a comprehensive vs an abbreviated rater training session on relative levels of leniency error and halo effect. 80 undergraduates (20 per group) rated all of their nonlaboratory instructors over 1, 2, or 3 rating periods using either behavioral expectation scales or summated rating scales. Tests on psychometric error were also administered at these times. Results indicate that the psychometric quality of ratings was superior for the group receiving the comprehensive training, and both training groups were superior to the control groups at the 1st measurement period. No differences were found between any groups in later comparisons. A consistent relationship was found between scores on the tests of psychometric error and error as measured on the ratings. Results are discussed in terms of the diminishing effect of rater training over rating periods, the relationship of internal and external criteria of training effects, the practical significance of differences between groups, and the importance of rating context on rating quality. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Compared the psychometric properties of ratings on behavioral expectation scales (BES) across 4 groups totalling 156 undergraduate raters. Groups differed with respect to amount of prior training (1 hr or more), the nature of psychometric errors, and the extent of exposure to scales (read scales and recorded observed critical incidents, discussed general scale dimensions, or no exposure to scales). Three Ss from each group rated 1 of 13 instructors during the last week of a 10-wk term. Significantly less leniency error and halo effect, plus higher interrater reliability, were found for the group that had received the hour of training and full exposure to the BES. Ss who had received only training had significantly less halo error than those who had received no training. The need for rater training prior to observation and the use of BES as a context for observation are discussed. (20 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
13.
Compared 2 models of the cognitive processes underlying performance ratings: a traditional model outlined by W. C. Borman (see record 1980-26801-001), and a cognitive categorization model proposed by J. M. Feldman (see record 1981-24524-001). To distinguish these 2 models, 120 college students were shown 1 of 2 videotapes of a college lecturer in which 3 of 5 dimensions of performance were manipulated to be opposite to that of the lecturer's overall performance. Ratings were made either immediately after viewing the videotape or 2 days later. Results indicate that the traditional model was appropriate for describing the rating process in both the immediate and the delayed rating conditions. However, a large halo effect was also found that was consistent with the categorization model despite conditions designed to minimize the likelihood of halo. Additional effects of cognitive categorization included a tendency to make errors in later recall of lecturing incidents consistent with Ss' general impression. (48 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The construct validity of developmental ratings of managerial performance was assessed by using 2 data sets, each based on a different 360° rating instrument. Specifically, the authors investigated the nature of the constructs measured by developmental ratings, the structural relationships among those constructs, and the generalizability of results across 4 rater perspectives (boss, peer, subordinate, and self). A structure with 4 lower order factors (Technical Skills, Administrative Skills, Human Skills, and Citizenship Behaviors) and 2 higher order factors (Task Performance and Contextual Performance) was tested against competing models. Results consistently supported the lower order constructs, but the higher order structure was problematic, indicating that the structure of ratings is not yet well understood. Multisample analyses indicated few practically significant differences in factor structures across perspectives. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Human Error Identification (HEI) techniques have been used to predict human error in high-risk environments for the past two decades. Despite the lack of supportive evidence for their efficacy, their popularity remains unabated, and their application has expanded to include product assessment. The authors argue that the predictions must be shown to be both reliable and valid before the approaches can be recommended with any confidence. This paper provides evidence suggesting that human error identification techniques in general, and SHERPA in particular, can be acquired with relative ease and can provide reasonable error predictions.

16.
Describes a study in which 4,533 male and 639 female faculty members in large and small high school settings rated former students as to their future performance as Marine Corps enlisted men. The ratings were evaluated against a criterion of attrition and pay grade. Validity coefficients were generally low but significant. Ratings made by males were significantly higher in validity than ratings by females. Ratings made in small school settings were more valid than those made in large school settings. Suggestions are given for modifying the use of these ratings to increase their predictive validity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Tested the hypothesis that previous ratings of programs in psychology reflect both an experimental psychology and general institutional halo bias. A questionnaire similar to one used in an earlier study of graduate programs by K. D. Roose and C. J. Andersen (1970) was used to survey the responses of 598 professionals in the field of counseling psychology. Respondents were furnished with a listing of 70 doctoral programs in counseling psychology and other closely related programs and were asked to rate each of the programs. It was found that applied programs in counseling psychology received ratings that differed from overall ratings of psychology in general. Programs ranked as strong, good, and adequate are listed. Ratings were related to institutional halo, program age, rater knowledge of program, geographic location, and approved status by the American Psychological Association. Implications for program evaluation are discussed, and users of reputational ratings are cautioned about the need for supplemental information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
This study fills a key gap in research on response instructions in situational judgment tests (SJTs). The authors examined whether the assumptions behind the differential effects of knowledge and behavioral tendency SJT response instructions hold in a large-scale high-stakes selection context (i.e., admission to medical college). Candidates (N = 2,184) were randomly assigned to a knowledge or behavioral tendency response instruction SJT, while SJT content was kept constant. Contrary to prior research in low-stakes settings, no meaningfully important differences were found between mean scores for the response instruction sets. Consistent with prior research, the SJT with knowledge instructions correlated more highly with cognitive ability than did the SJT with behavioral tendency instructions. Finally, no difference was found between the criterion-related validity of the SJTs under the two response instruction sets. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
The interrater reliabilities of ratings of 9,975 ratees from 79 organizations were examined as a function of length of exposure to the ratee. It was found that there was a strong, nonlinear relationship between months of exposure and interrater reliability. The correlation between a logarithmic transformation of months of experience and reliability was .73 for one type of ratings and .65 for another type. The relationship was strongest during the first 12 months on the job. Changes in reliability were accounted for mostly by changes in criterion variance. Asymptotic levels of reliability were only about .60, even with 10–20 yrs of experience. Implications for estimating reliabilities in individual and meta-analytic studies and for performance appraisal were presented, and possible explanations of the reliability–variance relationship were advanced. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
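The logarithmic form reported in this abstract can be sketched with illustrative numbers; the curve's coefficients, noise level, and data points below are assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical exposure-reliability data of the reported shape: reliability rises
# steeply over the first 12 months of exposure, then flattens toward an asymptote
# around .60. Coefficients are illustrative, not the study's estimates.
months = np.array([1.0, 2, 3, 6, 9, 12, 24, 48, 120, 240])
reliability = 0.25 + 0.065 * np.log(months) + rng.normal(scale=0.02, size=months.size)

# Correlating reliability with raw months understates the association because the
# relationship is nonlinear; the log transform linearizes it, as in the abstract.
r_raw = np.corrcoef(months, reliability)[0, 1]
r_log = np.corrcoef(np.log(months), reliability)[0, 1]
print(f"raw months vs. reliability:  r = {r_raw:.2f}")
print(f"log(months) vs. reliability: r = {r_log:.2f}")  # the log transform fits better
```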

20.
The study examines the effects of a wide array of rater–ratee relationship and ratee-characteristic variables on supervisor and peer job-performance ratings. Interpersonal ratings, job performance ratings, and ratee scores on ability, job knowledge, and technical proficiency were available for 493 to 631 first-tour US Army soldiers. Results of the supervisor and peer rating path models showed that ratee ability, knowledge, and proficiency accounted for 13% of the variance in supervisor performance ratings and 7% in peer ratings. Among the interpersonal variables, ratee dependability had the strongest effect in both models. Ratee friendliness and likability had little effect on the performance ratings. Inclusion of the interpersonal factors increased the variance accounted for in the ratings to 28% and 19%, respectively. Discussion focuses on the relative contribution of ratee technical and contextual performance to raters' judgments. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
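The incremental-variance logic in this abstract can be sketched as a two-step regression on simulated data; the predictors, their coefficients, and the single "dependability" interpersonal variable below are hypothetical stand-ins, not the study's measures or path model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500  # on the order of the reported sample sizes (493 to 631)

# Hypothetical standardized predictors: a "can do" block (ability, job knowledge)
# and one interpersonal variable (dependability). Coefficients are illustrative.
ability = rng.normal(size=n)
knowledge = rng.normal(size=n)
dependability = rng.normal(size=n)
ratings = 0.3 * ability + 0.2 * knowledge + 0.35 * dependability + rng.normal(size=n)

def r_squared(predictors, y):
    """Variance in y accounted for by OLS on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: "can do" block only; step 2: add the interpersonal variable and compare.
r2_base = r_squared([ability, knowledge], ratings)
r2_full = r_squared([ability, knowledge, dependability], ratings)
print(f"ability/knowledge block: R^2 = {r2_base:.2f}")
print(f"+ interpersonal block:   R^2 = {r2_full:.2f} (increment = {r2_full - r2_base:.2f})")
```

The increase from `r2_base` to `r2_full` mirrors the abstract's jump from 13% to 28% (supervisors) and 7% to 19% (peers) when interpersonal factors enter the model.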


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号