Similar Literature (20 results)
1.
The results of numerous social perception studies have led researchers to conclude that raters' implicit cognitive schemata regarding trait and behavior covariance may play a crucial role in the rating judgment process. W. H. Cooper (see PA, Vol 66:9176 and 9262) proposed one such cognitive schema, semantic conceptual similarity, as a key source of halo error in job performance ratings but was unable to reproduce the results of previous social perception research. The present study, with 186 undergraduates, employed baseball players as target ratees to examine the effects of job and ratee knowledge on the relations of raters' conceptual similarity schemata with rating and true score covariance. The results are consistent with the systematic distortion hypothesis presented by R. A. Shweder (see record 1976-07240-001). The association between conceptual similarity and rating covariance was significantly greater when Ss lacked sufficient job and/or ratee knowledge. Moreover, the degree of halo was also significantly greater when Ss lacked relevant job and ratee knowledge. The advantages of using objective measures of actual performance as true score estimates in the study of rater cognitive processes are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
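To make the key computation concrete: the raters' conceptual-similarity judgments and their ratings each yield a dimension-by-dimension matrix, and the systematic distortion hypothesis is examined by relating the off-diagonal entries of the two. The sketch below is a minimal Python illustration with simulated ratings and invented dimension names, not the study's data or analysis code; the halo index shown (mean inter-dimension correlation) is one common operationalization, assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance dimensions (illustration only, not the study's materials).
dims = ["hitting", "fielding", "base_running", "throwing"]
n_ratees, n_dims = 50, len(dims)

# Simulated ratings: rows = ratees, columns = dimensions.  A shared
# "general impression" component is added so the ratings show some halo.
general_impression = rng.normal(size=(n_ratees, 1))
ratings = 5.0 + 0.8 * general_impression + rng.normal(scale=1.0, size=(n_ratees, n_dims))

# Raters' mean conceptual-similarity judgments among dimensions (symmetric, 0-1 scale).
conceptual_similarity = np.array([
    [1.0, 0.6, 0.4, 0.5],
    [0.6, 1.0, 0.3, 0.7],
    [0.4, 0.3, 1.0, 0.2],
    [0.5, 0.7, 0.2, 1.0],
])

# Observed inter-dimension correlation matrix of the ratings.
rating_corr = np.corrcoef(ratings, rowvar=False)

# Relate the two matrices via their off-diagonal (lower-triangle) elements.
tri = np.tril_indices(n_dims, k=-1)
similarity_rating_r = np.corrcoef(conceptual_similarity[tri], rating_corr[tri])[0, 1]

# A simple halo index: the mean inter-dimension correlation of the ratings.
halo_index = rating_corr[tri].mean()

print(f"conceptual similarity vs. rating covariance: r = {similarity_rating_r:.2f}")
print(f"halo index (mean inter-dimension r):         {halo_index:.2f}")
```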

2.
Several investigators have argued that raters often rely on the conceptual similarity among performance dimension labels to guide the pattern of their performance ratings. Recent studies have used individual-level conceptual similarity (COS) judgments to investigate this systematic distortion hypothesis and related performance rating issues. In this article the results from 4 studies are reported in which 171 subjects completed COS judgments on 2 occasions. In 3 separate studies the reliability of COS schemata was found to be positively related to the rater's relevant job knowledge. In a 4th study it was found that changes in COS schemata over a 9-week interval may result from COS unreliability as much as from any meaningful reconceptualization of COS structure. Implications for performance rating research are reviewed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Tested whether a possible source of difficulty in materially reducing illusory halo in job performance ratings is raters' beliefs that rating categories are conceptually similar and hence covary, thereby inflating observed correlation matrices. 11 graduate business administration students evaluated the conceptual similarities among job dimensions within 3 jobs. The previously observed interdimension correlation matrices were successfully predicted by Ss' mean conceptual similarity scores. When the observed correlation matrix obtained by W. C. Borman (see record 1980-26801-001) was compared with the normative true score matrix, the conceptual similarity scores were found to be inferior to the normative true score matrix as predictors of the observed correlation matrix. It is suggested that conceptual similarities among job dimensions represent one potentially recalcitrant source of illusory halo in performance ratings, particularly when ratings are based on encoded observations that have decayed in memory. (24 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
144 deputy sheriffs were rated on 9 job performance dimensions with 2 rating scales by 2 raters. Results indicate that the rating scales (the Multiple Item Appraisal Form and the Global Dimension Appraisal Form) developed in this study were able to minimize the major problems often associated with performance ratings (i.e., leniency error, restriction of range, and low reliability). A multitrait/multimethod analysis indicated that the rating scales possessed high convergent and discriminant validity. A multitrait/multirater analysis indicated that although the interrater agreement and the degree of rated discrimination on different traits by different raters were good, there was a substantial rater bias, or strong halo effect. This halo effect in the ratings, however, may really be a legitimate general factor rather than an error. (11 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
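For readers unfamiliar with the multitrait-multirater logic, a rough sketch follows: convergent validity is read from same-trait, different-rater correlations, and discriminant validity from their comparison with different-trait correlations. The data below are simulated and the noise levels arbitrary; only the bookkeeping of the matrix is meant to be illustrative, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ratees, n_dims = 144, 9  # counts echo the design above; the scores are simulated

# Simulated "true" performance plus independent rater-specific noise for 2 raters.
true_scores = rng.normal(size=(n_ratees, n_dims))
rater_a = true_scores + rng.normal(scale=0.5, size=true_scores.shape)
rater_b = true_scores + rng.normal(scale=0.5, size=true_scores.shape)

# Full multitrait-multirater correlation matrix (2 * n_dims variables).
mtmr = np.corrcoef(np.hstack([rater_a, rater_b]), rowvar=False)

# Convergent validities: same trait, different rater (monotrait-heterorater values).
convergent = np.array([mtmr[i, n_dims + i] for i in range(n_dims)])

# Discriminant comparison: different trait, different rater correlations.
heterotrait_heterorater = np.array([
    mtmr[i, n_dims + j] for i in range(n_dims) for j in range(n_dims) if i != j
])

print(f"mean convergent validity:       {convergent.mean():.2f}")
print(f"mean heterotrait-heterorater r: {heterotrait_heterorater.mean():.2f}")
# Convergent values exceeding heterotrait-heterorater values support discriminant
# validity; uniformly high heterotrait correlations within a rater would signal halo.
```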

5.
Under trait theory, ratings may be modeled as a function of the temperament of the child and the bias of the rater. Two linear structural equation models are described, one for mutual self and partner ratings, and one for multiple ratings of related individuals. Application of the first model to EASI temperament data collected from spouses rating each other shows moderate agreement between raters and little rating bias. Spouse pairs agree moderately when rating their twin children, but there is significant rater bias, with greater bias for monozygotic than for dizygotic twins. MLEs of heritability are approximately .5 for all temperament scales with no common environmental variance. Results are discussed with reference to trait validity, the person–situation debate, halo effects, and stereotyping. Questionnaire development using ratings on family members permits increased rater agreement and reduced rater bias. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
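The heritability estimates quoted above come from maximum-likelihood structural equation modeling of the twin and spouse ratings. A much cruder stand-in for the same quantity is Falconer's approximation, which doubles the difference between monozygotic and dizygotic twin correlations; the sketch below uses that simplification with invented correlations purely to make the variance decomposition concrete.

```python
# Falconer's approximation (rough illustration only; the study used ML structural
# equation models).  The twin correlations below are invented, not the study's.
r_mz = 0.50  # hypothetical monozygotic twin correlation on a temperament scale
r_dz = 0.25  # hypothetical dizygotic twin correlation

heritability = 2 * (r_mz - r_dz)   # additive genetic variance (A)
common_env = 2 * r_dz - r_mz       # shared environmental variance (C); ~0 here
unique_env = 1 - r_mz              # nonshared environment plus error (E)

print(f"h^2 ~= {heritability:.2f}, c^2 ~= {common_env:.2f}, e^2 ~= {unique_env:.2f}")
```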

6.
The purpose of this study was to test competing theories regarding the relationship between true halo (actual dimensional correlations) and halo rater error (effects of raters' general impressions on specific ratee qualities) at both the individual and group level of analysis. Consistent with the prevailing general impression model of halo rater error, results at both the individual and group levels of analysis indicated a null (vs. positive or negative) true halo-halo rater error relationship. Results support the ideas that (a) the influence of raters' general impressions is homogeneous across rating dimensions despite wide variability in levels of true halo; (b) in assigning dimensional ratings, raters rely both on recalled observations of actual ratee behaviors and on general impressions of ratees; and (c) these 2 processes occur independently of one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
A note on the statistical correction of halo error.
Attempts to eliminate halo error from rating scales by statistical correction have assumed halo to be a systematic error associated with a ratee–rater pair that adds performance-irrelevant variance to ratings. Furthermore, overall performance ratings have been assumed to reflect this bias. Consideration of the source of halo error, however, raises the possibility that the cognitive processes resulting in halo also mediate expectations of and interactions with employees, indirectly influencing true performance and ability via instruction, feedback, and reinforcement. If so, it would not be possible to correct for halo error using overall performance ratings. (26 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
This study extends multisource feedback research by assessing the effects of rater source and raters' cultural value orientations on rating bias (leniency and halo). Using a motivational perspective of performance appraisal, the authors posit that subordinate raters, followed by peers, will exhibit more rating bias than superiors. More important, given that multisource feedback systems were premised on low power distance and individualistic cultural assumptions, the authors expect raters' power distance and individualism-collectivism orientations to moderate the effects of rater source on rating bias. Hierarchical linear modeling on data collected from 1,447 superiors, peers, and subordinates who provided developmental feedback to 172 military officers shows that (a) subordinates exhibit the most rating leniency, followed by peers and superiors; (b) subordinates demonstrate more halo than superiors and peers, whereas superiors and peers do not differ; (c) the effects of power distance on leniency and halo are stronger for subordinates than for peers and superiors; (d) the effects of collectivism on leniency were stronger for subordinates and peers than for superiors; effects on halo were stronger for subordinates than superiors, but these effects did not differ for subordinates and peers. The present findings highlight the role of raters' cultural values in multisource feedback ratings. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
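The two bias indices in this study can be operationalized simply at the rater level: leniency as the elevation of a rater's ratings relative to the scale midpoint, and halo as the lack of differentiation across dimensions within each ratee. The sketch below shows one such pair of indices with simulated data; it is not the study's multilevel model, and the specific index definitions and scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ratees, n_dims, scale_midpoint = 10, 6, 3.0  # assumes a 1-5 scale; all values illustrative

def leniency_and_halo(ratings: np.ndarray) -> tuple[float, float]:
    """Return (leniency, halo) indices for one rater's ratee-by-dimension matrix."""
    # Leniency: mean elevation of the ratings above the scale midpoint.
    leniency = float(ratings.mean() - scale_midpoint)
    # Halo: lack of within-ratee differentiation across dimensions, expressed as
    # the negative of the mean within-ratee standard deviation (higher = more halo).
    halo = float(-ratings.std(axis=1, ddof=1).mean())
    return leniency, halo

# One simulated rater who inflates everyone and barely differentiates dimensions.
lenient_halo_rater = np.clip(rng.normal(4.5, 0.2, size=(n_ratees, n_dims)), 1, 5)
print(leniency_and_halo(lenient_halo_rater))
```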

9.
C. E. Lance et al. (see record 1994-17452-001) tested 3 different causal models of halo rater error (general impression [GI], salient dimension [SD], and inadequate discrimination [ID] models) and found that the GI model better accounted for observed halo rating error than did the SD or ID models. It was also suggested that the type of halo rater error that occurs might vary as a function of rating context. The purpose of this study was to determine whether rating contexts could be manipulated that favored the operation of each of these 3 halo-error models. Results indicate, however, that GI halo error occurred in spite of experimental conditions designed specifically to induce other forms of halo rater error. This suggests that halo rater error is a unitary phenomenon that should be defined as the influence of a rater's general impression on ratings of specific ratee qualities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
11.
Tested C. E. Schneier's (see record 1978-11450-001) cognitive compatibility theory. In Exps I and II, 100 undergraduates rated college instructors and professor vignettes, respectively. Results show that rater cognitive complexity was unrelated to rating accuracy, halo error, acceptability of rating format, or confidence in ratings. In Exp III, 31 police sergeants rated patrol officers, and the results show that halo error and acceptability of formats were unrelated to cognitive complexity. In Exp IV, 95 undergraduates' ratings of managerial performance and instructor effectiveness showed no support for the cognitive compatibility theory. However, the data showed that raters' ability to generate dimensions was significantly related to halo error in instructors' ratings. Implications for cognitive compatibility theory and future research with the method of generating performance dimensions are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
108 undergraduates were randomly assigned to 1 of 4 experimental groups to rate videotaped performances of several managers talking with a problem subordinate. The research employed a single-factor experimental design in which rater error training (RET), rater accuracy training (RAT), rating error and accuracy training (RET/RAT), and no training were compared for 2 rating errors (halo and leniency) and accuracy of performance evaluations. Differences in program effectiveness for various performance dimensions were also assessed. Results show that RAT yielded the most accurate ratings and no-training the least accurate ratings. The presence of error training (RET or RET/RAT) was associated with reduced halo, but the presence of accuracy training (RAT or RET/RAT) was associated with less leniency. Dimensions × Training interactions revealed that training was not uniformly effective across the rating dimensions. (23 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
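Rating accuracy in designs like this is typically scored against expert-derived true scores for the taped performances. The sketch below shows two simple accuracy summaries (mean absolute deviation and a pattern correlation) with hypothetical numbers; the study's actual accuracy operationalization may differ, e.g. Cronbach-style accuracy components.

```python
import numpy as np

# Hypothetical expert-derived true scores and one trainee's ratings
# for 3 taped ratees x 4 performance dimensions (numbers invented).
true_scores = np.array([[4, 3, 5, 2],
                        [2, 2, 3, 3],
                        [5, 4, 4, 5]], dtype=float)
trainee     = np.array([[5, 4, 5, 3],
                        [3, 2, 3, 4],
                        [5, 5, 4, 5]], dtype=float)

# Distance-based accuracy: lower is better.
mean_abs_error = np.abs(trainee - true_scores).mean()

# Correlational accuracy: agreement in the pattern of ratings, ignoring elevation.
pattern_accuracy = np.corrcoef(trainee.ravel(), true_scores.ravel())[0, 1]

print(f"mean absolute error: {mean_abs_error:.2f}")
print(f"pattern accuracy r:  {pattern_accuracy:.2f}")
```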

13.
The stability of halo errors when the ratees, the specific behavioral episodes observed, or both varied was studied. In a laboratory study, halo errors were highly unstable when either the ratees or the ratee behaviors varied (average stability coefficients were .20 and .18 when ratee behavior or both ratees and their behavior varied, respectively), but halo errors were moderately stable when the ratees and the specific performance segments viewed were kept constant. In a field study using actual teacher ratings in which the ratee, the ratee's role, or the semester in which ratings were obtained was varied, very low stability coefficients were again found. The results suggest that halo error is not a stable characteristic of the rater or the ratees but rather is partly a characteristic of the unique rating situation. Practical and theoretical implications are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
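A stability coefficient here is just the across-rater correlation between halo indices computed on two occasions (or on two sets of ratees). Below is a minimal sketch with simulated per-rater indices, chosen so that occasion-specific noise dominates and stability comes out low, as in the findings above; the index values and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n_raters = 60  # illustrative

# Hypothetical per-rater halo indices (e.g., mean inter-dimension r) on two occasions:
# a small stable "rater trait" component plus large occasion-specific noise.
rater_trait = rng.normal(scale=0.05, size=n_raters)
halo_time1 = 0.40 + rater_trait + rng.normal(scale=0.15, size=n_raters)
halo_time2 = 0.40 + rater_trait + rng.normal(scale=0.15, size=n_raters)

stability = np.corrcoef(halo_time1, halo_time2)[0, 1]
print(f"stability of halo index across occasions: r = {stability:.2f}")
```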

14.
Studied 52 nursing supervisors to examine the effects of a lecture on rating errors, discussion about errors, and participation in scale construction on both experimental and subsequent administrative ratings. On experimental ratings, scale construction reduced halo and variability errors, lecture reduced variability errors, and discussion increased variability errors. These results held true only for raters who began making ratings within 1 wk after training, before administration of questionnaires designed to measure rater motivation and knowledge. (27 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The present study tested whether empathic accuracy and physiological linkage during an emotion recognition task are facilitated by a cultural match between rater and target (cultural advantage model) or unaffected (cultural equivalence model). Participants were 161 college students of African American, Chinese American, European American, or Mexican American ethnicity. To assess empathic accuracy—knowing what another person is feeling—participants (raters) used a rating dial to provide continuous, real-time ratings of the valence and intensity of emotions being experienced by 4 strangers (targets). Targets were African American, Chinese American, European American, or Mexican American women who had been videotaped having a conversation with their dating partner in a previous study and had rated their own feelings during the interaction. Empathic accuracy was defined as the similarity between ratings of the videotaped interactions obtained from raters and targets. To assess emotional empathy—feeling what another person is feeling—we examined physiological linkage (similarity between raters' and targets' physiology). Our findings for empathic accuracy supported the cultural equivalence model, while those for physiological linkage provided some support for the cultural advantage model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
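Empathic accuracy in this paradigm reduces to a similarity score between two continuous time series: the rater's moment-to-moment dial track and the target's self-ratings of the same footage. The sketch below computes it as a simple Pearson correlation of simulated signals; the study's exact similarity metric and sampling rate are not specified here and are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 300  # e.g., one dial sample per second over a 5-minute clip (assumed)

# Target's self-reported affect over time (one dial axis, arbitrary units).
target_affect = np.cumsum(rng.normal(size=n_samples))

# Rater's dial track: partially follows the target's affect, plus noise.
rater_track = target_affect + rng.normal(scale=2.0, size=n_samples)

# Empathic accuracy as the correlation between the two time series.
empathic_accuracy = np.corrcoef(rater_track, target_affect)[0, 1]
print(f"empathic accuracy r = {empathic_accuracy:.2f}")
```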

16.
Investigated the effects (over time) of a comprehensive vs an abbreviated rater training session on relative levels of leniency error and halo effect. 80 undergraduates (20 per group) rated all of their nonlaboratory instructors over 1, 2, or 3 rating periods using either behavioral expectation scales or summated rating scales. Tests on psychometric error were also administered at these times. Results indicate that the psychometric quality of ratings was superior for the group receiving the comprehensive training, and both training groups were superior to the control groups at the 1st measurement period. No differences were found between any groups in later comparisons. A consistent relationship was found between scores on the tests of psychometric error and error as measured on the ratings. Results are discussed in terms of the diminishing effect of rater training over rating periods, the relationship of internal and external criteria of training effects, the practical significance of differences between groups, and the importance of rating context on rating quality. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
In light of consistently observed correlations among Big Five ratings, the authors developed and tested a model that combined E. L. Thorndike's (1920) general evaluative bias (halo) model and J. M. Digman's (1997) higher order personality factors (alpha and beta) model. With 4 multitrait–multimethod analyses, Study 1 revealed moderate convergent validity for alpha and beta across raters, whereas halo was mainly a unique factor for each rater. In Study 2, the authors showed that the halo factor was highly correlated with a validated measure of evaluative biases in self-ratings. Study 3 showed that halo is more strongly correlated with self-ratings of self-esteem than self-ratings of the Big Five, which suggests that halo is not a mere rating bias but actually reflects overly positive self-evaluations. Finally, Study 4 demonstrated that the halo bias in Big Five ratings is stable over short retest intervals. Taken together, the results suggest that the halo-alpha-beta model integrates the main findings in structural analyses of Big Five correlations. Accordingly, halo bias in self-ratings is a reliable and stable bias in individuals' perceptions of their own attributes. Implications of the present findings for the assessment of Big Five personality traits in monomethod studies are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
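One way to see how a general evaluative (halo) factor can sit alongside the higher-order alpha and beta factors is to note that halo is modeled as variance shared by all Big Five scales within a rater. As a rough stand-in for the structural models the studies used, the sketch below extracts the first principal component of simulated Big Five scale scores that contain a built-in evaluative component; the simulation parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_people = 500  # illustrative

# Simulated Big Five scale scores (all keyed toward the socially desirable pole)
# with a shared evaluative component baked in to mimic halo.
evaluative_bias = rng.normal(size=(n_people, 1))
big_five = rng.normal(size=(n_people, 5)) + 0.6 * evaluative_bias

# First principal component of the inter-scale correlation matrix.
corr = np.corrcoef(big_five, rowvar=False)
_, eigvecs = np.linalg.eigh(corr)        # eigenvalues ascending; last column = largest
general_loadings = eigvecs[:, -1]
if general_loadings.sum() < 0:           # fix the eigenvector's arbitrary sign
    general_loadings = -general_loadings

print("loadings on the general (halo-like) component:", np.round(general_loadings, 2))
```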

18.
A comparison of behavioral expectation scales and graphic rating scales.
Compared ratings derived from behavioral expectation scales developed by 147 personnel management students with ratings based on graphic rating scales. The ratees were 4 college professors, and the raters were the 183 students in their classes. The behaviorally anchored scales resulted in less halo error, or alternatively, more independence in ratings of different dimensions of performance. The behaviorally anchored scales did not correct for leniency in ratings. These results were observed both among raters who participated in developing the behavioral expectation scales and among similar raters who did not take part in this process. The factor structures of the 2 rating formats were essentially equivalent in "cleanness." Neither solution was judged superior to the other. However, the behavioral expectation scale format possessed greater discriminant validity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Nature and consequences of halo error: A critical analysis.
The definition of halo error that dominated researchers' thinking for most of this century implied that (1) halo error was common, (2) it was a rater error, with true and illusory components, (3) it led to inflated correlations among rating dimensions and was due to the influence of a general evaluation on specific judgments, and (4) it had negative consequences and should be avoided or removed. Research is reviewed showing that all of the major elements of this conception of halo are either wrong or problematic. Because of unresolved confounds of true and illusory halo and the often unclear consequences of halo errors, the authors suggest a moratorium on the use of halo indices as dependent measures in applied research. They suggest specific directions for further research on halo that take into account the context in which judgments are formed and ratings are obtained and that more clearly distinguish between actual halo errors and the apparent halo effect. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Compared Smith-Kendall type behaviorally anchored scales for derived performance dimensions (Format 1), scales for the same dimensions but without the behavioral anchors (Format 2), and scales for dimensions selected on an a priori basis (Format 3) on the basis of susceptibility to rater response biases. Raters were 30 graduate students and ratees were 3 associate professors whom the raters had had in succession during their 1st year of graduate study. Leniency error and composite halo error were present in all ratings; there was no evidence of relative or absolute halo errors in any ratings. There was some evidence that the use of scales for derived dimensions reduced leniency error and increased the amount of variance attributable to ratee differences. The scale reliabilities of the 3 formats were also determined. A discussion of the feasibility of obtaining relatively independent scales for several job performance dimensions is included. (15 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
