Related Articles
A total of 20 related articles were retrieved.
1.
The purpose of this study was to test competing theories regarding the relationship between true halo (actual dimensional correlations) and halo rater error (effects of raters' general impressions on specific ratee qualities) at both the individual and group level of analysis. Consistent with the prevailing general impression model of halo rater error, results at both the individual and group levels of analysis indicated a null (vs. positive or negative) true halo–halo rater error relationship. Results support the ideas that (a) the influence of raters' general impressions is homogeneous across rating dimensions despite wide variability in levels of true halo; (b) in assigning dimensional ratings, raters rely both on recalled observations of actual ratee behaviors and on general impressions of ratees; and (c) these 2 processes occur independently of one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Investigated the effects of perceived purpose for rating and training type on the following dependent variables: accuracy, leniency/severity, and illusory halo. The purpose factor comprised 3 levels: a hiring purpose, a feedback purpose, and a research-only purpose. The training factor comprised 4 levels: rater error (RE) training, frame-of-reference (FOR) training, the combination of both methods, and no training. With both factors crossed, 164 undergraduates were randomly assigned to 1 of 12 conditions and viewed videotapes of lectures given by bogus graduate assistants. Heterogeneity of variance made it necessary to apply a conservative analytical strategy. Training significantly affected 2 measures of accuracy and halo such that a training condition that contained an FOR component did better than RE or no training. The conservativeness of this analytic strategy rendered the effects of the purpose factor on correlation accuracy, leniency/severity, and halo only tentative and dissipated the 1 interaction effect of the 2 factors on distance accuracy. Discussion centers on (a) comparison of the results with those of S. Zedeck and W. Cascio (see record 1983-09102-001), (b) potential reasons for the heteroscedasticity, and (c) implications for the development of student evaluations of university instructors. (32 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
In several social perception studies investigators have concluded that raters' semantic conceptual similarity schemata serve to guide and constrain dimensional covariance in the rating judgment process. This effect has been hypothesized to be most likely when ratings are memory based and raters lack relevant job or ratee information. Recent research that has explored the effects of conceptual similarity schemata on performance ratings and halo error has provided some limited support for this systematic distortion hypothesis (SDH). However, these studies are limited because researchers have examined this phenomenon using group-level analyses, whereas the theory references individual-level judgment processes. The present study investigated this phenomenon at the individual level. The effects of varying levels of rater job knowledge (high, medium, and low) and familiarity with ratees (high and low) were examined for conceptual similarity–rating and rating–true-score covariation relations, for measures of halo, and for rating accuracy components. Results provided support for the SDH, but indicated a boundary condition for its operation and revealed some surprising findings for individual-level rater halo. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
144 deputy sheriffs were rated on 9 job performance dimensions with 2 rating scales by 2 raters. Results indicate that the rating scales (the Multiple Item Appraisal Form and the Global Dimension Appraisal Form) developed in this study were able to minimize the major problems often associated with performance ratings (i.e., leniency error, restriction of range, and low reliability). A multitrait/multimethod analysis indicated that the rating scales possessed high convergent and discriminant validity. A multitrait/multirater analysis indicated that although the interrater agreement and the degree of rated discrimination on different traits by different raters were good, there was a substantial rater bias, or strong halo effect. This halo effect in the ratings, however, may really be a legitimate general factor rather than an error. (11 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
108 undergraduates were randomly assigned to 1 of 4 experimental groups to rate videotaped performances of several managers talking with a problem subordinate. The research employed a single-factor experimental design in which rater error training (RET), rater accuracy training (RAT), rating error and accuracy training (RET/RAT), and no training were compared for 2 rating errors (halo and leniency) and accuracy of performance evaluations. Differences in program effectiveness for various performance dimensions were also assessed. Results show that RAT yielded the most accurate ratings and no-training the least accurate ratings. The presence of error training (RET or RET/RAT) was associated with reduced halo, but the presence of accuracy training (RAT or RET/RAT) was associated with less leniency. Dimensions × Training interactions revealed that training was not uniformly effective across the rating dimensions. (23 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Discusses the use of multirater feedback (feedback from peers and subordinates) rather than manager feedback for appraisal purposes in the workplace. The author argues that using multirater feedback for administrative purposes ignores basic principles from the counseling and stress literature about how people change, invites rater inflation and poor data, and is naive to organizational issues of hierarchy, status, and power. The trend toward using multirater feedback is discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Bias in observer ratings compromises generalizability of measurement, typically resulting in attenuation of observed associations between variables. This quantitative review of 79 generalizability studies including raters as a facet examines bias in observer ratings in published psychological research and identifies properties of rating systems likely to place them at risk for problems with rater bias. For the rating systems studied, an average of 37% of score variance was attributable to 2 types of rater bias: (a) raters' differential interpretations of the rating scale and (b) their differential evaluations of the same targets. Ratings of explicit attributes (e.g., frequency counts) contained negligible bias variance, whereas ratings of attributes requiring rater inference contained substantial bias variance. Rater training ameliorated but did not solve the problem of bias in inferential rating scales. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
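The variance partitioning described in this abstract can be illustrated with a small generalizability-style computation. The sketch below is not the authors' procedure; it assumes a toy, fully crossed targets × raters design with one rating per cell, estimates variance components from two-way ANOVA mean squares, and reports the share of variance attributable to the rater main effect (differential use of the rating scale) versus true target differences. All data and names are illustrative.

    import numpy as np

    # Toy ratings: rows = targets (ratees), columns = raters, fully crossed design.
    ratings = np.array([
        [4.0, 3.5, 4.5],
        [2.0, 2.5, 3.0],
        [3.0, 3.0, 4.0],
        [5.0, 4.0, 5.0],
    ])
    n_t, n_r = ratings.shape

    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Mean squares for a two-way (targets x raters) design, one observation per cell.
    ms_target = n_r * np.sum((target_means - grand) ** 2) / (n_t - 1)
    ms_rater = n_t * np.sum((rater_means - grand) ** 2) / (n_r - 1)
    resid = ratings - target_means[:, None] - rater_means[None, :] + grand
    ms_resid = np.sum(resid ** 2) / ((n_t - 1) * (n_r - 1))

    # Expected-mean-square equations give the variance component estimates.
    var_resid = ms_resid                                  # rater x target bias + error
    var_rater = max((ms_rater - ms_resid) / n_t, 0.0)     # differential scale use
    var_target = max((ms_target - ms_resid) / n_r, 0.0)   # true target differences

    total = var_target + var_rater + var_resid
    print(f"target: {var_target/total:.0%}, rater bias: {var_rater/total:.0%}, "
          f"residual: {var_resid/total:.0%}")

Note that with only one rating per target–rater cell, the second type of bias the abstract names (raters' differential evaluations of the same targets) cannot be separated from residual error; it remains in the residual term here.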

8.
Under trait theory, ratings may be modeled as a function of the temperament of the child and the bias of the rater. Two linear structural equation models are described, one for mutual self and partner ratings, and one for multiple ratings of related individuals. Application of the first model to EASI temperament data collected from spouses rating each other shows moderate agreement between raters and little rating bias. Spouse pairs agree moderately when rating their twin children, but there is significant rater bias, with greater bias for monozygotic than for dizygotic twins. MLEs of heritability are approximately .5 for all temperament scales with no common environmental variance. Results are discussed with reference to trait validity, the person–situation debate, halo effects, and stereotyping. Questionnaire development using ratings on family members permits increased rater agreement and reduced rater bias. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Rater bias is a substantial source of error in psychological research. Bias distorts observed effect sizes beyond the expected level of attenuation due to intrarater error, and the impact of bias is not accurately estimated using conventional methods of correction for attenuation. Using a model based on multivariate generalizability theory, this article illustrates how bias affects research results. The model identifies 4 types of bias that may affect findings in research using observer ratings, including the biases traditionally termed leniency and halo errors. The impact of bias depends on which of 4 classes of rating design is used, and formulas are derived for correcting observed effect sizes for attenuation (due to bias variance) and inflation (due to bias covariance) in each of these classes. The rater bias model suggests procedures for researchers seeking to minimize adverse impact of bias on study findings. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
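For reference, the conventional correction for attenuation that this abstract argues is inadequate under rater bias is the classical textbook formula (the bias-adjusted formulas are derived in the cited article, not reproduced here):

\[
  \hat{\rho}_{XY} \;=\; \frac{r_{XY}}{\sqrt{r_{XX'}\, r_{YY'}}}
\]

where \(r_{XY}\) is the observed correlation between the two rating measures and \(r_{XX'}\), \(r_{YY'}\) are their reliabilities. The article's point is that rater bias adds variance (further attenuating \(r_{XY}\)) and, when the same raters contribute bias to both measures, adds covariance (inflating \(r_{XY}\)), so this formula alone can over- or under-correct depending on the rating design.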

10.
The effects of cognitive categorization of raters on accuracy, leniency, and halo of performance evaluations were investigated in a field setting. One hundred seventy-four subordinates evaluated the performance of their managers on three performance dimensions. Managers were categorized as congruent or incongruent based on subordinates' perceptions of the extent to which the manager's behavior met the subordinates' expectations. The results indicated that the quality of ratings assigned by subordinates was related to the cognitive categories used. As hypothesized, ratings of managers who were categorized as congruent were found to be more accurate and also to contain more leniency and halo tendency than the ratings of managers who were categorized as incongruent. Implications of these findings for performance-appraisal research are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The results of numerous social perception studies have led researchers to conclude that raters' implicit cognitive schemata regarding trait and behavior covariance may play a crucial role in the rating judgment process. W. H. Cooper (see PA, Vol 66:9176 and 9262) proposed one such cognitive schema, semantic conceptual similarity, as a key source of halo error in job performance ratings but was unable to reproduce the results of previous social perception research. The present study, with 186 undergraduates, employed baseball players as target ratees to examine the effects of job and ratee knowledge on the relations of raters' conceptual similarity schemata with rating and true score covariance. The results are consistent with the systematic distortion hypothesis presented by R. A. Shweder (see record 1976-07240-001). The association between conceptual similarity and rating covariance was significantly greater when Ss lacked sufficient job and/or ratee knowledge. Moreover, the degree of halo was also significantly greater when Ss lacked relevant job and ratee knowledge. The advantages of using objective measures of actual performance as true score estimates in the study of rater cognitive processes are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The authors compared a feedback workshop with both a no-feedback control group and a comparison group of managers who received a feedback report but no feedback workshop. The multisource feedback was based on ratings of a manager's influence behavior by subordinates, peers, and bosses. Managers in the feedback workshop increased their use of some core influence tactics with subordinates, whereas there was no change in behavior for the control group or for the comparison group. The feedback was perceived to be more useful by managers who received it in a workshop with a facilitator than by managers who received only a printed feedback report. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Investigated the effects (over time) of a comprehensive vs an abbreviated rater training session on relative levels of leniency error and halo effect. 80 undergraduates (20 per group) rated all of their nonlaboratory instructors over 1, 2, or 3 rating periods using either behavioral expectation scales or summated rating scales. Tests on psychometric error were also administered at these times. Results indicate that the psychometric quality of ratings was superior for the group receiving the comprehensive training, and both training groups were superior to the control groups at the 1st measurement period. No differences were found between any groups in later comparisons. A consistent relationship was found between scores on the tests of psychometric error and error as measured on the ratings. Results are discussed in terms of the diminishing effect of rater training over rating periods, the relationship of internal and external criteria of training effects, the practical significance of differences between groups, and the importance of rating context on rating quality. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The quality of the Leader–Member Exchange (LMX) relationship between superiors and subordinates in Korean civil engineering companies was empirically examined in relation to superiors' feedback-seeking behaviors. The results showed that for superiors, affect, loyalty, and contribution toward subordinates were positively related to seeking negative as well as positive feedback from subordinates. From the subordinates' point of view, affect, contribution, and professional respect toward their superiors were positively related to superiors' negative feedback seeking, but affect was negatively related to superiors' positive feedback seeking. It was also found that superiors and subordinates did not agree in their perceptions of LMX or of superiors' feedback-seeking behaviors. For example, for superiors, all four LMX dimensions were positively related to superiors' asking subordinates directly for feedback, whereas for subordinates, none of the LMX dimensions were significantly related to superiors' asking subordinates directly for feedback. These and other findings are discussed in detail, and implications of the findings are provided.

15.
A note on the statistical correction of halo error.
Attempts to eliminate halo error from rating scales by statistical correction have assumed halo to be a systematic error associated with a ratee–rater pair that adds performance-irrelevant variance to ratings. Furthermore, overall performance ratings have been assumed to reflect this bias. Consideration of the source of halo error, however, raises the possibility that the cognitive processes resulting in halo also mediate expectations of and interactions with employees, indirectly influencing true performance and ability via instruction, feedback, and reinforcement. If so, it would not be possible to correct for halo error using overall performance ratings. (26 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
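The "statistical correction" this note refers to is typically implemented by partialling the overall performance rating out of the dimension ratings. Below is a minimal sketch of that conventional partial-correlation approach, with illustrative data and function names (the note's argument is that this correction is inappropriate if the processes producing halo also shape true performance):

    import numpy as np

    def partialled_corr(dim_a, dim_b, overall):
        """Correlation between two dimension ratings after regressing the overall
        performance rating out of each -- the conventional statistical 'correction'
        for halo discussed in the note."""
        res_a = dim_a - np.polyval(np.polyfit(overall, dim_a, 1), overall)
        res_b = dim_b - np.polyval(np.polyfit(overall, dim_b, 1), overall)
        return np.corrcoef(res_a, res_b)[0, 1]

    # Illustrative ratings for 6 ratees on two dimensions plus an overall rating.
    overall = np.array([5.0, 4.0, 3.0, 2.0, 4.0, 3.0])
    dim_a   = np.array([4.5, 4.0, 3.5, 2.5, 4.5, 2.5])
    dim_b   = np.array([5.0, 3.5, 3.0, 2.0, 4.0, 3.5])

    print(np.corrcoef(dim_a, dim_b)[0, 1])         # raw dimension intercorrelation
    print(partialled_corr(dim_a, dim_b, overall))  # after partialling out overall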

16.
Assessed the cognitive complexity of 96 undergraduates with the group version of the Role Construct Repertory (REP) Test, a factor analysis of REP test data, and a sorting task. Performance ratings for 3 of the Ss' instructors were obtained with behaviorally anchored rating scales, mixed standard rating scales, graphic rating scales, and simple "alternate" 3-point rating scales. No differences in leniency, halo, or range restriction emerged either as a function of raters' cognitive complexity or of a Cognitive Complexity × Scale Format interaction. Raters' confidence in their ratings was not associated with either cognitive complexity or rating scale format. It is concluded that researchers of performance ratings should exercise restraint before confidently conferring moderator variable status on a cognitive complexity construct. (25 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Tested C. E. Schneier's (see record 1978-11450-001) cognitive compatibility theory. In Exps I and II, 100 undergraduates rated college instructors and professor vignettes, respectively. Results show that rater cognitive complexity was unrelated to rating accuracy, halo error, acceptability of rating format, or confidence in ratings. In Exp III, 31 police sergeants rated patrol officers, and the results show that halo error and acceptability of formats were unrelated to cognitive complexity. In Exp IV, 95 undergraduates' ratings of managerial performance and instructor effectiveness showed no support for the cognitive compatibility theory. However, the data showed that raters' ability to generate dimensions was significantly related to halo error in instructors' ratings. Implications for cognitive compatibility theory and future research with the method of generating performance dimensions are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
This study quantified the effects of 5 factors postulated to influence performance ratings: the ratee's general level of performance, the ratee's performance on a specific dimension, the rater's idiosyncratic rating tendencies, the rater's organizational perspective, and random measurement error. Two large data sets, consisting of managers (n = 2,350 and n = 2,142) who received developmental ratings on 3 performance dimensions from 7 raters (2 bosses, 2 peers, 2 subordinates, and self), were used. Results indicated that idiosyncratic rater effects (62% and 53%) accounted for over half of the rating variance in both data sets. The combined effects of general and dimensional ratee performance (21% and 25%) were less than half the size of the idiosyncratic rater effects. Small perspective-related effects were found in boss and subordinate ratings but not in peer ratings. Average random error effects in the 2 data sets were 11% and 18%. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
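The decomposition quantified in this abstract can be written as a simple variance-components model. The expression below is a generic rendering of the five postulated effects (the notation is illustrative, not the authors' own):

\[
  \sigma^2_{\text{rating}} \;=\; \sigma^2_{\text{general ratee}} \;+\; \sigma^2_{\text{ratee}\times\text{dimension}} \;+\; \sigma^2_{\text{idiosyncratic rater}} \;+\; \sigma^2_{\text{perspective}} \;+\; \sigma^2_{\text{error}}
\]

With the reported estimates, the idiosyncratic rater term alone accounts for 62% and 53% of total rating variance in the two data sets, the two ratee performance terms together for 21% and 25%, and random error for 11% and 18%.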

19.
This study investigated within-source interrater reliability of supervisor, peer, and subordinate feedback ratings made for managerial development. Raters provided 360-degree feedback ratings on a sample of 153 managers. Using generalizability theory, results indicated that little within-source agreement exists; a large portion of the error variance is attributable to the combined rater main effect and Rater × Ratee effect; more raters are needed than currently used to reach acceptable levels of reliability; supervisors are the most reliable with trivial differences between peers and subordinates when the numbers of raters and items are held constant; and peers are the most reliable, followed by subordinates, followed by supervisors, under conditions commonly encountered in practice. Implications for the validity, design, and maintenance of 360-degree feedback systems are discussed along with directions for future research in this area. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
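The "more raters are needed" finding follows from how reliability grows as the ratings of k interchangeable raters are averaged. A minimal sketch under the standard Spearman–Brown/generalizability logic is below; the single-rater reliability value and target are illustrative, not taken from the study:

    def reliability_of_mean(single_rater_rel, k):
        """Spearman-Brown projection: reliability of the mean of k raters' ratings,
        treating raters as interchangeable."""
        return k * single_rater_rel / (1 + (k - 1) * single_rater_rel)

    def raters_needed(single_rater_rel, target=0.70):
        """Smallest number of raters whose averaged ratings reach the target (target < 1)."""
        if single_rater_rel <= 0.0:
            raise ValueError("single-rater reliability must be positive")
        k = 1
        while reliability_of_mean(single_rater_rel, k) < target:
            k += 1
        return k

    # Illustrative: if a single peer rating has reliability .30, about 6 peers
    # are needed for the averaged rating to reach .70.
    print(raters_needed(0.30, 0.70))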

20.
Frame-of-reference (FOR) rater training is one technique used to impart a theory of work performance to raters. In this study, the authors explored how raters' implicit performance theories may differ from a normative performance theory taught during training. The authors examined how raters' level and type of idiosyncrasy predicts their rating accuracy and found that rater idiosyncrasy negatively predicts rating accuracy. Moreover, although FOR training may improve rating accuracy even for trainees with lower performance theory idiosyncrasy, it may be more effective in improving errors of omission than commission. The discussion focuses on the roles of idiosyncrasy in FOR training and the implications of this research for future FOR research and practice. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
