Similar Literature
20 similar records retrieved.
1.
The results of numerous social perception studies have led researchers to conclude that raters' implicit cognitive schemata regarding trait and behavior covariance may play a crucial role in the rating judgment process. W. H. Cooper (see PA, Vol 66:9176 and 9262) proposed one such cognitive schema, semantic conceptual similarity, as a key source of halo error in job performance ratings but was unable to reproduce the results of previous social perception research. The present study, with 186 undergraduates, employed baseball players as target ratees to examine the effects of job and ratee knowledge on the relations of raters' conceptual similarity schemata with rating and true score covariance. The results are consistent with the systematic distortion hypothesis presented by R. A. Shweder (see record 1976-07240-001). The association between conceptual similarity and rating covariance was significantly greater when Ss lacked sufficient job and/or ratee knowledge. Moreover, the degree of halo was also significantly greater when Ss lacked relevant job and ratee knowledge. The advantages of using objective measures of actual performance as true score estimates in the study of rater cognitive processes are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Performance ratings of 294 clerical workers in a validation study of clerical ability tests indicated that halo, measured as the standard deviation across dimensions, consistently moderated the relationships between dimension ratings and scores on valid tests. Greater halo resulted in higher validity coefficients, and also was related to higher performance ratings. In an additional analysis, statistically controlling for the effect of the overall rating on dimension ratings resulted in poorer validation results, with dimension ratings rarely adding additional variance to that of overall ratings. The results of this study contradict the traditionally held view of halo as a rating "error," and are consistent with recent laboratory studies that have found accuracy and halo positively related. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
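The halo index described above is simple enough to illustrate directly. The following is a minimal sketch with made-up ratings (not the study's data), assuming halo is operationalized as the standard deviation of a rater's dimension ratings for each ratee, so a smaller value indicates more halo.

    import numpy as np

    # Hypothetical dimension ratings: rows are ratees, columns are performance dimensions.
    ratings = np.array([
        [5, 5, 4, 5],   # nearly flat profile -> small SD -> more halo
        [5, 2, 4, 1],   # differentiated profile -> large SD -> less halo
    ])

    # Halo operationalized as the standard deviation across dimensions for each ratee.
    halo_index = ratings.std(axis=1, ddof=1)
    print(halo_index.round(2))   # [0.5  1.83]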

3.
The present two studies integrate and extend the literatures on dynamic performance, performance attributions, and rating purpose, making several important contributions. First, examining attributions of dynamic performance, Study 1 predicted that performance mean and trend would affect judged ratee ability and effort and that performance variation would affect locus of causality; both predictions were supported by the results. Second, investigating the interaction between dynamic performance and rating purpose, Study 2 predicted that performance mean would have a stronger impact on administrative than on developmental ratings, whereas performance trend and variation would have a stronger impact on developmental than on administrative ratings; again, both predictions were borne out by the results. Third, both studies found that performance trend interacted with performance mean and variability to predict overall ratings. Fourth, both studies replicated main effects of dynamic performance characteristics on ratings in a different culture and, in Study 2, a sample of more experienced managers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
A note on the statistical correction of halo error.
Attempts to eliminate halo error from rating scales by statistical correction have assumed halo to be a systematic error associated with a ratee–rater pair that adds performance-irrelevant variance to ratings. Furthermore, overall performance ratings have been assumed to reflect this bias. Consideration of the source of halo error, however, raises the possibility that the cognitive processes resulting in halo also mediate expectations of and interactions with employees, indirectly influencing true performance and ability via instruction, feedback, and reinforcement. If so, it would not be possible to correct for halo error using overall performance ratings. (26 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The study examines the effects of a wide array of rater–ratee relationship and ratee-characteristic variables on supervisor and peer job-performance ratings. Interpersonal ratings, job performance ratings, and ratee scores on ability, job knowledge, and technical proficiency were available for 493 to 631 first-tour US Army soldiers. Results of the supervisor and peer rating path models showed that ratee ability, knowledge, and proficiency accounted for 13% of the variance in supervisor performance ratings and 7% in the peer ratings. Among the interpersonal variables, ratee dependability had the strongest effect for both models. Ratee friendliness and likability had little effect on the performance ratings. Inclusion of the interpersonal factors increased the variance accounted for in the ratings to 28% and 19%, respectively. Discussion focuses on the relative contribution of ratee technical and contextual performance to raters' judgments. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In several social perception studies investigators have concluded that raters' semantic conceptual similarity schemata serve to guide and constrain dimensional covariance in the rating judgment process. This effect has been hypothesized to be most likely when ratings are memory based and raters lack relevant job or ratee information. Recent research that has explored the effects of conceptual similarity schemata on performance ratings and halo error has provided some limited support for this systematic distortion hypothesis (SDH). However, these studies are limited because researchers have examined this phenomenon using group-level analyses, whereas the theory references individual-level judgment processes. The present study investigated the phenomenon at the individual level. The effects of varying levels of rater job knowledge (high, medium, and low) and familiarity with ratees (high and low) were examined for conceptual similarity–rating and rating–true-score covariation relations, for measures of halo, and for rating accuracy components. Results provided support for the SDH, but indicated a boundary condition for its operation and revealed some surprising findings for individual-level rater halo. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Investigated the effects of frame-of-reference (FOR) training on various indexes of distance and correlational accuracy under alternative time delays. 150 Ss were assigned randomly to either FOR-training or control (i.e., minimal) training conditions, with 1 of 3 time delays: (1) no delay between training, observation, and rating; (2) ratings performed 2 days following training and ratee observations; or (3) ratee observations and ratings completed 2 days following training. Hypotheses were proposed predicting specific relationships between accuracy, recall memory, and learning, depending on the delay period. Overall, results support the categorization perspective on FOR-training effectiveness; however, different results were obtained depending on the type of accuracy index and time delay. The implications of these findings are discussed in terms of how they relate to the conceptual distinction between distance and correlational accuracy and to the role of on-line, memory-based, and inference-memory-based processing in the ratings of FOR trained raters. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
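To make the distinction between distance and correlational accuracy concrete, here is a minimal sketch with invented numbers; the specific indexes used in the study may differ (e.g., Cronbach-style components), so treat this only as an illustration of the two families of measures.

    import numpy as np

    # Hypothetical ratings and true scores for one rater: rows are ratees, columns are dimensions.
    ratings = np.array([[5., 4., 3.], [4., 4., 2.], [3., 2., 2.]])
    true_scores = np.array([[5., 3., 3.], [3., 4., 1.], [2., 2., 3.]])

    # Distance accuracy: how far ratings fall from true scores (smaller = more accurate).
    distance_accuracy = np.sqrt(np.mean((ratings - true_scores) ** 2))

    # Correlational accuracy: how well the pattern of ratings tracks the pattern of true scores.
    correlational_accuracy = np.corrcoef(ratings.ravel(), true_scores.ravel())[0, 1]

    print(round(distance_accuracy, 2), round(correlational_accuracy, 2))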

8.
Conducted a meta-analysis of how the race of the ratee affects performance ratings by examining 74 studies with a total sample of 17,159 ratees for White raters and 14 studies with 2,428 ratees for Black raters. The 5 moderators examined were the study setting, rater training, type of rating, rating purpose, and the racial composition of the work group. Results show that the corrected mean correlations between ratee race and ratings for White and Black raters were .183 and –.220, with 95% confidence intervals that excluded zero for both rater groups. Substantial moderating effects were found for study setting and for the saliency of Blacks in the sample. Race effects were more likely in field settings when Blacks composed a small percentage of the work force. Both Black and White raters gave significantly higher ratings to members of their own race. It is suggested that future research should focus on understanding the process underlying race effects. References for the studies included are appended. (47 ref) (PsycINFO Database Record (c) 2011 APA, all rights reserved)
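The corrected mean correlations above come from standard meta-analytic aggregation. As a bare-bones sketch (hypothetical study results, and without the artifact corrections applied in the actual meta-analysis), the sample-size-weighted mean correlation is computed like this:

    import numpy as np

    # Hypothetical per-study results: observed ratee race-rating correlation and sample size.
    rs = np.array([0.25, 0.10, 0.20])
    ns = np.array([120, 300, 80])

    # Sample-size-weighted mean correlation, the usual starting point before corrections.
    r_bar = np.sum(ns * rs) / np.sum(ns)
    print(round(float(r_bar), 3))   # 0.152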

9.
This study quantified the effects of 5 factors postulated to influence performance ratings: the ratee's general level of performance, the ratee's performance on a specific dimension, the rater's idiosyncratic rating tendencies, the rater's organizational perspective, and random measurement error. Two large data sets, consisting of managers (n = 2,350 and n = 2,142) who received developmental ratings on 3 performance dimensions from 7 raters (2 bosses, 2 peers, 2 subordinates, and self), were used. Results indicated that idiosyncratic rater effects (62% and 53%) accounted for over half of the rating variance in both data sets. The combined effects of general and dimensional ratee performance (21% and 25%) were less than half the size of the idiosyncratic rater effects. Small perspective-related effects were found in boss and subordinate ratings but not in peer ratings. Average random error effects in the 2 data sets were 11% and 18%. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
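The percentages reported above are shares of total rating variance attributable to each source. A toy illustration of how such shares are formed from variance-component estimates (the component values below are invented, not those of either data set):

    # Hypothetical variance-component estimates for ratings, in squared rating-scale units.
    variance_components = {
        "idiosyncratic rater effects": 0.93,
        "general ratee performance": 0.22,
        "dimension-specific ratee performance": 0.10,
        "rater perspective (boss/peer/subordinate)": 0.08,
        "random measurement error": 0.17,
    }

    total = sum(variance_components.values())
    for source, var in variance_components.items():
        print(f"{source}: {var / total:.0%} of rating variance")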

10.
Examined the effects of knowledge of a ratee's prior performance on evaluations of present performance. Subjects received knowledge of either good or poor prior performances and then viewed and rated a videotape depicting average performance. In Study 1, some subjects received knowledge of the ratee's prior performance by directly viewing videotapes of good or poor ratee behavior, whereas others only reviewed written performance ratings completed by those subjects who had actually viewed the ratee. A contrast effect occurred when knowledge of prior performance was obtained by observing ratee behavior, but an assimilation effect occurred when knowledge of prior performance was obtained by reviewing performance ratings. In Study 2, subjects viewed videotapes of good or poor performances prior to viewing an average performance by the same ratee. However, the separate ratee performances were observed over a more realistic time interval than that used in Study 1 (3 weeks vs. 1 h). No significant contrast effects were observed. In Study 3, subjects reviewed written ratings of prior performances before viewing an average videotape. Subjects who reviewed extremely good (or poor) prior performance ratings provided more extreme ratings of the "average" performance than did subjects who reviewed less extreme ratings. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
C. E. Lance et al. (see record 1994-17452-001) tested 3 different causal models of halo rater error (general impression [GI], salient dimension [SD], and inadequate discrimination [ID] models) and found that the GI model better accounted for observed halo rating error than did the SD or ID models. It was also suggested that the type of halo rater error that occurs might vary as a function of rating context. The purpose of this study was to determine whether rating contexts could be manipulated that favored the operation of each of these 3 halo-error models. Results indicate, however, that GI halo error occurred in spite of experimental conditions designed specifically to induce other forms of halo rater error. This suggests that halo rater error is a unitary phenomenon that should be defined as the influence of a rater's general impression on ratings of specific ratee qualities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The purpose of this study was to test competing theories regarding the relationship between true halo (actual dimensional correlations) and halo rater error (effects of raters' general impressions on specific ratee qualities) at both the individual and group level of analysis. Consistent with the prevailing general impression model of halo rater error, results at both the individual and group level analyses indicated a null (vs. positive or negative) true halo–halo rater error relationship. Results support the ideas that (a) the influence of raters' general impressions is homogeneous across rating dimensions despite wide variability in levels of true halo; (b) in assigning dimensional ratings, raters rely both on recalled observations of actual ratee behaviors and on general impressions of ratees; and (c) these 2 processes occur independently of one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Studied the effects of 2 levels of rater and ratee experience and education, as well as their possible interaction, on behaviorally anchored rating scales. A total of 370 male police personnel participated, of whom 71 were sergeants and 299 were police officers. Eight dependent variables, each a 9-point behaviorally anchored rating scale describing 1 dimension of police officer performance, were subjected to fixed-effects, unweighted-means analyses of variance. Results indicate that raters' experience and raters' education accounted for most of the statistically significant effects. Likewise, the Raters' Experience × Education and Raters' Education × Ratees' Education interactions were statistically significant. All significant effects were weak, however, as indicated by overlaps of 82–92% between distributions, and eta-squared values for all significant F ratios of .01–.03. Hence, neither rater nor ratee characteristics exerted any practically significant effects on observed behaviorally anchored ratings. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
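For readers unfamiliar with the effect-size metric used here: eta-squared is the proportion of total variance attributable to an effect, so the reported values of .01–.03 mean each significant effect explained only 1–3% of rating variance. A minimal computation with hypothetical sums of squares:

    # Eta-squared for one ANOVA effect: the share of total variance the effect explains.
    ss_effect = 4.5    # hypothetical sum of squares for the effect
    ss_total = 300.0   # hypothetical total sum of squares
    eta_squared = ss_effect / ss_total
    print(round(eta_squared, 3))   # 0.015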

14.
We examined methodological and theoretical issues related to accuracy measures used as criteria in performance-rating research. First, we argued that existing operational definitions of accuracy are not all based on a common accuracy definition; we report data that show generally weak relations among different accuracy operational definitions. Second, different methods of true score development are also examined, and both methodological and theoretical limitations are explored. Given the difficulty of obtaining true scores, criteria are discussed for examining the suitability of expert ratings as surrogate true score measures. Last, the usefulness of accuracy measures in performance-rating research is examined to highlight situations in which accuracy measures might be desirable criterion measures in rating research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Job performance measures consisting of 35 objective indices and ratings on 8 behaviorally anchored rating scales (BARS) were available for 795 nonminority (mean age, 29.8 yrs) and 147 minority (mean age, 28.2 yrs) police officers. Eight of the 35 objective measures, plus age and job tenure, were used as predictors of the sum of the 8 BARS. Identical predictor sets validly forecast supervisory ratings in both minority and nonminority groups whether or not age and tenure were included. Unit weights were inferior to regression weights in both groups. It is concluded that supervisory ratings are linearly predictable from objective performance indices for both minority and nonminority subordinates, a finding that comports with civil rights legislation and recent US Supreme Court decisions. (19 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
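The comparison between unit weights and regression weights can be sketched as follows. The data below are simulated, not the police sample, and the point is only to show the two weighting schemes applied to the same predictors and criterion.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated objective performance indices (columns) and a stand-in criterion
    # playing the role of the summed BARS ratings.
    X = rng.normal(size=(200, 8))
    y = X @ rng.uniform(0.1, 0.6, size=8) + rng.normal(scale=2.0, size=200)

    # Regression weights: ordinary least squares on an intercept plus the 8 predictors.
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    r_regression = np.corrcoef(design @ beta, y)[0, 1]

    # Unit weights: standardize each predictor and simply sum them.
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    r_unit = np.corrcoef(z.sum(axis=1), y)[0, 1]

    print(round(r_regression, 2), round(r_unit, 2))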

16.
Reports an error in the original article by Wayne F. Cascio and Enzo R. Valenzi (Journal of Applied Psychology, 1978, Vol. 63, No. 1, pp. 22-28). In the last sentence of the Results section of the article, the values are incorrect. The corrected values for line 33 of page 26 are provided. (The following abstract of this article originally appeared in record 1979-24955-001.) Job performance measures consisting of 35 objective indices and ratings on 8 behaviorally anchored rating scales (BARS) were available for 795 nonminority (mean age, 29.8 yrs) and 147 minority (mean age, 28.2 yrs) police officers. Eight of the 35 objective measures, plus age and job tenure, were used as predictors of the sum of the 8 BARS. Identical predictor sets validly forecast supervisory ratings in both minority and nonminority groups whether or not age and tenure were included. Unit weights were inferior to regression weights in both groups. It is concluded that supervisory ratings are linearly predictable from objective performance indices for both minority and nonminority subordinates, a finding that comports with civil rights legislation and recent US Supreme Court decisions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Tested whether a possible source of difficulty in materially reducing illusory halo in job performance ratings is raters' beliefs that rating categories are conceptually similar and hence covary, thereby inflating observed correlation matrices. 11 graduate business administration students evaluated the conceptual similarities among job dimensions within 3 jobs. The previously observed interdimension correlation matrices were successfully predicted by Ss' mean conceptual similarity scores. When the observed correlation matrix obtained by W. C. Borman (see record 1980-26801-001) was compared with the normative true score matrix, the conceptual similarity scores were found to be inferior to the normative true score matrix as predictors of the observed correlation matrix. It is suggested that conceptual similarities among job dimensions represent one potentially recalcitrant source of illusory halo in performance ratings, particularly when ratings are based on encoded observations that have decayed in memory. (24 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
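The analytic logic here is to treat each candidate matrix as a predictor of the observed interdimension correlation matrix by correlating their off-diagonal elements. A minimal sketch with invented 4 × 4 matrices (the study's actual dimensions and values are not reproduced here):

    import numpy as np

    # Hypothetical mean conceptual-similarity judgments among 4 job dimensions (symmetric).
    similarity = np.array([
        [1.0, 0.7, 0.4, 0.3],
        [0.7, 1.0, 0.5, 0.2],
        [0.4, 0.5, 1.0, 0.6],
        [0.3, 0.2, 0.6, 1.0],
    ])

    # Hypothetical observed interdimension rating correlations for the same 4 dimensions.
    observed_r = np.array([
        [1.0, 0.6, 0.3, 0.2],
        [0.6, 1.0, 0.4, 0.3],
        [0.3, 0.4, 1.0, 0.5],
        [0.2, 0.3, 0.5, 1.0],
    ])

    # Correlate the unique off-diagonal elements of the two matrices.
    idx = np.tril_indices(4, k=-1)
    print(round(np.corrcoef(similarity[idx], observed_r[idx])[0, 1], 2))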

18.
Reviews research that investigated the effects of nonperformance factors (i.e., gender and race) on a variety of organizational criteria, including performance evaluations. It is argued that previous findings are attributable to a research design that bears little resemblance to the performance appraisal process in real organizational contexts. 134 Black and 417 White male candidates for a police-department promotion were rated on a battery of attitude and behavior measures by 3 of the 14 Black and 18 White interviewers to examine the effects of 2 nonperformance factors (ratee and rater race) and an index of ratee past performance on performance ratings. Results of a higher-order MANOVA showed significant effects of ratee race, past performance, rater race, and a Ratee × Rater interaction. All of these sources of variance combined, however, accounted for no more than 4% of the total variance in performance ratings. Reasons for the low relationship between past performance and oral interview performance, which involve dissimilarity between rating dimensions and interview demand characteristics, are discussed. Thus, the applicability of results from past laboratory studies to performance evaluation in real organizational environments is questioned. (45 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Tested the hypothesis that subsequent performance levels would bias the recall and evaluations of a ratee's previous level of performance with 183 undergraduates, who rated 3 videotaped lectures in either immediate or delayed rating conditions. The 1st videotape depicted an average level of performance and was followed by either 2 good lectures or 2 poor lectures. A significant performance level × time of rating interaction was found, in which memory-based ratings were biased in the direction of subsequent performance (i.e., when there was a delay between observation and rating, Ss who had seen an average lecture followed by good lectures rated that average lecture more favorably than did Ss who had seen that same lecture followed by poor lectures). It is suggested that raters are biased in favor of recalling behaviors that are consistent with their general impression of a ratee and that subsequent performance may systematically alter the rater's recall of the ratee's previous behavior. (23 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Research examining the structure of multisource performance ratings has demonstrated that ratings are a direct function both of who is doing the rating (rating source) and of what is being rated (performance dimension). A separate line of research has focused on the extent to which performance ratings are equivalent across sources. To date, no research has examined the measurement equivalence of multisource ratings within the context of both dimension and rating source direct effects on ratings. We examine the impact of both performance dimension and rating source as well as the degree of measurement equivalence across sources. Results indicate that (a) the impact of the underlying performance dimension is the same across rating sources, (b) the impact of rating source is substantial and only slightly smaller than the impact of the underlying performance dimension, and (c) the impact of rating source differs substantially depending on the source. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

