Similar Documents
20 similar documents found (search time: 15 ms)
1.
Studied the effect of the magnitude of the mean predictor score on the validity coefficient, corrected for range restriction due to explicit selection, using data from 68,672 Navy recruits. The predictor was the Armed Forces Qualification Test (AFQT) and the criteria were 6 non-AFQT tests of the Armed Services Vocational Aptitude Battery: General Science, Coding Speed, Auto and Shop Information, Mathematics Knowledge, Mechanical Comprehension, and Electronics Information. It is concluded that (a) the validity coefficients were generally higher at higher predictor score ranges and (b) the validity, slope, and standard error of estimate should be viewed as an average rather than a constant value for all Ss in a population. (9 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
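The correction for range restriction due to explicit selection mentioned in this abstract is conventionally the Thorndike Case II formula. A minimal sketch, assuming direct selection on the predictor; the function name and the sample values below are illustrative, not taken from the study:

```python
from math import sqrt

def correct_range_restriction(r: float, sd_restricted: float, sd_unrestricted: float) -> float:
    """Thorndike Case II correction for direct range restriction:
    r is the correlation observed in the selected (restricted) group,
    and u = SD(unrestricted) / SD(restricted) indexes the restriction."""
    u = sd_unrestricted / sd_restricted
    return r * u / sqrt(1.0 + r * r * (u * u - 1.0))

# With no restriction (u = 1) the coefficient is unchanged;
# with u > 1 the corrected validity exceeds the observed one.
print(correct_range_restriction(0.30, 5.0, 10.0))
```

Note the abstract's point (b): because the correction assumes a single slope and error variance across the score range, the corrected value is best read as an average over the population rather than a constant.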

2.
This study compared economic utility estimates that were based on noninteractive, interactive, independent multiplicative, and Taylor Series Approximation (TSA) 1 and 2 validity generalization results for clerical selection procedures at a large international manufacturing company. On the basis of estimates of the mean true validity and lower bound 90% credibility value, magnitude and percentage differences in resulting utility estimates across validity generalization procedures were relatively small for almost all comparisons. Regardless of the specific validity generalization parameter estimate used in estimating a utility value, the change in economic utility, going from the organization's current selection procedure (i.e., a verbal ability test) to an alternative procedure, was sizable in most cases. These results clearly demonstrate the practical similarity in utility terms of alternative validity generalization procedure results as well as the sizable economic value of minimum-level generalized validity coefficients. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
With data originally obtained by the 3rd author and colleagues (see record 1980-31533-001), comparative results are presented for the noninteractive, interactive, independent multiplicative, and Taylor Series Approximations 1 and 2 validity generalization procedures when there is nonzero sampling error. Findings indicate that the 5 validity generalization procedures yielded similar estimates of the fully corrected mean and variance of true validity coefficients. It is concluded that the 5 validity generalization procedures will lead to the same general conclusions regarding the effectiveness of a predictor measure. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Compared the scores of 174 Navy enlisted men on K. Clark's Navy Vocational Interest Inventory with the scores they obtained as civilians 6 yr later and with their subsequent civilian occupations. Results show reliability and validity that parallel those reported for the SVIB. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Administered the Cornell Medical Index (CMI) to 630 Navy psychiatric patients and 454 healthy controls. Patient and control samples were split into 2 groups for cross-validation purposes, and 2 methods, regression analysis and a new item selection technique called SEQUIN, were applied to the problem of selecting the most discriminating set of CMI items. The percentages correctly classified "sick" or "well" when results from Sample 1 were used to predict Sample 2 and vice versa were 82 and 85% by the regression method and 86 and 86% by the SEQUIN method. 7 items, perhaps representing general attributes defining mental illness in the Navy culture, contributed significantly to the predictive scales regardless of particular item selection method or sample. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
A common belief among researchers is that vocational interests have limited value for personnel selection. However, no comprehensive quantitative summaries of interests validity research have been conducted to substantiate claims for or against the use of interests. To help address this gap, we conducted a meta-analysis of relations between interests and employee performance and turnover using data from 74 studies and 141 independent samples. Overall validity estimates (corrected for measurement error in the criterion but not for range restriction) for single interest scales were .14 for job performance, .26 for training performance, –.19 for turnover intentions, and –.15 for actual turnover. Several factors appeared to moderate interest–criterion relations. For example, validity estimates were larger when interests were theoretically relevant to the work performed in the target job. The type of interest scale also moderated validity, such that corrected validities were larger for scales designed to assess interests relevant to a particular job or vocation (e.g., .23 for job performance) than for scales designed to assess a single, job-relevant realistic, investigative, artistic, social, enterprising, or conventional (i.e., RIASEC) interest (.10) or a basic interest (.11). Finally, validity estimates were largest when studies used multiple interests for prediction, either by using a single job or vocation focused scale (which tend to tap multiple interests) or by using a regression-weighted composite of several RIASEC or basic interest scales. Overall, the results suggest that vocational interests may hold more promise for predicting employee performance and turnover than researchers may have thought. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
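The correction this meta-analysis applies, disattenuation for measurement error in the criterion only, is the standard attenuation formula r_corrected = r_observed / sqrt(r_yy). A minimal sketch, with an illustrative reliability value not taken from the study:

```python
from math import sqrt

def correct_for_criterion_unreliability(r_obs: float, r_yy: float) -> float:
    """Disattenuate an observed validity for unreliability in the
    criterion only (the predictor is left uncorrected, as in
    operational selection)."""
    return r_obs / sqrt(r_yy)

# An observed validity of .14 against a criterion with reliability .49
# corresponds to an operational validity of .20.
print(correct_for_criterion_unreliability(0.14, 0.49))
```

Because range restriction is not corrected here, these estimates are conservative relative to fully corrected true validities.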

7.
The overall validity of a career-intention question for predicting Navy reenlistment was reanalyzed for subgroups selected by another logically related test serving as a measure of predictability. On the assumption that career-intention responses of better informed recruits would be relatively more valid, 21 samples, comprising 13,448 enlisted men, were each trichotomized into High, Middle, and Low subgroups on Naval Knowledge Test (NKT) scores. The validity of the career question for the High group was equal to or larger than the validity for the total group in 19 of the 21 samples. The results generally confirmed that test validity for total groups may be improved for subgroups identified as more predictable by another relevant measure. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Compared the relative accuracy of 2 methods of estimating employment test validity: expert judgment and small sample criterion-related validation studies. The study was based on US Navy data from samples of over 3,000 for each of 9 jobs, with validity results on 6 tests for each job. 20 experienced psychologists estimated the observed validity for each of the 54 test–job combinations. Both the random and systematic error in the expert judgments were evaluated. Psychologists typically underestimated the validity by a small amount (an average systematic error of .019). On the average, to equal the accuracy of a single judge, the sample size of a criterion-related validation study would have to be 92. To match the accuracy of an average across 4 judges, the sample size must be 326. The sample size must be 1,164 to match the accuracy of the pooled judgment of 30 judges. Results indicate that, given highly trained and experienced judges, expert judgment may provide more accurate estimates of validity for cognitive tests than do local criterion-related validation studies. (10 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
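The sample sizes quoted (92, 326, 1,164) come from the study's own error analysis of the judges, but the underlying logic, equating a judge's error to the sampling error of an observed r, can be sketched with the textbook large-sample approximation SE(r) ≈ (1 − r²)/√(N − 1). The function name and numbers below are illustrative, not the study's:

```python
from math import ceil

def required_sample_size(rho: float, target_se: float) -> int:
    """Smallest N for which the approximate sampling standard error of r,
    (1 - rho**2) / sqrt(N - 1), does not exceed target_se."""
    return ceil(((1.0 - rho * rho) / target_se) ** 2) + 1

# To pin down a near-zero validity to within a standard error of .10,
# roughly 101 cases are needed; halving the target error roughly
# quadruples the required N.
print(required_sample_size(0.0, 0.10))
print(required_sample_size(0.0, 0.05))
```

This quadratic cost of precision is why pooled expert judgment can outperform a single small local study.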

9.
This study assessed the relative accuracy of 3 techniques (local validity studies, meta-analysis, and Bayesian analysis) for estimating test validity, incremental validity, and adverse impact in the local selection context. Bayes-analysis involves combining a local study with nonlocal (meta-analytic) validity data. Using tests of cognitive ability and personality (conscientiousness) as predictors, an empirically driven selection scenario illustrates conditions in which each of the 3 estimation techniques performs best. General recommendations are offered for how to estimate local parameters, based on true population variability and the number of studies in the meta-analytic prior. Benefits of empirical Bayesian analysis for personnel selection are demonstrated, and equations are derived to help guide the choice of a local validity technique (i.e., meta-analysis vs. local study vs. Bayes-analysis). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
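The core of the Bayes-analysis idea, combining a meta-analytic prior with a local study, can be sketched as a normal-normal precision-weighted average. This is a simplified illustration of the general approach, not the study's derived equations; the function name and sampling-variance approximation are assumptions:

```python
def bayes_validity(prior_mean: float, prior_var: float,
                   local_r: float, local_n: int) -> tuple[float, float]:
    """Precision-weighted combination of a meta-analytic prior
    (prior_mean, prior_var) with a local validity study of size local_n.
    The local sampling variance uses the large-sample approximation
    (1 - r^2)^2 / (N - 1)."""
    local_var = ((1.0 - local_r ** 2) ** 2) / (local_n - 1)
    w_prior, w_local = 1.0 / prior_var, 1.0 / local_var
    post_mean = (w_prior * prior_mean + w_local * local_r) / (w_prior + w_local)
    post_var = 1.0 / (w_prior + w_local)
    return post_mean, post_var

# A small local study (N = 101, r = .30) shrinks toward a strong
# meta-analytic prior (mean .50, variance .01).
print(bayes_validity(0.50, 0.01, 0.30, 101))
```

The posterior mean always falls between the prior and the local estimate, and its variance is smaller than either input's, which is why the combined estimate can beat both a local study and a bare meta-analysis under the right conditions.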

10.
Presents a stochastic judgment model (SJM) as a framework for addressing a wide range of issues in statement verification and probability judgment. The SJM distinguishes between covert confidence in the truth of a proposition and the selection of an overt response. A series of experiments demonstrated the model's validity and yielded new results: Binary true–false responses were biased toward true relative to underlying judgment. Underlying judgment was also biased in that direction. Also, in a domain about which Ss had some knowledge, they discriminated true and false statements better when they compared complementary pairs before judging individual statements than when they performed those tasks in the opposite order. The results are interpreted in terms of the SJM and are discussed with respect to implications for theories of statement verification and for research on the accuracy of probability judgments. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Although vocational interests have a long history in vocational psychology, they have received extremely limited attention within the recent personnel selection literature. We reconsider some widely held beliefs concerning the (low) validity of interests for predicting criteria important to selection researchers, and we review theory and empirical evidence that challenge such beliefs. We then describe the development and validation of an interests-based selection measure. Results of a large validation study (N = 418) reveal that interests predicted a diverse set of criteria, including measures of job knowledge, job performance, and continuance intentions, with corrected, cross-validated Rs that ranged from .25 to .46 across the criteria (mean R = .31). Interests also provided incremental validity beyond measures of general cognitive aptitude and facets of the Big Five personality dimensions in relation to each criterion. Furthermore, with a couple of exceptions, the interest scales were associated with small to medium subgroup differences, which in most cases favored women and racial minorities. Taken as a whole, these results appear to call into question the prevailing thought that vocational interests have limited usefulness for selection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
In the absence of a well-defined technology for the construction of criterion-referenced tests, a methodology was devised for use in developing and evaluating a diagnostic test keyed to 14 modules of individualized, US Navy shipboard instruction. The test was constructed in 2 phases. In the 1st phase, preinstruction and postinstruction groups each consisted of 100 Navy boiler technicians; in the 2nd phase, each group consisted of 75 Navy boiler technicians. In both phases, 25 members of each instruction group were chosen randomly to form cross-validation samples. The main construction procedures included (a) writing and refining an item pool, (b) selecting items that best discriminated between instruction groups, (c) determining cutoff scores, (d) validating items on cross-validation samples, and (e) estimating test–retest reliability. High face validity was achieved by using materials that were encountered on the job and by having job experts write the items. In the final construction phase, the amount of agreement between actual instruction-group membership of the cross-validation sample and that diagnosed by test scores ranged from 68 to 92%. Also, the discrimination ability of refined items improved significantly. Across a test and retest during a tryout phase, the agreement in diagnostic decisions ranged from 71 to 96%. (10 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Research investigating the validity of personality measures has established these measures as useful selection tools. However, personality measures are vulnerable to response distortion, leading to employer concerns about the influence of applicant faking, with specific concerns about the influence of social desirability. A traditional method used to circumvent this is the application of a correction based on a social desirability scale score. This study sought to evaluate whether such corrections are effective tools for removing the influence of intentional distortion. A within-subjects design facilitated comparisons between honest, faked, and corrected scores. The goal was to evaluate whether a social desirability correction allows one to approximate an individual's honest score. The results suggest that a social desirability correction is ineffective and fails to produce a corrected score that approximates an honest score. Results are interpreted with respect to applicant comparison and construct validity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Analyzed the findings of over 700 criterion-related validity studies concerning (a) the relationship between the magnitude of the standard deviation (SD) of the predictor and the magnitude of the predictive validity, (b) the effect of corrections for range restriction, assuming explicit selection was based solely on the single predictor, and (c) the effect of corrections for range restriction, assuming selection was based on an unknown 3rd variable that had plausible correlations with the predictor and the criterion. As expected, a strong positive relationship was found in (a). Assumption of explicit selection, as in (b), reduced but did not eliminate the positive relationship between the SD and the corrected predictive validity. This relationship was reduced by corrections, as in (c). It is concluded that the usual correction for range restriction is better than the uncorrected coefficient but is still apt to provide a conservative estimate. More frequent use of corrections is encouraged. (11 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Using a large database, this study examined 3 refinements of validity generalization procedures: (1) a more accurate procedure for correcting the residual standard deviation (SD) for range restriction to estimate SDp, (2) use of the mean observed validity (r̄) instead of individual study-observed rs in the formula for sampling error variance, and (3) removal of non-Pearson rs. The 1st procedure does not affect the amount of variance accounted for by artifacts. The addition of the 2nd and 3rd procedures increased the mean percentage of validity variance accounted for by artifacts from 70 to 82%, a 17% increase. The cumulative addition of all 3 procedures decreased the mean SDp estimate from .150 to .106, a 29% decrease. Six additional variance-producing artifacts were identified that could not be corrected for. In light of these, it was concluded that the obtained estimates of mean SDp and mean validity variance accounted for were consistent with the hypothesis that the true mean SDp value is close to zero. These findings provide further evidence against the situational specificity hypothesis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
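Refinement (2) concerns the standard validity generalization sampling-error-variance formula, in which the mean observed validity r̄ replaces each study's own r. A minimal sketch of that formula and of the resulting residual (artifact-free) variance; function names and the example values are illustrative, not the study's data:

```python
def sampling_error_variance(r_bar: float, n_bar: float) -> float:
    """Expected variance in observed rs produced by sampling error alone,
    computed with the mean validity r_bar (refinement 2) rather than
    each study's observed r: (1 - r_bar^2)^2 / (N_bar - 1)."""
    return ((1.0 - r_bar ** 2) ** 2) / (n_bar - 1.0)

def residual_variance(var_observed: float, r_bar: float, n_bar: float) -> float:
    """Observed between-study variance of validities minus the portion
    attributable to sampling error, truncated at zero."""
    return max(0.0, var_observed - sampling_error_variance(r_bar, n_bar))

# If the observed variance barely exceeds what sampling error predicts,
# little true variance in validity remains.
print(residual_variance(0.012, 0.25, 101.0))
```

When the residual variance (and hence SDp, after the range restriction correction in refinement 1) approaches zero, validity appears to generalize rather than being situationally specific, which is the abstract's conclusion.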

16.
In selection research and practice, there have been many attempts to correct scores on noncognitive measures for applicants who may have faked their responses. A related approach with more impact would be identifying and removing faking applicants from consideration for employment entirely, replacing them with high-scoring alternatives. The current study demonstrates that under typical conditions found in selection, even this latter approach has minimal impact on mean performance levels. Results indicate about .1 SD change in mean performance across a range of typical correlations between a faking measure and the criterion. Where trait scores were corrected only for suspected faking, and applicants not removed or replaced, the minimal impact the authors found on mean performance was reduced even further. By comparison, the impact of selection ratio and test validity is much larger across a range of realistic levels of selection ratios and validities. If selection researchers are interested only in maximizing predicted performance or validity, the use of faking measures to correct scores or remove applicants from further employment consideration will produce minimal effects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Evaluated the utility of the assessment center by means of the Brogden-Cronbach-Gleser continuous variable utility model. After specifying several cost assumptions, 6 parameters were varied systematically: the validity and cost of the assessment center, the validity of the ordinary selection procedure, the selection ratio, the standard deviation of the criterion, and the number of assessment centers. The largest impacts on assessment center payoffs were exerted by the size of the criterion standard deviation, the selection ratio, and the difference in validity between the assessment center and the ordinary selection procedure. Even assessment centers with validities as low as .10 showed positive gains in utility over random selection. (39 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
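The Brogden-Cronbach-Gleser model puts a dollar value on the validity gain: ΔU = N_selected × Δr × SDy × z̄x − extra cost, where z̄x, the mean standardized predictor score of those selected top-down, equals the normal ordinate at the cutoff divided by the selection ratio. A minimal sketch with illustrative parameter values, not the study's cost assumptions:

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def bcg_utility_gain(n_selected: int, validity_gain: float, sd_y: float,
                     selection_ratio: float, extra_cost_per_applicant: float,
                     n_applicants: int) -> float:
    """Brogden-Cronbach-Gleser gain of one selection procedure over
    another, assuming top-down selection on a normally distributed
    predictor composite."""
    z_cut = NormalDist().inv_cdf(1.0 - selection_ratio)
    ordinate = exp(-z_cut ** 2 / 2.0) / sqrt(2.0 * pi)
    mean_std_score = ordinate / selection_ratio  # z-bar of those selected
    return (n_selected * validity_gain * sd_y * mean_std_score
            - extra_cost_per_applicant * n_applicants)

# Even a .10 validity gain pays off when SDy is large and the
# selection ratio is favorable.
print(bcg_utility_gain(50, 0.10, 10_000.0, 0.5, 0.0, 100))
```

This makes the abstract's sensitivity findings intuitive: criterion SD, selection ratio, and the validity difference all enter the gain term multiplicatively, while assessment cost only subtracts linearly.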

18.
The Objective-Analytic Personality Test Battery was administered to an offender and a nonoffender Navy enlisted sample to determine the validity of these objective test dimensions in differentiating delinquent from nondelinquent groups. 8 of the 18 objective test factors differentiated the samples at the .05 confidence level or higher. However, when correlations were computed against number of offenses within the offender sample, none of the factors was significantly related to the criterion. (15 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
The main objectives in this research were to introduce the concept of team role knowledge and to investigate its potential usefulness for team member selection. In Study 1, the authors developed a situational judgment test, called the Team Role Test, to measure knowledge of 10 roles relevant to the team context. The criterion-related validity of this measure was examined in 2 additional studies. In a sample of academic project teams (N = 93), team role knowledge predicted team member role performance (r = .34). Role knowledge also provided incremental validity beyond mental ability and the Big Five personality factors in the prediction of role performance. The results of Study 2 revealed that the predictive validity of role knowledge generalizes to team members in a work setting (N = 82, r = .30). The implications of the results for selection in team environments are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
A forced-choice form of an interest inventory was compared with an L-I-D (like-indifferent-dislike) form using the same items, for groups of Navy yeomen (clerical workers) and college students. Unit-weight and multiple-weight keys were developed for each inventory to differentiate yeomen from students. The forced-choice keys were superior to the L-I-D keys in separating groups in 7 of 10 comparisons. The average superiority of forced-choice keys was a 5.9% decrease in overlapping. There was little difference in validity shrinkage for the two kinds of items. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
