Similar Articles
20 similar records found.
1.
Most consultant selection models stress the importance of past performance. However, few studies to date have evaluated consultants' performance, which hampers the selection process as a whole. This paper establishes a systematic approach to developing a consultant performance evaluation model, applied to the performance of cost estimators in the Hong Kong construction industry. The nominal group technique is adopted to identify the decision criteria for the evaluation, and a reliability interval method (RIM) is developed to assess the importance weighting of each criterion. The RIM allows statistical analysis and fuzzy assessment of the weights. The results report the weight of each decision criterion and subcriterion in evaluating cost estimators' performance. Interestingly, the results show that the traditional functions of the cost estimator are the least valued by clients; proactive and professional advice is considered far more important. The study is highly relevant to industry practitioners assessing the performance of cost estimators, as well as to researchers further developing the consultant selection model.

2.
This paper applies White's (1982, Econometrica 50, 1-25) information matrix (IM) test for correct model specification to proportional hazards models of univariate and multivariate censored survival data. Several alternative estimators of the test statistic are presented and their size performance examined. White also suggested an estimator of the parameter covariance matrix that was robust to certain forms of model misspecification. This has been subsequently proposed by others (e.g., Royall, 1986, International Statistical Review 54, 221-226) and applied by Huster, Brookmeyer, and Self (1989, Biometrics 45, 145-156) as part of an independence working model (IWM) approach to multivariate censored survival data. We illustrate how the IM test can be used for both univariate data and as part of the IWM approach to multivariate data.

3.
Pathological gambling carries high economic and social costs. It is therefore important to develop questionnaires for the early screening of gamblers at risk of developing pathological gambling. This study analyses the factorial structure (exploratory factor analysis) of the Questionnaire d'excès aux loteries vidéo (QELVI, video lottery excess scale). A sample of 290 video lottery gamblers completed the QELVI (20 items). The QELVI's convergent validity with the obsessive passion subscale of Rousseau, Vallerand, Ratelle, Mageau, and Provencher (2002) and its temporal stability (1 month) were examined. The QELVI has a unifactorial structure that explains 71% of common variance, excellent internal consistency (Cronbach's alpha = .97), and temporal stability (intraclass correlation = .92). Pathological gamblers' mean QELVI score is higher than that of at-risk gamblers, and the mean score of at-risk gamblers is higher than that of nonproblem gamblers. The questionnaire's psychometric properties are discussed along with suggestions for future development. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Zimet's knowledge of AIDS scale was completed anonymously by 2,209 university students to assess whether a split-half approach in which the items in each half were matched for content would provide better estimates of reliability than other methods. Analysis indicates that the odd-even Spearman-Brown split-half reliability coefficient was lower than both the alpha coefficient and the content-based split-half coefficient. Cronbach's alpha was similar to the content-based Spearman-Brown reliability coefficient.
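The coefficients being compared here can be sketched on synthetic data; the 8-item response matrix below is simulated, not the Zimet scale, so the numbers are purely illustrative:

```python
import numpy as np

def spearman_brown_split_half(items, half_a, half_b):
    """Split-half reliability: correlate the two half-test scores,
    then step the correlation up to full length with Spearman-Brown."""
    a = items[:, half_a].sum(axis=1)
    b = items[:, half_b].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_persons, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Synthetic 8-item test: each item = common trait + independent noise.
trait = rng.normal(size=(500, 1))
scores = trait + rng.normal(size=(500, 8))

odd, even = [0, 2, 4, 6], [1, 3, 5, 7]
sb = spearman_brown_split_half(scores, odd, even)
alpha = cronbach_alpha(scores)
```

With parallel items like these the two coefficients nearly coincide; the divergence reported in the abstract arises when item content is heterogeneous, so the halves are not parallel.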

5.
It is the cost estimator's task to determine how the building design influences construction costs. Estimators must recognize the design conditions that affect construction costs and adjust the project's activities, resources, and resource productivity rates accordingly to create a cost estimate for a particular design. Current tools and methodologies help estimators to establish relationships between product and cost information to calculate quantities automatically. However, they do not provide a common vocabulary to represent estimators' rationale for relating product and cost information. This paper presents the ontology we formalized to represent estimators' rationale for relating features of building product models to construction activities and associated construction resources to calculate construction costs. A software prototype that implements the ontology enables estimators to generate activities that know which feature requires their execution, which resources are being used and why, and how much the activities' execution costs. Validation studies of the prototype system provide evidence that the ontology enabled estimators to generate and maintain construction cost estimates more completely, consistently, and expeditiously than traditional tools.

6.
Cervical cancer     
Several statistical methods are available for the analysis of responses with ordinal categories or continuous distributions for the respective visits in longitudinal studies. This paper discusses an alternative nonparametric strategy for studies with more than two groups through Mann-Whitney rank measures of association for all pairs of groups. The proposed method is based on U-statistic theory, and it applies a linear or linear logistic model to the Mann-Whitney estimators for the probabilities of better response for each group relative to each of the others. In addition, the ways of adjusting for covariables and managing stratification factors are explained. Analysis of parallel dose-response relationships for two treatments is illustrated for the proposed method with data from a multicenter study with repeated measurements. A nonparametric estimator for relative potency is provided from the method.
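The Mann-Whitney measure underlying this strategy — the probability that a randomly chosen subject in one group has the better response than one in another, with ties split — can be estimated directly; the ordinal responses below are invented for illustration:

```python
import numpy as np

def mann_whitney_theta(x, y):
    """Estimate theta = P(X > Y) + 0.5 * P(X = Y) over all pairs:
    the probability that a subject from group x has the better response."""
    x = np.asarray(x)[:, None]   # shape (n_x, 1)
    y = np.asarray(y)[None, :]   # shape (1, n_y)
    return (x > y).mean() + 0.5 * (x == y).mean()

high = [3, 4, 4, 5, 5]   # ordinal responses, higher = better
low  = [1, 2, 2, 3, 4]
theta = mann_whitney_theta(high, low)
```

A value of 0.5 would mean no group difference; the paper's models are fitted to a set of such pairwise estimators, with covariance derived from U-statistic theory.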

7.
In assessments of attitudes, personality, and psychopathology, unidimensional scale scores are commonly obtained from Likert scale items to make inferences about individuals' trait levels. This study approached the issue of how best to combine Likert scale items to estimate test scores from the practitioner's perspective: Does it really matter which method is used to estimate a trait? Analyses of 3 data sets indicated that commonly used methods could be classified into 2 groups: methods that explicitly take account of the ordered categorical item distributions (i.e., partial credit and graded response models of item response theory, factor analysis using an asymptotically distribution-free estimator) and methods that do not distinguish Likert-type items from continuously distributed items (i.e., total score, principal component analysis, maximum-likelihood factor analysis). Differences in trait estimates were found to be trivial within each group. Yet the results suggested that inferences about individuals' trait levels differ considerably between the 2 groups. One should therefore choose a method that explicitly takes account of item distributions in estimating unidimensional traits from ordered categorical response formats. Consequences of violating distributional assumptions were discussed.

8.
Estimation of the log-normal mean
The most commonly used estimator for a log-normal mean is the sample mean. In this paper, we show that this estimator can have a large mean square error, even for large samples. Then, we study three main alternative estimators: (i) a uniformly minimum variance unbiased (UMVU) estimator; (ii) a maximum likelihood (ML) estimator; (iii) a conditionally minimal mean square error (MSE) estimator. We find that the conditionally minimal MSE estimator has the smallest mean square error among the four estimators considered here, regardless of the sample size and the skewness of the log-normal population. However, for large samples (n ≥ 200), the UMVU estimator, the ML estimator, and the conditionally minimal MSE estimator have very similar mean square errors. Since the ML estimator is the easiest to compute among these three estimators, for large samples we recommend the use of the ML estimator. For small to moderate samples, we recommend the use of the conditionally minimal MSE estimator.
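The two easiest estimators discussed — the sample mean and the ML estimator exp(μ̂ + σ̂²/2), where μ̂ and σ̂² are the MLEs of the log-scale mean and variance — can be sketched on simulated log-normal data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Log-normal sample with marked skewness; the true mean is
# exp(mu + sigma^2 / 2) on the original scale.
mu, sigma, n = 0.0, 1.5, 300
true_mean = np.exp(mu + sigma**2 / 2)
x = rng.lognormal(mu, sigma, size=n)

# Estimator 1: the ordinary sample mean (heavily affected by the tail).
sample_mean = x.mean()

# Estimator 2: ML estimator built from the log-scale MLEs.
logs = np.log(x)
mu_hat = logs.mean()
s2_hat = logs.var(ddof=0)          # ML variance estimate uses 1/n
ml_mean = np.exp(mu_hat + s2_hat / 2)
```

Repeating this over many replications (not shown) is how one would reproduce the paper's mean-square-error comparison; a single draw only illustrates how each estimator is formed.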

9.
Discusses the concept of a lower bound sample estimator of population reliability. Although it is known that coefficient α is a lower bound to reliability under very general conditions, it is noted that this property applies only in the population. Based on earlier work dealing with the sampling distribution of α coefficients and with interval estimation, a coefficient ρ({l}) is presented that is a lower bound to population reliability in both the population and the sample. Several examples are presented to illustrate that when the sample lower bound coefficient is adequately high, useful inferences can be made about the population reliability, even in relatively small samples.

10.
Several sampling designs for assessing agreement between two binary classifications on each of n subjects lead to data arrayed in a four-fold table. Following Kraemer's (1979, Psychometrika 44, 461-472) approach, population models are described for binary data analogous to quantitative data models for a one-way random design, a two-way mixed design, and a two-way random design. For each of these models, parameters representing intraclass correlation are defined, and two estimators are proposed, one from constructing ANOVA-type tables for binary data, and one by the method of maximum likelihood. The maximum likelihood estimator of intraclass correlation for the two-way mixed design is the same as the phi coefficient (Chedzoy, 1985, in Encyclopedia of Statistical Sciences, Vol. 6, New York: Wiley). For moderately large samples, the ANOVA estimator for the two-way random design approximates Cohen's (1960, Educational and Psychological Measurement 20, 37-46) kappa statistic. Comparisons among the estimators indicate very little difference in values for tables with marginal symmetry. Differences among the estimators increase with increasing marginal asymmetry, and with average prevalence approaching .50.
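The behavior described here is easy to see with a short sketch of the two 2×2 agreement statistics; the fourfold tables below are invented for illustration, not taken from the paper:

```python
import numpy as np

def phi_and_kappa(table):
    """Phi coefficient and Cohen's kappa for a 2x2 agreement table
    (rows = classifier 1, columns = classifier 2)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    a, b = t[0, 0], t[0, 1]
    c, d = t[1, 0], t[1, 1]
    phi = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    p_obs = (a + d) / n                                     # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return phi, kappa

# Marginal symmetry (off-diagonal cells equal): phi and kappa coincide.
phi_sym, kappa_sym = phi_and_kappa([[40, 10], [10, 40]])
# Marginal asymmetry: the two statistics diverge, as the abstract notes.
phi_asym, kappa_asym = phi_and_kappa([[40, 5], [20, 35]])
```

For the symmetric table both statistics equal 0.6; breaking the marginal symmetry pulls kappa below phi, matching the comparison reported in the abstract.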

11.
The importance of accurate estimates during the early stages of capital projects has been widely recognized for many years. Early project estimates represent a key ingredient in business unit decisions and often become the basis for a project’s ultimate funding. However, a stark contrast arises when comparing the importance of early estimates with the amount of information typically available during the preparation of an early estimate. Such limited scope definition often leads to questionable estimate accuracy. Even so, very few quantitative methods are available that enable estimators and business managers to objectively evaluate the accuracy of early estimates. The primary objective of this study was to establish such a model. To accomplish this objective, quantitative data were collected from completed construction projects in the process industry. Each of the respondents was asked to assign a one-to-five rating for each of 45 potential drivers of estimate accuracy for a given estimate. The data were analyzed using factor analysis and multivariate regression analysis. The factor analysis was used to group the 45 elements into 11 orthogonal factors. Multivariate regression analysis was performed on the 11 factors to determine a suitable model for predicting estimate accuracy. The resulting model, known as the estimate score procedure, allows the project team to score an estimate and then predict its accuracy based on the estimate score. In addition, a computer software tool, the Estimate Score Program, was developed to automate the estimate score procedure. The multivariate regression analysis identified 5 of the 11 factors that were significant at the α = 10% level. The five factors, in order of significance, were basic process design, team experience and cost information, time allowed to prepare the estimate, site requirements, and bidding and labor climate.

12.
Inference using complex data from surveys and experiments.
Examines methods for analyzing complex data (i.e., data that do not conform to the assumptions of independence and homoscedasticity on which many classical procedures are based). Primary attention is given to regression analysis, with ANOVA as a special case, though reference to related work on loglinear models and logit analysis is also made. The problems associated with using standard methods and software on complex data are discussed. Much of the work on alternative strategies for complex data analysis is based on an inferential framework that is fundamentally different from the model-based inference familiar to most psychologists. Though model-based inference is the most popular approach to analyzing experiments in psychology, the randomization approach is increasingly being advocated as an alternative.

13.
L. J. Cronbach and L. Furby (see record 1970-15658-001) estimated linear combinations of variables by combining least squares estimators of their components. The present author argues that the more conventional least squares estimator of the linear combination as a unit should be used for this purpose. The 2 procedures produce the same formula for estimating true gain, but for estimating true residual gain the procedure advocated here is more precise.

14.
15.
B Jones, D Teather, J Wang, JA Lewis 《Canadian Metallurgical Quarterly》 1998, 17(15-16): 1767-77; discussion 1799-800
When a clinical trial is conducted at more than one centre it is likely that the true treatment effect will not be identical at each centre. In other words there will be some degree of treatment-by-centre interaction. A number of alternative approaches for dealing with this have been suggested in the literature. These include frequentist approaches with a fixed or random effects model for the observed data and Bayesian approaches. In the fixed effects model, there are two common competing estimators of the treatment difference, based on weighted or unweighted estimates from individual centres. Which one of these should be used is the subject of some controversy and we do not intend to take a particular methodological position in this paper. Our intention is to provide some insight into the relative merits of the indicated range of possible estimators of the treatment effect. For the fixed effects model, we also look at the merits of using a preliminary test for interaction assuming a 10 per cent significance level for the test. In order to make comparisons we have simulated a 'typical' trial which compares an active drug with a placebo in the treatment of hypertension, using systolic blood pressure as the primary variable. As well as allowing the treatment effect to vary between centres, we have concentrated on the particular case where one centre is out of line with the others in terms of its true treatment difference. The various estimators that result from the different approaches are compared in terms of mean squared error and power to reject the null hypothesis of no treatment difference. Overall, the approach that uses the fixed effects weighted estimator of overall treatment difference is recommended as one that has much to offer.  相似文献
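The contrast between the two fixed-effects estimators can be sketched numerically. The per-centre differences and standard errors below are invented, with the last centre deliberately out of line as in the simulation the abstract describes; the weighted estimator shown is the standard inverse-variance form:

```python
import numpy as np

# Per-centre treatment differences (active - placebo, mmHg systolic)
# and their standard errors; centre 5 is out of line with the rest.
diffs = np.array([-6.0, -5.5, -6.5, -5.0, +2.0])
ses   = np.array([ 1.0,  1.2,  0.9,  1.1,  2.5])

# Unweighted estimator: simple average of the centre differences.
unweighted = diffs.mean()

# Weighted estimator: inverse-variance weights, so imprecise
# (here: the outlying) centres contribute less.
w = 1.0 / ses**2
weighted = (w * diffs).sum() / w.sum()
```

Because the aberrant centre is also the least precise, the weighted estimate stays close to the consensus of the other centres, which is the behavior behind the paper's recommendation.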

16.
The Psychosocial Adjustment to Illness Scale (PAIS–SR) is a frequently used self-report measure, yet its factor structure, reliability, and validity have not been tested adequately on a sample of persons with cancer. A group of persons with cancer (N = 502) completed the PAIS–SR and other measures of adjustment and coping. A principal-axis factor analysis with varimax rotation yielded 7 factors: Social and Leisure Activities (.86), Job and Household Duties (.85), Psychological Distress (.87), Sexual Relationship (.92), Relationships With Partner and Family (.70), Health Care Orientation (.61), and Help From Others (.63). Values in parentheses are Cronbach's αs for the factors; α for the entire scale was .93. Correlations with measures of disease impact, adjustment, and coping support the validity of the PAIS–SR and its use for cancer research.

17.
Traditional methods for assessing the neurocognitive effects of epilepsy surgery are confounded by practice effects, test-retest reliability issues, and regression to the mean. This study employs 2 methods for assessing individual change that allow direct comparison of changes across both individuals and test measures. Fifty-one medically intractable epilepsy patients completed a comprehensive neuropsychological battery twice, approximately 8 months apart, prior to any invasive monitoring or surgical intervention. First, a Reliable Change (RC) index score was computed for each test score to take into account the reliability of that measure, and a cutoff score was empirically derived to establish the limits of statistically reliable change. These indices were subsequently adjusted for expected practice effects. The second approach used a regression technique to establish "change norms" along a common metric that models both expected practice effects and regression to the mean. The RC index scores provide the clinician with a statistical means of determining whether a patient's retest performance is "significantly" changed from baseline. The regression norms for change allow the clinician to evaluate the magnitude of a given patient's change on 1 or more variables along a common metric that takes into account the reliability and stability of each test measure. Case data illustrate how these methods provide an empirically grounded means for evaluating neurocognitive outcomes following medical interventions such as epilepsy surgery.
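The first method can be sketched minimally, assuming the standard Jacobson-Truax form of the RC index with a mean practice-effect adjustment; the scores, reliability, practice gain, and cutoff below are all hypothetical, not values from the study:

```python
import math

def reliable_change(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
    """Practice-adjusted Reliable Change index (Jacobson-Truax form):
    observed change, minus the expected practice gain, divided by
    the standard error of the difference between two testings."""
    se_measure = sd_baseline * math.sqrt(1 - r_xx)   # SEM of one testing
    se_diff = math.sqrt(2) * se_measure              # SE of the difference
    return (x2 - x1 - practice_effect) / se_diff

# Hypothetical memory score: baseline 95, retest 105, with a known
# mean practice gain of 4 points on this test (SD = 10, r_xx = .80).
rc = reliable_change(x1=95, x2=105, sd_baseline=10, r_xx=0.80,
                     practice_effect=4)
significant = abs(rc) > 1.645   # a common 90% z cutoff; the study
                                # derives its cutoffs empirically
```

Here the 10-point gain shrinks to a non-reliable change once the practice effect and measurement error are accounted for, which is exactly the confound the abstract describes.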

18.
19.
The authors respond to Bensley’s (see record 2009-12731-009) comment on their alternative formulation of critical thinking in psychology (see record 2008-11592-004). They argue that Bensley’s defense of the traditional critical thinking approach—which they term scientific analytic reasoning (SAR)—fails to address their main objections to SAR and their reasons for presenting an alternative. In particular, the openness, fairness, and generativity that Bensley references as strengths of SAR are themselves informed by scientific analytic assumptions and values, which, they argue, illustrates their original contention—that SAR offers an insular and insufficiently critical approach to critical thinking. The authors conclude by calling for future developments in critical thinking that are not driven by an implicit SAR agenda.

20.
A meta-analysis of single-item measures of overall job satisfaction (28 correlations from 17 studies with 7,682 people) found an average uncorrected correlation of .63 (SD = .09) with scale measures of overall job satisfaction. The overall mean correlation (corrected only for reliability) is .67 (SD = .08), and it is moderated by the type of measurement scale used. The mean corrected correlation for the best group of scale measures (8 correlations, 1,735 people) is .72 (SD = .05). The correction for attenuation formula was used to estimate the minimum level of reliability for a single-item measure. These estimates range from .45 to .69, depending on the assumptions made.
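The attenuation-based estimate can be sketched by inverting the correction-for-attenuation formula under the assumption that the single item and the scale measure the same construct perfectly; the scale reliabilities plugged in below are hypothetical stand-ins, since the abstract does not report the exact assumptions behind its .45–.69 range:

```python
def min_single_item_reliability(r_observed, scale_reliability):
    """Invert correction for attenuation, assuming true correlation = 1:
        r_obs = r_true * sqrt(r_item * r_scale)
    so the single-item reliability is at least r_obs^2 / r_scale."""
    return r_observed**2 / scale_reliability

# Hypothetical scale reliabilities bracketing typical values, paired
# with the uncorrected (.63) and best-group corrected (.72) correlations.
low  = min_single_item_reliability(0.63, 0.88)
high = min_single_item_reliability(0.72, 0.75)
```

Under these illustrative inputs the bounds land near .45 and .69, the same order of magnitude as the range the abstract reports.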
