Similar Literature
20 similar documents found (search time: 15 ms)
1.
Clustered data are not simply correlated data but have unique aspects of their own. In this paper, methods for correlated receiver operating characteristic (ROC) curve data that have been extended specifically to clustered data are reviewed, and for methods that have not yet been extended, suggestions for their application to clustered ROC studies are provided. The methods are compared with respect to their ability to meet either of two objectives of the analysis of clustered ROC data, the variety of ROC indices they can consider, and their accessibility to researchers. Parametric models permit all indices to be considered but, owing to computational complexity, are the least accessible of the available methods. Nonparametric methods are much more accessible but permit estimation and inference only for the ROC curve area. The jackknife method is the most accessible and permits any index to be considered. Future development of methods for clustered ROC studies should consider the continuation-ratio model, which would permit the use of widely available software for mixed generalized linear models. Another area of development is the adaptation of bootstrap methods to clustered ROC data, as sketched below.
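As an illustration of the bootstrap direction mentioned above, the sketch below resamples whole clusters (patients) with replacement and recomputes the nonparametric (Mann-Whitney) estimate of the ROC area on each resample. The data layout, function names, and the percentile interval are assumptions for illustration, not the review's prescription.

```python
import numpy as np

def mw_auc(pos, neg):
    # Mann-Whitney estimate of the ROC area: P(score_pos > score_neg),
    # counting ties as one half
    diff = np.asarray(pos)[:, None] - np.asarray(neg)[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def cluster_bootstrap_auc_ci(clusters, n_boot=2000, seed=0):
    # clusters: list of (pos_scores, neg_scores) pairs, one per patient;
    # resampling whole patients preserves the within-cluster correlation
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(clusters), size=len(clusters))
        pos = np.concatenate([np.atleast_1d(clusters[i][0]) for i in idx])
        neg = np.concatenate([np.atleast_1d(clusters[i][1]) for i in idx])
        if pos.size and neg.size:
            aucs.append(mw_auc(pos, neg))
    return np.percentile(aucs, [2.5, 97.5])   # percentile 95% interval
```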

2.
Kaufman et al. compute the 'excess risk' of a disease in the presence of an exposure as the product of the incidence rate of the disease in the source population, the complement of the aetiologic fraction, and the relative risk minus one. Methods for calculating confidence intervals for this quantity are derived for the case in which (as in case-control studies) the relative risk is estimated by the odds ratio, first from multiple logistic regression analysis and second without adjustment for covariates. For the latter, an innovative approach based on confidence bounds for the two exposure parameters is suggested. The performance of these systems of confidence intervals is assessed by simulation for the former and by exact enumeration of the distributions involved for the latter. Illustrative examples from a study of agranulocytosis and indomethacin are presented.
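The point estimate quoted above is simple arithmetic; the sketch below applies the formula with invented numbers and, as a crude stand-in for the paper's interval methods, plugs the Wald limits of the odds ratio into the same formula. All values and the naive plug-in interval are assumptions, not the authors' procedure.

```python
import math

# All numbers are invented for illustration.
I0 = 6.2e-6        # incidence rate of the disease in the source population
AF = 0.09          # aetiologic fraction
a, b, c, d = 12, 88, 30, 270   # exposed cases/controls, unexposed cases/controls

or_hat = (a * d) / (b * c)                     # odds ratio standing in for the relative risk
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald SE of log(OR)
or_lo, or_hi = (math.exp(math.log(or_hat) + z * se_log_or) for z in (-1.96, 1.96))

def excess(rr):
    # excess risk = incidence rate * (1 - aetiologic fraction) * (RR - 1)
    return I0 * (1 - AF) * (rr - 1)

print(excess(or_hat), excess(or_lo), excess(or_hi))
```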

3.
RATIONALE AND OBJECTIVES: Traditionally, multireader receiver operating characteristic (ROC) studies have used a "paired-case, paired-reader" design. The statistical power of such a design for inferences about the relative accuracies of the tests was assessed and compared with that of alternative designs. METHODS: The noncentrality parameter of an F statistic was used to compute power as a function of the reader and patient sample sizes and the variability of and correlation between readings. RESULTS: For a fixed power and Type I error rate, the traditional design reduces the number of verified cases required. A hybrid design, in which each reader interprets a different sample of patients, reduces the number of readers, the total number of readings, and the readings required per reader; the drawback is a substantial increase in the number of verified cases. CONCLUSION: The ultimate choice of study design depends on the nature of the tests being compared, the limiting resources, a priori knowledge of the magnitude of the correlations and variability, and logistic complexity.
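The power computation described above reduces, in its final step, to evaluating the tail of a noncentral F distribution. The sketch below shows only that step; how the noncentrality parameter `lam` is built from the reader and case sample sizes, variances, and correlations is study-specific and is simply assumed as an input here.

```python
from scipy import stats

def f_test_power(lam, df1, df2, alpha=0.05):
    # Power of an F test with noncentrality `lam`: the probability that a
    # noncentral F variate exceeds the central-F critical value.
    crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(crit, df1, df2, lam)

# e.g., two modalities (df1 = 1) with hypothetical error df and noncentrality
print(f_test_power(lam=8.0, df1=1, df2=40))
```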

4.
RATIONALE AND OBJECTIVES: Observer performance studies sometimes use too few cases for estimating diagnostic accuracy from binormal receiver operating characteristic (ROC) curves; one important problem is degenerate data sets. We compared a new algorithm, RSCORE4, with the exact-solution approach to degeneracy in ROCFIT and with the Wilcoxon statistic. METHODS: Degenerate ROC solutions result from empty cells in the data matrix. We addressed this problem by adding a small constant to empty cells in a maximum-likelihood program, RSCORE4; when this method failed, the program branched to a pattern-search algorithm. We tested the program in a series of Monte Carlo studies. RESULTS: RSCORE4 converged to nondegenerate solutions in every case and gave results closer to the population values than ROCFIT or the Wilcoxon statistic. ROCFIT converged to exact-fit degenerate solutions, those with zero or infinite parameter values, in more than 40% of the samples. The Wilcoxon statistic was biased. CONCLUSION: RSCORE4 appears to outperform other currently recommended methods for dealing with degeneracy.
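A minimal sketch of the empty-cell adjustment described above: a small constant is added to zero-count rating categories so that the binormal maximum-likelihood fit cannot degenerate. The value of the constant (0.25 here) and the data layout are assumptions; the abstract does not give RSCORE4's internals, and the pattern-search fallback is not reproduced.

```python
import numpy as np

def adjust_counts(counts, eps=0.25):
    # counts: 2 x K matrix of rating frequencies
    # (rows: actually-negative and actually-positive cases;
    #  columns: K confidence categories)
    counts = np.asarray(counts, dtype=float)
    counts[counts == 0] += eps   # nudge empty cells off zero
    return counts

print(adjust_counts([[30, 10, 0, 2],
                     [1,  0,  8, 40]]))
```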

5.
The receiver operating characteristic (ROC) curve represents characteristics specific to a test (diagnostic sensitivity and specificity) and is useful for evaluating and comparing diagnostic accuracy. However, the ROC curve is not widely used at present. In this symposium, we showed how to draw the curve and how to use it in practice, taking as examples the diagnosis of diabetes and impaired glucose tolerance, of deep-seated fungal infection, and of acute myocardial infarction. In the ROC curve, the true positive rate is plotted on the vertical axis and the false positive rate on the horizontal axis. The curve is readily drawn and visually conveys diagnostic accuracy in a way that histograms cannot. Its advantages are as follows. 1. Diagnostic accuracies can be compared. 2. The diagnostic significance of the reference interval can be evaluated. 3. The diagnostic cut-off value can be determined from the curve. 4. Combined with prevalence, the diagnostic probability can be represented quantitatively. Points that require attention are differences in the ROC curve according to the selection of subjects (including controls), the time factor (disease stage), and severity (disease condition). With attention to these points, the ROC curve can serve as a simple and useful tool in laboratory diagnosis, and we hope it will come into wide use.
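A minimal sketch of the construction described above, with invented data: each candidate cut-off yields one (false positive rate, true positive rate) point, and connecting the points gives the empirical ROC curve.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented laboratory values and disease status (1 = diseased, 0 = healthy)
values  = np.array([4.1, 5.0, 5.2, 5.9, 6.3, 6.8, 7.4, 8.0, 9.1, 10.2])
disease = np.array([0,   0,   0,   1,   0,   1,   0,   1,   1,   1])

cuts = np.unique(values)
tpr = [(values[disease == 1] >= c).mean() for c in cuts]  # true positive rate
fpr = [(values[disease == 0] >= c).mean() for c in cuts]  # false positive rate

plt.plot(fpr, tpr, marker="o")
plt.xlabel("false positive rate")
plt.ylabel("true positive rate (sensitivity)")
plt.show()
```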

6.
Thirteen methods for computing binomial confidence intervals are compared on the basis of their coverage properties, widths, and errors relative to exact limits. The use of the standard textbook method, x/n ± 1.96·√[(x/n)(1 − x/n)/n], or its continuity-corrected version, is strongly discouraged. A commonly cited rule of thumb, that alternatives to exact methods may be used when the estimated proportion p is such that np and n(1 − p) both exceed 5, does not ensure adequate accuracy. Score limits are easily calculated from closed-form solutions to quadratic equations and can be used at all times. Based on coverage functions, the continuity-corrected score method is recommended over exact methods. Its conservative nature should be kept in mind, as should the wider fluctuation of actual coverage that accompanies omission of the continuity correction.
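The score limits recommended above are the roots of the quadratic |x/n − p| = 1.96·√(p(1 − p)/n) in p. A sketch of both variants; the continuity-corrected closed form follows the expressions commonly attributed to Newcombe (1998), and z = 1.96 gives 95% intervals.

```python
import math

def wilson_ci(x, n, z=1.96):
    # Score (Wilson) interval: closed-form roots of the quadratic
    p = x / n
    half = z * math.sqrt(z*z + 4*n*p*(1 - p))
    den = 2 * (n + z*z)
    return (2*n*p + z*z - half) / den, (2*n*p + z*z + half) / den

def wilson_cc_ci(x, n, z=1.96):
    # Continuity-corrected score interval (boundary cases pinned to 0 and 1)
    p = x / n
    den = 2 * (n + z*z)
    lo = 0.0 if x == 0 else (2*n*p + z*z - 1 -
         z*math.sqrt(z*z - 2 - 1/n + 4*p*(n*(1 - p) + 1))) / den
    hi = 1.0 if x == n else (2*n*p + z*z + 1 +
         z*math.sqrt(z*z + 2 - 1/n + 4*p*(n*(1 - p) - 1))) / den
    return lo, hi

print(wilson_ci(8, 20))     # about (0.219, 0.613)
print(wilson_cc_ci(8, 20))  # slightly wider, more conservative
```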

7.
Individual gastric glands of the stomach are composed of cells of different phenotypes, derived from multipotent progenitor stem cells located in the isthmus region of the gland. Previous cell lineage analyses suggest that gastric glands, like those of the colon and small intestine, are invariably monoclonal by adult stages. However, little is known about the ontogenetic progression of glandular clonality in the stomach. To examine this issue, we employed an in situ cell lineage marker in female mice heterozygous for an X-linked transgene. We found that stomach glands commence development as polyclonal units, but by adulthood (6 weeks) the majority have progressed to monoclonal units. Our analysis suggests that at least three progenitor cells are required to initiate the development of individual gastric glands when they are analyzed just after birth. Hence, unlike the colon and small intestine, the stomach showed a significant fraction (10-25%) of polyclonal glands at adult stages. We suggest that these glands persist from polyclonal glands present in the embryonic stomach and hypothesize that they represent a subpopulation of glands with larger numbers of self-renewing stem cells.

8.
9.
10.
An experiment to assess the efficacy of a particular treatment or process often produces dichotomous responses, either favourable or unfavourable. When we administer the treatment on two occasions to the same subjects, we often use McNemar's test to investigate the hypothesis of no difference between the proportions on the two occasions, that is, the hypothesis of marginal homogeneity. A disadvantage of McNemar's statistic is that it estimates the variance of the sample difference under the restriction that the marginal proportions are equal. A competitor to McNemar's statistic is a Wald statistic that uses an unrestricted estimator of the variance. Because the Wald statistic tends to reject too often in small samples, we investigate an adjusted form that is useful for constructing confidence intervals. We adapt the methods of construction discussed by Quesenberry and Hurst and by Goodman to confidence intervals for differences in correlated proportions. We empirically compare the coverage probabilities and average interval lengths of the competing methods through simulation and give recommendations based on the simulation results.
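A sketch of the unrestricted (Wald) interval described above for the difference of two correlated proportions from a paired 2x2 table. The small-sample adjustment shown, adding 0.5 to every cell, follows Agresti and Min's well-known proposal and is an assumption here; the abstract does not specify which adjusted form the authors evaluate.

```python
import math

def paired_diff_ci(a, b, c, d, z=1.96, adjust=True):
    # a, d: concordant cells (yes/yes, no/no); b, c: discordant cells
    if adjust:
        a, b, c, d = (cell + 0.5 for cell in (a, b, c, d))
    n = a + b + c + d
    diff = (b - c) / n                               # p1 - p2 = (b - c)/n
    se = math.sqrt(b + c - (b - c) ** 2 / n) / n     # unrestricted variance
    return diff - z * se, diff + z * se

print(paired_diff_ci(40, 12, 5, 43))
```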

11.
Dual-process models of the word-frequency mirror effect posit that low-frequency words are recollected more often than high-frequency words, producing the hit rate differences in the word-frequency effect, whereas high-frequency words are more familiar, producing the false-alarm-rate differences. In this pair of experiments, the authors demonstrate that the analysis of receiver operating characteristic (ROC) curves provides critical information in support of this interpretation. Specifically, when participants were required to discriminate between studied nouns and their plurality reversed complements, the ROC curve was accurately described by a threshold model that is consistent with recollection-based recognition. Further, the plurality discrimination ROC curves showed characteristics consistent with the interpretation that participants recollected low-frequency items more than high-frequency items.

12.
One of the main objectives in meta-analysis is to estimate the overall effect size by calculating a confidence interval (CI). The usual procedure assumes a standard normal distribution and a sampling variance defined as the inverse of the sum of the estimated weights of the effect sizes. This procedure, however, does not take into account the uncertainty arising from the fact that the heterogeneity variance (τ²) and the within-study variances must themselves be estimated, leading to CIs that are too narrow, with the consequence that the actual coverage probability is smaller than the nominal confidence level. In this article, the performance of 3 alternatives to the standard CI procedure is examined under a random-effects model with 8 different τ² estimators used to estimate the weights: the t distribution CI, the weighted variance CI (with an improved variance), and the recently proposed quantile approximation method. The results of a Monte Carlo simulation showed that the weighted variance CI outperformed the other methods regardless of the τ² estimator, the value of τ², the number of studies, and the sample size.
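For reference, the sketch below implements the standard procedure the abstract criticizes: inverse-variance weights with a DerSimonian-Laird estimate of τ² and a normal-theory interval whose variance is the inverse of the summed weights. The improved weighted-variance and t-based intervals studied in the article are not reproduced; the τ² estimator used here is an assumption (one of the 8 considered).

```python
import numpy as np

def standard_meta_ci(effects, variances, z=1.96):
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1 / v                                        # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)                 # Cochran's Q
    # DerSimonian-Laird estimate of the between-study variance tau^2
    tau2 = max(0.0, (q - (len(y) - 1)) /
                    (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)                          # random-effects weights
    mean = np.sum(w_star * y) / np.sum(w_star)
    se = 1 / np.sqrt(np.sum(w_star))                 # ignores estimation error
    return mean - z * se, mean + z * se

print(standard_meta_ci([0.30, 0.10, 0.45, 0.22], [0.04, 0.03, 0.05, 0.02]))
```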

13.
Presents improved procedures for approximating confidence intervals for ρ² and ρc² in both fixed and random predictor models. These approximations require neither point estimates nor variance estimates and are shown analytically to be precise enough for most practical prediction purposes. An application of confidence intervals in regression model development is also given.

14.
[Correction Notice: An erratum for this article was reported in Vol 13(1) of Psychological Methods (see record 2008-02525-006). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's rs. An error in the author's C++ code affected all simulation results for Spearman's rs (but none of the results for gamma-family indices).] This research focused on confidence intervals (CIs) for 10 measures of monotonic association between ordinal variables. Standard errors (SEs) were also reviewed because more than 1 formula was available per index. For 5 indices, an apparently new element of the formula used to compute an SE is given. CIs computed with different SEs were compared in simulations with small samples (N = 25, 50, 75, or 100) for variables with 4 or 5 categories. With N > 25, many CIs performed well. Performance was best for the consistent CIs due to N. Cliff and colleagues (N. Cliff, 1996; N. Cliff & V. Charlin, 1991; J. D. Long & N. Cliff, 1997). CIs for Spearman's rank correlation were also examined: parameter coverage was erratic and the parameter was sometimes egregiously underestimated.

15.
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to report a confidence interval for the population value of the effect size. Standardized linear contrasts of means are useful measures of effect size in a wide variety of research applications. New confidence intervals for standardized linear contrasts of means are developed and may be applied to between-subjects designs, within-subjects designs, or mixed designs. The proposed confidence interval methods are easy to compute, do not require equal population variances, and perform better than the currently available methods when the population variances are not equal.
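The article's new intervals are not given in the abstract and are not reproduced here. As a point of comparison, the sketch below computes a naive normal-approximation interval for a standardized contrast of means that, like the proposed methods, does not pool under an equal-variance assumption. The standardizer (the root mean of the group variances) and the simplification of treating it as fixed when computing the SE are assumptions of this sketch, not the author's method.

```python
import math

def naive_standardized_contrast_ci(means, sds, ns, coefs, z=1.96):
    # Unpooled standardizer: square root of the average group variance
    s = math.sqrt(sum(sd**2 for sd in sds) / len(sds))
    psi = sum(c * m for c, m in zip(coefs, means)) / s   # standardized contrast
    # Delta-method SE of the numerator, treating the standardizer as fixed
    se = math.sqrt(sum(c**2 * sd**2 / n
                       for c, sd, n in zip(coefs, sds, ns))) / s
    return psi - z * se, psi + z * se

# Contrast of group 1 against the average of groups 2 and 3 (invented data)
print(naive_standardized_contrast_ci(
    means=[10.2, 8.9, 8.1], sds=[2.1, 3.4, 2.8],
    ns=[25, 25, 25], coefs=[1, -0.5, -0.5]))
```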

16.
Young and older adults were tested on recognition memory for pictures. The Yonelinas high threshold (YHT) model, a formal implementation of 2-process theory, fit the response distribution data of both young and older adults significantly better than a normal unequal variance signal-detection model. Consistent with this finding, nonlinear z-transformed receiver operating characteristic curves were obtained for both groups. Estimates of recollection from the YHT model were significantly higher for young than for older adults. This deficit was not a consequence of a general decline in memory; older adults showed comparable overall accuracy and in fact a nonsignificant increase in their familiarity scores. Implications of these results for theories of recognition memory and the mnemonic deficit associated with aging are discussed.

17.
RATIONALE AND OBJECTIVES: The authors conducted a series of null-case Monte Carlo simulations to evaluate the Dorfman-Berbaum-Metz (DBM) method for comparing modalities with multireader receiver operating characteristic (ROC) discrete rating data. MATERIALS AND METHODS: Monte Carlo simulations were performed using discrete ratings on fully crossed factorial designs with two modalities and three, five, and 10 hypothetical readers. The null hypothesis was true in all simulations. The population ROC areas, latent variable structures, case sample sizes, and normal/abnormal case sample ratios used in another study were used in these simulations. RESULTS: For equal allocation ratios and small (Az = 0.702) and moderate (Az = 0.855) ROC areas, the empirical type I error rate closely matched the nominal alpha level. For very large ROC areas (Az = 0.961), however, the empirical type I error rate was somewhat smaller than the nominal alpha level; this conservatism increased with decreasing case sample size and with asymmetric normal/abnormal case allocation ratios. The empirical type I error rate was sometimes slightly larger than the nominal alpha level with many cases and few readers, where there was a large residual, a relatively small treatment-by-case interaction, and a relatively large treatment-by-reader interaction. CONCLUSION: The results suggest that the DBM method provides trustworthy alpha levels with discrete ratings when the ROC area is not too large and the case and reader sample sizes are not too small. In other situations, the test tends to be somewhat conservative or slightly liberal.

18.
Disorders of self-regulatory behavior are common reasons for referral to child and adolescent clinicians. Here, the authors sought to compare 2 methods of empirically based assessment of children with problems in self-regulatory behavior. Using parental reports on 2,028 children (53% boys) from a U.S. national probability sample of the Child Behavior Checklist (CBCL; T. M. Achenbach & L. A. Rescorla, 2001), receiver operating characteristic (ROC) curve analysis was applied to compare scores on the Posttraumatic Stress Problems Scale (PTSP) of the CBCL with the CBCL Dysregulation Profile (DP), identified using latent class analysis of the Attention Problems, Aggressive Behavior, and Anxious/Depressed scales of the CBCL. The CBCL–PTSP score demonstrated an area under the curve of between .88 and .91 for predicting membership in the CBCL–DP profile for both boys and girls. These findings suggest that the CBCL–PTSP, which others have shown does not uniquely identify children who have been traumatized, identifies the same profile of behavior as the CBCL–DP. The authors therefore recommend renaming the CBCL–PTSP the Dysregulation Short Scale and provide guidelines for the use of the CBCL–DP scale and the CBCL–PTSP in clinical practice.

19.
Interstitial deletions in chromosome 22 and features associated with CATCH-22 syndrome have been reported in patients with conotruncal congenital heart anomalies. Absent pulmonary valve syndrome is characterized by absent or rudimentary pulmonary valve cusps, an absent ductus arteriosus, a conoventricular septal defect, and massive dilation of the pulmonary arteries. Because absence of the ductus arteriosus is a key element in the pathogenesis of this syndrome and aortic arch malformations are frequently seen in patients with CATCH-22 syndrome, we hypothesized that patients with absent pulmonary valve syndrome would have a high incidence of deletions in the critical region of chromosome 22. Eight patients with absent pulmonary valve syndrome were studied. Metaphase preparations were examined by fluorescence in situ hybridization with the N25 (D22S75) probe for the critical region of chromosome 22q11.2. Deletions were detected in 6 of the 8 patients. The presence of chromosome 22 deletions in most of the patients we examined with a diagnosis of absent pulmonary valve syndrome supports a specific genetic and embryologic mechanism, involving the interaction of the neural crest and the primitive aortic arches, as one cause of congenital absence of the pulmonary valve.

20.
Notes that for those choosing among selection strategies, training programs, or other treatments, it can be more important to understand the impact of the choice on the individuals identified as the best or the poorest than on the average. Because techniques for making such comparisons are not readily available, an approach that develops confidence intervals for quantile differences is illustrated, based on the recently developed bootstrap principle of nonparametric inference, as sketched below.
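A sketch of the bootstrap approach described above: a percentile interval for the difference between, say, the 90th percentiles of two treatment groups. The percentile variant is only one of several bootstrap intervals and is assumed here for simplicity; the data are invented.

```python
import numpy as np

def quantile_diff_ci(x, y, q=0.90, n_boot=5000, seed=0):
    # Percentile bootstrap CI for the difference of the q-th quantiles:
    # resample each group with replacement and recompute the difference.
    rng = np.random.default_rng(seed)
    diffs = [np.quantile(rng.choice(x, size=len(x)), q) -
             np.quantile(rng.choice(y, size=len(y)), q)
             for _ in range(n_boot)]
    return np.percentile(diffs, [2.5, 97.5])

rng = np.random.default_rng(1)
treated = rng.normal(1.0, 1.0, 80)   # invented outcome scores
control = rng.normal(0.0, 1.0, 80)
print(quantile_diff_ci(treated, control))
```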
