Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the “general” three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman–Pearson (N–P) criteria. We found that, by making assumptions for both the MEU and N–P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N–P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

2.
Receiver operating characteristic (ROC) analysis is well established in the evaluation of systems involving binary classification tasks. However, medical tests often require distinguishing among more than two diagnostic alternatives. The goal of this work was to develop an ROC analysis method for three-class classification tasks. Based on decision theory, we developed a method for three-class ROC analysis. In this method, the objects were classified by making the decision that provided the maximal utility relative to the other two. By making assumptions about the magnitudes of the relative utilities of incorrect decisions, we found a decision model that maximized the expected utility of the decisions when using log-likelihood ratios as decision variables. This decision model consists of a two-dimensional decision plane with log likelihood ratios as the axes and a decision structure that separates the plane into three regions. Moving the decision structure over the decision plane, which corresponds to moving the decision threshold in two-class ROC analysis, and computing the true class 1, 2, and 3 fractions defined a three-class ROC surface. We have shown that the resulting three-class ROC surface shares many features with the two-class ROC curve; i.e., using the log likelihood ratios as the decision variables results in maximal expected utility of the decisions, and the optimal operating point for a given diagnostic setting (set of relative utilities and disease prevalences) lies on the surface. The volume under the three-class surface (VUS) serves as a figure-of-merit to evaluate different data acquisition systems or image processing and reconstruction methods when the assumed utility constraints are relevant.
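As a rough illustration of the decision model described above, the sketch below builds a toy three-class problem, uses two log-likelihood ratios as the axes of the decision plane, and slides a simple decision structure to read off one operating point (the three true-class fractions) of a three-class ROC surface. All distributions and thresholds here are invented for illustration; the exact shape of the decision structure in the paper follows from its utility assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: three classes with unit-variance normal data.
mu = [0.0, 2.0, 4.0]
n = 3000
X = np.concatenate([rng.normal(m, 1.0, n) for m in mu])
y = np.repeat([0, 1, 2], n)

def log_lr(x, a, b):
    # log p(x | class a) - log p(x | class b) for unit-variance normals
    return (mu[a] - mu[b]) * x - 0.5 * (mu[a] ** 2 - mu[b] ** 2)

# Axes of the decision plane: log-likelihood ratios of classes 1 and 2
# against class 3.
l13 = log_lr(X, 0, 2)
l23 = log_lr(X, 1, 2)

def operating_point(t1, t2):
    """Slide the decision structure to (t1, t2) and return the true
    class-k fractions -- one point on the three-class ROC surface."""
    scores = np.stack([l13 - t1, l23 - t2, np.zeros_like(l13)])
    d = np.argmax(scores, axis=0)
    return [float(np.mean(d[y == k] == k)) for k in range(3)]

tcf = operating_point(0.0, 0.0)
```

Sweeping `(t1, t2)` over the plane traces out the full surface, in analogy with sweeping the single threshold in two-class ROC analysis.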

3.
Previously, we have proposed a method for three-class receiver operating characteristic (ROC) analysis based on decision theory. In this method, the volume under a three-class ROC surface (VUS) serves as a figure-of-merit (FOM) and measures three-class task performance. The proposed three-class ROC analysis method was demonstrated to be optimal under decision theory according to several decision criteria. Further, an optimal three-class linear observer was proposed to simultaneously maximize the signal-to-noise ratio (SNR) between the test statistics of each pair of the classes, provided that a certain data-linearity condition holds. Applicability of this three-class ROC analysis method would be further enhanced by the development of an intuitive meaning of the VUS and a more general method to calculate the VUS that provides an estimate of its standard error. In this paper, we investigated the general meaning and usage of the VUS as a FOM for three-class classification task performance. We showed that the VUS value, which is obtained from a rating procedure, equals the percent correct in a corresponding categorization procedure for continuous rating data. The significance of this relationship goes beyond providing another theoretical basis for three-class ROC analysis: it enables statistical analysis of the VUS value. Based on this relationship, we developed and tested algorithms for calculating the VUS and its variance. Finally, we reviewed the current status of the proposed three-class ROC analysis methodology, and concluded that it extends and unifies the decision theoretic, linear discriminant analysis, and psychophysical foundations of binary ROC analysis in a three-class paradigm.
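The VUS-equals-percent-correct relationship can be checked numerically. A minimal sketch, with invented normal rating distributions: the empirical probability that a matched triplet of ratings, one per class, falls in the correct order estimates the VUS, i.e., the percent correct of the corresponding categorization procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical continuous ratings for the three classes (toy parameters).
n = 2000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(1.5, 1.0, n)
x3 = rng.normal(3.0, 1.0, n)

# VUS estimate: fraction of matched triplets placed in the correct order,
# i.e., the percent correct of the three-way categorization procedure.
vus = float(np.mean((x1 < x2) & (x2 < x3)))

# Note: chance performance for three ordered classes is 1/6, not 1/2 as
# for the two-class AUC.
```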

4.
5.
The likelihood ratio, or ideal observer, decision rule is known to be optimal for two-class classification tasks in the sense that it maximizes expected utility (or, equivalently, minimizes the Bayes risk). Furthermore, using this decision rule yields a receiver operating characteristic (ROC) curve which is never above the ROC curve produced using any other decision rule, provided the observer's misclassification rate with respect to one of the two classes is chosen as the dependent variable for the curve (i.e., an "inversion" of the more common formulation in which the observer's true-positive fraction is plotted against its false-positive fraction). It is also known that for a decision task requiring classification of observations into N classes, optimal performance in the expected utility sense is obtained using a set of N-1 likelihood ratios as decision variables. In the N-class extension of ROC analysis, the ideal observer performance is describable in terms of an (N²-N-1)-parameter hypersurface in an (N²-N)-dimensional probability space. We show that the result for two classes holds in this case as well, namely that the ROC hypersurface obtained using the ideal observer decision rule is never above the ROC hypersurface obtained using any other decision rule (where in our formulation performance is given exclusively with respect to between-class error rates rather than within-class sensitivities).
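For reference, the two-class version of this optimality is the standard likelihood-ratio threshold rule. A textbook statement (notation ours), with utilities $U_{ij}$ for deciding class $i$ when class $j$ is true and priors $P(H_j)$: the expected-utility-maximizing observer decides $H_2$ whenever

```latex
\Lambda(\mathbf{x}) \;=\; \frac{p(\mathbf{x}\mid H_2)}{p(\mathbf{x}\mid H_1)}
\;>\;
\frac{\bigl(U_{11}-U_{21}\bigr)\,P(H_1)}{\bigl(U_{22}-U_{12}\bigr)\,P(H_2)}
```

and sweeping this threshold traces out the ideal observer's ROC curve; the N-class case replaces the scalar $\Lambda$ with a set of N-1 likelihood ratios, as stated above.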

6.
Classification of a given observation to one of three classes is an important task in many decision processes or pattern recognition applications. A general analysis of the performance of three-class classifiers results in a complex 6-D receiver operating characteristic (ROC) space, for which no simple analytical tool exists at present. We investigate the performance of an ideal observer under a specific set of assumptions that reduces the 6-D ROC space to 3-D by constraining the utilities of some of the decisions in the classification task. These assumptions lead to a 3-D ROC space in which the true-positive fraction (TPF) can be expressed in terms of the two types of false-positive fractions (FPFs). We demonstrate that the TPF is uniquely determined by, and therefore is a function of, the two FPFs. The domain of this function is shown to be related to the decision boundaries in the likelihood ratio plane. Based on these properties of the 3-D ROC space, we can define a summary measure, referred to as the normalized volume under the surface (NVUS), that is analogous to the area under the ROC curve (AUC) for a two-class classifier. We further investigate the properties of the 3-D ROC surface and the NVUS for the ideal observer under the condition that the three class distributions are multivariate normal with equal covariance matrices. The probability density functions (pdfs) of the decision variables are shown to follow a bivariate log-normal distribution. By considering these pdfs, we express the TPF in terms of the FPFs, and integrate the TPF over its domain numerically to obtain the NVUS. In addition, we performed a Monte Carlo simulation study, in which the 3-D ROC surface was generated by empirical "optimal" classification of case samples in the multidimensional feature space following the assumed distributions, to obtain an independent estimate of NVUS. 
The NVUS value obtained by using the analytical pdfs was found to be in good agreement with that obtained from the Monte Carlo simulation study. We also found that, under all conditions studied, the NVUS increased when the difficulty of the classification task was reduced by changing the parameters of the class distributions, thereby exhibiting the properties of a performance metric analogous to the AUC. Our results indicate that, under the conditions that lead to our 3-D ROC analysis, the performance of a three-class classifier may be analyzed by considering the ROC surface, and its accuracy characterized by the NVUS.

7.
The diagnosis of cardiac disease using dual-isotope myocardial perfusion SPECT (MPS) is based on the defect status in both stress and rest images, and can be modeled as a three-class task of classifying patients as having no, reversible, or fixed perfusion defects. Simultaneous acquisition protocols for dual-isotope MPS imaging have gained much interest due to their advantages including perfect registration of the ²⁰¹Tl and ⁹⁹ᵐTc images in space and time, increased patient comfort, and higher clinical throughput. As a result of simultaneous acquisition, however, crosstalk contamination, where photons emitted by one isotope contribute to the image of the other isotope, degrades image quality. Minimizing the crosstalk is important in obtaining the best possible image quality. One way to minimize the crosstalk is to optimize the injected activity of the two isotopes by considering the three-class nature of the diagnostic problem. To effectively do so, we have previously developed a three-class receiver operating characteristic (ROC) analysis methodology that extends and unifies the decision theoretic, linear discriminant analysis, and psychophysical foundations of binary ROC analysis in a three-class paradigm. In this work, we applied the proposed three-class ROC methodology to the assessment of the image quality of simultaneous dual-isotope MPS imaging techniques and the determination of the optimal injected activity combination. In addition to this application, the rapid development of diagnostic imaging techniques has produced an increasing number of clinical diagnostic tasks that involve not only disease detection, but also disease characterization and are thus multiclass tasks. This paper provides a practical example of the application of the proposed three-class ROC analysis methodology to medical problems.

8.
We are attempting to develop expressions for the coordinates of points on the three-class ideal observer's receiver operating characteristic (ROC) hypersurface as functions of the set of decision criteria used by the ideal observer. This is considerably more difficult than in the two-class classification task, because the conditional probabilities in question are not simply related to the cumulative distribution functions of the decision variables, and because the slopes and intercepts of the decision boundary lines are not independent; given the locations of two of the lines, the location of the third will be constrained depending on the other two. In this paper, we attempt to characterize those constraining relationships among the three-class ideal observer's decision boundary lines. As a result, we show that the relationship between the decision criteria and the misclassification probabilities is not one-to-one, as it is for the two-class ideal observer.

9.
Generating ROC curves for artificial neural networks (cited by 5: 0 self-citations, 5 by others)
Receiver operating characteristic (ROC) analysis is an established method of measuring diagnostic performance in medical imaging studies. Traditionally, artificial neural networks (ANNs) have been applied as classifiers to find one “best” detection rate. Recently, researchers have begun to report ROC curve results for ANN classifiers. The current standard method of generating ROC curves for an ANN is to vary the output node threshold for classification. Here, the authors propose a different technique for generating ROC curves for a two-class ANN classifier. They show that this new technique generates better ROC curves in the sense of having greater area under the ROC curve (AUC), and in the sense of being composed of a better distribution of operating points.
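The standard "vary a threshold over the classifier's output" construction mentioned above can be sketched in a few lines. Generic scores stand in for an actual ANN output node, and the data are synthetic:

```python
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC: sweep the decision threshold across every score."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    order = np.argsort(-scores)          # descending scores
    lab = labels[order]
    tpf = np.concatenate([[0.0], np.cumsum(lab == 1) / np.sum(lab == 1)])
    fpf = np.concatenate([[0.0], np.cumsum(lab == 0) / np.sum(lab == 0)])
    return fpf, tpf

def auc(fpf, tpf):
    # trapezoidal area under the empirical curve
    return float(np.sum(np.diff(fpf) * (tpf[1:] + tpf[:-1]) / 2.0))

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.0, 1.0, 500),    # class-0 outputs
                         rng.normal(1.5, 1.0, 500)])   # class-1 outputs
labels = np.array([0] * 500 + [1] * 500)
fpf, tpf = roc_points(scores, labels)
a = auc(fpf, tpf)
```

The paper's contribution is a different way of generating the operating points; the thresholding sketch here is the baseline it improves on.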

10.
11.
For the 2-class detection problem (signal absent/present), the likelihood ratio is an ideal observer in that it minimizes Bayes risk for arbitrary costs and it maximizes the area under the receiver operating characteristic (ROC) curve (AUC). The AUC-optimizing property makes it a valuable tool in imaging system optimization. If one considered a different task, namely, joint detection and localization of the signal, then it would be similarly valuable to have a decision strategy that optimized a relevant scalar figure of merit. We are interested in quantifying performance on decision tasks involving location uncertainty using the localization ROC (LROC) methodology. Therefore, we derive decision strategies that maximize the area under the LROC curve, A(LROC). We show that these decision strategies minimize Bayes risk under certain reasonable cost constraints. The detection-localization task is modeled as a decision problem in three increasingly realistic ways. In the first two models, we treat location as a discrete parameter having finitely many values resulting in an (L + 1)-class classification problem. In our first simple model, we do not include search tolerance effects and in the second, more general, model, we do. In the third and most general model, we treat location as a continuous parameter and also include search tolerance effects. In all cases, the essential proof that the observer maximizes A(LROC) is obtained with a modified version of the Neyman-Pearson lemma. A separate form of proof is used to show that in all three cases, the decision strategy minimizes the Bayes risk under certain reasonable cost constraints.

12.
Adaptive fusion of correlated local decisions (cited by 3: 0 self-citations, 3 by others)
An adaptive fusion algorithm is proposed for an environment where the observations and local decisions are dependent from one sensor to another. An optimal decision rule, based on the maximum a posteriori (MAP) detection criterion for such an environment, is derived and compared to the adaptive approach. In the algorithm, the log-likelihood ratio function can be expressed as a linear combination of ratios of conditional probabilities and local decisions. The estimates of the conditional probabilities are adapted by reinforcement learning. The error probability at steady state is analyzed theoretically and, in some cases, found to be equal to the error probability obtained by the optimal fusion rule. The effect of the number of sensors and correlation coefficients on error probability in Gaussian noise is also investigated. Simulation results that conform to the theoretical analysis are also presented.
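A minimal sketch of the non-adaptive core of such a fusion rule, for the simplified case of independent sensors with known operating characteristics. The paper's algorithm instead estimates these conditional probabilities adaptively and handles correlated decisions; all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sensor operating characteristics (assumed known here; the
# adaptive algorithm would estimate these by reinforcement learning).
pd = np.array([0.8, 0.7, 0.9])   # per-sensor detection probabilities
pf = np.array([0.1, 0.2, 0.05])  # per-sensor false-alarm probabilities
p1 = 0.5                         # prior P(H1)

def fuse(u):
    """Log-likelihood-ratio fusion of binary local decisions u in {0,1}^3:
    a linear combination of per-sensor log probability ratios (sensor
    independence is assumed in this sketch)."""
    llr = np.where(u == 1, np.log(pd / pf), np.log((1 - pd) / (1 - pf)))
    return float(llr.sum() + np.log(p1 / (1 - p1)))

# Monte Carlo estimate of the fused error probability (decide H1 if > 0).
n = 5000
errors = 0
for _ in range(n):
    h1 = rng.random() < p1
    u = (rng.random(3) < (pd if h1 else pf)).astype(int)
    errors += (fuse(u) > 0) != h1
error_rate = errors / n
```

For these parameters the fused error probability is well below that of any single sensor, which is the point of fusing the local decisions.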

13.
李亚娟 (Li Yajuan), 《红外与激光工程》 (Infrared and Laser Engineering), 2021, 50(8): 20210138-1-20210138-8
A sparse representation-based classification (SRC) method combining multiple decision criteria is proposed and applied to synthetic aperture radar (SAR) target recognition. Traditional SRC reconstructs the test sample over a global dictionary, computes the reconstruction error of each training class for the test sample, and makes the final classification decision by the minimum-reconstruction-error rule. However, owing to the complexity of the SAR target recognition problem, a single decision criterion often adapts poorly to extended operating conditions, degrading overall performance. To address this, the paper classifies the test sample using three criteria based on the coefficient vector obtained from the sparse representation solution: minimum reconstruction error, maximum coefficient energy, and local minimum reconstruction error. The minimum-reconstruction-error criterion directly follows the traditional algorithm. The maximum-coefficient-energy criterion computes the coefficient energy of each training class and decides by the maximum-energy rule. The local minimum-reconstruction-error criterion represents and analyzes the test sample on a local dictionary, fully reflecting the aspect-angle sensitivity of SAR images. The decision variables obtained by the three criteria are converted into a unified probability-distribution form. Finally, the results of the three criteria are combined by linear weighted fusion to decide the target class of the test sample. The method was tested on the MSTAR dataset, verifying its performance under standard operating conditions, depression angle differences, noise corruption, and target occlusion. Experimental results show that the proposed method effectively improves SAR target recognition performance by combining multiple decision criteria.
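A heavily simplified sketch of the minimum-reconstruction-error criterion, the first of the three criteria above, on invented toy data. Plain least squares stands in for the sparse-coding step of full SRC, and the other two criteria and the weighted fusion are omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical toy data: each class clusters around a low-dimensional basis.
dim, atoms = 20, 15
bases = [rng.normal(size=(dim, 3)) for _ in range(3)]

def draw(c, n):
    # class-c samples: basis combination plus small noise
    return bases[c] @ rng.normal(size=(3, n)) + 0.1 * rng.normal(size=(dim, n))

dicts = [draw(c, atoms) for c in range(3)]   # per-class sub-dictionaries

def residual(x, D):
    # Reconstruction error of x on sub-dictionary D; least squares stands
    # in for the sparse-coding step of full SRC.
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return float(np.linalg.norm(x - D @ coef))

def classify(x):
    # minimum-reconstruction-error rule over the class sub-dictionaries
    return int(np.argmin([residual(x, D) for D in dicts]))

tests = [(c, draw(c, 1)[:, 0]) for c in range(3) for _ in range(20)]
acc = float(np.mean([classify(x) == c for c, x in tests]))
```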

14.
An SAR edge detection algorithm based on an ROC fusion criterion (cited by 2: 0 self-citations, 2 by others)
Exploiting the ability of ROC (receiver operating characteristics) analysis to evaluate the overall performance of a classifier over all possible operating thresholds, an ROC fusion criterion is established that combines correlation analysis of edge pixels with ROC classification decisions. Multiple SAR edge detection operators are combined under this criterion to obtain an "ideal" edge detection result for synthetic aperture radar (SAR) images. Experimental results show that the proposed method fuses the strengths of multiple edge detection operators, has strong openness and target adaptability, requires no manually set thresholds, and offers a high degree of automation and strong engineering practicality.

15.
The minimum mean-square error (MMSE) and minimum error entropy (MEE) are two important criteria in the estimation related problems. The MMSE can be viewed as a robust MEE criterion in the minimax sense, as its minimization is equivalent to minimizing an upper bound (the maximum value) of the error entropy. This note gives a new and more meaningful interpretation on the robustness of MMSE for problems in which there exists uncertainty in the probability model. It is shown that the MMSE estimator imposes an upper bound on error entropy for the true model. The upper bound consists of two terms. The first term quantifies the “MMSE performance” under nominal conditions, and the second term measures the “distance” between the true and nominal models. This robustness property is parallel to that of the risk-sensitive estimation. Illustration examples are included to confirm the robustness of MMSE.

16.
Bluetooth is an open specification for a technology to enable short-range wireless communications that operate in an ad hoc fashion. Bluetooth uses frequency hopping with a slot length of 625 μs. Each slot corresponds to a packet, and multi-slot packets of three or five slots can be transmitted to enhance transmission efficiency. However, the use of multi-slot packets may degrade transmission performance under high channel error probability. Thus, the length of a multi-slot packet should be adjusted according to the current channel condition. The segmentation and reassembly (SAR) operation of Bluetooth enables the adjustment of the multi-slot length. In this paper, we propose an efficient multi-slot transmission scheme that adaptively determines the optimal length of slots of a packet according to the channel error probability. We first discuss the throughput of a Bluetooth connection as a function of the multi-slot length and the channel error probability. A decision criterion which gives the optimal length of the multi-slot packet is presented under the assumption that the channel error probability is known. For implementation in a real Bluetooth system, the channel error probability is estimated with the maximum likelihood estimator (MLE). A simple decision rule for the optimal multi-slot length is developed to maximize the throughput. Simulation experiments show that the proposed decision rule for multi-slot transmission effectively provides the maximum throughput under any type of channel error correlation. Copyright © 2005 John Wiley & Sons, Ltd.
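The throughput-versus-slot-length trade-off can be sketched with a simple model. This is a sketch, not the paper's exact formulation: it assumes independent bit errors, retransmission on any payload bit error, the standard DH payload sizes (27/183/339 bytes for 1/3/5 slots), and a 1-slot return packet after each forward packet.

```python
# Payload capacity of 1-, 3-, and 5-slot DH packets, in bytes.
payload_bytes = {1: 27, 3: 183, 5: 339}

def throughput(slots, ber):
    """Expected delivered payload bytes per slot, counting the 1-slot
    return packet that follows each forward packet."""
    bits = payload_bytes[slots] * 8
    p_success = (1.0 - ber) ** bits      # whole packet must be error-free
    return payload_bytes[slots] * p_success / (slots + 1)

def best_slots(ber):
    # decision rule: pick the slot length maximizing expected throughput
    return max(payload_bytes, key=lambda s: throughput(s, ber))
```

Under this model the optimal packet length shrinks as the bit error rate rises: 5-slot packets win on clean channels, 1-slot packets on noisy ones, which is the behavior the adaptive scheme exploits.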

17.
In this paper, the power allocation problem in a wireless sensor network (WSN) with binary distributed detection is considered. It is assumed that the sensors independently transmit their local decisions to a fusion center (FC) through a slow-fading orthogonal multiple access channel (OMAC), where, in every channel, the interference from other devices is considered as correlated noise. In this channel, the associated power allocation optimization problem with an equal power constraint is established between statistical distributions under different hypotheses by using the Jeffrey divergence (J-divergence) as a performance criterion. It is shown that this criterion for the power allocation problem is more efficient compared to other criteria such as mean square error (MSE). Moreover, several numerical simulations and examples are presented to illustrate the effectiveness of the proposed approach.

18.
Any population of components produced might be composed of two sub-populations: weak components are less reliable and deteriorate faster, whereas strong components are more reliable and deteriorate slower. When selecting an approach to classifying the two sub-populations, one could build a criterion aiming to minimize the expected mis-classification cost due to mis-classifying weak (strong) components as strong (weak). However, in practice, the unit mis-classification cost, such as the cost of mis-classifying a strong component as weak, cannot be estimated precisely, and minimizing the expected mis-classification cost becomes more difficult. This problem is considered in this paper by using ROC (Receiver Operating Characteristic) analysis, which is widely used in the medical decision making community to evaluate the performance of diagnostic tests, and in machine learning to select among categorical models. The paper also uses ROC analysis to determine the optimal time for burn-in to remove the weak population. The presented approaches can be used for scenarios in which the following information cannot be estimated precisely: 1) the life distributions of the sub-populations, 2) the mis-classification cost, and 3) the proportions of the sub-populations in the entire population.
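When the costs and proportions *can* be estimated, the underlying cost-minimization step is straightforward. A minimal numeric sketch of choosing the ROC operating point, here a threshold on a 1-D degradation score, that minimizes expected mis-classification cost; all distributions, costs, and proportions are illustrative, and the paper's contribution is precisely handling the case where they are imprecise:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sub-populations separated by a degradation score.
weak = rng.normal(2.0, 1.0, 4000)      # weak components score higher
strong = rng.normal(0.0, 1.0, 4000)

p_weak = 0.1    # proportion of weak components in the population
c_ws = 5.0      # cost of mis-classifying a weak component as strong
c_sw = 1.0      # cost of mis-classifying a strong component as weak

def expected_cost(t):
    """Expected mis-classification cost at threshold t (score >= t -> weak)."""
    weak_as_strong = np.mean(weak < t)
    strong_as_weak = np.mean(strong >= t)
    return p_weak * c_ws * weak_as_strong + (1 - p_weak) * c_sw * strong_as_weak

# Sweep the ROC operating points and keep the cheapest one.
thresholds = np.linspace(-3.0, 5.0, 400)
costs = [expected_cost(t) for t in thresholds]
t_star = float(thresholds[int(np.argmin(costs))])
```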

19.
We express the performance of the N-class "guessing" observer in terms of the N²-N conditional probabilities which make up an N-class receiver operating characteristic (ROC) space, in a formulation in which sensitivities are eliminated in constructing the ROC space (equivalent to using false-negative fraction and false-positive fraction in a two-class task). We then show that the "guessing" observer's performance in terms of these conditional probabilities is completely described by a degenerate hypersurface with only N-1 degrees of freedom (as opposed to the N²-N-1 required, in general, to achieve a true hypersurface in such a ROC space). It readily follows that the hypervolume under such a degenerate hypersurface must be zero when N > 2. We then consider a "near-guessing" task; that is, a task in which the N underlying data probability density functions (pdfs) are nearly identical, controlled by N-1 parameters which may vary continuously to zero (at which point the pdfs become identical). With this approach, we show that the hypervolume under the ROC hypersurface of an observer in an N-class classification task tends continuously to zero as the underlying data pdfs converge continuously to identity (a "guessing" task). The hypervolume under the ROC hypersurface of a "perfect" ideal observer (in a task in which the N data pdfs never overlap) is also found to be zero in the ROC space formulation under consideration. This suggests that hypervolume may not be a useful performance metric in N-class classification tasks for N > 2, despite the utility of the area under the ROC curve for two-class tasks.

20.
When the auxiliary vector (AV) filter generation algorithm utilizes sample average estimated input data statistics, it provides a sequence of estimates of the ideal minimum mean-square error or minimum-variance distortionless-response filter for the given signal processing/receiver design application. Evidently, early nonasymptotic elements of the sequence offer favorable bias/variance balance characteristics and outperform, in mean-square filter estimation error, the unbiased sample matrix inversion (SMI) estimator as well as the (constrained) least-mean-square, recursive least-squares, "multistage nested Wiener filter," and diagonally-loaded SMI filter estimators. Selecting the most successful (in some appropriate sense) AV filter estimator in the sequence for a given data record is a critical problem that has not been addressed so far. We deal exactly with this problem and propose two data-driven selection criteria. The first criterion minimizes the cross-validated sample average variance of the AV filter output and can be applied to general filter estimation problems; the second criterion maximizes the estimated J-divergence of the AV filter output conditional distributions and is tailored to binary phase-shift-keying-type detection problems.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号