Similar Documents
20 similar documents found.
1.
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not hold for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

2.
Receiver operating characteristic (ROC) analysis is well established in the evaluation of systems involving binary classification tasks. However, medical tests often require distinguishing among more than two diagnostic alternatives. The goal of this work was to develop an ROC analysis method for three-class classification tasks. Based on decision theory, we developed a method for three-class ROC analysis. In this method, the objects were classified by making the decision that provided the maximal utility relative to the other two. By making assumptions about the magnitudes of the relative utilities of incorrect decisions, we found a decision model that maximized the expected utility of the decisions when using log-likelihood ratios as decision variables. This decision model consists of a two-dimensional decision plane with log-likelihood ratios as the axes and a decision structure that separates the plane into three regions. Moving the decision structure over the decision plane, which corresponds to moving the decision threshold in two-class ROC analysis, and computing the true class 1, 2, and 3 fractions defined a three-class ROC surface. We have shown that the resulting three-class ROC surface shares many features with the two-class ROC curve; i.e., using the log-likelihood ratios as the decision variables results in maximal expected utility of the decisions, and the optimal operating point for a given diagnostic setting (set of relative utilities and disease prevalences) lies on the surface. The volume under the three-class surface (VUS) serves as a figure-of-merit to evaluate different data acquisition systems or image processing and reconstruction methods when the assumed utility constraints are relevant.

3.
Previously, we have proposed a method for three-class receiver operating characteristic (ROC) analysis based on decision theory. In this method, the volume under a three-class ROC surface (VUS) serves as a figure-of-merit (FOM) and measures three-class task performance. The proposed three-class ROC analysis method was demonstrated to be optimal under decision theory according to several decision criteria. Further, an optimal three-class linear observer was proposed to simultaneously maximize the signal-to-noise ratio (SNR) between the test statistics of each pair of the classes, provided a certain data linearity condition holds. Applicability of this three-class ROC analysis method would be further enhanced by the development of an intuitive meaning of the VUS and a more general method to calculate the VUS that provides an estimate of its standard error. In this paper, we investigated the general meaning and usage of the VUS as a FOM for three-class classification task performance. We showed that the VUS value, which is obtained from a rating procedure, equals the percent correct in a corresponding categorization procedure for continuous rating data. The significance of this relationship goes beyond providing another theoretical basis for three-class ROC analysis: it enables statistical analysis of the VUS value. Based on this relationship, we developed and tested algorithms for calculating the VUS and its variance. Finally, we reviewed the current status of the proposed three-class ROC analysis methodology, and concluded that it extends and unifies the decision theoretic, linear discriminant analysis, and psychophysical foundations of binary ROC analysis in a three-class paradigm.
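As a rough illustration of the "VUS equals percent correct" relationship, the sketch below estimates the VUS empirically under the simplifying assumption of a single ordinal rating variable (rather than the paper's two-dimensional log-likelihood-ratio plane): the VUS then reduces to the probability that three cases, one drawn from each class, are ranked in the correct order, and the empirical estimate is simply the fraction of correctly ordered triplets. The rating distributions below are illustrative, not taken from the paper.

import numpy as np

def empirical_vus(r1, r2, r3):
    """Fraction of (class-1, class-2, class-3) triplets with r1 < r2 < r3.

    For a single ordinal rating, this equals the percent correct of an
    observer asked to assign one case from each class to its class.
    """
    r1, r2, r3 = map(np.asarray, (r1, r2, r3))
    # Broadcast over all n1*n2*n3 triplets and count correct orderings.
    correct = (r1[:, None, None] < r2[None, :, None]) & \
              (r2[None, :, None] < r3[None, None, :])
    return correct.mean()

# Illustrative ratings: three classes with increasing mean rating.
rng = np.random.default_rng(0)
r1 = rng.normal(0.0, 1.0, 200)
r2 = rng.normal(1.0, 1.0, 200)
r3 = rng.normal(2.0, 1.0, 200)
print("empirical VUS:", empirical_vus(r1, r2, r3))   # chance level is 1/6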

4.
The likelihood ratio, or ideal observer, decision rule is known to be optimal for two-class classification tasks in the sense that it maximizes expected utility (or, equivalently, minimizes the Bayes risk). Furthermore, using this decision rule yields a receiver operating characteristic (ROC) curve which is never above the ROC curve produced using any other decision rule, provided the observer's misclassification rate with respect to one of the two classes is chosen as the dependent variable for the curve (i.e., an "inversion" of the more common formulation in which the observer's true-positive fraction is plotted against its false-positive fraction). It is also known that for a decision task requiring classification of observations into N classes, optimal performance in the expected utility sense is obtained using a set of N-1 likelihood ratios as decision variables. In the N-class extension of ROC analysis, the ideal observer performance is describable in terms of an (N^2-N-1)-parameter hypersurface in an (N^2-N)-dimensional probability space. We show that the result for two classes holds in this case as well, namely that the ROC hypersurface obtained using the ideal observer decision rule is never above the ROC hypersurface obtained using any other decision rule (where in our formulation performance is given exclusively with respect to between-class error rates rather than within-class sensitivities).
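A minimal sketch of the expected-utility (ideal observer) decision rule referred to above, assuming hypothetical one-dimensional Gaussian class-conditional densities and a 0/1 utility matrix; all parameter values are assumptions for illustration, not from the paper.

import numpy as np
from scipy.stats import norm

# Hypothetical 1-D class-conditional densities, priors, and utilities.
means, sigma = [0.0, 1.5, 3.0], 1.0          # three classes, equal variance
priors = np.array([0.5, 0.3, 0.2])
U = np.array([[1.0, 0.0, 0.0],                # U[i, j]: utility of deciding j
              [0.0, 1.0, 0.0],                # when the truth is i (0/1 here,
              [0.0, 0.0, 1.0]])               # i.e. "maximum correctness")

def ideal_observer_decision(x):
    # Likelihoods p(x | class i); the argmax is unchanged if every term is
    # divided by one reference likelihood, i.e. the decision depends on x
    # only through N-1 likelihood ratios.
    like = np.array([norm.pdf(x, m, sigma) for m in means])
    expected_utility = U.T @ (priors * like)   # one entry per candidate decision
    return int(np.argmax(expected_utility))

print([ideal_observer_decision(x) for x in (-1.0, 1.4, 4.0)])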

5.
6.
The diagnosis of cardiac disease using dual-isotope myocardial perfusion SPECT (MPS) is based on the defect status in both stress and rest images, and can be modeled as a three-class task of classifying patients as having no, reversible, or fixed perfusion defects. Simultaneous acquisition protocols for dual-isotope MPS imaging have gained much interest due to their advantages, including perfect registration of the Tl-201 and Tc-99m images in space and time, increased patient comfort, and higher clinical throughput. As a result of simultaneous acquisition, however, crosstalk contamination, where photons emitted by one isotope contribute to the image of the other isotope, degrades image quality. Minimizing the crosstalk is important in obtaining the best possible image quality. One way to minimize the crosstalk is to optimize the injected activity of the two isotopes by considering the three-class nature of the diagnostic problem. To do so effectively, we have previously developed a three-class receiver operating characteristic (ROC) analysis methodology that extends and unifies the decision theoretic, linear discriminant analysis, and psychophysical foundations of binary ROC analysis in a three-class paradigm. In this work, we applied the proposed three-class ROC methodology to the assessment of the image quality of simultaneous dual-isotope MPS imaging techniques and the determination of the optimal injected activity combination. Beyond this application, the rapid development of diagnostic imaging techniques has produced an increasing number of clinical diagnostic tasks that involve not only disease detection but also disease characterization, and that are thus multiclass tasks. This paper provides a practical example of the application of the proposed three-class ROC analysis methodology to medical problems.

7.
Classification of a given observation into one of three classes is an important task in many decision processes or pattern recognition applications. A general analysis of the performance of three-class classifiers results in a complex 6-D receiver operating characteristic (ROC) space, for which no simple analytical tool exists at present. We investigate the performance of an ideal observer under a specific set of assumptions that reduces the 6-D ROC space to 3-D by constraining the utilities of some of the decisions in the classification task. These assumptions lead to a 3-D ROC space in which the true-positive fraction (TPF) can be expressed in terms of the two types of false-positive fractions (FPFs). We demonstrate that the TPF is uniquely determined by, and therefore is a function of, the two FPFs. The domain of this function is shown to be related to the decision boundaries in the likelihood ratio plane. Based on these properties of the 3-D ROC space, we can define a summary measure, referred to as the normalized volume under the surface (NVUS), that is analogous to the area under the ROC curve (AUC) for a two-class classifier. We further investigate the properties of the 3-D ROC surface and the NVUS for the ideal observer under the condition that the three class distributions are multivariate normal with equal covariance matrices. The probability density functions (pdfs) of the decision variables are shown to follow a bivariate log-normal distribution. By considering these pdfs, we express the TPF in terms of the FPFs, and integrate the TPF over its domain numerically to obtain the NVUS. In addition, we performed a Monte Carlo simulation study, in which the 3-D ROC surface was generated by empirical "optimal" classification of case samples in the multidimensional feature space following the assumed distributions, to obtain an independent estimate of NVUS. The NVUS value obtained by using the analytical pdfs was found to be in good agreement with that obtained from the Monte Carlo simulation study. We also found that, under all conditions studied, the NVUS increased when the difficulty of the classification task was reduced by changing the parameters of the class distributions, thereby exhibiting the properties of a performance metric analogous to the AUC. Our results indicate that, under the conditions that lead to our 3-D ROC analysis, the performance of a three-class classifier may be analyzed by considering the ROC surface, and its accuracy characterized by the NVUS.

8.
A centralized cooperative spectrum sensing algorithm is proposed. Each cognitive node applies energy detection and then makes a local decision using the maximum likelihood criterion, with the likelihood ratio serving as a measure of the reliability of that local decision; the fusion center combines the local sensing data received from the cognitive nodes according to their credibility. Simulation results show that the proposed cooperative spectrum sensing scheme reduces the probability of detection error. In particular, when the channel is in deep fading, good detection performance can be obtained with only a small number of cooperating nodes.
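A hedged sketch of this kind of scheme, not the paper's exact formulation: each cognitive node forms an energy statistic, makes a local maximum likelihood decision, and reports its log-likelihood ratio as a reliability (credibility) weight; the fusion center sums the reported log-likelihood ratios. The Gaussian approximation of the energy statistic and all parameter values are assumptions for illustration.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 64                      # samples per sensing interval (assumed)
sigma2 = 1.0                # noise power (assumed known)
snr = 0.5                   # received primary-signal SNR at each node (assumed)

def node_report(signal_present):
    """Energy detection at one node: local ML decision plus LLR reliability."""
    power = sigma2 * (1 + snr if signal_present else 1.0)
    x = rng.normal(0, np.sqrt(power), N)
    T = np.sum(x**2)
    # Gaussian approximation of the energy statistic under H0 and H1.
    m0, s0 = N * sigma2, np.sqrt(2 * N) * sigma2
    m1, s1 = N * sigma2 * (1 + snr), np.sqrt(2 * N) * sigma2 * (1 + snr)
    llr = norm.logpdf(T, m1, s1) - norm.logpdf(T, m0, s0)
    return int(llr > 0), llr          # (local ML decision, reliability)

def fusion_center(reports):
    """Credibility-weighted fusion: declare H1 if the summed LLRs exceed 0."""
    return int(sum(llr for _, llr in reports) > 0)

reports = [node_report(signal_present=True) for _ in range(5)]
print("local decisions:", [d for d, _ in reports], "fused:", fusion_center(reports))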

9.
We are attempting to develop expressions for the coordinates of points on the three-class ideal observer's receiver operating characteristic (ROC) hypersurface as functions of the set of decision criteria used by the ideal observer. This is considerably more difficult than in the two-class classification task, because the conditional probabilities in question are not simply related to the cumulative distribution functions of the decision variables, and because the slopes and intercepts of the decision boundary lines are not independent; given the locations of two of the lines, the location of the third is constrained. In this paper, we attempt to characterize those constraining relationships among the three-class ideal observer's decision boundary lines. As a result, we show that the relationship between the decision criteria and the misclassification probabilities is not one-to-one, as it is for the two-class ideal observer.

10.
11.
Adaptive fusion of correlated local decisions
An adaptive fusion algorithm is proposed for an environment where the observations and local decisions are dependent from one sensor to another. An optimal decision rule, based on the maximum a posteriori (MAP) detection criterion for such an environment, is derived and compared to the adaptive approach. In the algorithm, the log-likelihood ratio function can be expressed as a linear combination of ratios of conditional probabilities and local decisions. The estimates of the conditional probabilities are adapted by reinforcement learning. The error probability at steady state is analyzed theoretically and, in some cases, found to be equal to the error probability obtained by the optimal fusion rule. The effect of the number of sensors and correlation coefficients on the error probability in Gaussian noise is also investigated. Simulation results that conform to the theoretical analysis are also presented.
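The linear-combination form of the fused log-likelihood ratio is easiest to see in the special case of conditionally independent local decisions (the classical Chair-Varshney weights), sketched below; the paper's contribution, estimating the required conditional probabilities adaptively by reinforcement learning when the decisions are correlated, is not reproduced here. The operating points and priors are illustrative.

import numpy as np

def fused_llr(u, pd, pf):
    """Log-likelihood ratio of a vector of local decisions u (1 = 'target').

    Assumes conditionally independent sensors with detection probabilities
    pd and false-alarm probabilities pf (Chair-Varshney form).
    """
    u, pd, pf = map(np.asarray, (u, pd, pf))
    return np.sum(np.where(u == 1,
                           np.log(pd / pf),
                           np.log((1 - pd) / (1 - pf))))

# Three sensors with assumed operating points; MAP test against log(P0/P1).
u  = np.array([1, 0, 1])
pd = np.array([0.90, 0.80, 0.85])
pf = np.array([0.10, 0.20, 0.05])
p0, p1 = 0.5, 0.5
decision = int(fused_llr(u, pd, pf) > np.log(p0 / p1))
print("fused decision:", decision)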

12.
Bluetooth is an open specification for a technology to enable short-range wireless communications that operate in an ad hoc fashion. Bluetooth uses frequency hopping with a slot length of 625 μs. Each slot corresponds to a packet, and multi-slot packets of three or five slots can be transmitted to enhance transmission efficiency. However, the use of multi-slot packets may degrade transmission performance under high channel error probability. Thus, the multi-slot length should be adjusted according to the current channel condition. The segmentation and reassembly (SAR) operation of Bluetooth enables adjustment of the multi-slot length. In this paper, we propose an efficient multi-slot transmission scheme that adaptively determines the optimal number of slots per packet according to the channel error probability. We first discuss the throughput of a Bluetooth connection as a function of the multi-slot length and the channel error probability. A decision criterion that gives the optimal multi-slot length is presented under the assumption that the channel error probability is known. For implementation in a real Bluetooth system, the channel error probability is estimated with the maximum likelihood estimator (MLE). A simple decision rule for the optimal multi-slot length is developed to maximize the throughput. Simulation experiments show that the proposed decision rule for multi-slot transmission effectively provides the maximum throughput under any type of channel error correlation. Copyright © 2005 John Wiley & Sons, Ltd.
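A simplified sketch of the kind of decision rule described: estimate the bit error probability with its MLE (bit errors divided by bits observed), treat a packet as lost if any payload bit is in error, and choose the packet type whose expected throughput per occupied slot is largest. The DH1/DH3/DH5 payload sizes are the usual Bluetooth values but should be read as assumptions here, and the throughput model deliberately ignores retransmissions, FEC, and the return slot.

# Candidate Bluetooth DH packets: (name, payload bytes, slots occupied).
PACKETS = [("DH1", 27, 1), ("DH3", 183, 3), ("DH5", 339, 5)]

def mle_ber(bit_errors, bits_observed):
    """Maximum likelihood estimate of the channel bit error probability."""
    return bit_errors / bits_observed

def best_packet(ber, slot_us=625.0):
    """Pick the multi-slot length maximizing expected throughput (bit/s)."""
    def throughput(payload_bytes, slots):
        bits = 8 * payload_bytes
        p_success = (1.0 - ber) ** bits          # packet lost on any bit error
        return bits * p_success / (slots * slot_us * 1e-6)
    return max(PACKETS, key=lambda p: throughput(p[1], p[2]))

for ber in (1e-5, 3e-4, 5e-3):
    name, payload, slots = best_packet(ber)
    print(f"BER={ber:g}: use {name} ({payload} bytes, {slots} slot(s))")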

13.
We consider the classical problem of fitting a model composed of multiple superimposed signals to noisy data using the criteria of maximum likelihood (ML) or subspace fitting, jointly termed generalized subspace fitting (GSF). We analyze a previously proposed approximate dynamic programming algorithm (ADP), which provides a computationally efficient solution to the associated multidimensional multimodal optimization problem. We quantify the error introduced by the approximations in ADP and deviations from the key local interaction signal model (LISMO) modeling assumption in two ways. First, we upper bound the difference between the exact minimum of the GSF criterion and its value at the ADP estimate and compare the ADP with GSF estimates obtained by exhaustive multidimensional search on a fine lattice. Second, motivated by the similar accuracy bounds, we use perturbation analysis to derive approximate expressions for the MSE of the ADP estimates. These various results provide, for the first time, an effective tool to predict the performance of the ADP algorithm for various signal models at nonasymptotic conditions of interest in practical applications. In particular, they demonstrate that for the classical problems of sinusoid retrieval and array processing, ADP performs comparably to exact (but expensive) ML over a wide range of signal-to-noise ratios (SNRs) and is therefore an attractive algorithm.

14.
15.
For the 2-class detection problem (signal absent/present), the likelihood ratio is an ideal observer in that it minimizes Bayes risk for arbitrary costs and it maximizes the area under the receiver operating characteristic (ROC) curve (AUC). The AUC-optimizing property makes it a valuable tool in imaging system optimization. If one considered a different task, namely, joint detection and localization of the signal, then it would be similarly valuable to have a decision strategy that optimized a relevant scalar figure of merit. We are interested in quantifying performance on decision tasks involving location uncertainty using the localization ROC (LROC) methodology. Therefore, we derive decision strategies that maximize the area under the LROC curve, A(LROC). We show that these decision strategies minimize Bayes risk under certain reasonable cost constraints. The detection-localization task is modeled as a decision problem in three increasingly realistic ways. In the first two models, we treat location as a discrete parameter having finitely many values, resulting in an (L + 1)-class classification problem. In our first simple model, we do not include search tolerance effects and in the second, more general, model, we do. In the third and most general model, we treat location as a continuous parameter and also include search tolerance effects. In all cases, the essential proof that the observer maximizes A(LROC) is obtained with a modified version of the Neyman-Pearson lemma. A separate form of proof is used to show that in all three cases, the decision strategy minimizes the Bayes risk under certain reasonable cost constraints.

16.
This paper considers the problem of phase recovery of QPSK signals with phase-locked loops (PLLs) using two new phase error detectors (PEDs) which are designed to minimize the probability of making a symbol decision error. The new PEDs resemble the PED derived from the maximum likelihood criterion. However, the new PEDs penalize matched filter outputs that are close to decision region boundaries. This penalty gives rise to faster converging PLLs relative to PLLs using the maximum likelihood PED, as shown in simulation results. The S-curves for the PEDs are used as a tool to gain insight into the behavior of the new PEDs.
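For reference, the sketch below implements the baseline maximum-likelihood-style decision-directed PED for QPSK and numerically estimates its S-curve (average detector output versus phase offset), the analysis tool mentioned in the abstract; the paper's boundary-penalizing detectors are not reproduced, and the SNR and symbol counts are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy symbols

def ml_ped(y):
    """Maximum-likelihood-style decision-directed PED: Im{y * conj(a_hat)}."""
    a_hat = QPSK[np.argmin(np.abs(y - QPSK))]
    return np.imag(y * np.conj(a_hat))

def s_curve(ped, phase_offsets, snr_db=10.0, n_sym=5000):
    """Average PED output versus phase offset (the S-curve used for analysis)."""
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))   # per-dimension noise std
    out = []
    for phi in phase_offsets:
        a = QPSK[rng.integers(0, 4, n_sym)]
        noise = sigma * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
        y = a * np.exp(1j * phi) + noise
        out.append(np.mean([ped(yk) for yk in y]))
    return np.array(out)

phis = np.linspace(-np.pi / 4, np.pi / 4, 9)
print(np.round(s_curve(ml_ped, phis), 3))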

17.
Any population of components produced might be composed of two sub-populations: weak components are less reliable and deteriorate faster, whereas strong components are more reliable and deteriorate slower. When selecting an approach to classifying the two sub-populations, one could build a criterion aiming to minimize the expected mis-classification cost due to mis-classifying weak (strong) components as strong (weak). However, in practice, the unit mis-classification cost, such as the cost of mis-classifying a strong component as weak, cannot be estimated precisely, which makes minimizing the expected mis-classification cost more difficult. This problem is considered in this paper by using ROC (Receiver Operating Characteristic) analysis, which is widely used in the medical decision making community to evaluate the performance of diagnostic tests, and in machine learning to select among categorical models. The paper also uses ROC analysis to determine the optimal time for burn-in to remove the weak population. The presented approaches can be used for scenarios in which the following information cannot be estimated precisely: 1) life distributions of the sub-populations, 2) mis-classification cost, and 3) proportions of sub-populations in the entire population.
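A hedged sketch of how ROC analysis can drive the burn-in decision: a component that fails by the burn-in time t is classified as weak, so the true- and false-positive fractions at t are the weak and strong sub-population life CDFs; sweeping t traces an ROC curve, and t can then be chosen to minimize an assumed expected misclassification cost. The exponential lifetime models, costs, and weak-population proportion below are illustrative assumptions, not values from the paper.

import numpy as np

# Assumed sub-population lifetime models (hours): weak fails much sooner.
weak_rate, strong_rate = 1 / 50.0, 1 / 2000.0
p_weak = 0.10                       # assumed proportion of weak components
c_ws, c_sw = 10.0, 1.0              # costs: weak->strong / strong->weak errors

def tpf(t):                         # P(fail by t | weak)
    return 1.0 - np.exp(-weak_rate * t)

def fpf(t):                         # P(fail by t | strong)
    return 1.0 - np.exp(-strong_rate * t)

def expected_cost(t):
    miss_weak = p_weak * (1.0 - tpf(t)) * c_ws          # weak survives burn-in
    reject_strong = (1.0 - p_weak) * fpf(t) * c_sw      # strong fails burn-in
    return miss_weak + reject_strong

ts = np.linspace(1.0, 500.0, 500)
best_t = ts[np.argmin([expected_cost(t) for t in ts])]
print(f"burn-in time minimizing expected cost: {best_t:.0f} h "
      f"(TPF={tpf(best_t):.2f}, FPF={fpf(best_t):.3f})")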

18.
We present an iterative method for joint channel parameter estimation and symbol selection via the Baum-Welch algorithm, or equivalently the Expectation-Maximization (EM) algorithm. Channel parameters, including the noise variance, are estimated using a maximum likelihood criterion. The Markovian properties of the channel state sequence enable us to calculate the required likelihood using a forward-backward algorithm. The calculated likelihood functions readily yield optimum decisions on the information symbols that minimize the symbol error probability. The proposed receiver can be used for both linear and nonlinear channels. It improves system throughput by reducing the transmission of known symbols, which are usually employed for channel identification. Simulation results showing fast convergence are presented.
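A compact sketch of the forward-backward (E-step) computation such a receiver relies on, for a small two-state channel chain with Gaussian emissions: it returns per-symbol posterior probabilities whose argmax gives the decisions minimizing symbol error probability. Here the channel parameters are fixed by assumption rather than re-estimated, so the Baum-Welch/EM iteration itself is omitted; all model values are illustrative.

import numpy as np
from scipy.stats import norm

def forward_backward(obs, A, pi, means, sigma):
    """Scaled forward-backward: posterior state probabilities gamma[t, k]."""
    T, K = len(obs), len(pi)
    B = norm.pdf(obs[:, None], loc=means[None, :], scale=sigma)   # emissions
    alpha = np.zeros((T, K)); beta = np.ones((T, K)); c = np.zeros(T)
    alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Two equiprobable channel states emitting around +1 / -1 (assumed model).
A = np.array([[0.95, 0.05], [0.05, 0.95]])
pi = np.array([0.5, 0.5]); means = np.array([1.0, -1.0]); sigma = 0.7
rng = np.random.default_rng(3)
states = [0]
for _ in range(49):
    states.append(rng.choice(2, p=A[states[-1]]))
states = np.array(states)
obs = means[states] + sigma * rng.standard_normal(50)
gamma = forward_backward(obs, A, pi, means, sigma)
decisions = gamma.argmax(axis=1)          # symbol-by-symbol MAP decisions
print("symbol error rate:", np.mean(decisions != states))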

19.
To address the blind separation of mixed communication signals, and motivated by the bit error rate (BER) performance metric used in communication systems, this paper proposes a blind source separation algorithm based on a minimum-BER criterion. The basic idea is to combine the derived minimum-BER criterion with the maximum likelihood principle to build a blind source separation cost function, yielding a minimum-BER-constrained cost function; this cost function is then minimized by natural gradient descent to achieve blind source separation. Simulation analysis shows that the blind source separation algorithm obtained from the proposed minimum-BER-constrained cost function achieves better convergence and separation accuracy than the algorithm obtained from the original maximum-likelihood cost function.
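A minimal sketch of the maximum-likelihood, natural-gradient separation update that such an algorithm builds on; the additional minimum-BER constraint term is not reproduced. The tanh score function assumes roughly super-Gaussian sources, and the Laplacian test sources, mixing matrix, and step size are illustrative choices.

import numpy as np

def natural_gradient_bss(X, mu=0.05, n_iter=500):
    """ML blind source separation via the natural-gradient update
    W <- W + mu * (I - E[g(y) y^T]) W, with g the source score function."""
    n_src, n_samp = X.shape
    W = np.eye(n_src)
    for _ in range(n_iter):
        y = W @ X
        g = np.tanh(y)                     # score for super-Gaussian sources
        W += mu * (np.eye(n_src) - (g @ y.T) / n_samp) @ W
    return W

# Two illustrative super-Gaussian (Laplacian) sources and a random mixture.
rng = np.random.default_rng(4)
s = rng.laplace(size=(2, 5000))
A = rng.standard_normal((2, 2))
W = natural_gradient_bss(A @ s)
print("W @ A (ideally close to a scaled permutation):\n", np.round(W @ A, 2))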

20.
A new estimation criterion based on the discrepancy between the estimator's error covariance and its information lower bound is proposed. This discrepancy measure criterion tries to take the information content of the observed data into account. A minimum discrepancy estimator (MDE) is then obtained under a linearity assumption. This estimator is shown to be equivalent to the maximum likelihood estimator (MLE), if one assumes that a linear efficient estimator exists and the prior distribution of parameters is uniform. Moreover, it is equivalent to the minimum variance unbiased estimator (MVUE) if the MDE is required to be unbiased. Illustrative examples of the MDE and comparisons with other estimators are given.
