Similar Documents
20 similar documents found (search time: 113 ms)
1.
Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known; e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as the pseudo-gold standard. In this paper, we present a maximum-likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard, with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies are presented and indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x axis.

2.
This paper considers the problem of maximum likelihood (ML) estimation for reduced-rank linear regression equations with noise of arbitrary covariance. The rank-reduced matrix of regression coefficients is parameterized as the product of two full-rank factor matrices. This parameterization is essentially constraint free, but it is not unique, which renders the associated ML estimation problem rather nonstandard. Nevertheless, the problem turns out to be tractable, and the following results are obtained. An explicit expression is derived for the ML estimate of the regression matrix in terms of the data covariances and their eigenelements. Furthermore, a detailed analysis of the statistical properties of the ML parameter estimate is performed. Additionally, a generalized likelihood ratio test (GLRT) is proposed for estimating the rank of the regression matrix. The paper also presents the results of some simulation exercises, which lend empirical support to the theoretical findings.
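The closed-form flavor of this result can be illustrated with a simplified sketch: under an identity noise covariance (the paper's ML estimator handles arbitrary covariance through the data covariances and their eigenelements), the reduced-rank estimate reduces to an OLS fit projected onto the dominant right-singular subspace of the fitted values. The dimensions and synthetic data below are illustrative, not from the paper.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Reduced-rank regression sketch: full-rank OLS fit followed by
    projection onto the top-`rank` right-singular subspace of the fitted
    values.  Assumes identity noise covariance."""
    # Full-rank OLS coefficient matrix
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # SVD of the fitted responses; keep the top-`rank` components
    U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]   # projector onto dominant right-singular subspace
    return B_ols @ P              # rank-reduced coefficient matrix

# Synthetic low-rank ground truth: B = A @ C has rank 2
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2)); C = rng.normal(size=(2, 4))
X = rng.normal(size=(200, 6))
Y = X @ (A @ C) + 0.01 * rng.normal(size=(200, 4))
B_hat = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B_hat, tol=1e-8))
```

With low noise the rank-2 estimate closely recovers the true coefficient matrix while enforcing the rank constraint exactly.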

3.
Presented is a new computer-aided multispectral image-processing method that operates in three spatial dimensions and one spectral dimension, where dynamic contrast-enhanced magnetic resonance parameter maps derived from voxel-wise model fitting represent the spectral dimension. The method is based on co-occurrence analysis using a 3-D window of observation, which enables automated identification of suspicious lesions. The co-occurrence analysis defines 21 different statistical features, a subset of which were input to a neural network classifier, where the voxel-wise majority assessment of a group of radiologist readings was used as the gold standard. The voxel-wise true positive fraction (TPF) and false positive fraction (FPF) results of the computer classifier were statistically indistinguishable from those of the readers using a one-sample paired t-test. To assess the generality of the method, two groups of studies with widely different image-acquisition specifications were used.

4.
We compared four automated methods for hippocampal segmentation using different machine learning algorithms: 1) hierarchical AdaBoost, 2) support vector machines (SVM) with manual feature selection, 3) hierarchical SVM with automated feature selection (Ada-SVM), and 4) a publicly available brain segmentation package (FreeSurfer). We trained our approaches using T1-weighted brain MRIs from 30 subjects [10 normal elderly, 10 mild cognitive impairment (MCI), and 10 Alzheimer's disease (AD)], and tested on an independent set of 40 subjects (20 normal, 20 AD). Manually segmented gold standard hippocampal tracings were available for all subjects (training and testing). We assessed each approach's accuracy relative to manual segmentations, and its power to map AD effects. We then converted the segmentations into parametric surfaces to map disease effects on anatomy. After surface reconstruction, we computed significance maps, and overall corrected p-values, for the 3-D profile of shape differences between AD and normal subjects. Our AdaBoost and Ada-SVM segmentations compared favorably with the manual segmentations and detected disease effects as well as FreeSurfer on the data tested. Cumulative p-value plots, in conjunction with the false discovery rate method, were used to examine the power of each method to detect correlations with diagnosis and cognitive scores. We also evaluated how segmentation accuracy depended on the size of the training set, providing practical information for future users of this technique.

5.
Symmetric noncausal auto-regressive signals (SNARS) arise in several, mostly spatial, signal processing applications. We introduce a subspace fitting approach for parameter estimation of SNARS from noise-corrupted measurements. We show that the subspaces associated with a Hankel matrix built from the data covariances contain enough information to determine the signal parameters in a consistent manner. Based on this result, we propose a multiple signal classification (MUSIC)-like methodology for parameter estimation of SNARS. Compared with the methods previously proposed for SNARS parameter estimation, our SNARS-MUSIC approach is expected to offer a better tradeoff between computational and statistical performance.

6.
The task of segmenting the posterior ribs within the lung fields of standard posteroanterior chest radiographs is considered. To this end, an iterative, pixel-based, supervised, statistical classification method is used, which is called iterated contextual pixel classification (ICPC). Starting from an initial rib segmentation obtained from pixel classification, ICPC updates it by reclassifying every pixel, based on the original features and, additionally, class label information of pixels in the neighborhood of the pixel to be reclassified. The method is evaluated on 30 radiographs taken from the JSRT (Japanese Society of Radiological Technology) database. All posterior ribs within the lung fields in these images have been traced manually by two observers. The first observer's segmentations are set as the gold standard; ICPC is trained using these segmentations. In a sixfold cross-validation experiment, ICPC achieves a classification accuracy of 0.86 +/- 0.06, as compared to 0.94 +/- 0.02 for the second human observer.

7.
Total Productive Maintenance (TPM) is widely used in industry for manufacturing excellence. TPM is based on its eight pillars, and successful implementation of TPM, from kick-off to the final stage, depends on in-depth knowledge of these pillars. The purpose of this paper is to rank the eight pillars of TPM according to their importance with respect to four parameters: Productivity, Cost, Quality, and Delivery in Time, using the Analytic Hierarchy Process (AHP), a multiple-criteria decision-making methodology. A pairwise comparison of TPM pillars is performed with the AHP method, considering a case of automotive industries in India. The ranking of TPM pillars is proposed to set guidelines for deciding the weightage of each pillar in terms of the major factors that improve Overall Equipment Efficiency. This in turn will guide management to give proper preference and to allocate funds to the proper pillar at the proper time. The suggested ranking suits the automotive sector and assembly lines; by varying the judgmental ratings, a new ranking can be obtained from the suggested guidelines on a similar basis.
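The AHP step described above can be sketched numerically: priority weights are the normalized principal eigenvector of a pairwise-comparison matrix, and the consistency ratio (CR) checks the coherence of the judgments. The Saaty-scale judgments below are hypothetical, not the paper's survey data.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the four assessment parameters
# (Productivity, Cost, Quality, Delivery in Time); entries are illustrative.
criteria = ["Productivity", "Cost", "Quality", "Delivery in Time"]
A = np.array([
    [1,   3,   2,   4],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/4, 1/2, 1/3, 1],
])

# AHP priority weights: principal eigenvector, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable); the random
# index RI for a 4x4 matrix is 0.90
lam_max = eigvals.real[k]
CI = (lam_max - 4) / (4 - 1)
CR = CI / 0.90
print(dict(zip(criteria, w.round(3))), round(CR, 3))
```

The same computation extends directly to an 8x8 matrix over the TPM pillars themselves.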

8.
In synthetic aperture radar (SAR) imaging, low scene contrast may degrade the performance of most existing autofocus methods. In this paper, by dividing the slow-time signal into three isolated components, namely target, clutter, and noise, a novel parametric statistical model of the coherent processing interval is proposed for SAR imaging. Based on this model, Cramer-Rao bounds (CRBs) for the estimation of the unknown parameters are derived. It is shown that the CRBs of the target parameter estimation depend strongly on the background, i.e., clutter and noise, whereas the CRBs of the background parameter estimation may be obtained regardless of the target component. Motivated by this result, and using the estimated background parameters, a novel and effective parametric autofocus method is developed that is applicable to any scene contrast. In addition, a preprojection is introduced to simplify the subsequent parameter estimation. Finally, the proposed model and method are illustrated on real SAR data.

9.
This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated with multivariate linear regression using image intensities sampled in a region of interest around an initialization curve. Subsequently, the position of the vessel boundary is refined with a robust nonlinear regression technique, using intensity profiles sampled across the boundary of the rough segmentation and information about plausible cross-sectional vessel shapes. The method was evaluated by quantitatively comparing segmentation results to manual annotations of 229 coronary arteries. On average, the difference between the automatically obtained segmentations and the manual contours was smaller than the inter-observer variability, an indication that the method outperforms manual annotation. The method was also evaluated by using it for centerline refinement on 24 publicly available datasets of the Rotterdam Coronary Artery Evaluation Framework: centerlines extracted with an existing method and refined with the proposed method are currently ranked second out of 10 evaluated interactive centerline extraction methods. An additional qualitative expert evaluation, in which 250 automatic segmentations were compared to manual segmentations, showed that the automatically obtained contours were rated on average better than the manual contours.

10.
This letter presents a new discriminative model for Information Retrieval (IR), referred to as the Ordinal Regression Model (ORM). ORM differs from most existing models in that it views IR as an ordinal regression (i.e., ranking) problem instead of binary classification. Since the task of IR is to rank documents according to the user's information need, IR can naturally be viewed as ordinal regression. Two parameter-learning algorithms for ORM are presented: a perceptron-based algorithm and the ranking Support Vector Machine (SVM). The effectiveness of the proposed approach has been evaluated on the task of ad hoc retrieval using three English Text REtrieval Conference (TREC) sets and two Chinese TREC sets. Results show that ORM significantly outperforms state-of-the-art language-model approaches and the OKAPI system on all test sets, and that it is more appropriate to view IR as ordinal regression rather than binary classification.
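The pairwise principle behind such perceptron-based ranking learners can be sketched as follows: for every same-query document pair with different relevance grades, the weight vector is nudged whenever the more relevant document does not already score higher. This is a minimal sketch of the general pairwise idea, not the paper's exact algorithm; the features, query ids, and relevance grades below are synthetic.

```python
import numpy as np

def ranking_perceptron(X, y, qid, epochs=100, lr=0.1):
    # Pairwise perceptron: for same-query pairs with different relevance,
    # update w whenever the more relevant document does not score higher.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for q in np.unique(qid):
            idx = np.where(qid == q)[0]
            for i in idx:
                for j in idx:
                    if y[i] > y[j] and w @ (X[i] - X[j]) <= 0:
                        w += lr * (X[i] - X[j])
    return w

def pairwise_accuracy(w, X, y, qid):
    correct = total = 0
    for q in np.unique(qid):
        idx = np.where(qid == q)[0]
        for i in idx:
            for j in idx:
                if y[i] > y[j]:
                    total += 1
                    correct += w @ X[i] > w @ X[j]
    return correct / total

# Toy data: 3 queries, graded relevance derived from a hidden linear score
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
X = rng.normal(size=(30, 5))
qid = np.repeat([0, 1, 2], 10)
y = np.digitize(X @ w_true, [-1.0, 1.0])   # relevance grades 0, 1, 2
w = ranking_perceptron(X, y, qid)
print(round(pairwise_accuracy(w, X, y, qid), 2))
```

The ranking SVM variant replaces the perceptron update with a large-margin objective over the same pairwise difference vectors.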

11.
A method for estimating the parameters of nonstationary ionic channel current fluctuations (NST-ICFs) in the presence of additive measurement noise is proposed. The case considered is one in which sample records of corrupted NST-ICFs are available for estimation and the experiment can be repeated many times to calculate the statistics of the noisy NST-ICFs. A conventional second-order regression model expressed in terms of the mean and variance of the noisy NST-ICFs is derived theoretically, assuming that the NST-ICFs are binomially distributed. The parameters of interest can then be estimated without interference from the additive measurement noise by identifying the regression coefficients. The accuracy of the parameter estimates is evaluated theoretically using the error-covariance matrix of the regression coefficients. The validity and effectiveness of the proposed method are demonstrated in a Monte Carlo simulation of Na+ channel kinetics.
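The core of such a second-order regression model can be sketched for the binomial case with additive Gaussian measurement noise: for N independent channels of unitary current i, the ensemble statistics obey var = i·μ − μ²/N + σ², so a quadratic regression of variance on mean recovers i and N without being biased by the additive noise term (which is absorbed into the constant). All parameter values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, i_unit, sigma = 400, 1.5, 0.8        # channels, unitary current, noise SD
# Time-varying open probability over the record (nonstationary)
p_t = 0.02 + 0.6 * np.sin(np.linspace(0, np.pi, 200))**2
# Repeated noisy records: scaled binomial channel current + Gaussian noise
trials = np.array([i_unit * rng.binomial(N, p_t) + rng.normal(0, sigma, p_t.size)
                   for _ in range(2000)])

mu = trials.mean(axis=0)
var = trials.var(axis=0)

# Regress variance on mean: var = c0 + c1*mu + c2*mu^2,
# where c1 = i (unitary current), c2 = -1/N, c0 = sigma^2
c2, c1, c0 = np.polyfit(mu, var, 2)
i_hat, N_hat = c1, -1.0 / c2
print(i_hat, N_hat)
```

The additive noise inflates only the intercept c0, which is why the slope and curvature estimates of i and N are unaffected by it.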

12.
Variable selection is a topic of great importance in high-dimensional statistical modeling and has a wide range of real-world applications. Many variable selection techniques have been proposed in the context of linear regression, and the Lasso model is probably one of the most popular penalized regression techniques. In this paper, we propose a new, fully hierarchical, Bayesian version of the Lasso model by employing flexible sparsity promoting priors. To obtain the Bayesian Lasso estimate, a reversible-jump MCMC algorithm is developed for joint posterior inference over both discrete and continuous parameter spaces. Simulations demonstrate that the proposed RJ-MCMC-based Bayesian Lasso yields smaller estimation errors and more accurate sparsity pattern detection when compared with state-of-the-art optimization-based Lasso-type methods, a standard Gibbs sampler-based Bayesian Lasso and the Binomial-Gaussian prior model. To demonstrate the applicability and estimation stability of the proposed Bayesian Lasso, we examine a benchmark diabetes data set and real functional Magnetic Resonance Imaging data. As an extension of the proposed RJ-MCMC framework, we also develop an MCMC-based algorithm for the Binomial-Gaussian prior model and illustrate its improved performance over the non-Bayesian estimate via simulations.

13.
Visual saliency is a useful clue to depict visually important image/video contents in many multimedia applications. In visual saliency estimation, a feasible solution is to learn a "feature-saliency" mapping model from the user data obtained by manually labeling activities or eye-tracking devices. However, label ambiguities may also arise due to the inaccurate and inadequate user data. To process the noisy training data, we propose a multi-instance learning to rank approach for visual saliency estimation. In our approach, the correlations between various image patches are incorporated into an ordinal regression framework. By iteratively refining a ranking model and relabeling the image patches with respect to their mutual correlations, the label ambiguities can be effectively removed from the training data. Consequently, visual saliency can be effectively estimated by the ranking model, which can pop out real targets and suppress real distractors. Extensive experiments on two public image data sets show that our approach outperforms 11 state-of-the-art methods remarkably in visual saliency estimation.

14.
Pilot-Aided Channel Estimation for MIMO-OFDM Systems
This paper studies pilot-aided LMMSE channel estimation for MIMO-OFDM systems and derives a lower bound on its estimation mean-square error. To reduce computational complexity, a low-rank approximate channel-estimator structure is first derived using the singular value decomposition, and a simplified algorithm based on optimal pilot design is then proposed. The simplified algorithm not only reduces computational complexity but also effectively attains the optimal estimation performance. Finally, a method for estimating channel characteristics is given.
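The LMMSE filtering idea can be sketched in a simplified real-valued setting with unit-power orthogonal pilots, so that the least-squares (LS) estimate is the true channel plus white noise and the LMMSE estimator reduces to R(R + σ²I)⁻¹ applied to the LS estimate. The channel covariance model and parameters below are illustrative; the paper's low-rank estimator further truncates the eigendecomposition of this filter via the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma2 = 16, 0.1   # channel taps, noise variance

# Exponentially correlated channel covariance (a common toy model)
idx = np.arange(L)
R = 0.9 ** np.abs(idx[:, None] - idx[None, :])

W = R @ np.linalg.inv(R + sigma2 * np.eye(L))   # LMMSE filter on the LS estimate
Rc = np.linalg.cholesky(R)

mse_ls = mse_lmmse = 0.0
for _ in range(500):
    h = Rc @ rng.normal(size=L)                       # true channel ~ N(0, R)
    h_ls = h + np.sqrt(sigma2) * rng.normal(size=L)   # LS (pilot-based) estimate
    h_lmmse = W @ h_ls
    mse_ls += np.mean((h_ls - h) ** 2)
    mse_lmmse += np.mean((h_lmmse - h) ** 2)

print(mse_ls / 500, mse_lmmse / 500)
```

The LMMSE estimate shrinks the noisy LS estimate along the channel's correlation structure, so its MSE sits below the LS MSE of σ² per tap.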

15.
Monte Carlo Network Reliability Ranking Estimation
Topological optimization is an important problem in communication networks. Exact reliability-optimization methods are restricted to small problems or specialized network topologies; for larger problems, simulation-based approaches are more practical. In simulation-based optimization, one often needs to compare the reliability of a large number of similar networks, and the traditional reliability estimation and ranking process is the most time-consuming step of most optimization algorithms. In this paper, a novel approach is proposed to directly estimate the reliability ranking of edge-relocated networks without the need to estimate their reliabilities. In the case study considered in this paper, the proposed Synchronous Construction Ranking method achieved a speedup of over 30,000 times over the traditional approach using the Merge Process estimation algorithm.
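For context, the baseline the paper improves on is crude Monte Carlo estimation of network reliability: sample edge failures and count the samples in which the surviving graph is still connected. The sketch below shows only that baseline estimator (the paper's contribution is ranking similar networks without separate per-network estimates); the topology and edge reliability are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small ring plus a chord
n_nodes, p_up = 4, 0.9                             # each edge is up w.p. 0.9

def connected(up_edges, n):
    # Union-find connectivity check over the surviving edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

samples = 20000
hits = sum(
    connected([e for e in edges if rng.random() < p_up], n_nodes)
    for _ in range(samples)
)
rel_hat = hits / samples
print(round(rel_hat, 3))
```

For this 5-edge graph the all-terminal reliability can be checked by exhaustive enumeration (it is about 0.977), which is the kind of ground truth that is unavailable for the large networks the paper targets.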

16.
To reduce the encoding complexity of the rate-distortion optimization process in the new-generation Versatile Video Coding (VVC) standard, a fast rate-estimation algorithm based on statistical modeling is proposed. First, taking full account of the quantization behavior of dependent quantization (DQ) and the context dependencies in entropy coding, the algorithm introduces rate features that accurately characterize context-state transitions during encoding and uses them to make an initial rate prediction for some syntax elements of a transform unit (TU). Second, based on the distribution of transform coefficients, coefficient-disorder and sparsity features are defined to capture the rate impact of differing coefficient distributions, and a TU-level rate model is constructed. Finally, large and small TUs are modeled separately according to their rate composition to achieve more accurate rate prediction. The final linear rate model, obtained by regression training over a large number of samples, is applied to mode decision in VVC. Experimental results show that, under the random access (RA) configuration, the proposed algorithm reduces complexity by 16.289% while increasing the Bjontegaard delta bit rate (BD-BR) by only 1.567%.

17.
The objective of this paper is to investigate how the complementarity between low earth orbit (LEO) microwave (MW) and geostationary earth orbit (GEO) infrared (IR) radiometric measurements can be exploited for satellite rainfall detection and estimation. Rainfall retrieval is pursued at the space-time scale of typical geostationary observations, that is, at a spatial resolution of a few kilometers and a repetition period of a few tens of minutes. The basic idea behind the investigated statistical integration methods follows an established approach: the satellite MW-based rain-rate estimates, assumed to be accurate enough, are used to calibrate spaceborne IR measurements over sufficiently limited subregions and time windows. The proposed methodologies are focused on new statistical approaches, namely multivariate probability matching (MPM) and variance-constrained multiple regression (VMR). The MPM and VMR methods are rigorously formulated and systematically analyzed in terms of relative detection accuracy, estimation accuracy, and computing efficiency. To demonstrate the potential of the proposed MW-IR combined rainfall algorithm (MICRA), three case studies are discussed: two on a global scale, in November 1999 and November 2000, and one over the Mediterranean area. A comprehensive set of statistical parameters for detection and estimation assessment is introduced to evaluate the error budget. For a comparative evaluation, the analysis of these case studies has been extended to similar techniques available in the literature.
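The univariate form of probability matching, which the paper's MPM method generalizes to several variables, can be sketched as follows: IR brightness temperatures are calibrated to rain rates by equating their empirical CDFs on a calibration set (colder cloud tops correspond to heavier rain, so the temperature CDF is matched in reverse order). The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set: MW rain rates (mm/h) and collocated IR
# brightness temperatures (K); colder tops tend to mean heavier rain.
rain_mw = rng.gamma(shape=0.7, scale=3.0, size=5000)
temps_ir = 300.0 - 12.0 * np.log1p(rain_mw) + rng.normal(0, 2, 5000)

# Match quantiles: the q-th coldest temperature maps to the q-th largest rain rate
t_sorted = np.sort(temps_ir)
r_sorted = np.sort(rain_mw)[::-1]

def ir_to_rain(t):
    # Interpolate the matched quantile curve at temperature t
    return np.interp(t, t_sorted, r_sorted)

print(ir_to_rain(220.0), ir_to_rain(290.0))
```

The resulting lookup is monotonically decreasing in temperature by construction, so cold GEO-IR pixels are assigned the heavy-rain quantiles observed by the MW sensor.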

18.
It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. 
We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two proposed classifier-fusion methods, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum-rule fusion, which weights each classifier equally.
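The binary core of the EM performance estimation (Warfield et al., 2004) that the paper extends to multiple classes can be sketched as follows: each rater's sensitivity p and specificity q are estimated jointly with the hidden true label, and the fused decision weights raters by their estimated performance. The rater count, performance levels, and foreground prior below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth and four raters of varying quality
n_vox, n_raters = 5000, 4
truth = rng.random(n_vox) < 0.3
sens_true = np.array([0.95, 0.90, 0.80, 0.60])
spec_true = np.array([0.97, 0.92, 0.85, 0.65])
D = np.empty((n_raters, n_vox), dtype=bool)
for j in range(n_raters):
    D[j] = np.where(truth, rng.random(n_vox) < sens_true[j],
                           rng.random(n_vox) >= spec_true[j])

# EM: alternate between the posterior of the true label and the
# per-rater performance parameters
p = np.full(n_raters, 0.8); q = np.full(n_raters, 0.8); prior = 0.5
for _ in range(30):
    # E-step: posterior probability that each voxel's true label is 1
    log_a = np.log(prior) + (D * np.log(p[:, None]) + ~D * np.log(1 - p[:, None])).sum(0)
    log_b = np.log(1 - prior) + (~D * np.log(q[:, None]) + D * np.log(1 - q[:, None])).sum(0)
    W = 1.0 / (1.0 + np.exp(log_b - log_a))
    # M-step: re-estimate sensitivity, specificity, and the foreground prior
    p = (W * D).sum(1) / W.sum()
    q = ((1 - W) * ~D).sum(1) / (1 - W).sum()
    prior = W.mean()

fused = W > 0.5
print((fused == truth).mean(), p.round(2))
```

In effect the poor rater is discounted automatically, which is exactly the advantage the paper reports over equal-weight sum-rule fusion.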

19.
This paper presents a physically constrained maximum-likelihood (PCML) method for spatial covariance matrix and power spectral density estimation as a reduced-rank adaptive array processing algorithm. The physical constraints of propagating energy imposed by the wave equation and the statistical nature of the snapshots are exploited to estimate the "true" maximum-likelihood covariance matrix that is full rank and physically realizable. The resultant matrix may then be used in adaptive processing for interference cancellation and improved power estimation in nonstationary environments where the amount of available data is limited. Minimum variance distortionless response (MVDR) power estimates are computed for a given environment at different levels of snapshot support using the PCML method and several other reduced-rank techniques. The MVDR power estimates from the PCML method are shown to have less bias and lower standard deviation at a given level of snapshot support than any of the other reduced-rank methods used. Furthermore, the estimated power spectral density from the PCML method is shown to offer better low-level source detection than the MVDR power estimates.
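For reference, the MVDR (Capon) power estimate that serves as the benchmark above is P(θ) = 1 / (aᴴR⁻¹a). The sketch below uses a conventional diagonally loaded sample covariance in place of the paper's PCML estimate; the array geometry and source parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 10, 50
theta_src, src_power, noise_power = 20.0, 10.0, 1.0

def steering(theta_deg, m=M):
    # Half-wavelength uniform linear array steering vector
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# Simulated snapshots: one source at 20 degrees plus white noise
a_src = steering(theta_src)
S = np.sqrt(src_power / 2) * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
N = np.sqrt(noise_power / 2) * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(a_src, S) + N

# Sample covariance with diagonal loading (the conventional fallback at
# low snapshot support; PCML replaces this step)
R = X @ X.conj().T / snapshots + 1e-2 * np.eye(M)
Ri = np.linalg.inv(R)

def mvdr_power(theta_deg):
    a = steering(theta_deg)
    return 1.0 / np.real(a.conj() @ Ri @ a)

grid = np.arange(-90, 90.5, 0.5)
spec = np.array([mvdr_power(t) for t in grid])
print(grid[spec.argmax()])
```

The spectrum peaks at the source bearing; the paper's point is that a physically realizable full-rank covariance estimate yields such MVDR estimates with less bias and variance when snapshots are scarce.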

20.
Document representation is key to learning to rank. Current learning-to-rank algorithms mostly represent documents and queries with the bag-of-words model, which assumes that the words in the bag are mutually independent and thus ignores relationships between words. To represent dependencies between words in a document, this study builds a learning-to-rank model from the topic features of documents and queries: the ranking function is defined over the topic-level relationship between a document and a query, and a learning-to-rank algorithm based on a supervised topic model is proposed to learn the ranking function automatically. To evaluate ranking accuracy, experiments were conducted on three benchmark data sets (OHSUMED, MQ2007, MQ2008). The experiments show that the topic-based learning-to-rank algorithm can discover the latent semantic associations between documents and queries and improves the ranking accuracy of the model.
