Similar Documents
20 similar documents found.
1.
Characterizing the performance of image segmentation approaches has been a persistent challenge. Performance analysis is important since segmentation algorithms often have limited accuracy and precision. Interactive drawing of the desired segmentation by human raters has often been the only acceptable approach, and yet suffers from intra-rater and inter-rater variability. Automated algorithms have been sought in order to remove the variability introduced by raters, but such algorithms must be assessed to ensure they are suitable for the task. The performance of raters (human or algorithmic) generating segmentations of medical images has been difficult to quantify because of the difficulty of obtaining or estimating a known true segmentation for clinical data. Although physical and digital phantoms can be constructed for which ground truth is known or readily estimated, such phantoms do not fully reflect clinical images due to the difficulty of constructing phantoms which reproduce the full range of imaging characteristics and normal and pathological anatomical variability observed in clinical data. Comparison to a collection of segmentations by raters is an attractive alternative since it can be carried out directly on the relevant clinical imaging data. However, the most appropriate measure or set of measures with which to compare such segmentations has not been clarified, and several measures are used in practice. We present here an expectation-maximization algorithm for simultaneous truth and performance level estimation (STAPLE). The algorithm considers a collection of segmentations and computes a probabilistic estimate of the true segmentation and a measure of the performance level represented by each segmentation. The source of each segmentation in the collection may be an appropriately trained human rater or raters, or may be an automated segmentation algorithm. The probabilistic estimate of the true segmentation is formed by estimating an optimal combination of the segmentations, weighting each segmentation depending upon the estimated performance level, and incorporating a prior model for the spatial distribution of structures being segmented as well as spatial homogeneity constraints. STAPLE is straightforward to apply to clinical imaging data; it readily enables assessment of the performance of an automated image segmentation algorithm and direct comparison of human rater and algorithm performance.
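For concreteness, the following is a minimal numerical sketch of the binary STAPLE EM iteration described above, omitting the spatial prior and homogeneity constraints. The rater decisions, the scalar foreground prior, and the initial performance values are illustrative assumptions, not values from the paper.

```python
# Minimal binary STAPLE sketch: estimate the hidden true segmentation and
# per-rater performance (sensitivity p, specificity q) from R binary rater
# decisions over N voxels. Spatial priors/constraints are omitted.
import numpy as np

def staple(D, prior=0.5, n_iter=50, eps=1e-7):
    """D: (R, N) binary rater decisions. Returns (W, p, q): W is the per-voxel
    posterior that the voxel is truly foreground; p, q are per-rater
    sensitivity and specificity estimates."""
    R, N = D.shape
    p = np.full(R, 0.9)  # illustrative initial sensitivities
    q = np.full(R, 0.9)  # illustrative initial specificities
    for _ in range(n_iter):
        # E-step: combine rater decisions weighted by current performance
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + eps)
        # M-step: re-estimate each rater's performance level
        p = (D * W).sum(axis=1) / (W.sum() + eps)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + eps)
    return W, p, q

# Toy example: three raters segmenting a 6-voxel "image"
D = np.array([[1, 1, 0, 0, 1, 0],
              [1, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
W, sens, spec = staple(D)
print(np.round(W, 2), np.round(sens, 2), np.round(spec, 2))
```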

2.
Multiscale image segmentation using wavelet-domain hidden Markov models
We introduce a new image texture segmentation algorithm, HMTseg, based on wavelets and the hidden Markov tree (HMT) model. The HMT is a tree-structured probabilistic graph that captures the statistical properties of the coefficients of the wavelet transform. Since the HMT is particularly well suited to images containing singularities (edges and ridges), it provides a good classifier for distinguishing between textures. Utilizing the inherent tree structure of the wavelet HMT and its fast training and likelihood computation algorithms, we perform texture classification at a range of different scales. We then fuse these multiscale classifications using a Bayesian probabilistic graph to obtain reliable final segmentations. Since HMTseg works on the wavelet transform of the image, it can directly segment wavelet-compressed images without the need for decompression into the space domain. We demonstrate the performance of HMTseg with synthetic, aerial photo, and document image segmentations.
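As a toy illustration of the interscale fusion step only (HMT training and likelihood computation are omitted), the sketch below propagates coarse-scale class posteriors to their four child blocks through an assumed parent-to-child class transition matrix and combines them with fine-scale likelihoods; all shapes and the function name fuse_scales are illustrative.

```python
# Fuse class decisions across two dyadic scales: each coarse block has four
# children at the next finer scale. A is an assumed (C, C) row-stochastic
# parent->child class transition matrix.
import numpy as np

def fuse_scales(lik_fine, post_coarse, A):
    """lik_fine: (2J, 2J, C) class likelihoods at the fine scale;
    post_coarse: (J, J, C) class posteriors at the coarse scale."""
    context = post_coarse @ A                        # expected child-class prior
    context = np.kron(context, np.ones((2, 2, 1)))   # replicate to the 4 children
    post = lik_fine * context                        # combine with fine evidence
    return post / post.sum(axis=-1, keepdims=True)   # normalized fused posterior
```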

3.
There have been significant efforts to build a probabilistic atlas of the brain and to use it for many common applications, such as segmentation and registration. Though the work related to brain atlases can be applied to nonbrain organs, less attention has been paid to actually building an atlas for organs other than the brain. Motivated by the automatic identification of normal organs for applications in radiation therapy treatment planning, we present a method to construct a probabilistic atlas of an abdomen consisting of four organs (i.e., liver, kidneys, and spinal cord). Of 32 noncontrast abdominal computed tomography (CT) scans, 31 were mapped onto one individual scan using thin plate spline as the warping transform and mutual information (MI) as the similarity measure. Except for an initial coarse placement of four control points by the operators, the MI-based registration was automatic. Additionally, the four organs in each of the 32 CT data sets were manually segmented. The manual segmentations were warped onto the "standard" patient space using the same transform computed from their gray scale CT data set and a probabilistic atlas was calculated. Then, the atlas was used to aid the segmentation of low-contrast organs in an additional 20 CT data sets not included in the atlas. By incorporating the atlas information into the Bayesian framework, segmentation results clearly showed improvements over a standard unsupervised segmentation method.
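A hedged sketch of how such an atlas can enter the Bayesian framework mentioned above: a per-organ atlas prior multiplies a per-class intensity likelihood and each voxel takes the maximum a posteriori label. The Gaussian intensity model and all names (bayes_label, atlas_priors) are illustrative assumptions.

```python
# MAP labelling with a probabilistic atlas prior and Gaussian class likelihoods.
import numpy as np

def bayes_label(image, atlas_priors, means, stds, eps=1e-12):
    """image: (Z, Y, X) CT volume already registered to the atlas space;
    atlas_priors: (C, Z, Y, X) per-class prior probabilities;
    means, stds: length-C per-class intensity statistics."""
    log_post = np.empty_like(atlas_priors, dtype=float)
    for c in range(atlas_priors.shape[0]):
        log_lik = -0.5 * ((image - means[c]) / stds[c]) ** 2 - np.log(stds[c])
        log_post[c] = log_lik + np.log(atlas_priors[c] + eps)
    return np.argmax(log_post, axis=0)  # MAP organ label per voxel
```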

4.
To segment brain tissues in magnetic resonance images of the brain, the authors have implemented a stochastic relaxation method which utilizes partial volume analysis for every brain voxel, and operates on fully three-dimensional (3-D) data. However, there are still problems with automatically or semi-automatically segmenting thick magnetic resonance (MR) slices, particularly when trying to segment the small lesions present in MR images of multiple sclerosis patients. To improve lesion segmentation the authors have extended their method of stochastic relaxation by both pre- and post-processing the MR images. The preprocessing step involves image enhancement using homomorphic filtering to correct for nonhomogeneities in the coil and magnet. Because approximately 95% of all multiple sclerosis lesions occur in the white matter of the brain, the post-processing step involves application of morphological processing and thresholding techniques to the intermediate segmentation in order to develop a mask image containing only white matter and multiple sclerosis (MS) lesions. This white/lesion masked image is then segmented by again applying the authors' stochastic relaxation technique. The process has been applied to multispectral MRI scans of multiple sclerosis patients and the results compare favorably to manual segmentations of the same scans obtained independently by radiology health professionals.
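A minimal sketch of the homomorphic-filtering preprocessing step under common assumptions: a multiplicative coil/magnet nonuniformity becomes additive after a log transform and is suppressed by removing a low-frequency component, here estimated with a Gaussian blur whose scale is an illustrative choice.

```python
# Homomorphic correction of slowly varying multiplicative intensity bias.
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_correct(img, sigma=30.0):
    log_img = np.log1p(img.astype(np.float64))     # multiplicative -> additive bias
    bias = gaussian_filter(log_img, sigma)         # low-frequency bias estimate
    return np.expm1(log_img - bias + bias.mean())  # remove bias, keep mean level
```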

5.
Simulations provide a way of generating data where ground truth is known, enabling quantitative testing of image processing methods. In this paper, we present the construction of 20 realistic digital brain phantoms that can be used to simulate medical imaging data. The phantoms are made from 20 normal adults to take into account intersubject anatomical variabilities. Each digital brain phantom was created by registering and averaging four T1, T2, and proton density (PD)-weighted magnetic resonance imaging (MRI) scans from each subject. A fuzzy minimum distance classification was used to classify voxel intensities from T1, T2, and PD average volumes into grey matter, white matter, cerebro-spinal fluid, and fat. Automatically generated mask volumes were required to separate brain from nonbrain structures and ten fuzzy tissue volumes were created: grey matter, white matter, cerebro-spinal fluid, skull, marrow within the bone, dura, fat, tissue around the fat, muscles, and skin/muscles. A fuzzy vessel class was also obtained from the segmentation of the magnetic resonance angiography scan of the subject. These eleven fuzzy volumes that describe the spatial distribution of anatomical tissues define the digital phantom, where voxel intensity is proportional to the fraction of tissue within the voxel. These fuzzy volumes can be used to drive simulators for different modalities including MRI, PET, or SPECT. These phantoms were used to construct 20 simulated T1-weighted MR scans. To evaluate the realism of these simulations, we propose two approaches to compare them to real data acquired with the same acquisition parameters. The first approach consists of comparing the intensities within the segmented classes in both real and simulated data. In the second approach, a whole brain voxel-wise comparison between simulations and real T1-weighted data is performed. The first comparison underlines that segmented classes appear to properly represent the anatomy on average, and that inside these classes, the simulated and real intensity values are quite similar. The second comparison enables the study of the regional variations with no a priori class. The experiments demonstrate that these variations are small when real data are corrected for intensity nonuniformity.
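Sketch of the phantom-to-image idea in the paper: since voxel intensity is proportional to the tissue fractions within the voxel, a simulated scan can be formed as a fraction-weighted mix of per-tissue mean intensities plus noise. The tissue means and noise level below are illustrative, not the simulator's actual values.

```python
# Simulate a T1-like volume from fuzzy tissue-fraction maps.
import numpy as np

def simulate_t1(fuzzy, tissue_means, noise_sd=3.0, seed=0):
    """fuzzy: (T, Z, Y, X) tissue fractions summing to 1 per voxel;
    tissue_means: length-T mean intensities, one per tissue class."""
    rng = np.random.default_rng(seed)
    img = np.tensordot(np.asarray(tissue_means, dtype=float), fuzzy, axes=1)
    return img + rng.normal(0.0, noise_sd, img.shape)  # add acquisition noise
```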

6.
The author formulates a novel segmentation algorithm which combines the use of Markov random field models for image modeling with the use of the discrete wavepacket transform for image analysis. Image segmentations are derived and refined at a sequence of resolution levels, using as data selected wavepacket transform images or "channels". The segmentation algorithm is compared with nonmultiresolution Markov random field-based image segmentation algorithms in the context of synthetic image example problems, and is found to be both significantly more efficient and more effective.

7.
This paper presents a new method for deformable model-based segmentation of lumen and thrombus in abdominal aortic aneurysms from computed tomography (CT) angiography (CTA) scans. First the lumen is segmented based on two positions indicated by the user, and subsequently the resulting surface is used to initialize the automated thrombus segmentation method. For the lumen, the image-derived deformation term is based on a simple grey level model (two thresholds). For the more complex problem of thrombus segmentation, a grey level modeling approach with a nonparametric pattern classification technique is used, namely k-nearest neighbors. The intensity profile sampled along the surface normal is used as classification feature. Manual segmentations are used for training the classifier: samples are collected inside, outside, and at the given boundary positions. The deformation is steered by the most likely class corresponding to the intensity profile at each vertex on the surface. A parameter optimization study is conducted, followed by experiments to assess the overall segmentation quality and the robustness of results against variation in user input. Results obtained in a study of 17 patients show that the agreement with respect to manual segmentations is comparable to previous values reported in the literature, with considerably less user interaction.
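Sketch of the profile-classification idea described above: intensity profiles sampled along surface normals are labelled inside/boundary/outside with a k-nearest-neighbour classifier, and the most likely class at each vertex steers the deformation. The training arrays below are synthetic placeholders for the manually derived samples.

```python
# k-NN classification of intensity profiles sampled across a candidate surface.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_profiles = rng.normal(size=(300, 9))    # 9 intensity samples per profile
train_labels = rng.integers(0, 3, size=300)   # 0=inside, 1=boundary, 2=outside

knn = KNeighborsClassifier(n_neighbors=15).fit(train_profiles, train_labels)
vertex_profiles = rng.normal(size=(50, 9))    # profiles at current surface vertices
steering_class = knn.predict(vertex_profiles) # most likely class per vertex
```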

8.
9.
Automatic three-dimensional (3-D) segmentation of the brain from magnetic resonance (MR) scans is a challenging problem that has received an enormous amount of attention lately. Of the techniques reported in the literature, very few are fully automatic. In this paper, we present an efficient and accurate, fully automatic 3-D segmentation procedure for brain MR scans. It has several salient features, namely the following. 1) Instead of a single multiplicative bias field that affects all tissue intensities, separate parametric smooth models are used for the intensity of each class. 2) A brain atlas is used in conjunction with a robust registration procedure to find a nonrigid transformation that maps the standard brain to the specimen to be segmented. This transformation is then used to segment the brain from nonbrain tissue, compute prior probabilities for each class at each voxel location, and find an appropriate automatic initialization. 3) Finally, a novel algorithm is presented, a variant of the expectation-maximization procedure, that incorporates a fast and accurate way to find optimal segmentations given the intensity models along with the spatial coherence assumption. Experimental results with both synthetic and real data are included, as well as comparisons of the performance of our algorithm with that of other published methods.

10.
A method based on object structures and color analysis is proposed for video object segmentation in rainy situations. Since shadows and color reflections on the wet ground pose problems for conventional video object segmentation, the proposed method combines background construction-based and foreground extraction-based video object segmentation: pixels in the foreground and background of a video sequence are separated using histogram-based change detection, from which the background can be constructed, and initial moving object masks detected from a frame difference mask and a background subtraction mask are then used to obtain coarse object regions. Shadow regions and color-reflection regions on the wet ground are removed from the initial moving object masks via a diamond window mask and color analysis of the moving object. Finally, the boundary of the moving object is refined using connected component labeling and morphological operations. Experimental results show that the proposed method performs well for video object segmentation in rainy situations.
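The coarse mask construction described above can be sketched as follows, with illustrative thresholds: a frame-difference mask and a background-subtraction mask are intersected to obtain coarse moving-object regions.

```python
# Combine a frame-difference mask with a background-subtraction mask.
import numpy as np

def coarse_object_mask(prev_frame, frame, background, t_diff=25, t_bg=25):
    """All inputs are greyscale frames as uint8/int arrays of equal shape."""
    fd_mask = np.abs(frame.astype(int) - prev_frame.astype(int)) > t_diff
    bg_mask = np.abs(frame.astype(int) - background.astype(int)) > t_bg
    return fd_mask & bg_mask  # coarse moving-object region
```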

11.
It has been shown that employing multiple atlas images improves segmentation accuracy in atlas-based medical image segmentation. Each atlas image is registered to the target image independently and the calculated transformation is applied to the segmentation of the atlas image to obtain a segmented version of the target image. Several independent candidate segmentations result from the process, which must be somehow combined into a single final segmentation. Majority voting is the generally used rule to fuse the segmentations, but more sophisticated methods have also been proposed. In this paper, we show that the use of global weights to weight candidate segmentations has a major limitation. As a means to improve segmentation accuracy, we propose the generalized local weighting voting method, in which the fusion weights adapt voxel-by-voxel according to a local estimation of segmentation performance. Using digital phantoms and MR images of the human brain, we demonstrate that the performance of each combination technique depends on the gray level contrast characteristics of the segmented region, and that no fusion method yields better results than the others for all the regions. In particular, we show that local combination strategies outperform global methods in segmenting high-contrast structures, while global techniques are less sensitive to noise when contrast between neighboring structures is low. We conclude that, in order to achieve the highest overall segmentation accuracy, the best combination method for each particular structure must be selected.
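A hedged sketch of local weighted voting, with a deliberately simple local performance estimate: each candidate segmentation is weighted voxel-by-voxel by the inverse of the local mean absolute intensity difference between the registered atlas image and the target. The weighting function is an assumption, not the paper's estimator.

```python
# Voxel-wise locally weighted fusion of K registered atlas segmentations.
import numpy as np
from scipy.ndimage import uniform_filter

def local_weighted_vote(target, atlas_imgs, atlas_segs, win=5, eps=1e-6):
    """target: (Z, Y, X) image; atlas_imgs, atlas_segs: (K, Z, Y, X),
    segs binary. Returns a fused binary segmentation."""
    votes = np.zeros(target.shape, dtype=float)
    total = np.zeros(target.shape, dtype=float)
    for img, seg in zip(atlas_imgs, atlas_segs):
        local_err = uniform_filter(np.abs(target - img), size=win)
        w = 1.0 / (local_err + eps)   # higher weight where the atlas fits locally
        votes += w * seg
        total += w
    return votes / total >= 0.5
```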

12.
Segmentation of lungs with (large) lung cancer regions is a nontrivial problem. We present a new fully automated approach for segmentation of lungs with such high-density pathologies. Our method consists of two main processing steps. First, a novel robust active shape model (RASM) matching method is utilized to roughly segment the outline of the lungs. The initial position of the RASM is found by means of a rib cage detection method. Second, an optimal surface finding approach is utilized to further adapt the initial segmentation result to the lung. Left and right lungs are segmented individually. An evaluation on 30 data sets with 40 abnormal (lung cancer) and 20 normal left/right lungs resulted in an average Dice coefficient of 0.975±0.006 and a mean absolute surface distance error of 0.84±0.23 mm. Experiments on the same 30 data sets showed that our methods delivered statistically significantly better segmentation results, compared to two commercially available lung segmentation approaches. In addition, our RASM approach is generally applicable and suitable for large shape models.
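For reference, the overlap metric reported above can be computed as below; this is a generic Dice implementation, not the authors' evaluation code.

```python
# Dice coefficient between two binary masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```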

13.
This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the “MICCAI 2007 Grand Challenge” workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations using five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides insight into the performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.

14.
This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated with multivariate linear regression using image intensities sampled in a region of interest around an initialization curve. Subsequently, the position of the vessel boundary is refined with a robust nonlinear regression technique using intensity profiles sampled across the boundary of the rough segmentation and using information about plausible cross-sectional vessel shapes. The method was evaluated by quantitatively comparing segmentation results to manual annotations of 229 coronary arteries. On average the difference between the automatically obtained segmentations and manual contours was smaller than the inter-observer variability, which is an indicator that the method outperforms manual annotation. The method was also evaluated by using it for centerline refinement on 24 publicly available datasets of the Rotterdam Coronary Artery Evaluation Framework. Centerlines are extracted with an existing method and refined with the proposed method. This combination is currently ranked second out of 10 evaluated interactive centerline extraction methods. An additional qualitative expert evaluation in which 250 automatic segmentations were compared to manual segmentations showed that the automatically obtained contours were rated on average better than manual contours.
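Sketch of the first, coarse stage under stated assumptions: a multivariate linear regression maps intensities sampled around the initialization curve to a boundary offset along each profile. The synthetic training data below stands in for the annotated vessels.

```python
# Linear regression from profile intensities to a boundary offset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 21))            # 21 intensity samples per profile
y = rng.uniform(-3.0, 3.0, size=500)      # boundary offset (illustrative units)

reg = LinearRegression().fit(X, y)
new_profiles = rng.normal(size=(10, 21))
offsets = reg.predict(new_profiles)       # estimated boundary positions
```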

15.
We compared four automated methods for hippocampal segmentation using different machine learning algorithms: 1) hierarchical AdaBoost, 2) support vector machines (SVM) with manual feature selection, 3) hierarchical SVM with automated feature selection (Ada-SVM), and 4) a publicly available brain segmentation package (FreeSurfer). We trained our approaches using T1-weighted brain MRIs from 30 subjects [10 normal elderly, 10 mild cognitive impairment (MCI), and 10 Alzheimer's disease (AD)], and tested on an independent set of 40 subjects (20 normal, 20 AD). Manually segmented gold standard hippocampal tracings were available for all subjects (training and testing). We assessed each approach's accuracy relative to manual segmentations, and its power to map AD effects. We then converted the segmentations into parametric surfaces to map disease effects on anatomy. After surface reconstruction, we computed significance maps, and overall corrected p-values, for the 3-D profile of shape differences between AD and normal subjects. Our AdaBoost and Ada-SVM segmentations compared favorably with the manual segmentations and detected disease effects as well as FreeSurfer on the data tested. Cumulative p-value plots, in conjunction with the false discovery rate method, were used to examine the power of each method to detect correlations with diagnosis and cognitive scores. We also evaluated how segmentation accuracy depended on the size of the training set, providing practical information for future users of this technique.
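The comparison set-up can be sketched as follows with scikit-learn; the features and labels are synthetic placeholders for the per-voxel features and hippocampal labels, and FreeSurfer is not reproduced here.

```python
# Cross-validated comparison of an AdaBoost and an SVM classifier.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # illustrative per-voxel features
y = rng.integers(0, 2, size=200)     # hippocampus vs. background labels

for clf in (AdaBoostClassifier(n_estimators=100), SVC(kernel="rbf")):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))
```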

16.
Recently, there has been a trend in tracking to use a more refined segmentation mask instead of a coarse bounding box to represent the target object. Some trackers have added segmentation branches to the tracking framework while maintaining real-time speed. However, these trackers use a simple FCN structure and lack edge information modeling, which makes their performance unsatisfactory. In this paper, we propose an edge-aware segmentation network that uses the complementarity between target information and edge information to provide a more refined representation of the target. First, we fuse the high-level features of the tracking backbone network with the correlation features of the classification branch of the tracking framework, and supervise them simultaneously with the target edge and the target segmentation mask to obtain optimized high-level features carrying rough edge and target information. Second, we use the optimized high-level features to guide the low-level features of the tracking backbone network to generate more refined edge features. Finally, we fuse the refined edge features with the target features of each layer to generate the final mask. Our approach achieves leading performance on the recent pixel-wise object tracking benchmark VOT2020 and the segmentation datasets DAVIS2016 and DAVIS2017 while running at 47 fps. Code is available at https://github.com/TJUMMG/EATtracker.

17.
It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two classifier fusion methods we propose, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum rule fusion, which weights each classifier equally.

18.
Since the invention of diffusion magnetic resonance imaging (dMRI), currently the only established method for studying white matter connectivity in a clinical environment, there has been a great deal of interest in the effects of various pathologies on the connectivity of the brain. As methods for in vivo tractography have been developed, it has become possible to track and segment specific white matter structures of interest for particular study. However, the consistency and reproducibility of tractography-based segmentation remain limited, and attempts to improve them have thus far typically involved the imposition of strong constraints on the tract reconstruction process itself. In this work we take a different approach, developing a formal probabilistic model for the relationships between comparable tracts in different scans, and then using it to choose a tract, a posteriori, which best matches a predefined reference tract for the structure of interest. We demonstrate that this method is able to significantly improve segmentation consistency without directly constraining the tractography algorithm.

19.
The performance of a watershed-based image segmentation method depends largely on the algorithm used to compute the gradient. Conventional morphological gradient operators [2,3] produce too many local minima because of noise and quantization…
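The dependency described above can be made concrete with a short sketch: a morphological gradient feeds the watershed transform, and smoothing before the gradient reduces the spurious minima caused by noise. Parameters and the marker heuristic are illustrative.

```python
# Watershed on a morphological gradient, with pre-smoothing.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian
from skimage.segmentation import watershed

def watershed_segment(img, sigma=2.0):
    smoothed = gaussian(img, sigma)  # suppress noise-induced minima
    grad = ndi.grey_dilation(smoothed, size=3) - ndi.grey_erosion(smoothed, size=3)
    markers, _ = ndi.label(grad < np.percentile(grad, 20))  # seeds in flat regions
    return watershed(grad, markers)
```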

20.
In this paper, we present novel algorithms for statistically robust interpolation and approximation of diffusion tensors, which are symmetric positive definite (SPD) matrices, and use them in developing a significant extension to an existing probabilistic algorithm for scalar field segmentation, in order to segment diffusion tensor magnetic resonance imaging (DT-MRI) datasets. Using the Riemannian metric on the space of SPD matrices, we present a novel and robust higher order (cubic) continuous tensor product of B-splines algorithm to approximate the SPD diffusion tensor fields. The resulting approximations are appropriately dubbed tensor splines. Next, we segment the diffusion tensor field by jointly estimating the label (assigned to each voxel) field, which is modeled by a Gauss Markov measure field (GMMF), and the parameters of each smooth tensor spline model representing the labeled regions. Results of interpolation, approximation, and segmentation are presented for synthetic data and real diffusion tensor fields from an isolated rat hippocampus, along with validation. We also present comparisons of our algorithms with existing methods and show significantly improved results in the presence of noise as well as outliers.
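As a pointer to why the Riemannian machinery matters, the log-Euclidean sketch below (a common stand-in for the paper's full framework) interpolates two diffusion tensors in the matrix-log domain, which keeps the result symmetric positive definite, unlike naive componentwise averaging of tensors with very different anisotropy.

```python
# Log-Euclidean interpolation between SPD diffusion tensors.
import numpy as np
from scipy.linalg import expm, logm

def interp_spd(D0, D1, t):
    """Interpolate SPD matrices D0, D1 at t in [0, 1] via the matrix log."""
    L = (1 - t) * logm(D0) + t * logm(D1)
    return expm(L).real

D0 = np.diag([3.0, 1.0, 1.0])   # prolate tensor along x
D1 = np.diag([1.0, 1.0, 3.0])   # prolate tensor along z
Dh = interp_spd(D0, D1, 0.5)
print(np.linalg.eigvalsh(Dh))   # eigenvalues remain positive
```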
