Similar Documents
20 similar documents found.
1.
We compared four automated methods for hippocampal segmentation using different machine learning algorithms: 1) hierarchical AdaBoost, 2) support vector machines (SVM) with manual feature selection, 3) hierarchical SVM with automated feature selection (Ada-SVM), and 4) a publicly available brain segmentation package (FreeSurfer). We trained our approaches using T1-weighted brain MRIs from 30 subjects [10 normal elderly, 10 mild cognitive impairment (MCI), and 10 Alzheimer's disease (AD)], and tested on an independent set of 40 subjects (20 normal, 20 AD). Manually segmented gold standard hippocampal tracings were available for all subjects (training and testing). We assessed each approach's accuracy relative to manual segmentations, and its power to map AD effects. We then converted the segmentations into parametric surfaces to map disease effects on anatomy. After surface reconstruction, we computed significance maps, and overall corrected $p$-values, for the 3-D profile of shape differences between AD and normal subjects. Our AdaBoost and Ada-SVM segmentations compared favorably with the manual segmentations and detected disease effects as well as FreeSurfer on the data tested. Cumulative $p$-value plots, in conjunction with the false discovery rate method, were used to examine the power of each method to detect correlations with diagnosis and cognitive scores. We also evaluated how segmentation accuracy depended on the size of the training set, providing practical information for future users of this technique.
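The false discovery rate analysis mentioned above is typically the Benjamini-Hochberg step-up procedure applied to the per-vertex $p$-values. A minimal sketch (our illustration, not the authors' code; `pvals` is a hypothetical vector of per-vertex $p$-values):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of p-values declared significant at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m   # step-up thresholds q*k/m
    passed = p[order] <= thresholds
    mask = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])      # largest rank passing the test
        mask[order[:k + 1]] = True
    return mask

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.74]))
```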

2.
This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated with multivariate linear regression using image intensities sampled in a region of interest around an initialization curve. Subsequently, the position of the vessel boundary is refined with a robust nonlinear regression technique using intensity profiles sampled across the boundary of the rough segmentation and using information about plausible cross-sectional vessel shapes. The method was evaluated by quantitatively comparing segmentation results to manual annotations of 229 coronary arteries. On average the difference between the automatically obtained segmentations and manual contours was smaller than the inter-observer variability, which is an indicator that the method outperforms manual annotation. The method was also evaluated by using it for centerline refinement on 24 publicly available datasets of the Rotterdam Coronary Artery Evaluation Framework. Centerlines are extracted with an existing method and refined with the proposed method. This combination is currently ranked second out of 10 evaluated interactive centerline extraction methods. An additional qualitative expert evaluation in which 250 automatic segmentations were compared to manual segmentations showed that the automatically obtained contours were rated on average better than manual contours.
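Only the coarse stage is plain multivariate linear regression from sampled intensities to a boundary offset. A toy sketch on synthetic data (all shapes and names are our assumptions, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# X: intensities sampled along 500 rays around the initialization curve,
# 32 samples per ray (synthetic stand-ins for real image data)
X = rng.normal(size=(500, 32))
# y: signed offset from the ray origin to the true vessel boundary
y = X @ rng.normal(size=32) + 0.1 * rng.normal(size=500)

coarse = LinearRegression().fit(X, y)   # coarse boundary-offset regressor
print(coarse.predict(X[:3]))            # predicted offsets for three rays
```

The robust nonlinear refinement stage would then re-estimate these offsets from profiles sampled across the coarse boundary.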

3.
4.
We present a fully automatic method for articular cartilage segmentation from magnetic resonance imaging (MRI), which we use as the foundation of a quantitative cartilage assessment. We evaluate our method by comparison to manual segmentations by a radiologist and by examining the interscan reproducibility of the volume and area estimates. Training and evaluation of the method are performed on a data set consisting of 139 scans of knees with a status ranging from healthy to severely osteoarthritic. This is, to our knowledge, the only fully automatic cartilage segmentation method that has good agreement with manual segmentations, an interscan reproducibility as good as that of a human expert, and the ability to separate healthy from osteoarthritic populations. While high-field scanners offer high-quality imaging from which the articular cartilage has been evaluated extensively using manual and automated image analysis techniques, low-field scanners produce lower-quality images at a fraction of the cost of their high-field counterparts. For low-field MRI there is no well-established accuracy validation for quantitative cartilage estimates, but we show that differences between healthy and osteoarthritic populations are statistically significant using our cartilage volume and surface area estimates, which suggests that low-field MRI analysis can become a useful, affordable tool in clinical studies.

5.
柯逍, 邹嘉伟, 杜明智, 周铭柯. Acta Electronica Sinica (《电子学报》), 2017, 45(12): 2925-2935
To address the long training times and sensitivity to low-frequency words of traditional image annotation models, this paper proposes an automatic image annotation model based on Monte Carlo dataset balancing and a robust incremental extreme learning machine. The model first applies automatic segmentation to the training images of a public image library, selects seed annotation words for the resulting segments, and builds per-class training sets via a proposed image feature matching algorithm based on a composite distance measure. Because the amount of data per annotation word in public databases varies widely, a Monte Carlo dataset balancing algorithm is proposed to bring the data volumes of the different annotation words to roughly the same scale. To overcome the limitations of single feature descriptors, a multi-scale feature fusion algorithm is then proposed to extract effective features from the images of different annotation words. Finally, to address the randomness of hidden-layer nodes and the uniform weighting of input vectors in traditional extreme learning machines, a robust incremental extreme learning machine is proposed, improving the accuracy of the discriminative model. Experimental results on public datasets show that the model can annotate images automatically in a very short time, is robust to low-frequency words, and outperforms most popular automatic image annotation models on average recall, average precision, their combined measure, and other metrics.
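The balancing step amounts to randomly re-sampling each annotation word's pool up to a common size. A simplified sketch of that idea (our own minimal version, not the paper's exact algorithm):

```python
import random
from collections import defaultdict

def monte_carlo_balance(samples, labels, target=None, seed=0):
    """Re-sample each label's pool with replacement so all classes reach a
    common size. A simplified balancing sketch."""
    rng = random.Random(seed)
    pools = defaultdict(list)
    for s, l in zip(samples, labels):
        pools[l].append(s)
    target = target or max(len(p) for p in pools.values())
    balanced = []
    for l, pool in pools.items():
        draws = [rng.choice(pool) for _ in range(target)]  # Monte Carlo draws
        balanced.extend((d, l) for d in draws)
    rng.shuffle(balanced)
    return balanced  # list of (sample, label) pairs, classes now equal-sized
```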

6.
Characterizing the performance of image segmentation approaches has been a persistent challenge. Performance analysis is important since segmentation algorithms often have limited accuracy and precision. Interactive drawing of the desired segmentation by human raters has often been the only acceptable approach, and yet suffers from intra-rater and inter-rater variability. Automated algorithms have been sought in order to remove the variability introduced by raters, but such algorithms must be assessed to ensure they are suitable for the task. The performance of raters (human or algorithmic) generating segmentations of medical images has been difficult to quantify because of the difficulty of obtaining or estimating a known true segmentation for clinical data. Although physical and digital phantoms can be constructed for which ground truth is known or readily estimated, such phantoms do not fully reflect clinical images due to the difficulty of constructing phantoms which reproduce the full range of imaging characteristics and normal and pathological anatomical variability observed in clinical data. Comparison to a collection of segmentations by raters is an attractive alternative since it can be carried out directly on the relevant clinical imaging data. However, the most appropriate measure or set of measures with which to compare such segmentations has not been clarified and several measures are used in practice. We present here an expectation-maximization algorithm for simultaneous truth and performance level estimation (STAPLE). The algorithm considers a collection of segmentations and computes a probabilistic estimate of the true segmentation and a measure of the performance level represented by each segmentation. The source of each segmentation in the collection may be an appropriately trained human rater or raters, or may be an automated segmentation algorithm. The probabilistic estimate of the true segmentation is formed by estimating an optimal combination of the segmentations, weighting each segmentation depending upon the estimated performance level, and incorporating a prior model for the spatial distribution of structures being segmented as well as spatial homogeneity constraints. STAPLE is straightforward to apply to clinical imaging data, it readily enables assessment of the performance of an automated image segmentation algorithm, and enables direct comparison of human rater and algorithm performance.
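For a binary label, and ignoring the spatial prior and homogeneity constraints described above, the STAPLE EM iteration reduces to a few lines. A sketch under those simplifying assumptions:

```python
import numpy as np

def staple(D, prior=0.5, iters=50):
    """Binary STAPLE sketch. D is (n_voxels, n_raters) of 0/1 decisions.
    Returns P(true=1) per voxel and per-rater sensitivity/specificity."""
    n, r = D.shape
    p = np.full(r, 0.9)   # sensitivities
    q = np.full(r, 0.9)   # specificities
    for _ in range(iters):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p, 1 - p), axis=1)
        b = (1 - prior) * np.prod(np.where(D == 1, 1 - q, q), axis=1)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance parameters
        p = (W[:, None] * D).sum(0) / (W.sum() + 1e-12)
        q = ((1 - W)[:, None] * (1 - D)).sum(0) / ((1 - W).sum() + 1e-12)
    return W, p, q

votes = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 0], [0, 0, 0]])  # toy raters
W, sens, spec = staple(votes)
print(W.round(3), sens.round(3), spec.round(3))
```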

7.

Research on Computer-Aided Diagnosis (CAD) of medical images has been actively conducted to support the decisions of radiologists. Since deep learning has shown distinguished abilities in classification, detection, segmentation, and other problems, many studies on CAD have been using deep learning. One of the reasons behind the success of deep learning is the availability of large application-specific annotated datasets. However, it is laborious for radiologists to annotate hundreds or thousands of medical images, and thus it is difficult to obtain large-scale annotated datasets for various organs and diseases. Therefore, many techniques that effectively train deep neural networks have been proposed; one of these techniques is transfer learning. This paper focuses on transfer learning and conducts a case study on ROI-based opacity classification of diffuse lung diseases in chest CT images. The aim of this paper is to clarify which characteristics of the pre-training datasets and which structures of the fine-tuned deep neural networks enhance the effectiveness of transfer learning. In addition, the number of training samples is varied and the effectiveness of transfer learning is evaluated at each size. In the experiments, nine transfer learning conditions and a method without transfer learning are compared to identify the appropriate conditions. The experimental results show that a pre-training dataset with more (and more varied) classes and a compact structure for fine-tuning yield the best accuracy in this work.
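The fine-tuning setup described here follows the standard pattern of freezing pre-trained features and training a new classification head. A minimal sketch assuming a ResNet-18 backbone and 7 opacity classes (both are our stand-ins, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone (downloads ImageNet weights on first run)
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in net.parameters():
    param.requires_grad = False              # freeze transferred features
net.fc = nn.Linear(net.fc.in_features, 7)    # new head, trained from scratch

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)              # stand-in batch of ROI patches
loss = criterion(net(x), torch.randint(0, 7, (4,)))
loss.backward()
optimizer.step()
```

Unfreezing deeper blocks instead of only the head is the usual knob for trading data requirements against accuracy.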


8.
An algorithm which utilizes digital image processing and pattern recognition methods for automated definition of left ventricular (LV) contours is presented. Digital image processing and pattern recognition techniques are applied to digitally acquired radiographic images of the heart to extract the LV contours required for quantitative analysis of cardiac function. Knowledge of the image domain is invoked at each step of the algorithm to orient the data search and thereby reduce the complexity of the solution. A knowledge-based image transformation, directional gradient search, expectations of object versus background location, least-cost path searches by dynamic programming, and a digital representation of possible versus impossible ventricular shapes are exploited. The digital representation, composed of a set of characteristic templates, was created using contours obtained by manual tracing. The algorithm was tested by application to three sets of 25 images each. Sets one and two were used as training sets for creation of the contour-correction model. Model-based correction proved to be an effective technique, producing a significant reduction of error in the final contours.
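The directional gradient search projects image gradients onto rays from an assumed ventricle centre so that only edges perpendicular to the search direction respond. A small illustrative sketch (the centre and image are hypothetical):

```python
import numpy as np

img = np.random.rand(256, 256)            # stand-in radiographic frame
gy, gx = np.gradient(img)                 # per-axis intensity gradients
cy, cx = 128, 128                         # hypothetical LV centre
yy, xx = np.mgrid[0:256, 0:256]
ny, nx = yy - cy, xx - cx
norm = np.hypot(ny, nx) + 1e-9
# Edge strength along the radial (search) direction only
radial_grad = (gy * ny + gx * nx) / norm
```

A least-cost path through such a radial edge map, found by dynamic programming, would then trace the candidate contour.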

9.
Classifying microstructure images of steel with deep learning requires large annotated training sets, and manual annotation of such training data is inefficient. This paper proposes a new semi-supervised learning method that fuses a self-organizing incremental neural network with a graph convolutional network. First, transfer learning is used to obtain feature vectors for the image samples. A self-organizing incremental neural network with a connection-weight strategy (WSOINN) then learns the feature data to obtain its topological graph structure, and a small number of nodes are manually labeled according to their win counts. Next, a graph convolutional network (GCN) is built to mine the latent relations among graph nodes, with Dropout used to improve generalization; the remaining nodes are labeled automatically, yielding classification results for all micrographs. On metallographic data collected from a national key laboratory, automatic classification accuracy was compared under different manual annotation ratios: with only 12% of the labels required by traditional models, the new model reaches 91% classification accuracy.
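The GCN stage propagates the few manual labels over the topology graph. A minimal single-layer sketch with symmetric normalization (our illustration; the 4-node adjacency stands in for a WSOINN topology):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, H, A):
        A_hat = A + torch.eye(A.size(0))          # add self-loops
        d = A_hat.sum(1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))      # symmetric normalization
        return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.lin(H))

A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
H = torch.randn(4, 16)                            # per-node feature vectors
print(GCNLayer(16, 3)(H, A).shape)                # torch.Size([4, 3])
```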

10.
胡正平. Journal of Signal Processing (《信号处理》), 2008, 24(1): 105-107
Support vector machines learn in a supervised fashion from randomly selected labeled training samples. As data volumes and collection capabilities grow, labeling demands considerable effort, which hinders practical application. This paper proposes an active SVM learning strategy based on optimal sample labeling: first, unsupervised clustering selects a small sample set for labeling and an initial SVM classifier is trained on it; this classifier then actively selects the most informative unlabeled samples for labeling, gradually enlarging the labeled set and updating the classifier, and the process repeats until a classifier with the best performance is obtained. Experimental results show that, with essentially no loss of classification accuracy, active learning requires far fewer labeled samples than random selection, greatly reducing labeling effort while also improving training speed.
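The "most informative sample" criterion is usually margin-based uncertainty sampling: query the unlabeled point closest to the current decision boundary. A self-contained sketch on synthetic data (the dataset and pool sizes are our assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, random_state=0)
labeled = list(range(10))                  # small initial labeled pool
unlabeled = [i for i in range(300) if i not in labeled]

for _ in range(20):                        # active-learning rounds
    clf = SVC(kernel="rbf").fit(X[labeled], y[labeled])
    # query the unlabeled point closest to the decision boundary
    margins = np.abs(clf.decision_function(X[unlabeled]))
    pick = unlabeled.pop(int(np.argmin(margins)))
    labeled.append(pick)                   # oracle supplies y[pick]

print("final accuracy:", clf.score(X, y))
```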

11.
With the introduction of spectral-domain optical coherence tomography (OCT), much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, the need for 3-D segmentation methods for processing such data is becoming increasingly important. We report a graph-theoretic segmentation method for the simultaneous segmentation of multiple 3-D surfaces that is guaranteed to be optimal with respect to the cost function and that is directly applicable to the segmentation of 3-D spectral OCT image data. We present two extensions to the general layered graph segmentation method: the ability to incorporate varying feasibility constraints and the ability to incorporate true regional information. Appropriate feasibility constraints and cost functions were learned from a training set of 13 spectral-domain OCT images from 13 subjects. After training, our approach was tested on a test set of 28 images from 14 subjects. An overall mean unsigned border positioning error of $5.69 \pm 2.41~\mu{\rm m}$ was achieved when segmenting seven surfaces (six layers) and using the average of the manual tracings of two ophthalmologists as the reference standard. This result is very comparable to the measured interobserver variability of $5.71 \pm 1.98~\mu{\rm m}$.
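The paper's optimality guarantee comes from a minimum-cut formulation over a layered graph. As a much-simplified 2-D stand-in for that machinery, a single surface under a hard smoothness constraint can be extracted column-by-column with dynamic programming:

```python
import numpy as np

def extract_surface(cost, max_shift=1):
    """DP sketch: per-column minimum-cost surface height with the feasibility
    constraint |z[j] - z[j-1]| <= max_shift. A 2-D toy, not the 3-D method."""
    h, w = cost.shape
    acc = cost.copy()
    back = np.zeros((h, w), dtype=int)
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(0, i - max_shift), min(h, i + max_shift + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))  # best predecessor
            acc[i, j] += acc[k, j - 1]
            back[i, j] = k
    z = np.zeros(w, dtype=int)
    z[-1] = int(np.argmin(acc[:, -1]))
    for j in range(w - 1, 0, -1):                       # backtrack the path
        z[j - 1] = back[z[j], j]
    return z

z = extract_surface(np.random.rand(40, 60), max_shift=2)
print(z[:10])
```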

12.
13.
Target detection in remote sensing images (RSIs) is a fundamental yet challenging problem in remote sensing image analysis. More recently, weakly supervised learning, in which training sets require only binary labels indicating whether an image contains the object or not, has attracted considerable attention owing to its obvious advantages, such as alleviating the tedious and time-consuming work of human annotation. Inspired by its impressive success in the computer vision field, in this paper we propose a novel and effective framework for weakly supervised target detection in RSIs based on transferred deep features and negative bootstrapping. On one hand, to effectively mine information from RSIs and improve the performance of target detection, we develop a transferred deep model to extract high-level features from RSIs, which can be achieved by pre-training a convolutional neural network model on a large-scale annotated dataset (e.g., ImageNet) and then transferring it to our task by domain-specifically fine-tuning it on RSI datasets. On the other hand, we integrate a negative bootstrapping scheme into the detector training process to make the detector converge more stably and faster by exploiting the most discriminative training samples. Comprehensive evaluations on three RSI datasets and comparisons with state-of-the-art weakly supervised target detection approaches demonstrate the effectiveness and superiority of the proposed method.
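Negative bootstrapping iteratively retrains the detector on the negatives it currently scores highest (the most confusing ones). A compact sketch over pre-extracted deep features, with a logistic-regression detector standing in for the paper's classifier and all data synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, (100, 128))        # stand-ins for transferred CNN features
neg_pool = rng.normal(-1.0, 1.0, (5000, 128))

neg = neg_pool[rng.choice(len(neg_pool), 100, replace=False)]
for _ in range(5):                            # negative-bootstrapping rounds
    X = np.vstack([pos, neg])
    y = np.r_[np.ones(100), np.zeros(len(neg))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.decision_function(neg_pool)
    hard = neg_pool[np.argsort(scores)[-100:]]  # most detector-confusing negatives
    neg = np.vstack([neg, hard])                # grow the negative set
```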

14.
Toward automated segmentation of the pathological lung in CT
Conventional methods of lung segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on scans with lungs that contain dense pathologies, and such scans occur frequently in clinical practice. We propose a segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology. When the resulting transformation is applied to a mask of the normal lungs, a segmentation is found for the pathological lungs. As a mask of the normal lungs, a probabilistic segmentation built up out of the segmentations of 15 registered normal scans is used. To refine the segmentation, voxel classification is applied to a certain volume around the borders of the transformed probabilistic mask. Performance of this scheme is compared to that of three other algorithms: a conventional, a user-interactive and a voxel classification method. The algorithms are tested on 10 three-dimensional thin-slice computed tomography volumes containing high-density pathology. The resulting segmentations are evaluated by comparing them to manual segmentations in terms of volumetric overlap and border positioning measures. The conventional and user-interactive methods that start off with thresholding techniques fail to segment the pathologies and are outperformed by both voxel classification and the refined segmentation-by-registration. The refined registration scheme enjoys the additional benefit that it does not require pathological (hand-segmented) training data.
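The core trick is applying the displacement field produced by elastic registration to the normal-lung mask. Taking the field as given, the warp itself is a few lines; a 2-D sketch with a toy uniform shift in place of a real registration result:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_mask(mask, disp):
    """Apply a (2, H, W) displacement field to a binary lung mask."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + disp[0], xx + disp[1]])  # pull-back coordinates
    # nearest-neighbour interpolation keeps the mask binary
    return map_coordinates(mask.astype(float), coords, order=0)

mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1
disp = np.full((2, 64, 64), 3.0)   # toy shift standing in for elastic output
warped = warp_mask(mask, disp)
```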

15.
The goal of this work is to perform a segmentation of the intima-media thickness (IMT) of carotid arteries in view of computing various dynamical properties of that tissue, such as the elasticity distribution (elastogram). The echogenicity of a region of interest comprising the intima-media layers, the lumen, and the adventitia in an ultrasonic B-mode image is modeled by a mixture of three Nakagami distributions. In a first step, we compute the maximum a posteriori estimator of the proposed model using the expectation-maximization (EM) algorithm. We then compute the optimal segmentation based on the estimated distributions as well as a statistical prior for disease-free IMT using a variant of the exploration/selection (ES) algorithm. Convergence of the ES algorithm to the optimal solution is assured asymptotically and is independent of the initial solution. In particular, our method is well suited to a semi-automatic context that requires minimal manual initialization. Tests of the proposed method on 30 sequences of ultrasonic B-mode images of presumably disease-free control subjects are reported. They suggest that the semi-automatic segmentations obtained by the proposed method are within the variability of the manual segmentations of two experts.
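To make the mixture step concrete, here is a very small EM sketch for a k-component Nakagami mixture with a moment-matched M-step. This is our simplified stand-in for the paper's MAP/EM estimator (which also involves a prior), using `scipy.stats.nakagami`:

```python
import numpy as np
from scipy.stats import nakagami

def nakagami_mixture_em(x, k=3, iters=30, seed=0):
    """EM sketch for a k-component Nakagami mixture, moment-based M-step."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1 / k)                       # mixture weights
    nu = rng.uniform(0.8, 2.0, k)               # shape parameters
    scale = np.quantile(x, (np.arange(k) + 1) / (k + 1))
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        dens = np.stack([w[j] * nakagami.pdf(x, nu[j], scale=scale[j])
                         for j in range(k)])
        r = dens / (dens.sum(0) + 1e-300)
        # M-step: moment-matched Nakagami parameters per component
        w = r.mean(1)
        for j in range(k):
            m2 = np.average(x**2, weights=r[j])
            v2 = np.average((x**2 - m2) ** 2, weights=r[j])
            nu[j] = max(m2**2 / (v2 + 1e-12), 0.5)   # shape = E[x^2]^2/Var(x^2)
            scale[j] = np.sqrt(m2)
    return w, nu, scale

x = np.concatenate([nakagami.rvs(1.0, scale=0.5, size=300),
                    nakagami.rvs(3.0, scale=1.5, size=300)])
print(nakagami_mixture_em(x, k=2))
```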

16.
Weakly supervised semantic segmentation commonly builds a graph model over the superpixels of all training images and their similarities, constrained by image-level labels. Such global modeling lacks the structural information of individual images, and these parametric methods are limited by computational complexity and cannot exploit large-scale weakly supervised training data. To address these problems, this paper proposes a weakly supervised image semantic segmentation method based on semantic texton forests and saliency priors. The algorithm uses weakly supervised data and image saliency to train a random forest classifier for extracting semantic texton forest (STF) features. At test time, the image is first over-segmented; superpixel semantic texton features are then extracted, and naive Bayes is used to estimate the probabilities of superpixel labels. Finally, a new energy function incorporating image saliency is defined within a conditional random field (CRF) framework, casting the labeling problem as energy minimization. The method was validated on the 21-class MSRC database, completing the semantic segmentation task. The results show that good segmentation can be obtained from the saliency information of a single image without building a graph model over the whole training set, and that the nonparametric model scales to large data.
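The per-superpixel labeling step is ordinary naive Bayes over bag-of-texton features, feeding unary potentials into the CRF. A minimal sketch with synthetic histograms (feature dimensions and class count are our placeholders for the MSRC-21 setup):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Stand-ins: bag-of-semantic-texton histograms per training superpixel
train_hists = rng.random((2000, 64))
train_labels = rng.integers(0, 21, 2000)   # 21 MSRC classes (weak supervision omitted)

nb = GaussianNB().fit(train_hists, train_labels)
test_hists = rng.random((50, 64))          # superpixels of one test image
probs = nb.predict_proba(test_hists)       # unary potentials for the CRF energy
print(probs.shape)                          # (50, 21)
```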

17.
This paper presents a new method for deformable model-based segmentation of lumen and thrombus in abdominal aortic aneurysms from computed tomography angiography (CTA) scans. First the lumen is segmented based on two positions indicated by the user, and subsequently the resulting surface is used to initialize the automated thrombus segmentation method. For the lumen, the image-derived deformation term is based on a simple grey-level model (two thresholds). For the more complex problem of thrombus segmentation, a grey-level modeling approach with a nonparametric pattern classification technique is used, namely k-nearest neighbors. The intensity profile sampled along the surface normal is used as the classification feature. Manual segmentations are used for training the classifier: samples are collected inside, outside, and at the given boundary positions. The deformation is steered by the most likely class corresponding to the intensity profile at each vertex on the surface. A parameter optimization study is conducted, followed by experiments to assess the overall segmentation quality and the robustness of results against variation in user input. Results obtained in a study of 17 patients show that the agreement with respect to manual segmentations is comparable to values previously reported in the literature, with considerably less user interaction.
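The classifier-steered deformation can be sketched directly: a k-NN model maps each vertex's normal intensity profile to inside/boundary/outside, which dictates the step along the normal. A toy version with synthetic profiles (all sizes and the step rule are our assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Profiles sampled along surface normals; classes 0/1/2 = inside/boundary/outside
profiles = rng.normal(size=(600, 21))
classes = rng.integers(0, 3, 600)            # stand-in training labels

knn = KNeighborsClassifier(n_neighbors=15).fit(profiles, classes)

new_profile = rng.normal(size=(1, 21))       # profile at one surface vertex
move = {0: +1.0, 1: 0.0, 2: -1.0}            # hypothetical step along the normal
print("deform vertex by", move[int(knn.predict(new_profile)[0])])
```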

18.
Automatic tumor segmentation using knowledge-based techniques
A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRIs) of the human brain is presented. The MRIs consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with the cluster centers for each class, is provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised k-nearest neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.
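The unsupervised first stage is typically a clustering of per-voxel multispectral feature vectors; a k-means sketch with synthetic T1/PD/T2 intensities (cluster count and data are our assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in multispectral MRI: one (T1, PD, T2) intensity triple per voxel
features = rng.normal(size=(10000, 3))

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(features)
labels = km.labels_              # initial unsupervised segmentation
centers = km.cluster_centers_   # cluster centers handed to the rule base
print(np.bincount(labels))
```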

19.
Current level-set image segmentation methods suffer from initial contours whose placement depends heavily on manual input, and they fail to produce satisfactory segmentations when the target is occluded or when target and background gray levels are similar. To address these problems, this paper proposes a shape-prior level-set segmentation method that uses a Faster-RCNN network model to determine the target's initial contour and region information. A Caffe deep learning framework is set up to train the Faster-RCNN model, which is trained in a supervised fashion on the IAILD dataset; the trained model detects target buildings and coarsely extracts their contours, which are then combined with a shape-prior level-set algorithm. Comparative experiments show that the proposed method removes the strong dependence of level-set segmentation results on manually drawn initial boxes, segments occluded buildings better, and achieves better results when the target building and the background have similar gray levels.
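Taking the detector output as the level-set initialization can be sketched with a pre-trained detector; here a COCO-pretrained torchvision Faster R-CNN stands in for the paper's Caffe model fine-tuned on IAILD (first run downloads weights):

```python
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                          FasterRCNN_ResNet50_FPN_Weights)

detector = fasterrcnn_resnet50_fpn(
    weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval()

img = torch.rand(3, 512, 512)                    # stand-in aerial image
with torch.no_grad():
    out = detector([img])[0]
if len(out["scores"]):
    box = out["boxes"][out["scores"].argmax()]   # [x1, y1, x2, y2]
    # the box rectangle would seed the zero level set of the evolving contour
    print("initial contour box:", box.tolist())
```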

20.
Automatic ultrasound (US) image segmentation is a difficult task due to the quantity of noise present in the images and the lack of information in several zones produced by the acquisition conditions. In this paper, we propose a method that combines shape priors and image information to achieve this task. In particular, we introduce knowledge about the rib-eye shape using a set of images manually segmented by experts. A method is proposed for the automatic segmentation of new samples in which a closed curve is fitted taking into account both the US image information and the geodesic distance between the evolving curve and the estimated mean rib-eye shape in a shape space. This method can be used to solve similar problems that arise when dealing with US images in other fields. The method was successfully tested over a database composed of 610 US images, for which we have the manual segmentations of two experts.
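The shape-prior term compares the evolving curve to a mean shape estimated from expert contours. As a rough stand-in for the paper's geodesic shape-space distance, Procrustes disparity to a naive mean shape illustrates the idea (contours here are synthetic noisy circles):

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
# Stand-ins for expert rib-eye contours, each resampled to 100 points
contours = [np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.02, (100, 2))
            for _ in range(30)]

mean_shape = np.mean(contours, axis=0)          # naive mean-shape estimate
_, _, disparity = procrustes(mean_shape, contours[0])
print("shape-prior penalty for this curve:", disparity)
```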

