Similar documents
20 similar documents were retrieved.
1.
Rule-based detection of intrathoracic airway trees
New sensitive and reliable methods for assessing alterations in regional lung structure and function are critically important for the investigation and treatment of pulmonary diseases. Accurate identification of the airway tree will provide an assessment of airway structure and will provide a means by which multiple volumetric images of the lung at the same lung volume over time can be used to assess regional parenchymal changes. The authors describe a novel rule-based method for the segmentation of airway trees from three-dimensional (3-D) sets of computed tomography (CT) images, and its validation. The presented method takes advantage of a priori anatomical knowledge about pulmonary airway and vascular trees and their interrelationships. The method is based on a combination of 3-D seeded region growing that is used to identify large airways, rule-based two-dimensional (2-D) segmentation of individual CT slices to identify probable locations of smaller diameter airways, and merging of airway regions across the 3-D set of slices resulting in a tree-like airway structure. The method was validated in 40 3-mm-thick CT sections from five data sets of canine lungs scanned via electron beam CT in vivo with lung volume held at a constant pressure. The method's performance was compared with that of the conventional 3-D region growing method. The method substantially outperformed an existing conventional approach to airway tree detection.
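A minimal sketch of the 3-D seeded region-growing step this abstract describes; the HU threshold, seed placement, and 6-connectivity below are illustrative assumptions, not the authors' parameters.

```python
# Hedged sketch: 3-D seeded region growing for large airways, assuming the airway
# lumen is darker than a fixed HU threshold (not the authors' implementation).
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, threshold=-950):
    """Grow a 6-connected region of voxels <= threshold, starting from `seed` (z, y, x)."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and volume[nz, ny, nx] <= threshold):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

# Toy usage: a synthetic "CT" volume with a dark tube embedded in brighter tissue.
ct = np.full((40, 64, 64), -400, dtype=np.int16)
ct[:, 30:34, 30:34] = -1000                      # dark airway-like tube
trachea_seed = (0, 31, 31)                       # seed placed inside the tube
airway_mask = region_grow_3d(ct, trachea_seed)
print(airway_mask.sum(), "voxels grown")
```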

2.
The lungs exchange air with the external environment via the pulmonary airways. Computed tomography (CT) scanning can be used to obtain detailed images of the pulmonary anatomy, including the airways. These images have been used to measure airway geometry, study airway reactivity, and guide surgical interventions. Prior to these applications, airway segmentation can be used to identify the airway lumen in the CT images. Airway tree segmentation can be performed manually by an image analyst, but the complexity of the tree makes manual segmentation tedious and extremely time-consuming. We describe a fully automatic technique for segmenting the airway tree in three-dimensional (3-D) CT images of the thorax. We use grayscale morphological reconstruction to identify candidate airways on CT slices and then reconstruct a connected 3-D airway tree. After segmentation, we estimate airway branchpoints based on connectivity changes in the reconstructed tree. Compared to manual analysis on 3-mm-thick electron-beam CT images, the automatic approach has an overall airway branch detection sensitivity of approximately 73%.
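One common way to realize the grayscale-morphological-reconstruction step on a 2-D slice is closing-by-reconstruction followed by subtraction; the sketch below uses scikit-image for this, with an assumed structuring-element radius and contrast threshold rather than the paper's settings.

```python
# Hedged sketch: flag small dark candidate airways on a 2-D CT slice with grayscale
# closing-by-reconstruction; radius and contrast threshold are illustrative assumptions.
import numpy as np
from skimage.morphology import reconstruction, dilation, disk

def airway_candidates_2d(slice_hu, radius=5, contrast=300):
    """Return a boolean mask of dark spots narrower than `radius` pixels."""
    img = slice_hu.astype(float)
    seed = dilation(img, disk(radius))                     # marker >= mask everywhere
    closed = reconstruction(seed, img, method='erosion')   # closing by reconstruction
    residue = closed - img                                 # dark structures removed by the closing
    return residue > contrast

# Toy usage on a synthetic slice: brighter parenchyma with two small dark lumens.
slice_hu = np.full((128, 128), -500.0)
slice_hu[40:44, 40:44] = -1000.0
slice_hu[90:93, 70:73] = -1000.0
mask = airway_candidates_2d(slice_hu)
print(mask.sum(), "candidate pixels")
```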

3.
Matching of corresponding branchpoints between two human airway trees, as well as assigning anatomical names to the segments and branchpoints of the human airway tree, are of significant interest for clinical applications and physiological studies. In the past, these tasks were often performed manually due to the lack of automated algorithms that can tolerate false branches and anatomical variability typical for in vivo trees. In this paper, we present algorithms that perform both matching of branchpoints and anatomical labeling of in vivo trees without any human intervention and within a short computing time. No hand-pruning of false branches is required. The results from the automated methods show a high degree of accuracy when validated against reference data provided by human experts. 92.9% of the verifiable branchpoint matches found by the computer agree with experts' results. For anatomical labeling, 97.1% of the automatically assigned segment labels were found to be correct.

4.
As the number of Internet users keeps growing, many Internet companies and even public institutions need user profiles that characterize different users psychologically in order to understand them. However, user-profiling algorithms generally suffer from two problems: the number of clusters used by the clustering algorithm must be specified manually, and too many invalid keywords are produced. This paper therefore designs a new user-profiling method based on network behaviour. The method first classifies the collected user data; it then determines the number of clusters automatically and feeds this value into the clustering and classification algorithms, which are executed to extract keywords from the data; next, a word-segmentation algorithm generates labels, and stop-word filtering together with a keyword-supplementation mechanism removes invalid keywords; finally, a word-cloud algorithm produces the user-profile file. Experiments on network data collected from different users show that the improved algorithm successfully simplifies the clustering step, markedly reduces the number of invalid keywords, and yields user profiles that describe user characteristics more accurately.
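The key algorithmic point above is choosing the number of clusters automatically rather than by hand. Below is a hedged sketch of one standard way to do this, silhouette-based model selection over a range of k, which may differ from the paper's selection rule.

```python
# Illustrative sketch: pick the cluster count k by silhouette score instead of fixing it
# manually; the k range and scoring rule are assumptions, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def auto_kmeans(features, k_min=2, k_max=8, random_state=0):
    """Fit KMeans for each candidate k and keep the model with the best silhouette."""
    best_model, best_score = None, -1.0
    for k in range(k_min, min(k_max, len(features) - 1) + 1):
        model = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(features)
        score = silhouette_score(features, model.labels_)
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score

# Toy usage: three synthetic groups of "behaviour" feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 4)) for c in (0.0, 3.0, 6.0)])
model, score = auto_kmeans(X)
print("chosen k =", model.n_clusters, "silhouette =", round(score, 3))
```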

5.
3-D segmentation algorithm of small lung nodules in spiral CT images
Computed tomography (CT) is the most sensitive imaging technique for detecting lung nodules, and is now being evaluated as a screening tool for lung cancer in several large-sample studies all over the world. In this report, we describe a semiautomatic method for 3-D segmentation of lung nodules in CT images for subsequent volume assessment. The distinguishing features of our algorithm are the following. 1) The user interaction process, which allows the introduction of the knowledge of the expert in a simple and reproducible manner. 2) The adoption of the geodesic distance in a multithreshold image representation, which allows the definition of a fusion-segregation process based on both gray-level similarity and object shape. The algorithm was validated on low-dose CT scans of small nodule phantoms (mean diameter 5.3-11 mm) and in vivo lung nodules (mean diameter 5-9.8 mm) detected in the Italung-CT screening program for lung cancer. A further test on small lung nodules of the Lung Image Database Consortium (LIDC) first data set was also performed. We observed an RMS error of less than 6.6% in phantoms, and the correct outlining of the nodule contour was obtained in 82/95 lung nodules of Italung-CT and in 10/12 lung nodules of the LIDC first data set. The achieved results support the use of the proposed algorithm for volume measurements of lung nodules examined with the low-dose CT scanning technique.

6.

To develop an automated pulmonary fibrosis (PF) segmentation methodology using a 3D multi-scale convolutional encoder-decoder approach following a robust atlas-based active volume model in thoracic CT for Rhesus Macaques with radiation-induced lung damage. 152 thoracic computed tomography scans of Rhesus Macaques with radiation-induced lung damage were collected. The 3D input data are randomly augmented with Gaussian blurring when applying the 3D multi-scale convolutional encoder-decoder (3D MSCED) segmentation method. PF in each scan was manually segmented; 70% of the scans were used as training data, 20% as validation data, and 10% as testing data. The performance of the method is assessed with 10-fold cross-validation. The workflow of the proposed method has two parts. First, the compromised lung volume with acute radiation-induced PF was segmented using a robust atlas-based active volume model. Next, a 3D multi-scale convolutional encoder-decoder segmentation method was developed which merged the higher spatial information from low-level features with the high-level object knowledge encoded in upper network layers. It included a bottom-up feed-forward convolutional neural network and a top-down learning mask refinement process. The quantitative results of our segmentation method achieved a mean Dice score of (0.769, 0.853), mean accuracy of (0.996, 0.999), and mean relative error of (0.302, 0.512) with 95% confidence intervals. The qualitative and quantitative comparisons show that our proposed method can achieve better segmentation accuracy with less variance on testing data. This method was extensively validated in NHP datasets. The results demonstrated that the approach is more robust for PF segmentation than other methods. It is a general framework which can easily be applied to the segmentation of other lung lesions.
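A toy PyTorch sketch of a 3-D encoder-decoder that merges low-level spatial detail with high-level features through skip connections, in the spirit of the 3D MSCED described above; the channel widths, depth, and single-logit head are assumptions, not the published architecture.

```python
# Minimal 3-D encoder-decoder with skip connections (illustrative, not the paper's network).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm3d(cout),
        nn.ReLU(inplace=True),
    )

class TinyEncoderDecoder3D(nn.Module):
    def __init__(self, in_ch=1, base=8):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)     # concatenated with enc2 features
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)         # concatenated with enc1 features
        self.head = nn.Conv3d(base, 1, kernel_size=1)  # fibrosis-vs-background logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Toy forward pass on a random patch (batch, channel, depth, height, width).
net = TinyEncoderDecoder3D()
patch = torch.randn(1, 1, 32, 64, 64)
print(net(patch).shape)   # torch.Size([1, 1, 32, 64, 64])
```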


7.
The purpose of this work is to develop patient-specific models for automatically detecting lung nodules in computed tomography (CT) images. It is motivated by significant developments in CT scanner technology and the burden that lung cancer screening and surveillance imposes on radiologists. We propose a new method that uses a patient's baseline image data to assist in the segmentation of subsequent images so that changes in size and/or shape of nodules can be measured automatically. The system uses a generic, a priori model to detect candidate nodules on the baseline scan of a previously unseen patient. A user then confirms or rejects nodule candidates to establish baseline results. For analysis of follow-up scans of that particular patient, a patient-specific model is derived from these baseline results. This model describes expected features (location, volume and shape) of previously segmented nodules so that the system can relocalize them automatically on follow-up. On the baseline scans of 17 subjects, a radiologist identified a total of 36 nodules, of which 31 (86%) were detected automatically by the system with an average of 11 false positives (FPs) per case. In follow-up scans 27 of the 31 nodules were still present and, using patient-specific models, 22 (81%) were correctly relocalized by the system. The system automatically detected 16 out of a possible 20 (80%) of new nodules on follow-up scans with ten FPs per case.
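A small sketch of the relocalization idea: each baseline nodule is stored as a patient-specific record (centroid, volume) and follow-up candidates are matched against it; the distance and volume tolerances here are illustrative assumptions.

```python
# Hedged sketch: match follow-up nodule candidates to baseline nodules by nearest
# compatible centroid; tolerances are illustrative, not the paper's values.
import numpy as np

def relocalize(baseline_nodules, followup_candidates, max_dist_mm=15.0, max_vol_ratio=2.0):
    """Return {baseline_index: followup_index or None} using the nearest compatible candidate."""
    matches = {}
    for i, (c0, v0) in enumerate(baseline_nodules):
        best_j, best_d = None, np.inf
        for j, (c1, v1) in enumerate(followup_candidates):
            d = np.linalg.norm(np.asarray(c1) - np.asarray(c0))
            ratio = max(v0, v1) / max(min(v0, v1), 1e-6)
            if d <= max_dist_mm and ratio <= max_vol_ratio and d < best_d:
                best_j, best_d = j, d
        matches[i] = best_j
    return matches

# Toy usage: (centroid in mm, volume in mm^3) pairs.
baseline = [((30.0, 40.0, 50.0), 120.0), ((80.0, 20.0, 35.0), 300.0)]
followup = [((31.5, 41.0, 49.0), 150.0), ((200.0, 10.0, 10.0), 90.0)]
print(relocalize(baseline, followup))   # {0: 0, 1: None}
```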

8.
Reliable segmentation of the colon is a requirement for three-dimensional visualization programs and automatic detection of polyps on computed tomography (CT) colonography. There is an evolving clinical consensus that giving patients positive oral contrast to tag out remnants of stool and residual fluids is mandatory. The presence of positive oral contrast in the colon adds an additional challenge for colonic segmentation but ultimately is beneficial to the patient because the enhanced fluid helps reveal polyps in otherwise hidden areas. Therefore, we developed a new segmentation procedure which can handle both air- and fluid-filled parts of the colon. The procedure organizes individual air- and fluid-filled regions into a graph that enables identification and removal of undesired leakage outside the colon. In addition, the procedure provides a risk assessment of possible leakage to assist the user prior to the tedious task of visual verification. The proposed hybrid algorithm uses modified region growing, fuzzy connectedness and level set segmentation. We tested our algorithm on 160 CT colonography scans containing 183 known polyps. All 183 polyps were in segmented regions. In addition, visual inspection of 24 CT colonography scans demonstrated good performance of our procedure: the reconstructed colonic wall appeared smooth even at the interface between air and fluid and there were no leaked regions.

9.
High-resolution X-ray computed tomography (CT) imaging is routinely used for clinical pulmonary applications. Since lung function varies regionally and because pulmonary disease is usually not uniformly distributed in the lungs, it is useful to study the lungs on a lobe-by-lobe basis. Thus, it is important to segment not only the lungs, but the lobar fissures as well. In this paper, we demonstrate the use of an anatomic pulmonary atlas, encoded with a priori information on the pulmonary anatomy, to automatically segment the oblique lobar fissures. Sixteen volumetric CT scans from 16 subjects are used to construct the pulmonary atlas. A ridgeness measure is applied to the original CT images to enhance the fissure contrast. Fissure detection is accomplished in two stages: an initial fissure search and a final fissure search. A fuzzy reasoning system is used in the fissure search to analyze information from three sources: the image intensity, an anatomic smoothness constraint, and the atlas-based search initialization. Our method has been tested on 22 volumetric thin-slice CT scans from 12 subjects, and the results are compared to manual tracings. Averaged across all 22 data sets, the RMS error between the automatically segmented and manually segmented fissures is 1.96 +/- 0.71 mm and the mean of the similarity indices between the manually defined and computer-defined lobe regions is 0.988. The results indicate a strong agreement between the automatic and manual lobe segmentations.
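A brief sketch of a Hessian-based ridgeness measure of the kind used to enhance fissure contrast on a slice; the scale sigma and the exact measure are assumptions standing in for the paper's definition.

```python
# Hedged sketch: a simple ridgeness measure for bright, thin structures via Hessian
# eigenvalues; sigma and the measure itself are illustrative assumptions.
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def ridgeness(slice_img, sigma=1.5):
    """Large where intensity curves down sharply across a bright line."""
    H = hessian_matrix(slice_img.astype(float), sigma=sigma, order='rc')
    eigs = hessian_matrix_eigvals(H)          # eigenvalues in decreasing order
    return np.maximum(-eigs[1], 0.0)          # strongly negative small eigenvalue => ridge

# Toy usage: a faint bright line on a darker background.
img = np.zeros((64, 64))
img[32, :] = 1.0
r = ridgeness(img)
print(r.argmax(axis=0)[40])   # row index of the strongest ridge response (expected: 32)
```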

10.
11.
Toward automated segmentation of the pathological lung in CT
Conventional methods of lung segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on scans with lungs that contain dense pathologies, and such scans occur frequently in clinical practice. We propose a segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology. When the resulting transformation is applied to a mask of the normal lungs, a segmentation is found for the pathological lungs. As a mask of the normal lungs, a probabilistic segmentation built up out of the segmentations of 15 registered normal scans is used. To refine the segmentation, voxel classification is applied to a certain volume around the borders of the transformed probabilistic mask. Performance of this scheme is compared to that of three other algorithms: a conventional, a user-interactive and a voxel classification method. The algorithms are tested on 10 three-dimensional thin-slice computed tomography volumes containing high-density pathology. The resulting segmentations are evaluated by comparing them to manual segmentations in terms of volumetric overlap and border positioning measures. The conventional and user-interactive methods that start off with thresholding techniques fail to segment the pathologies and are outperformed by both voxel classification and the refined segmentation-by-registration. The refined registration scheme enjoys the additional benefit that it does not require pathological (hand-segmented) training data.
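A hedged sketch of the final step of such a segmentation-by-registration scheme: a displacement field, assumed to come from some elastic registration of a normal scan to the pathological scan, is applied to a probabilistic lung mask, which is then thresholded. The field below is synthetic and no particular registration package is implied.

```python
# Hedged sketch: warp a probabilistic lung mask by a (here synthetic) displacement field,
# then threshold to obtain a binary segmentation of the pathological lungs.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_mask(prob_mask, displacement):
    """Warp a 3-D probabilistic mask by a displacement field of shape (3, Z, Y, X)."""
    grid = np.indices(prob_mask.shape).astype(float)
    coords = grid + displacement                  # where each output voxel samples from
    return map_coordinates(prob_mask, coords, order=1, mode='nearest')

# Toy usage: shift a blob-shaped "probabilistic lung" two voxels along y.
prob = np.zeros((20, 40, 40))
prob[5:15, 10:30, 10:30] = 0.9
disp = np.zeros((3,) + prob.shape)
disp[1] = 2.0                                     # constant shift stands in for an elastic field
warped = warp_mask(prob, disp)
pathological_lung = warped > 0.5                  # binary segmentation before voxel-level refinement
print(pathological_lung.sum())
```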

12.
Segmentation of intrathoracic airway trees: a fuzzy logic approach
Three-dimensional (3-D) analysis of airway trees extracted from computed tomography (CT) image data can provide objective information about lung structure and function. However, manual analysis of 3-D lung CT images is tedious, time consuming and, thus, impractical for routine clinical care. The authors have previously reported an automated rule-based method for extraction of airway trees from 3-D CT images using a priori knowledge about airway-tree anatomy. Although the method's sensitivity was quite good, its specificity suffered from a large number of falsely detected airways. The authors present a new approach to airway-tree detection based on fuzzy logic that increases the method's specificity without compromising its sensitivity. The method was validated in 32 CT image slices randomly selected from five volumetric canine electron-beam CT data sets. The fuzzy-logic method significantly outperformed the previously reported rule-based method (p<0.002).

13.
The accuracy of lung parenchyma segmentation is of great practical importance in clinical applications. However, because pulmonary nodules vary irregularly in location, size and shape, lung lesions are diverse, and thoracic anatomy differs markedly between individuals, no single segmentation method applies uniformly to all chest CT images, so lung parenchyma segmentation remains a challenging research problem. Building on a review of work at home and abroad, this paper proposes a whole-lung segmentation method that combines 3D region growing with an improved convex-hull repair algorithm. Starting from a coarse segmentation obtained by 3D region growing, the result is refined: the trachea and main bronchi are removed by combining connected-component labelling with morphological operations to obtain a preliminary lung mask, and the improved convex-hull algorithm is then applied to repair and smooth the lung contour, yielding the final segmentation. Comparison with the standard convex-hull algorithm and the rolling-ball method shows that the proposed improved convex-hull algorithm effectively repairs concavities in the lung contour and achieves higher segmentation accuracy.
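An illustrative sketch of the slice-wise convex-hull repair idea using a plain convex hull (not the paper's improved variant): boundary concavities in a binary lung mask are filled per slice.

```python
# Hedged sketch: fill lung-boundary concavities slice by slice with the convex hull
# of each 2-D mask; this is a plain convex hull, not the paper's improved algorithm.
import numpy as np
from skimage.morphology import convex_hull_image

def repair_lung_mask(mask_3d):
    """Fill boundary concavities slice by slice using the convex hull of each 2-D mask."""
    repaired = mask_3d.copy()
    for z in range(mask_3d.shape[0]):
        if mask_3d[z].any():
            repaired[z] = convex_hull_image(mask_3d[z])
    return repaired

# Toy usage: a square lung slice with a notch cut into its border.
mask = np.zeros((1, 50, 50), dtype=bool)
mask[0, 10:40, 10:40] = True
mask[0, 20:30, 35:40] = False        # simulated concavity at the boundary
print(repair_lung_mask(mask)[0, 25, 37])   # True: the notch is filled
```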

14.
This paper presents an interactive algorithm for segmentation of natural images. The task is formulated as a problem of spline regression, in which the spline is derived in Sobolev space and has the form of a combination of linear and Green's functions. Besides its nonlinear representation capability, one advantage of this spline in usage is that, once it has been constructed, no parameters need to be tuned to data. We define this spline on the user-specified foreground and background pixels, and solve its parameters (the combination coefficients of the functions) from a group of linear equations. To speed up spline construction, the K-means clustering algorithm is employed to cluster the user-specified pixels. By taking the cluster centers as representatives, this spline can be easily constructed. The foreground object is finally cut out from its background via spline interpolation. The computational complexity of the proposed algorithm is linear in the number of pixels to be segmented. Experiments on diverse natural images, with comparison to existing algorithms, illustrate the validity of our method.

15.
CT examination plays an important role in the diagnosis and treatment of lung disease, and fast, complete segmentation of the lung parenchyma has become an essential step in the qualitative and quantitative assessment of pulmonary disease. Based on the analysis of a large number of chest CT images, this paper proposes a new lung parenchyma segmentation method that combines the Mean-Shift algorithm with the Snake model to delineate the lung parenchyma. Experiments show that the proposed method is accurate, produces complete segmentations, and meets the requirements of clinical diagnosis and treatment.
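A rough sketch of the two-stage idea: an intensity-clustering step gives a coarse lung region and an active contour (snake) refines its boundary. scikit-learn's MeanShift and scikit-image's active_contour stand in for the paper's exact formulation, and all parameters and the synthetic slice are illustrative assumptions.

```python
# Hedged sketch: Mean-Shift intensity clustering for a coarse lung region, then an
# active contour to refine the boundary; entirely illustrative, not the paper's method.
import numpy as np
from sklearn.cluster import MeanShift
from skimage.segmentation import active_contour
from skimage.filters import gaussian

# Synthetic slice: a dark, roughly circular "lung" inside brighter tissue.
yy, xx = np.mgrid[0:128, 0:128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2, 0.1, 0.8)
img = img + np.random.default_rng(0).normal(0.0, 0.02, img.shape)

# Stage 1: Mean-Shift on intensities (fitted on a subsample for speed) gives a coarse partition.
pixels = img.reshape(-1, 1)
ms = MeanShift(bandwidth=0.2, bin_seeding=True).fit(pixels[::64])
labels = ms.predict(pixels).reshape(img.shape)
coarse = labels == labels[64, 64]              # region containing an assumed in-lung seed point

# Stage 2: a snake initialised just outside the coarse region is attracted to the lung edge.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([64 + 50 * np.sin(theta), 64 + 50 * np.cos(theta)])   # (row, col) points
snake = active_contour(gaussian(img, 2), init, alpha=0.015, beta=1.0, gamma=0.01)
print(coarse.sum(), snake.shape)               # coarse area in pixels and refined contour shape
```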

16.
赵泉华  赵雪梅  李玉 《信号处理》2016,32(2):157-166
The fuzzy ISODATA (FISODATA) algorithm inherits both the extensibility of FCM and the self-organizing property of ISODATA, and can determine the number of classes in the data automatically, so it is widely used in data processing. When applied to image segmentation, however, the FCM objective function used by FISODATA ignores the correlation between neighbouring pixels, which makes the algorithm sensitive to noise; in addition, the split-merge operations of FISODATA require manually chosen threshold parameters, and inappropriate thresholds often trap the algorithm in local extrema, leading to a wrong number of classes and degraded segmentation. This paper incorporates the hidden-Markov-random-field-based FCM (HMRF-FCM) method, which models neighbourhood relations, into the ISODATA framework and proposes the HMRF-FCM ISODATA (HMRF-FISODATA) algorithm: an optimization step is added after splitting and merging, and the thresholds controlling cluster splitting and merging are adapted according to the optimization result. The algorithm not only finds the correct number of classes quickly, but also overcomes the problems of FISODATA (its neglect of neighbouring-pixel relations, its manually chosen thresholds, and its sensitivity to image noise), determining the correct number of classes automatically while achieving high-accuracy image segmentation.

17.
To address the problems that the artificial-bee-colony-optimized K-means algorithm easily falls into local optima, lacks search precision, and produces insufficiently detailed segmentations, this paper combines an adaptive artificial bee colony with K-means clustering and proposes a new image segmentation algorithm. The algorithm first initializes the population using the maximum-minimum distance product; it then dynamically adjusts the neighbourhood search range with an adaptive search parameter so that the artificial bee colony converges quickly to the global optimum; all food sources output by the bee colony are then clustered with K-means, removing the dependence of the K-means result on the initial cluster centres, and the resulting partition is refined by a Powell local search to speed up convergence, with the new cluster centres written back to the food-source positions in the colony. Finally, the proposed algorithm is compared experimentally with two segmentation algorithms of the same type. The results show that, for comparable running time, the proposed algorithm improves segmentation accuracy by at least 3.5% and 4.8% over the two reference algorithms respectively, demonstrating higher segmentation quality.

18.
Lung parenchyma segmentation from CT images is not only the most fundamental and important step for subsequent image processing, but also a typical problem that still needs to be solved. Accurate lung parenchyma segmentation from CT images is a prerequisite for mathematical analysis, computer-aided analysis and treatment. This paper clarifies the concept and nature of lung parenchyma segmentation and its importance in clinical practice. It focuses on thresholding, region growing, active contour models and genetic algorithms, analyses their principles, advantages and disadvantages, and illustrates them with numerous examples. Finally, it points out the development trends and challenges facing CT-based lung parenchyma segmentation methods.

19.
Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with those obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. We present results by comparing our automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75% +/- 2.29% (mean +/- standard deviation).
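A small sketch of the first and last steps described above, gray-level thresholding followed by morphological smoothing of the lung border; the threshold, the choice of keeping the two largest components, and the structuring-element size are assumptions, not the paper's values.

```python
# Hedged sketch: threshold the CT volume, keep the two largest low-density components,
# and smooth the border with a morphological closing; parameters are illustrative.
import numpy as np
from scipy import ndimage as ndi

def extract_and_smooth_lungs(ct_hu, threshold=-400, closing_radius=3):
    """Threshold, keep the two largest low-density components, close the boundary."""
    low_density = ct_hu < threshold
    labels, n = ndi.label(low_density)
    if n == 0:
        return np.zeros_like(low_density)
    sizes = ndi.sum(low_density, labels, index=range(1, n + 1))
    keep = 1 + np.argsort(sizes)[::-1][:2]                 # two largest components (left/right lung)
    lungs = np.isin(labels, keep)
    ball = ndi.generate_binary_structure(3, 1)
    ball = ndi.iterate_structure(ball, closing_radius)     # approximate spherical element
    return ndi.binary_closing(lungs, structure=ball)

# Toy usage: two dark boxes ("lungs") in a brighter volume.
ct = np.full((30, 60, 60), 40, dtype=np.int16)
ct[5:25, 10:25, 10:50] = -800
ct[5:25, 35:50, 10:50] = -800
print(extract_and_smooth_lungs(ct).sum())
```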

20.
A statistical model is presented that represents the distributions of major tissue classes in single-channel magnetic resonance (MR) cerebral images. Using the model, cerebral images are segmented into gray matter, white matter, and cerebrospinal fluid (CSF). The model accounts for random noise, magnetic field inhomogeneities, and biological variations of the tissues. Intensity measurements are modeled by a finite Gaussian mixture. Smoothness and the piecewise contiguous nature of the tissue regions are modeled by a three-dimensional (3-D) Markov random field (MRF). A segmentation algorithm, based on the statistical model, approximately finds the maximum a posteriori (MAP) estimate of the segmentation and estimates the model parameters from the image data. The proposed scheme for segmentation is based on the iterated conditional modes (ICM) algorithm, in which measurement model parameters are estimated using local information at each site, and the prior model parameters are estimated using the segmentation after each cycle of iterations. Application of the algorithm to a sample of clinical MR brain scans, comparisons of the algorithm with other statistical methods, and a validation study with a phantom are presented. The algorithm constitutes a significant step toward a complete data-driven unsupervised approach to segmentation of MR images in the presence of random noise and intensity inhomogeneities.
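A compact sketch of the general ICM technique the abstract describes (a simplified, synchronous variant with fixed Gaussian parameters, not the authors' implementation): each voxel takes the label minimizing a Gaussian data term plus a Potts smoothness penalty over its 6 neighbours.

```python
# Hedged sketch: synchronous ICM for a Gaussian-mixture / MRF labelling with fixed
# class means and variances; beta and the neighbourhood handling are assumptions.
import numpy as np

def icm_segment(volume, means, variances, beta=1.0, n_iter=5):
    """Iterated conditional modes on a 3-D image with K Gaussian classes."""
    vol = volume.astype(float)
    K = len(means)
    # Data term: -log likelihood of each class at each voxel, shape (K, Z, Y, X).
    data = np.stack([0.5 * np.log(2 * np.pi * variances[k])
                     + (vol - means[k]) ** 2 / (2 * variances[k]) for k in range(K)])
    labels = data.argmin(axis=0)                       # maximum-likelihood initialisation
    for _ in range(n_iter):
        # For each class, count how many of the 6 neighbours disagree (edges wrap for simplicity).
        disagree = np.zeros_like(data)
        for k in range(K):
            same = (labels == k).astype(float)
            neigh = np.zeros_like(same)
            for axis in range(3):
                neigh += np.roll(same, 1, axis) + np.roll(same, -1, axis)
            disagree[k] = 6 - neigh
        labels = (data + beta * disagree).argmin(axis=0)
    return labels

# Toy usage: two tissue classes with noise.
rng = np.random.default_rng(0)
truth = np.zeros((10, 20, 20), dtype=int)
truth[:, :, 10:] = 1
img = rng.normal(np.where(truth == 0, 50.0, 120.0), 15.0)
seg = icm_segment(img, means=[50.0, 120.0], variances=[225.0, 225.0], beta=1.5)
print((seg == truth).mean())   # fraction of voxels labelled correctly
```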

