Similar Documents
20 similar documents found (search time: 15 ms)
1.
Automatic image processing methods are a prerequisite to efficiently analyze the large amount of image data produced by computed tomography (CT) scanners during cardiac exams. This paper introduces a model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images. Model adaptation is done by progressively increasing the degrees of freedom of the allowed deformations. This improves convergence as well as segmentation accuracy. The heart is first localized in the image using a 3-D implementation of the generalized Hough transform. Pose misalignment is corrected by matching the model to the image making use of a global similarity transformation. The complex initialization of the multicompartment mesh is then addressed by assigning an affine transformation to each anatomical region of the model. Finally, a deformable adaptation is performed to accurately match the boundaries of the patient's anatomy. A mean surface-to-surface error of 0.82 mm was measured in a leave-one-out quantitative validation carried out on 28 images. Moreover, the piecewise affine transformation introduced for mesh initialization and adaptation shows better interphase and interpatient shape variability characterization than commonly used principal component analysis.
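The pose-correction step above amounts to estimating a global similarity transform between corresponding point sets. As an illustrative sketch only (not the paper's implementation), a least-squares similarity fit in the style of Umeyama's closed-form method can be written in NumPy; the function name `fit_similarity` and the toy point sets are assumptions:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src points onto dst points, via SVD of the cross-covariance."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)              # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                        # enforce a proper rotation
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# toy check: recover a known pose exactly
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = 1.7 * pts @ Rz.T + np.array([2.0, -1.0, 0.5])
s, R, t = fit_similarity(pts, moved)
print(round(s, 3))  # → 1.7
```

The SVD-based solution recovers scale, rotation, and translation in closed form, which is why such fits are a common choice for global pose initialization before finer (affine, then deformable) adaptation.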

2.
Associating specific gene activity with functional locations in the brain results in a greater understanding of the role of the gene. To perform such an association for the more than 20 000 genes in the mammalian genome, reliable automated methods that characterize the distribution of gene expression in relation to a standard anatomical model are required. In this paper, we propose a new automatic method that results in the segmentation of gene expression images into distinct anatomical regions in which the expression can be quantified and compared with other images. Our contribution is a novel hybrid atlas that utilizes a statistical shape model based on a subdivision mesh, texture differentiation at region boundaries, and features of anatomical landmarks to delineate boundaries of anatomical regions in gene expression images. This atlas, which provides a common coordinate system for internal brain data, is being used to create a searchable database of gene expression patterns in the adult mouse brain. Our framework annotates the images about four times faster and has achieved a median spatial overlap of up to 0.92 compared with expert segmentation in 64 images tested. This tool is intended to help scientists interpret large-scale gene expression patterns more efficiently.

3.
Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with those obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. We present results by comparing our automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75% +/- 2.29% (mean +/- standard deviation).
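As a toy illustration of the first and third steps (gray-level thresholding and morphological smoothing), the sketch below runs on a synthetic axial slice; `extract_lungs`, the -400 HU cutoff, and the 5×5 structuring element are assumptions of this sketch, not the authors' parameters:

```python
import numpy as np
from scipy import ndimage

def extract_lungs(ct_slice, air_threshold=-400):
    """Toy lung-field extraction on one axial slice:
    gray-level thresholding followed by morphological smoothing."""
    # 1. air-like voxels (lungs + background) fall below the HU threshold
    air = ct_slice < air_threshold
    # 2. discard background: drop connected components touching the border
    labels, _ = ndimage.label(air)
    border_labels = np.unique(np.concatenate([labels[0], labels[-1],
                                              labels[:, 0], labels[:, -1]]))
    lungs = air & ~np.isin(labels, border_labels)
    # 3. smooth the boundary and fill small vessel holes
    lungs = ndimage.binary_closing(lungs, structure=np.ones((5, 5)))
    lungs = ndimage.binary_fill_holes(lungs)
    return lungs

# synthetic slice: body at 0 HU, two "lung fields" at -800 HU, air at -1000 HU
img = np.full((100, 100), -1000.0)
img[10:90, 10:90] = 0.0                          # body
img[30:70, 20:45] = img[30:70, 55:80] = -800.0   # lung fields
mask = extract_lungs(img)
print(mask[50, 30], mask[50, 50])  # → True False
```

On real scans the left/right separation additionally relies on the dynamic-programming junction search described above, which this sketch omits.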

4.
High-resolution X-ray computed tomography (CT) imaging is routinely used for clinical pulmonary applications. Since lung function varies regionally and because pulmonary disease is usually not uniformly distributed in the lungs, it is useful to study the lungs on a lobe-by-lobe basis. Thus, it is important to segment not only the lungs, but the lobar fissures as well. In this paper, we demonstrate the use of an anatomic pulmonary atlas, encoded with a priori information on the pulmonary anatomy, to automatically segment the oblique lobar fissures. Sixteen volumetric CT scans from 16 subjects are used to construct the pulmonary atlas. A ridgeness measure is applied to the original CT images to enhance the fissure contrast. Fissure detection is accomplished in two stages: an initial fissure search and a final fissure search. A fuzzy reasoning system is used in the fissure search to analyze information from three sources: the image intensity, an anatomic smoothness constraint, and the atlas-based search initialization. Our method has been tested on 22 volumetric thin-slice CT scans from 12 subjects, and the results are compared to manual tracings. Averaged across all 22 data sets, the RMS error between the automatically segmented and manually segmented fissures is 1.96 ± 0.71 mm and the mean of the similarity indices between the manually defined and computer-defined lobe regions is 0.988. The results indicate a strong agreement between the automatic and manual lobe segmentations.

5.
A unifying framework for partial volume segmentation of brain MR images   Cited by: 2 (self: 0, other: 2)
Accurate brain tissue segmentation by intensity-based voxel classification of magnetic resonance (MR) images is complicated by partial volume (PV) voxels that contain a mixture of two or more tissue types. In this paper, we present a statistical framework for PV segmentation that encompasses and extends existing techniques. We start from a commonly used parametric statistical image model in which each voxel belongs to one single tissue type, and introduce an additional downsampling step that causes partial voluming along the borders between tissues. An expectation-maximization approach is used to simultaneously estimate the parameters of the resulting model and perform a PV classification. We present results on well-chosen simulated images and on real MR images of the brain, and demonstrate that the use of appropriate spatial prior knowledge not only improves the classifications, but is often indispensable for robust parameter estimation as well. We conclude that general robust PV segmentation of MR brain images requires statistical models that describe the spatial distribution of brain tissues more accurately than currently available models.
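The expectation-maximization estimation at the core of such frameworks can be illustrated with a minimal 1-D Gaussian mixture; this sketch omits the paper's actual contribution (the downsampling step that models partial voluming), and all names and numbers are invented:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=60):
    """Minimal EM for a 1-D Gaussian mixture over voxel intensities."""
    mu = np.percentile(x, np.linspace(25, 75, k))  # deterministic init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        d = (x[:, None] - mu) ** 2
        resp = pi * np.exp(-d / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp /= resp.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(0) / nk
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(60, 5, 4000),     # darker tissue class
                    rng.normal(110, 8, 6000)])   # brighter tissue class
mu, var, pi = em_gmm_1d(x)
print(np.sort(np.round(mu)))   # means should land near 60 and 110
```

A PV model would additionally allow voxels to mix these component densities; the EM loop itself stays structurally the same.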

6.
Lung parenchyma segmentation from CT images is both the foundation of most subsequent image processing and a typical open problem that urgently needs solving. Accurate CT-based lung parenchyma segmentation is a prerequisite for mathematical analysis, computer-aided analysis, and treatment. This paper defines the concept and essence of lung parenchyma segmentation and explains its major significance in clinical treatment. It focuses on thresholding, region growing, active contour models, and genetic algorithms, analyzing the principles, advantages, and disadvantages of each and illustrating them with numerous examples. Finally, it points out the development trends of, and challenges facing, CT-based lung parenchyma segmentation methods.
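Of the method families such surveys review (thresholding, region growing, active contour models, genetic algorithms), region growing is the simplest to sketch. The toy 4-connected grower below accepts pixels whose intensity stays within a fixed tolerance of the seed value; all names and numbers are illustrative:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Toy 4-connected region growing: accept neighbors whose intensity
    differs from the seed pixel's value by no more than tol."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    q = deque([seed])
    mask[seed] = True
    base = img[seed]
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(img[nr, nc] - base) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

img = np.full((40, 40), 200.0)      # "chest wall" intensity
img[5:35, 5:35] = 20.0              # "lung field"
img[15:25, 15:25] = 180.0           # a dense structure inside the lung
mask = region_grow(img, seed=(6, 6), tol=30)
print(mask.sum())  # → 800
```

The example also shows region growing's main weakness noted in such surveys: the dense interior structure is excluded, so the result depends heavily on the seed and tolerance.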

7.
3-D segmentation algorithm of small lung nodules in spiral CT images   Cited by: 2 (self: 0, other: 2)
Computed tomography (CT) is the most sensitive imaging technique for detecting lung nodules, and is now being evaluated as a screening tool for lung cancer in several large-sample studies all over the world. In this report, we describe a semiautomatic method for 3-D segmentation of lung nodules in CT images for subsequent volume assessment. The distinguishing features of our algorithm are the following. 1) The user interaction process, which allows the introduction of the knowledge of the expert in a simple and reproducible manner. 2) The adoption of the geodesic distance in a multithreshold image representation, which allows the definition of a fusion–segregation process based on both gray-level similarity and object shape. The algorithm was validated on low-dose CT scans of small nodule phantoms (mean diameter 5.3–11 mm) and in vivo lung nodules (mean diameter 5–9.8 mm) detected in the Italung-CT screening program for lung cancer. A further test on small lung nodules of the Lung Image Database Consortium (LIDC) first data set was also performed. We observed an RMS error of less than 6.6% in phantoms, and the correct outlining of the nodule contour was obtained in 82/95 lung nodules of Italung-CT and in 10/12 lung nodules of the LIDC first data set. The achieved results support the use of the proposed algorithm for volume measurements of lung nodules examined with the low-dose CT scanning technique.

8.
A novel genetic algorithm (GA) is proposed for searching for a projective transform which, when applied to the model, results in the best alignment with an unknown 2D edge image. The presence of a valid solution indicates that the image can be regarded as one of the projected views of the model. On this basis, the identity of an unknown edge image can be deduced by matching it against a set of 3D reference models. To increase the efficiency of the process, a two-pass, coarse-to-fine strategy is adopted. First, an unknown image is classified into a small group of models by matching their outermost boundaries. Next, a finer but slower matching algorithm selects the model(s) that share similar internal edge features with the unknown image. In the design of the method, the authors have adopted an alternative projective transform representation that involves fewer parameters and allows constraints to be easily imposed on their dynamic ranges. This effectively lowers the chance of premature convergence and increases the success rate. Experimental results obtained with the proposed scheme are encouraging and demonstrate the feasibility of the approach.
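A bare-bones GA over bounded transform parameters can be sketched as follows; for brevity this toy searches only a 2-D rotation plus translation rather than the paper's reduced projective representation, and every name, range, and hyperparameter is an assumption:

```python
import numpy as np

def alignment_error(params, model, target):
    """Mean distance between transformed model points and the target."""
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = model @ R.T + [tx, ty]
    return np.linalg.norm(moved - target, axis=1).mean()

def ga_search(model, target, pop=60, gens=80, seed=0):
    """Bare-bones GA: truncation selection, blend crossover, Gaussian
    mutation, with candidates clipped to bounded parameter ranges."""
    rng = np.random.default_rng(seed)
    lo = np.array([-np.pi, -5.0, -5.0])
    hi = np.array([np.pi, 5.0, 5.0])
    P = rng.uniform(lo, hi, (pop, 3))
    for _ in range(gens):
        fit = np.array([alignment_error(p, model, target) for p in P])
        elite = P[np.argsort(fit)[:pop // 4]]         # keep best quarter
        pa = elite[rng.integers(len(elite), size=pop)]
        pb = elite[rng.integers(len(elite), size=pop)]
        w = rng.uniform(size=(pop, 1))
        P = w * pa + (1 - w) * pb                     # blend crossover
        P += rng.normal(0, 0.05, P.shape)             # Gaussian mutation
        P = np.clip(P, lo, hi)
    fit = np.array([alignment_error(p, model, target) for p in P])
    return P[np.argmin(fit)]

pts = np.random.default_rng(2).uniform(-1, 1, (30, 2))
R = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
obs = pts @ R.T + [2.0, -1.0]
best = ga_search(pts, obs)
print(np.round(best, 2))
```

Clipping every candidate to `[lo, hi]` mirrors the paper's point that constraining parameter dynamic ranges helps avoid premature convergence.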

9.
For pt. I, see ibid., vol. 11, no. 1, pp. 53-61 (1992). Based on the statistical properties of X-ray CT imaging given in pt. I, an unsupervised stochastic model-based image segmentation technique for X-ray CT images is presented. This technique utilizes the finite normal mixture distribution and the underlying Gaussian random field (GRF) as the stochastic image model. The number of image classes in the observed image is detected by information-theoretic criteria (AIC or MDL). The parameters of the model are estimated by expectation-maximization (EM) and classification-maximization (CM) algorithms. Image segmentation is performed by a Bayesian classifier. Results from the use of simulated and real X-ray computed tomography (CT) image data are presented to demonstrate the promise and effectiveness of the proposed technique.

10.
Wang Xiaopeng, Zhang Wen, Cui Ying. Optoelectronics Letters, 2015, 11(5): 395-400
In lung CT images, the edge of a tumor is frequently fuzzy because of the complex relationship between tumors and surrounding tissues, especially when the tumor adheres to the chest wall or lung boundary in the pathological area. This makes tumor segmentation more difficult. In order to segment tumors in lung CT images accurately, a method based on a support vector machine (SVM) and an improved level set model is proposed. Firstly, the image is divided into several block units; then the texture, gray-level, and shape features of each block are extracted to construct a feature vector, and an SVM classifier is trained to detect suspicious lung lesion areas. Finally, the suspicious edge is extracted as the initial contour after optimizing the lesion areas, and the complete tumor segmentation is obtained with a level set model modified by the morphological gradient. Experimental results show that this method can segment tumors from complex lung CT images quickly and with high accuracy.

11.
Remote human identification using iris biometrics has important civilian and surveillance applications, and its success requires the development of robust segmentation algorithms to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired using near-infrared or visible illumination. The proposed approach exploits multiple higher order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop robust postprocessing operations to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over the previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

12.
In this paper, an effective model-based approach for computer-aided kidney segmentation of abdominal CT images with anatomic structure consideration is presented. This automatic segmentation system is expected to assist physicians in both clinical diagnosis and educational training. The proposed method is a coarse-to-fine segmentation approach divided into two stages. First, the candidate kidney region is extracted according to the statistical geometric location of the kidney within the abdomen. This approach is applicable to images of different sizes by using the relative distance of the kidney region to the spine. The second stage identifies the kidney by a series of image processing operations. The main elements of the proposed system are: 1) the location of the spine is used as the landmark for coordinate references; 2) elliptic candidate kidney region extraction with progressive positioning on the consecutive CT images; 3) a novel directional model for more reliable kidney region seed point identification; and 4) adaptive region growing controlled by the properties of image homogeneity. In addition, in order to provide different views for the physicians, we have implemented a visualization tool that will automatically show the renal contour through the method of second-order neighborhood edge detection. We considered segmentation of kidney regions from CT scans that contain pathologies in clinical practice. The results of a series of tests on 358 images from 30 patients indicate an average correlation coefficient of up to 88% between automatic and manual segmentation.

13.
An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented. A mixture model composed of a large number of Gaussians is used to represent the brain image. Each tissue is represented by a large number of Gaussian components to capture the complex tissue spatial layout. The intensity of a tissue is considered a global feature and is incorporated into the model through tying of all the related Gaussian parameters. The expectation-maximization (EM) algorithm is utilized to learn the parameter-tied, constrained Gaussian mixture model. An elaborate initialization scheme is suggested to link the set of Gaussians per tissue type, such that each Gaussian in the set has similar intensity characteristics with minimal overlapping spatial supports. Segmentation of the brain image is achieved by the affiliation of each voxel to the component of the model that maximizes the a posteriori probability. The presented algorithm is used to segment three-dimensional, T1-weighted, simulated and real MR images of the brain into three different tissues, under varying noise conditions. Results are compared with state-of-the-art algorithms in the literature. The algorithm does not use an atlas for initialization or parameter learning. Registration processes are therefore not required and the applicability of the framework can be extended to diseased brains and neonatal brains.

14.
Automated model-based tissue classification of MR images of the brain   Cited by: 5 (self: 0, other: 5)
We describe a fully automated method for model-based tissue classification of magnetic resonance (MR) images of the brain. The method interleaves classification with estimation of the model parameters, improving the classification at each iteration. The algorithm is able to segment single- and multispectral MR images, corrects for MR signal inhomogeneities, and incorporates contextual information by means of Markov random fields (MRFs). A digital brain atlas containing prior expectations about the spatial location of tissue classes is used to initialize the algorithm. This makes the method fully automated and therefore it provides objective and reproducible segmentations. We have validated the technique on simulated as well as on real MR images of the brain.

15.
Exact information about the shape of a lumbar pedicle can increase operation accuracy and safety during computer-aided spinal fusion surgery, which requires extreme caution on the part of the surgeon, due to the complexity and delicacy of the procedure. In this paper, a robust framework for segmenting the lumbar pedicle in computed tomography (CT) images is presented. The framework that has been designed takes a CT image, which includes the lumbar pedicle as input, and provides the segmented lumbar pedicle in the form of 3-D voxel sets. This multistep approach begins with 2-D dynamic thresholding using local optimal thresholds, followed by procedures to recover the spine geometry in a high curvature environment. A subsequent canal reference determination using proposed thinning-based integrated cost is then performed. Based on the obtained segmented vertebra and canal reference, the edge of the spinal pedicle is segmented. This framework has been tested on 84 lumbar vertebrae of 19 patients requiring spinal fusion. It was successfully applied, resulting in an average success rate of 93.22% and a final mean error of 0.14 ± 0.05 mm. Precision errors were smaller than 1% for spine pedicle volumes. Intra- and interoperator precision errors were not significantly different.
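The "2-D dynamic thresholding using local optimal thresholds" step can be approximated by computing an Otsu-style optimal threshold per image block; `otsu_threshold` and `local_thresholds` below are illustrative sketches under that assumption, not the authors' algorithm:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Threshold maximizing between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2
        if var_b > best_var:
            best_var, best_t = var_b, centers[i - 1]
    return best_t

def local_thresholds(img, block=32):
    """One locally optimal threshold per block, slice-wise."""
    h, w = img.shape
    return {(r, c): otsu_threshold(img[r:r + block, c:c + block].ravel())
            for r in range(0, h, block) for c in range(0, w, block)}

# synthetic bimodal image: "bone" around 150, "soft tissue" around 50
rng = np.random.default_rng(3)
img = np.where(rng.uniform(size=(64, 64)) < 0.5,
               rng.normal(50, 5, (64, 64)), rng.normal(150, 5, (64, 64)))
ts = local_thresholds(img)
print(all(60 < t < 140 for t in ts.values()))  # → True
```

Computing the threshold per block rather than globally is what makes the thresholding "dynamic": each block adapts to its own local intensity statistics.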

16.
17.
18F-fluorodeoxyglucose positron emission tomography (18FDG PET) has become an essential technique in oncology. Accurate segmentation and uptake quantification are crucial in order to enable objective follow-up, the optimization of radiotherapy planning, and therapeutic evaluation. We have designed and evaluated a new, nearly automatic and operator-independent segmentation approach. This incorporated possibility theory, in order to take into account the uncertainty and inaccuracy inherent in the image. The approach remained independent of PET facilities since it did not require any preliminary calibration. Good results were obtained from phantom images [percent error = 18.38% (mean) ± 9.72% (standard deviation)]. Results on simulated and anatomopathological data sets were quantified using different similarity measures and showed the method was efficient (simulated images: Dice index = 82.18% ± 13.53% for SUV = 2.5). The approach could, therefore, be an efficient and robust tool for uptake volume segmentation, and lead to new indicators for measuring volume-of-interest activity.
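The Dice index used in such evaluations is straightforward to compute; a minimal sketch (names invented):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((20, 20), bool)
auto[5:15, 5:15] = True       # 100-pixel automatic segmentation
manual = np.zeros((20, 20), bool)
manual[5:15, 7:17] = True     # same size, shifted 2 pixels
print(dice(auto, manual))  # → 0.8
```

A value of 1 means perfect overlap; values above roughly 0.7 are conventionally read as good agreement for volumetric segmentations.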

18.
Toward automated segmentation of the pathological lung in CT   Cited by: 2 (self: 0, other: 2)
Conventional methods of lung segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on scans with lungs that contain dense pathologies, and such scans occur frequently in clinical practice. We propose a segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology. When the resulting transformation is applied to a mask of the normal lungs, a segmentation is found for the pathological lungs. As a mask of the normal lungs, a probabilistic segmentation built up out of the segmentations of 15 registered normal scans is used. To refine the segmentation, voxel classification is applied to a certain volume around the borders of the transformed probabilistic mask. Performance of this scheme is compared to that of three other algorithms: a conventional, a user-interactive and a voxel classification method. The algorithms are tested on 10 three-dimensional thin-slice computed tomography volumes containing high-density pathology. The resulting segmentations are evaluated by comparing them to manual segmentations in terms of volumetric overlap and border positioning measures. The conventional and user-interactive methods that start off with thresholding techniques fail to segment the pathologies and are outperformed by both voxel classification and the refined segmentation-by-registration. The refined registration scheme enjoys the additional benefit that it does not require pathological (hand-segmented) training data.

19.
In this paper, we consider the problem of segmentation of large collections of images. We propose a semisupervised optimization model that determines an efficient segmentation of many input images. The advantages of the model are twofold. First, the segmentation is highly controllable by the user, who can easily specify what he or she wants by providing, either offline or interactively, some (fully or partially) labeled pixels in images as strong priors for the model. Second, the model requires only minimal tuning of model parameters during the initial stage. Once initial tuning is done, the setup can be used to automatically segment a large collection of images that are distinct but share similar features. We show the mathematical properties of the model, such as existence and uniqueness of the solution, and establish a maximum/minimum principle for the solution of the model. Extensive experiments on various collections of biological images suggest that the proposed model is effective for segmentation and is computationally efficient.

20.