Similar Literature
20 similar documents found (search time: 31 ms)
1.
Extensive growth in functional brain imaging, perfusion-weighted imaging, diffusion-weighted imaging, brain mapping and brain scanning techniques has greatly increased the importance of 2-D and 3-D cerebral cortical segmentation from volumetric brain magnetic resonance imaging data sets. In addition, recent growth in 2-D and 3-D deformable brain segmentation techniques has brought the engineering community (computer vision, image processing, pattern recognition and graphics) closer to the medical community (neurosurgeons, psychiatrists, oncologists, neuroradiologists and internists). This paper reviews state-of-the-art 2-D and 3-D cerebral cortical segmentation techniques for brain magnetic resonance imaging, organized into three main classes: region-based techniques, boundary/surface-based techniques, and fusions of boundary/surface-based with region-based techniques. In the first class, region-based techniques, we present more than 18 different techniques for segmenting the cerebral cortex from brain slices acquired in orthogonal directions. In the second class, boundary/surface-based techniques, we present more than ten different techniques for segmenting the cerebral cortex from magnetic resonance brain volumes. Particular emphasis is placed on four state-of-the-art systems in the third class, based on the fusion of boundary/surface-based with region-based techniques and outlined in Part II of the paper; these are also called regional-geometric deformation models, and they follow the paradigm of partial differential equations in the level set framework. We also discuss the pros and cons of the various techniques and give the mathematical foundations of each sub-class in the cortical taxonomy. Received: 25 August 2000, Received in revised form: 28 March 2001, Accepted: 28 March 2001

2.
Functional magnetic resonance imaging (fMRI) has become a popular technique for studies of human brain activity. Typically, fMRI is performed with >3-mm sampling, so that the imaging data can be regarded as two-dimensional samples that average through the 1.5-4-mm thickness of cerebral cortex. The increasing use of higher spatial resolutions, <1.5-mm sampling, complicates the analysis of fMRI, as one must now consider activity variations within the depth of the brain tissue. We present a set of surface-based methods to exploit the use of high-resolution fMRI for depth analysis. These methods utilize white-matter segmentations coupled with deformable-surface algorithms to create a smooth surface representation at the gray-white interface and pial membrane. These surfaces provide vertex positions and normals for depth calculations, enabling averaging schemes that can increase contrast-to-noise ratio, as well as permitting the direct analysis of depth profiles of functional activity in the human brain.
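The depth readout described above can be sketched as follows. The function `depth_profile`, the layered test volume, and the nearest-neighbour lookup are illustrative assumptions, not the paper's implementation (which builds deformable surfaces and averages over many vertices):

```python
import numpy as np

def depth_profile(vol, vertex, normal, depths):
    """Sample `vol` at points vertex + d * normal for each depth d,
    using nearest-neighbour lookup. A minimal sketch of reading out a
    depth profile of activity from a surface vertex and its normal."""
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)          # ensure a unit normal
    pts = np.asarray(vertex, float) + np.outer(depths, normal)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(vol.shape) - 1)
    return vol[idx[:, 0], idx[:, 1], idx[:, 2]]

# Layered synthetic volume: intensity encodes position along the z-axis,
# mimicking laminar variation through the cortical thickness.
vol = np.tile(np.arange(16.0), (16, 16, 1))   # vol[x, y, z] = z
profile = depth_profile(vol, vertex=(8, 8, 4), normal=(0, 0, 1),
                        depths=np.arange(5))
assert np.allclose(profile, [4, 5, 6, 7, 8])
```

A real pipeline would interpolate (e.g. trilinearly) and average profiles across neighbouring vertices to raise the contrast-to-noise ratio.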

3.
We present a novel multimodality image registration system for spinal surgery. The system comprises a surface-based algorithm that performs computed tomography/magnetic resonance (CT/MR) rigid registration and MR image segmentation in an iterative manner. The segmentation/registration process progressively refines the result of MR image segmentation and CT/MR registration. For MR image segmentation, we propose a method based on the double-front level set that avoids boundary leakages, prevents interference from other objects in the image, and reduces computational time by constraining the search space. In order to reduce the registration error caused by misclassification of the soft tissue surrounding the bone in MR images, we propose a weighted surface-based CT/MR registration scheme. The resultant weighted surface is registered to the segmented surface of the CT image. Contours are generated from the reconstructed CT surfaces for subsequent MR image segmentation. This process iterates until convergence. The registration method achieves accuracy comparable to conventional techniques while being significantly faster. Experimental results demonstrate the advantages of the proposed approach and its application to different anatomies.

4.
This paper proposes a fast algorithm, called the SpiralFFT, that computes samples of the 3-D discrete Fourier transform of an object of interest along spiral contours in frequency space. This type of sampling geometry is prevalent in 3-D magnetic resonance imaging, as spiral sampling patterns allow for rapid, uninterrupted scanning over a large range of frequencies. We show that parameterizing the spiral contours in a certain way allows us to decompose the computation into a series of 1-D transforms, meaning that the 3-D transform is effectively separable, while still yielding spiral sampling patterns that are geometrically faithful and provide dense coverage of 3-D frequency space. We present a number of simulations which demonstrate that the SpiralFFT compares favorably to a state-of-the-art algorithm for computing general non-uniform discrete Fourier transforms.
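For reference, the samples such an algorithm produces can be computed by brute force with a direct non-uniform DFT evaluation. The helper `dft3_at` and the spiral parameterization below are illustrative assumptions; this O(N³)-per-sample evaluation is exactly the cost a fast, separable transform is designed to avoid:

```python
import numpy as np

def dft3_at(volume, kx, ky, kz):
    """Directly evaluate the 3-D DFT of `volume` at one arbitrary
    (possibly non-integer) frequency coordinate. Brute force: the
    separable exponentials are combined over the whole volume."""
    nx, ny, nz = volume.shape
    ex = np.exp(-2j * np.pi * kx * np.arange(nx) / nx)
    ey = np.exp(-2j * np.pi * ky * np.arange(ny) / ny)
    ez = np.exp(-2j * np.pi * kz * np.arange(nz) / nz)
    return np.einsum('xyz,x,y,z->', volume, ex, ey, ez)

# A spiral contour in frequency space: radius and height grow with the
# parameter t while the angle winds around the kz-axis.
t = np.linspace(0, 1, 64)
kx = 4 * t * np.cos(8 * np.pi * t)
ky = 4 * t * np.sin(8 * np.pi * t)
kz = 4 * t

rng = np.random.default_rng(0)
vol = rng.standard_normal((8, 8, 8))
samples = np.array([dft3_at(vol, a, b, c) for a, b, c in zip(kx, ky, kz)])

# At t = 0 the spiral passes through DC, so the first sample equals the
# plain sum of the volume.
assert np.isclose(samples[0], vol.sum())
```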

5.
Peripheral arterial disease (PAD) is characterized by arterial obstruction in the lower extremities due to atherosclerosis, and manifests itself as intermittent claudication, limb ischemia, or gangrene. Diagnosis of PAD using magnetic resonance imaging (MRI) equipment without contrast medium is a useful visual screening tool in clinical practice. In this article, we propose a new method for the segmentation of arterial images obtained from non-contrast-enhanced magnetic resonance angiography (MRA), based on a dot enhancement filter and 3-D region-growing methods; satisfactory experimental results are obtained.
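A minimal sketch of the 3-D region-growing step (the dot-enhancement prefilter is omitted; `region_grow_3d`, the 6-connectivity and the intensity tolerance are assumptions for illustration):

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, tol):
    """Grow a region from `seed`, adding 6-connected voxels whose
    intensity lies within `tol` of the seed intensity."""
    mask = np.zeros(vol.shape, dtype=bool)
    ref = vol[seed]
    q = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= c < s for c, s in zip(n, vol.shape)) \
               and not mask[n] and abs(vol[n] - ref) <= tol:
                mask[n] = True
                q.append(n)
    return mask

# Synthetic volume: a bright 3x3x3 "vessel" block in a dark background.
vol = np.zeros((10, 10, 10))
vol[3:6, 3:6, 3:6] = 1.0
mask = region_grow_3d(vol, seed=(4, 4, 4), tol=0.5)
assert mask.sum() == 27   # exactly the bright block is captured
```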

6.
Image segmentation plays an important role in medical image analysis. The most widely used image segmentation algorithms, region-based methods that typically rely on the homogeneity of image intensities in the regions of interest, often fail to provide accurate segmentation results due to the presence of bias field, heavy noise and rich structures. In this paper, we incorporate a nonlocal regularization mechanism into the coherent local intensity clustering formulation for brain image segmentation with simultaneous bias field estimation and denoising, while preserving fine structures. We define an energy functional with a local data fitting term, two nonlocal regularization terms for both the image and the membership functions, and an $L_2$ image fidelity term. By minimizing this energy, we obtain good segmentation results with well-preserved structures; bias estimation and noise reduction are achieved at the same time. Experiments performed on synthetic and clinical brain magnetic resonance imaging data, and comparisons with other methods, demonstrate that introducing the nonlocal regularization mechanism yields more regularized segmentation results.

7.
Accurate and efficient automatic or semi-automatic brain image segmentation methods are of great interest to both scientific and clinical researchers of the human central nervous system. Cerebral white matter segmentation in brain Magnetic Resonance Imaging (MRI) data is a challenging problem due to a combination of several factors: low contrast, presence of noise and imaging artifacts, partial volume effects, intrinsic tissue variation due to neurodevelopment and neuropathologies, and the highly convoluted geometry of the cortex. In this paper, we propose a new set of edge weights for the traditional graph cut algorithm (Boykov and Jolly, 2001) to correctly segment the cerebral white matter from T1-weighted MRI sequences. In this algorithm, the edge weights of Boykov and Jolly (2001) are modified by comparing the probabilities of an individual voxel and its neighboring voxels belonging to different segmentation classes. A shape prior in the form of a series of ellipses is then used to model the contours of the human skull in the various 2D slices of the sequence. This shape constraint is imposed to prune the original graph constructed from the input, forming a subgraph consisting of voxels within the skull contours. Our graph cut algorithm with the new set of edge weights is applied to this subgraph, increasing segmentation accuracy while decreasing computation time. Average segmentation errors for the proposed algorithm, the graph cut algorithm (Boykov and Jolly, 2001), and the Expectation Maximization Segmentation (EMS) algorithm (Van Leemput et al., 2001), in terms of Dice coefficients, are found to be (3.72 ± 1.12)%, (14.88 ± 1.69)%, and (11.95 ± 5.2)%, respectively.
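The idea of reweighting neighborhood edges by comparing the class probabilities of adjacent voxels can be sketched as below. The function `edge_weight`, the dot-product agreement measure, and `sigma` are hypothetical stand-ins for illustration, not the paper's exact formula:

```python
import numpy as np

def edge_weight(p_u, p_v, sigma=0.1):
    """Hypothetical n-link weight for a graph cut: strong when two
    neighbouring voxels' posterior class-probability vectors agree
    (likely same tissue), weak when they disagree, so the minimum cut
    is steered toward tissue boundaries."""
    agreement = float(np.dot(p_u, p_v))   # high when a shared class is likely
    return np.exp(-(1.0 - agreement) ** 2 / (2 * sigma ** 2))

wm = np.array([0.9, 0.05, 0.05])   # voxel confidently white matter
gm = np.array([0.1, 0.8, 0.1])     # voxel confidently gray matter

# Edges inside one tissue stay strong; edges across tissues weaken.
assert edge_weight(wm, wm) > edge_weight(wm, gm)
```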

8.
Brain tissue extraction from magnetic resonance images is an important step in neuroimage processing research. By combining the traditional geometric active contour model with a binary level set function, a new binary level set active contour model is proposed, and based on this model a method is presented that performs automatic and accurate MRI brain tissue extraction. The method automatically sets an optimal initial contour curve inside the brain tissue; this evolving curve is implicitly represented as the zero level set of a higher-dimensional function, which evolves under region-based image forces until it reaches the boundary of the brain image to be segmented. Brain extraction results of this method were compared with expert manual segmentations serving as the gold standard and with other popular algorithms. The results show that the proposed method extracts MRI brain tissue automatically, accurately and quickly, and is a robust MRI brain extraction method.

9.
Cardiac magnetic resonance imaging (MRI) has been extensively used in the diagnosis of cardiovascular disease and its quantitative evaluation. Cardiac MRI techniques have been progressively improved, providing high-resolution anatomical and functional information. One of the key steps in the assessment of cardiovascular disease is the quantitative analysis of the left ventricle (LV) contractile function; accurate delineation of the LV boundary is therefore of great interest for improving diagnostic performance. In this work, we present a novel segmentation algorithm for the LV from cardiac MRI, incorporating an implicit shape prior without any training phase, using level sets in a variational framework. Segmentation of the LV remains a challenging problem due to its subtle boundary, occlusion, and inhomogeneity. In order to overcome such difficulties, shape prior knowledge of the anatomical constraints of the LV is integrated into a region-based segmentation framework. The shape prior is introduced based on the anatomical shape similarity between endocardium and epicardium: the shape of the endocardium is assumed to be similar, up to scaling, to the shape of the epicardium. An implicit shape representation using the signed distance function is introduced, and the discrepancy between shapes is measured in a probabilistic way. Our shape constraint is imposed by mutual similarity of shapes, without a training phase requiring a collection of shapes to learn their statistical properties. The performance of the proposed method has been demonstrated on fifteen clinical datasets, showing its potential as a basis for the clinical diagnosis of cardiovascular disease.

10.
Personal authentication using 3-D finger geometry
In this paper, a biometric authentication system based on measurements of the user's three-dimensional (3-D) hand geometry is proposed. The system relies on a novel real-time and low-cost 3-D sensor that generates a dense range image of the scene. By exploiting 3-D information we are able to relax the constraints usually posed on the environment and the placement of the hand, which greatly contributes to the unobtrusiveness of the system. Efficient, close to real-time algorithms for hand segmentation, localization and 3-D feature measurement are described and tested on an image database simulating a variety of working conditions. The performance of the system is shown to be comparable to that of state-of-the-art hand geometry authentication techniques, without sacrificing user convenience.

11.
3-D object segmentation is an important and challenging topic in computer vision that can be tackled with artificial life models. A Channeler Ant Model (CAM), based on the natural ability of ants to deal with 3-D environments through self-organization and emergent behaviours, is proposed. Ant colonies, defined in terms of rules for movement, pheromone laying, reproduction, death and deviating behaviour, are able to segment artificially generated objects of different shapes, intensities and backgrounds. The model depends on few parameters and provides an elegant solution for the segmentation of 3-D structures in noisy environments with an unknown range of image intensities: even when the intensity and noise ranges partially overlap, it provides a complete segmentation with negligible contamination (i.e., the fraction of segmented voxels that do not belong to the object). The CAM is already in use for the automated detection of nodules in lung computed tomography scans.

12.
Computer-aided detection/diagnosis (CAD) systems can enhance the diagnostic capabilities of physicians and reduce the time required for accurate diagnosis. The objective of this paper is to review recently published segmentation and classification techniques, and their state of the art, for human brain magnetic resonance images (MRI). The review reveals that CAD systems for human brain MRI are still an open problem. In the light of this review, we propose a hybrid intelligent machine learning technique for a computer-aided detection system that automatically detects brain tumors in magnetic resonance images. The proposed technique is based on the following computational methods: the feedback pulse-coupled neural network for image segmentation, the discrete wavelet transform for feature extraction, principal component analysis for reducing the dimensionality of the wavelet coefficients, and a feed-forward back-propagation neural network to classify inputs as normal or abnormal. The experiments were carried out on 101 images, consisting of 14 normal and 87 abnormal (malignant and benign tumors), from a real human brain MRI dataset. The classification accuracy on both training and test images is 99%, which is significantly good. Moreover, the proposed technique demonstrates its effectiveness compared with other recently published machine learning techniques. The results reveal that the proposed hybrid approach is accurate, fast and robust. Finally, possible future directions are suggested.

13.
Bin picking by a robot in real time requires the performance of a series of tasks that are beyond the capabilities of commercially available state-of-the-art robotic systems. In this paper, a laser-ranging sensor for real-time robot control is described. This sensor is incorporated into a robot system that has been applied to the bin-picking or random-parts problem. This system contains new technological components that have been developed recently at the Environmental Research Institute of Michigan (ERIM). These components (the 3-D imaging scanner and a recirculating cellular-array pipeline processor) make generalized real-time robot vision a practical and viable technology. This paper describes these components and their implementation in a typical real-time robot vision system application.

14.
An image segmentation method combining two-dimensional entropy and fuzzy entropy
The two-dimensional-entropy segmentation method is a commonly used threshold segmentation technique. Its basic assumption is that the object and background regions occupy the vast majority of the two-dimensional histogram, i.e., that the sum of the probabilities of the object and background regions is approximately 1. Its shortcoming is that it ignores the influence of boundary-region information on the segmentation result. In view of this, an image segmentation method combining two-dimensional entropy and fuzzy entropy is proposed: two-dimensional entropy is first used to obtain an initial segmentation, and fuzzy entropy is then applied as post-processing to compensate for the neglected boundary information. Experimental results show that for noisy images the segmentation results of the proposed method are quite satisfactory.
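A minimal sketch of the 2-D entropy thresholding stage (the fuzzy-entropy post-processing is omitted; the coarse 16-level histogram and the brute-force search over threshold pairs are simplifications for illustration):

```python
import numpy as np

def two_d_entropy_threshold(img, levels=16):
    """2-D Shannon-entropy thresholding sketch. Each pixel is described
    by (quantized gray level, quantized 3x3 neighbourhood mean); the
    threshold pair (s, t) maximizing the summed entropies of the
    background and object quadrants of the 2-D histogram is returned."""
    g = np.clip((img * levels).astype(int), 0, levels - 1)
    p = np.pad(img, 1, mode='edge')           # 3x3 local mean via shifts
    mean = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    m = np.clip((mean * levels).astype(int), 0, levels - 1)
    hist = np.zeros((levels, levels))
    np.add.at(hist, (g, m), 1)
    hist /= hist.sum()

    def quadrant_entropy(q):
        p_q = q[q > 0] / q.sum()
        return -np.sum(p_q * np.log(p_q))

    best, best_st = -np.inf, (0, 0)
    for s in range(1, levels):
        for t in range(1, levels):
            bg, ob = hist[:s, :t], hist[s:, t:]
            if bg.sum() < 1e-9 or ob.sum() < 1e-9:
                continue                      # empty quadrant: infeasible
            h = quadrant_entropy(bg) + quadrant_entropy(ob)
            if h > best:
                best, best_st = h, (s, t)
    return best_st

# Two-intensity synthetic image: dark background, bright square.
img = np.full((32, 32), 0.2)
img[8:24, 8:24] = 0.8
s, t = two_d_entropy_threshold(img)
assert 0.2 * 16 < s <= 0.8 * 16   # threshold separates the two modes
```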

15.
This study presents an efficient variational region-based active contour model for segmenting images without a priori knowledge about the object or background. In order to handle intensity inhomogeneities and noise, we propose to integrate into the region-based local intensity model a global density distance inspired by the Bhattacharyya flow. The local term, based on local information of the segmented image, allows the model to deal with the bias field artifact that arises in data acquisition processes. The global term, based on the density distance between the probability distribution functions of image intensity inside and outside the active contour, provides information for accurate segmentation, keeps the curve from spilling, and addresses noise in the image. Intensive 2D and 3D experiments on many medical imaging modalities, such as computed tomography, magnetic resonance imaging, and ultrasound images, demonstrate the effectiveness of the model when dealing with images with blurred object boundaries, intensity inhomogeneities, and noise.

16.
Two-dimensional Shannon entropy image segmentation adjusted by moment invariants, and its fast implementation
To overcome the shortcomings of the two-dimensional Shannon entropy thresholding method, a threshold segmentation method is proposed that uses the moment-invariant method to adjust oblique-partition two-dimensional-histogram Shannon entropy. First, the oblique-partition principle for the two-dimensional histogram is applied to two Shannon entropy thresholding methods; then the moment-invariant method is used to select the best threshold among those obtained by the two entropy thresholding methods, and a general recursive algorithm for the oblique-partition two-dimensional Shannon entropy thresholding method is given. Finally, the distribution characteristics of the two-dimensional histogram are combined with this algorithm to obtain a new, fast recursive algorithm. Experimental results show that the proposed method not only gives better segmentation results than current oblique-partition two-dimensional maximum entropy thresholding methods, but also runs faster, by a factor of about four.

17.
The two-dimensional maximum entropy image segmentation algorithm is studied. To address its high computational complexity and long running time, a two-dimensional maximum entropy algorithm based on a chaotic genetic algorithm is proposed. The method maps a chaotic sequence into the two-dimensional space of the threshold pair using a carrier-like scheme, and then uses the chaotic genetic algorithm to search for the optimal thresholds for image segmentation. Experimental results show that, because the method considers both point gray level and regional gray-level mean and employs an effective global search algorithm, it not only yields satisfactory segmentation results but also greatly improves computation speed, making it a practical and effective image segmentation method.

18.
Review of brain MRI image segmentation methods
Brain image segmentation is one of the most important parts of clinical diagnostic tools. Brain images mostly contain noise, intensity inhomogeneity and sometimes deviation, so accurate segmentation of brain images is a very difficult task. However, accurate segmentation of these images is crucial for correct diagnosis by clinical tools. We present a review of the methods used in brain segmentation. The review covers imaging modalities, magnetic resonance imaging, and methods for noise reduction, inhomogeneity correction and segmentation. We conclude with a discussion of trends in future research on brain segmentation.

19.
The particle swarm optimization algorithm is applied to two-dimensional maximum entropy image thresholding. A two-dimensional maximum entropy criterion function for image segmentation is first constructed; a particle swarm optimization algorithm suited to integer programming is then used to maximize this criterion, achieving effective segmentation of images corrupted by noise. Segmentation experiments show that the method has strong noise robustness and runs faster than both the standard two-dimensional maximum entropy method and its genetic-algorithm-based variant.
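A generic integer-domain particle swarm maximizer, standing in for the search over the 2-D maximum entropy criterion. The toy quadratic criterion with a known maximum and all parameter choices below are assumptions for illustration, not the paper's variant:

```python
import numpy as np

def pso_int_max(f, bounds, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm maximizer over an integer box, sketching
    a PSO search for the threshold pair (s, t) that maximizes a 2-D
    criterion. Positions are continuous; f is evaluated on rounded
    (integer) coordinates."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(*np.round(p).astype(int)) for p in x])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Inertia + cognitive + social terms (standard PSO update).
        v = 0.6 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(*np.round(p).astype(int)) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmax()].copy()
    return np.round(gbest).astype(int)

# Toy criterion with its maximum at (100, 120), standing in for the
# 2-D maximum-entropy objective over the threshold pair.
crit = lambda s, t: -((s - 100) ** 2 + (t - 120) ** 2)
s, t = pso_int_max(crit, bounds=((0, 0), (255, 255)))
assert abs(s - 100) <= 2 and abs(t - 120) <= 2
```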

20.
In the last decade, computer vision, pattern recognition, image processing and cardiac researchers have given immense attention to cardiac image analysis and modelling. This paper surveys state-of-the-art computer vision and pattern recognition techniques for Left Ventricle (LV) segmentation and modelling during the second half of the twentieth century, and presents the key characteristics of successful model-based segmentation techniques for LV modelling. This survey concludes the following: (1) no single pattern recognition or computer vision technique is sufficient for accurate 2D, 3D or 4D modelling of the LV; (2) fitted mathematical models have dominated LV modelling in the last 15 years; (3) knowledge extracted from the ground truth has led to very successful attempts at LV modelling; (4) the spatial and temporal behaviour of the LV observed through different imaging modalities has yielded information that has led to accurate LV modelling; and (5) not much attention has been paid to validation of LV modelling. Received: 25 September 1998, Received in revised form: 25 August 1999, Accepted: 20 October 1999

