Similar Literature (20 results)
1.
We present and validate a novel registration algorithm mapping two data sets, generated from a rigid object, in the presence of Gaussian noise. The proposed method is based on the Unscented Kalman Filter (UKF) algorithm, which is generally employed for analyzing nonlinear systems corrupted by additive Gaussian noise. First, we employ the proposed registration algorithm to fit two randomly generated data sets in the presence of isotropic Gaussian noise, when the corresponding points between the two data sets are assumed to be known. Then, we extend the registration method to the case where the data (with known correspondences) is corrupted by anisotropic Gaussian noise. The new registration method not only reliably converges to the correct registration solution, but also estimates the variance, as a confidence measure, for each of the estimated registration transformation parameters. Furthermore, we employ the proposed registration algorithm for rigid-body, point-based registration where corresponding points between the two registering data sets are unknown. The algorithm is tested on point data sets acquired from a pelvic cadaver and a scaphoid bone phantom by means of computed tomography (CT) and tracked free-hand ultrasound imaging. The collected 3-D points in the ultrasound frame are registered to the 3-D meshes in the CT frame using both the proposed algorithm and the standard Iterative Closest Point (ICP) algorithm. Experimental results demonstrate that our proposed method significantly outperforms ICP registration in the presence of additive Gaussian noise. The proposed algorithm is also more robust than ICP with respect to outliers in the data sets and to initial misalignment between them.
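As a reference point for the known-correspondence case, the sketch below gives the classical closed-form least-squares (Kabsch/Umeyama-style) rigid registration in numpy. It is a baseline only, not the authors' UKF formulation, which additionally yields per-parameter variance estimates; the toy data are assumptions for illustration.

```python
import numpy as np

def rigid_register(P, Q):
    """Closed-form least-squares rigid registration (Kabsch/Umeyama).

    P, Q: (N, 3) arrays of corresponding points; returns R, t with Q ~ P @ R.T + t.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Noisy toy example with known correspondences
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                             # make it a proper rotation
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=P.shape)
R, t = rigid_register(P, Q)
```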

2.
The circular scanning trajectory is one of the most widely adopted data-acquisition configurations in computed tomography (CT). The Feldkamp, Davis, Kress (FDK) algorithm and its various modifications have been developed for approximate reconstruction of three-dimensional images from circular cone-beam data. When data contain transverse truncations, however, these algorithms may reconstruct images with significant truncation artifacts. It is of practical significance to develop algorithms that can reconstruct region-of-interest (ROI) images from truncated circular cone-beam data that are free of truncation artifacts and that have an accuracy comparable to that obtained from nontruncated cone-beam data. In this work, we have investigated and developed a backprojection-filtration (BPF) algorithm for ROI-image reconstruction from circular cone-beam data containing transverse truncations. Furthermore, we have developed a weighted BPF algorithm that exploits "redundant" information in the data to improve image quality. To validate and evaluate the proposed BPF algorithms for circular cone-beam CT, we have performed numerical studies using both computer-simulation data and experimental data acquired with a radiotherapy cone-beam CT system. Quantitative results from these studies demonstrate that the proposed BPF algorithms can reconstruct ROI images free of truncation artifacts.
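For orientation, the sketch below implements the conventional filter-then-backproject order in a simple 2-D parallel-beam geometry with numpy only. BPF methods reverse this order (backproject first, then filter), which is what makes ROI reconstruction from truncated data possible; this generic illustration is not the authors' cone-beam algorithm.

```python
import numpy as np

def fbp_parallel(sinogram, thetas_deg):
    """Minimal 2-D parallel-beam filtered backprojection (ramp filter via FFT).

    sinogram: (n_angles, n_det) array; returns an (n_det, n_det) image.
    """
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                 # ideal ramp filter
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real

    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(thetas_deg)):
        s = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2  # detector coordinate
        recon += np.interp(s, np.arange(n_det), proj, left=0.0, right=0.0)
    return recon * np.pi / (2 * n_angles)
```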

3.
Multislice helical CT: image temporal resolution
A multislice helical computed tomography (CT) halfscan (HS) reconstruction algorithm is proposed for cardiac applications. The imaging performance (in terms of temporal resolution, z-axis resolution, image noise, and image artifacts) of the HS algorithm is compared with that of existing algorithms using theoretical models and clinical data. A theoretical model of the temporal resolution performance (in terms of the temporal sensitivity profile) is established for helical CT in general, i.e., for any number of detector rows and any reconstruction algorithm. It is concluded that HS reconstruction yields better image temporal resolution than the corresponding 180° LI (linear interpolation) reconstruction and is more immune to the data-inconsistency problem induced by cardiac motion. The temporal resolution of multislice helical CT with the HS algorithm is comparable to that of single-slice helical CT with the HS algorithm. In practice, the 180° LI and HS-LI algorithms can be used in parallel to generate two image sets from the same scan acquisition, one (180° LI) for improved z-axis resolution and noise, and the other (HS-LI) for improved image temporal resolution.
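The sketch below gives one common smooth short-scan ("halfscan") weighting of the Parker type that underlies HS-style reconstruction: rays measured twice within the π-plus-fan-angle range receive complementary sin² weights, and the shorter angular aperture is what improves temporal resolution. Region boundaries depend on the sign convention chosen for the fan angle, so this is illustrative rather than the paper's exact weighting.

```python
import numpy as np

def parker_weight(beta, gamma, gamma_m):
    """Smooth short-scan (halfscan) weight for a ray at source angle beta and
    fan angle gamma; the scan range is [0, pi + 2*gamma_m] instead of 2*pi.
    Doubly measured rays get complementary sin^2 weights that sum to 1."""
    if beta < 2 * (gamma_m - gamma):          # start of scan: partner ray comes later
        return np.sin(np.pi / 4 * beta / (gamma_m - gamma)) ** 2
    if beta < np.pi - 2 * gamma:              # measured only once: full weight
        return 1.0
    # end of scan: partner ray was already measured near the start
    return np.sin(np.pi / 4 * (np.pi + 2 * gamma_m - beta) / (gamma_m + gamma)) ** 2
```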

4.
Low-dose computed tomography (CT) introduces considerable noise during image acquisition, severely degrading image quality. To address this problem, a low-dose CT denoising algorithm based on a residual attention mechanism and a compound perceptual loss is proposed. The algorithm denoises low-dose CT images with a generative adversarial network; multi-scale feature extraction and residual attention modules are introduced into the network framework to fuse information at different scales, improve the network's ability to discriminate noise features, and avoid losing image detail during denoising. A compound perceptual loss function is also adopted to speed up network convergence and make the denoised image perceptually closer to the original. Experimental results show that, compared with existing algorithms, the proposed algorithm effectively suppresses noise in low-dose CT images and recovers more texture detail; relative to the low-dose CT images, the peak signal-to-noise ratio (PSNR) of the processed images improves by 31.72% and the structural similarity (SSIM) by 13.15%, meeting more demanding medical diagnostic requirements.
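A minimal PyTorch sketch of a residual block with channel attention, the generic kind of building block the abstract describes; the authors' exact multi-scale module and GAN framework are not reproduced here, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    """Generic residual block with channel attention (squeeze-and-excitation style)."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(               # channel attention: pool -> MLP -> gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        return x + y * self.attn(y)              # residual connection preserves detail

x = torch.randn(1, 64, 64, 64)
out = ResidualAttentionBlock()(x)
```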

5.
张爱武  赵江华  赵宁宁  康孝岩  郭超凡 《红外与激光工程》2018,47(10):1026002-1026002(10)
Most traditional denoising and de-aliasing algorithms target single-band images. Considering the characteristics of hyperspectral imagery and the effects of noise and aliasing on the image, a multidimensional filtering algorithm combining tensors with the reciprocal cell is proposed and applied to the denoising and de-aliasing of hyperspectral imagery. The method treats the hyperspectral image as a third-order tensor and uses the reciprocal cell to obtain the part of the spectral coverage with little aliasing and noise; filters along the three directions are then solved by alternating iterations in the minimum mean-squared-error sense, completing the filtering. While keeping spatial and spectral information consistent, the method effectively reduces aliasing and noise and improves image quality. Comparative experiments on several hyperspectral data sets against a 2-D Wiener filtering algorithm and a tensor multidimensional denoising algorithm demonstrate the effectiveness of the proposed algorithm.
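A small numpy sketch of the tensor view underlying such multidimensional filtering: mode-n unfolding turns the hyperspectral cube into a matrix along each of the three modes so that a filter can be solved per direction. The per-mode filter itself is omitted, and the cube dimensions are toy assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding of a 3rd-order tensor: move `mode` axis first, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full), 0, mode)

hsi = np.random.rand(64, 64, 31)      # rows x cols x bands
for mode in range(3):                 # alternating per-mode filtering skeleton
    M = unfold(hsi, mode)             # a Wiener-style filter would be applied to M here
    hsi = fold(M, mode, hsi.shape)
```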

6.
When the tube current of X-ray computed tomography (CT) is lowered, the projection data become low-SNR and CT image restoration becomes an ill-posed problem. A low-dose medical CT denoising algorithm based on the discrete shearlet transform and a MAP Markov random field (MAP-MRF) model is proposed. Exploiting the multi-scale, multi-directional nature of the discrete shearlet transform, the shearlet coefficients can be computed accurately and represent image information more sparsely; combining this with Markov theory yields a MAP-MRF denoising algorithm in which the blind restoration result strikes an effective balance between noise removal and edge preservation, with an optimized regularization term that makes the algorithm converge quickly. Image noise is effectively filtered out while image features are well preserved. Qualitative and quantitative experiments on the Shepp-Logan phantom, clinical brain CT, and clinical low-dose shoulder CT compare the MAP-MRF algorithm with five other algorithms. In the qualitative experiments the method produces good reconstructed CT images with clear texture detail and structural features; in quantitative experiments with 10 mA simulated noise and 20 mA clinical low tube current, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) improve substantially and the root-mean-square error (RMSE) is lower than for the comparison algorithms, demonstrating the algorithm's effectiveness.
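As a simplified stand-in for the shearlet-domain MAP estimate (using separable wavelets via PyWavelets instead of shearlets, and a universal threshold instead of the MRF prior), multi-scale soft thresholding looks like this:

```python
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="db4", levels=3):
    """Multi-scale transform-domain soft thresholding; a simplified stand-in
    for the shearlet-domain MAP estimate described in the abstract."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Noise scale from the finest diagonal subband (robust median estimator)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))       # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in level)
        for level in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)[: img.shape[0], : img.shape[1]]
```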

7.
Vessel tree reconstruction in volumetric data is a necessary prerequisite in various medical imaging applications. Specifically, when considering the application of automated lung nodule detection in thoracic computed tomography (CT) scans, vessel trees can be used to resolve local ambiguities based on global considerations and so improve the performance of nodule detection algorithms. In this study, a novel approach to vessel tree reconstruction and its application to nodule detection in thoracic CT scans was developed by using correlation-based enhancement filters and a fuzzy shape representation of the data. The proposed correlation-based enhancement filters depend on first-order partial derivatives and so are less sensitive to noise compared with Hessian-based filters. Additionally, multiple sets of eigenvalues are used so that a distinction between nodules and vessel junctions becomes possible. The proposed fuzzy shape representation is based on regulated morphological operations that are less sensitive to noise. Consequently, the vessel tree reconstruction algorithm can accommodate vessel bifurcation and discontinuities. A quantitative performance evaluation of the enhancement filters and of the vessel tree reconstruction algorithm was performed. Moreover, the proposed vessel tree reconstruction algorithm reduced the number of false positives generated by an existing nodule detection algorithm by 38%.
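The distinction the abstract draws, first-order derivatives versus Hessian-based filters, can be illustrated with the 2-D structure tensor, whose entries are built purely from first-order partial derivatives. This is a generic 2-D sketch, not the authors' 3-D correlation-based filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_eigvals(img, sigma=1.0, rho=2.0):
    """Per-pixel eigenvalues of the 2-D structure tensor built from first-order
    derivatives, illustrating derivative-based (non-Hessian) shape analysis."""
    gx = gaussian_filter(img, sigma, order=(0, 1))   # d/dx
    gy = gaussian_filter(img, sigma, order=(1, 0))   # d/dy
    Jxx = gaussian_filter(gx * gx, rho)              # local averaging of products
    Jxy = gaussian_filter(gx * gy, rho)
    Jyy = gaussian_filter(gy * gy, rho)
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc              # lambda1 >= lambda2
```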

8.
Hyperspectral images have many bands, and the high correlation between neighboring bands makes classification difficult. To address this, a hyperspectral image classification algorithm based on adaptive differential-evolution feature selection is proposed. First, a population of vectors is initialized and an adaptive differential-evolution search over the features generates candidate feature subsets; then ReliefF is used to rank the features and remove redundant ones, building a feature list from all features; finally, a fuzzy k-nearest-neighbor classifier computes the classification accuracy of each vector, and a wrapper model evaluates the feature subsets. Experimental results on the Indian Pines and KSC data sets verify the effectiveness and reliability of the algorithm: compared with several other feature-selection algorithms, it achieves higher overall classification accuracy and a better Kappa coefficient.
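A hedged sketch of the wrapper idea using SciPy's differential evolution over a continuous mask vector (thresholded at 0.5 to obtain a band subset) and a plain k-NN standing in for the fuzzy k-NN classifier; the data, sizes, and helper names are toy assumptions, not the authors' adaptive DE variant.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))                    # 150 pixels, 20 bands (toy data)
y = (X[:, 3] + X[:, 11] > 0).astype(int)          # labels depend on two bands

def neg_accuracy(weights):
    mask = weights > 0.5                          # continuous DE vector -> band subset
    if not mask.any():
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=5)     # plain kNN stands in for fuzzy kNN
    return -cross_val_score(clf, X[:, mask], y, cv=3).mean()

result = differential_evolution(neg_accuracy, bounds=[(0, 1)] * X.shape[1],
                                maxiter=10, popsize=5, seed=0)
selected_bands = np.flatnonzero(result.x > 0.5)
```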

9.
To meet the need for low-resource-overhead mixed-noise suppression in outdoor target detection from rotor unmanned aerial vehicle (UAV) platforms, this paper proposes a mixed-noise suppression algorithm based on local developable image surfaces (DLS). The algorithm lets the local-surface-developability step and a layered denoising step complement each other, achieving a denoising effect neither can reach on its own. First, local developability processing suppresses the salt-and-pepper noise and low-density Gaussian noise in the image, yielding a preliminary denoised image. Next, layered denoising in the spatial and Fourier domains removes the residual Gaussian noise while preserving edges, textures, and other details as far as possible. Finally, the developability and layered-denoising steps are iterated to further remove the remaining mixed-noise components, achieving the goal of suppressing mixed noise in target-detection imagery. Experimental results show that, in removing mixed image noise, the proposed algorithm holds an advantage over seven other denoising algorithms: both the subjective visual quality and the objective statistics of its denoised images are better.
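A generic two-stage stand-in for mixed-noise suppression: median filtering for the impulse component, then Fourier-domain low-pass filtering for the Gaussian residue, iterated as in the abstract's outer loop. This is not the DLS algorithm itself, and the cutoff and kernel sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_mixed_noise(img, med_size=3, keep_frac=0.15, iters=2):
    """Two-stage mixed-noise suppression sketch: median filter for impulse
    (salt-and-pepper) noise, then a circular FFT low-pass for Gaussian residue."""
    out = img.astype(float)
    rows, cols = out.shape
    Y, X = np.ogrid[:rows, :cols]
    radius = keep_frac * min(rows, cols)
    mask = (Y - rows / 2) ** 2 + (X - cols / 2) ** 2 <= radius ** 2
    for _ in range(iters):
        out = median_filter(out, size=med_size)               # impulse noise
        F = np.fft.fftshift(np.fft.fft2(out))
        out = np.fft.ifft2(np.fft.ifftshift(F * mask)).real   # Gaussian residue
    return out
```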

10.
Optimal CT scanning plan for long-bone 3-D reconstruction
Digital computed tomographic (CT) data are widely used in three-dimensional (3-D) reconstruction of bone geometry and density features for 3-D modelling purposes. During in vivo CT data acquisition the number of scans must be limited in order to protect patients from the risks related to X-ray absorption. The aim of this work is to automatically define, given a finite number of CT slices, the scanning plan which returns the optimal 3-D reconstruction of a bone segment from in vivo acquired CT images. An optimization algorithm based on a Discard-Insert-Exchange technique has been developed. In the proposed method the optimal scanning sequence is sought by minimizing the overall reconstruction error of a two-dimensional (2-D) prescanning image: an anterior-posterior (AP) X-ray projection of the bone segment. This approach has been validated in vitro on three different femurs. The 3-D reconstruction errors obtained by optimizing the scanning plan on the 2-D prescanning images and, for comparison, directly on the corresponding 3-D data sets have been compared. 2-D and 3-D data sets have been reconstructed by linear interpolation along the longitudinal axis. Results show that direct 3-D optimization yields root mean square reconstruction errors which are only 4%-7% lower than those of the 2-D-optimized plan, thus proving that 2-D optimization provides a good suboptimal scanning plan for 3-D reconstruction. Furthermore, the 3-D reconstruction errors given by the optimized scanning plan and by a standard radiological protocol for long bones have been compared. Results show that the optimized plan yields 20%-50% lower 3-D reconstruction errors.
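The sketch below shows the flavor of such a plan search on a toy 1-D bone profile: a random exchange move replaces one slice position at a time, keeping the plan whose linear-interpolation reconstruction error is lowest. It is a simplified stand-in for the Discard-Insert-Exchange technique, with all names and sizes assumed.

```python
import numpy as np

def plan_error(profile, z, picks):
    """RMS error of reconstructing a 1-D profile from a subset of slice
    positions by linear interpolation along the longitudinal axis."""
    recon = np.interp(z, z[picks], profile[picks])
    return np.sqrt(np.mean((recon - profile) ** 2))

def exchange_optimize(profile, n_slices, iters=200, seed=0):
    """Toy exchange search: swap one chosen slice for another position
    and keep the plan if the reconstruction error improves."""
    rng = np.random.default_rng(seed)
    z = np.arange(len(profile), dtype=float)
    picks = np.sort(rng.choice(len(profile), n_slices, replace=False))
    best = plan_error(profile, z, picks)
    for _ in range(iters):
        cand = picks.copy()
        cand[rng.integers(n_slices)] = rng.integers(len(profile))  # exchange move
        cand = np.unique(cand)
        if len(cand) < n_slices:
            continue
        err = plan_error(profile, z, cand)
        if err < best:
            picks, best = cand, err
    return picks, best
```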

11.
SAR Image Regularization With Fast Approximate Discrete Minimization
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for the successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to the joint regularization of the amplitude and interferometric phase in urban-area SAR images.
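For the binary (two-label) case, the exact MRF MAP estimate is a single s-t min-cut. The sketch below uses the PyMaxflow package (assumed available) on an 8-bit image, mirroring the machinery that α-expansion extends to multiple labels and larger move spaces.

```python
import numpy as np
import maxflow  # PyMaxflow

def binary_mrf_denoise(img, lam=50.0):
    """Exact MAP restoration of a binary image under an Ising-type MRF via one
    s-t min-cut; img is a 2-D uint8 array with values in [0, 255]."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodeids, lam)                # smoothness (pairwise) terms
    g.add_grid_tedges(nodeids, img, 255 - img)    # data terms for labels 0/1
    g.maxflow()
    return np.int_(np.logical_not(g.get_grid_segments(nodeids)))
```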

12.
Dual-energy material density images obtained by prereconstruction basis-material decomposition techniques offer specific tissue information, but they exhibit relatively high pixel noise. It is shown that noise in the material density images is negatively correlated and that this can be exploited for noise reduction in the two basis-material density images. The algorithm minimizes noise-related differences between pixels and their local mean values, under the constraint that monoenergetic CT values, which can be calculated from the density images, remain unchanged. Applied to the material density images, a noise reduction by factors of 2 to 5 is achieved. While quantitative results for regions of interest remain unchanged, edge effects can occur in the processed images. To suppress these, locally adaptive algorithms are presented and discussed. Results are documented by both phantom measurements and clinical examples.
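The constraint can be written in a few lines: smooth only the difference channel, where the anti-correlated noise lives, and reconstruct the two density images so that their monoenergetic combination is preserved exactly. A sketch with assumed weights a1 and a2 and a simple box filter, not the paper's locally adaptive scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correlated_noise_reduction(d1, d2, a1=0.5, a2=0.5, size=5):
    """Reduce anti-correlated noise in two basis-material density images while
    keeping the monoenergetic combination m = a1*d1 + a2*d2 unchanged."""
    m = a1 * d1 + a2 * d2                   # monoenergetic image: preserved exactly
    diff_s = uniform_filter(d1 - d2, size)  # smooth only the difference channel
    d1_new = (m + a2 * diff_s) / (a1 + a2)
    d2_new = (m - a1 * diff_s) / (a1 + a2)
    return d1_new, d2_new                   # a1*d1_new + a2*d2_new == m
```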

13.
孙云山  张立毅  耿艳香 《信号处理》2015,31(10):1354-1360
In medical CT imaging, unavoidable noise degrades image quality and interferes with clinical diagnosis, so research on medical CT denoising is of real significance to diagnostic services. Borrowing from image segmentation, this paper uses a fuzzy neural network to classify image pixels into different regions such as edge, smooth, and texture areas, and then applies wavelet sparse-representation threshold denoising to each type of image block, so as to better preserve the detail features of medical CT images. Experimental results show that the proposed algorithm is effective for medical CT denoising: both the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) improve, and the detail information of the CT images is well preserved.
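A sketch of the region-adaptive part: a different wavelet threshold is applied per segmentation label using PyWavelets, with the fuzzy-neural-network classifier that produces the labels omitted; labels and threshold values here are assumptions.

```python
import numpy as np
import pywt

def regionwise_denoise(img, labels, thresholds, wavelet="db2"):
    """Apply a different wavelet soft threshold per region class (e.g. labels
    0=smooth, 1=edge, 2=texture from a prior segmentation step)."""
    out = np.zeros_like(img, dtype=float)
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    for cls, thr in thresholds.items():
        subs = tuple(pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
        rec = pywt.idwt2((cA, subs), wavelet)[: img.shape[0], : img.shape[1]]
        out[labels == cls] = rec[labels == cls]   # keep this class's reconstruction
    return out

# usage: regionwise_denoise(img, labels, {0: 20.0, 1: 8.0, 2: 12.0})
```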

14.
Shear Modulus Decomposition Algorithm in Magnetic Resonance Elastography
Magnetic resonance elastography (MRE) is an imaging modality capable of visualizing the elastic properties of an object using magnetic resonance imaging (MRI) measurements of transverse acoustic strain waves induced in the object by a harmonically oscillating mechanical vibration. Various algorithms have been designed to determine the mechanical properties of the object under the assumptions of linear elasticity, isotropy, and local homogeneity. One of the challenging problems in MRE is to reduce noise effects while maintaining contrast in the reconstructed shear modulus images. In this paper, we propose a new algorithm designed to reduce the degree of noise amplification in the reconstructed shear modulus images without the assumption of local homogeneity. Investigating the relation between the measured displacement data and the stress wave vector, the proposed algorithm uses an iterative reconstruction formula based on a decomposition of the stress wave vector. Numerical simulation experiments and real experiments with agarose gel phantoms and human liver data demonstrate that the proposed algorithm is more robust to noise than standard inversion algorithms and stably determines the shear modulus.
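For contrast, the standard direct baseline under local homogeneity is the algebraic Helmholtz inversion, mu = -rho * omega^2 * u / laplacian(u), whose noise amplification (division by a noisy Laplacian) motivates the proposed decomposition. A numpy sketch with assumed physical parameters:

```python
import numpy as np

def algebraic_inversion(u, omega, rho=1000.0, dx=1e-3):
    """Direct (algebraic Helmholtz) inversion for the shear modulus under local
    homogeneity; u is a 2-D (possibly complex) displacement field on grid dx."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
    return -rho * omega ** 2 * u / (lap + 1e-12)   # small guard against /0
```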

15.
Tomographic reconstruction for tilted helical multislice CT
One of the most recent technical advancements in computed tomography (CT) is the introduction of multislice CT (MCT). Because multiple detector rows are used for data acquisition, MCT offers higher volume coverage, faster scan speed, and reduced X-ray tube loading. Several image reconstruction algorithms have been developed to exploit its unique data-sampling pattern, and these have been shown to be adequate in producing clinically acceptable images. Recent studies, however, have revealed that the image quality of MCT can be significantly degraded when helical data are acquired with a tilted gantry, which has rendered this feature unacceptable for clinical usage. In this paper, we first present a detailed investigation into the cause of the image quality degradation. An analytical model is derived to provide a mathematical basis for correction. Several compensation schemes are subsequently presented, and a detailed performance comparison is provided in terms of spatial resolution, noise, computation efficiency, and image artifacts.

16.
Generative adversarial networks (GANs) offer clear performance advantages for low-dose CT (LDCT) denoising and have become a new research focus in this field in recent years. However, when the strength of the noise and artifact distributions varies across LDCT images acquired at different doses, GAN denoising performance becomes unstable and the network generalizes poorly. To overcome this defect, this paper first designs an encoder-decoder noise-level-estimation subnetwork that generates the noise map corresponding to an LDCT image of any dose; the map is subtracted from the original input to pre-suppress noise. Second, the main denoising network adopts a GAN framework whose generator is a multi-encoder U-Net, and adversarial training optimizes the network structure to further suppress CT image noise. Finally, several loss functions are designed to constrain the parameter optimization of the different functional modules, further securing the performance of the LDCT denoising network. Experimental results show that, compared with currently popular algorithms, the proposed denoising network achieves better denoising while retaining the important original information in LDCT images.
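A toy PyTorch sketch of the noise-map idea: a small encoder-decoder predicts per-pixel noise that is subtracted from the input before the main generator. The layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NoiseLevelNet(nn.Module):
    """Tiny encoder-decoder predicting a per-pixel noise map; subtracting it
    from the input yields the pre-denoised image fed to the main generator."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1))

    def forward(self, x):
        noise_map = self.dec(self.enc(x))
        return x - noise_map, noise_map      # pre-suppressed image + estimated noise

img = torch.randn(1, 1, 64, 64)
clean_est, noise = NoiseLevelNet()(img)
```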

17.
In this paper, we propose novel noise reduction algorithms that can be used to enhance image quality in various medical imaging modalities such as magnetic resonance and multidetector computed tomography. The noisy captured 3-D data are first transformed by the discrete complex wavelet transform. Using a nonlinear function, we model the data as the sum of the clean data plus additive Gaussian or Rayleigh noise. We use a mixture of bivariate Laplacian probability density functions for the clean data in the transformed domain. The MAP and minimum mean-squared error (MMSE) estimators allow us to reduce the noise efficiently. The employed prior distribution is a bivariate mixture, and thus accurately characterizes the heavy-tailed distribution of clean images and exploits the interscale properties of wavelet coefficients. In addition, we estimate the parameters of the model using local information; as a result, the proposed denoising algorithms are spatially adaptive, i.e., the intrascale dependency of wavelets is also well exploited in the enhancement process. The proposed approach results in significant noise reduction while the introduced distortions remain unnoticeable, thanks to accurate statistical modeling. The obtained shrinkage functions have closed form, are simple to implement, and enhance the data efficiently. Our experiments on CT images show that among the derived shrinkage functions BiLapGausMAP usually produces images with higher peak SNR; however, BiLapGausMMSE is preferred especially for CT images, which have high SNRs. Furthermore, BiLapRayMAP yields better noise reduction performance for low-SNR MR data sets such as high-resolution whole-heart imaging, while BiLapGausMAP performs better on MR data with higher intrinsic SNR such as functional cine data.
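The closed-form flavor of such shrinkage functions can be seen in the well-known bivariate rule for a single bivariate Laplacian prior with additive Gaussian noise; note that the paper itself uses a mixture prior and locally estimated parameters, so this is a simplified relative, not the paper's estimator.

```python
import numpy as np

def bivariate_shrink(w, w_parent, sigma_n, sigma):
    """Bivariate MAP shrinkage for a wavelet coefficient w and its parent
    (coarser-scale) coefficient, assuming a bivariate Laplacian prior and
    Gaussian noise. sigma_n: noise std, sigma: local signal std."""
    r = np.sqrt(w ** 2 + w_parent ** 2)
    gain = np.maximum(r - np.sqrt(3) * sigma_n ** 2 / sigma, 0) / (r + 1e-12)
    return gain * w          # coefficients shrink jointly with their parents
```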

18.
In this work, a computer-based algorithm is proposed for the initial interpretation of human cardiac images. Reconstructed single photon emission computed tomography (SPECT) images are used to differentiate between subjects with normal and abnormal values of ejection fraction. The method analyzes pixel intensities that correspond to blood flow in the left ventricular region. The algorithm proceeds through three main stages: the initial stage performs pre-processing to reduce noise as well as blur in the image; the second stage extracts features from the images; and classification is done in the final stage. The pre-processing stage consists of a de-noising part and a de-blurring part. Novel features are used for classification. Features are extracted as three different sets based on the pixel intensity distribution in different regions, the spatial relationships of pixels, and multi-scale image information. Two supervised algorithms are proposed for classification: one based on a threshold value computed from the features extracted from the training images, and the other based on a sequential-minimal-optimization-based support vector machine approach. Experimental studies were performed on real cardiac SPECT images obtained from a hospital. The classification results were verified by an expert nuclear medicine physician and by the ejection fraction value obtained from quantitative gated SPECT, the most widely used software package for quantifying gated SPECT images.
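A minimal sketch of the classification stage using scikit-learn's SVC (whose libsvm backend uses an SMO-type solver) on toy features standing in for the three extracted feature sets; data and split are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy feature matrix standing in for the intensity-distribution, spatial,
# and multi-scale feature sets extracted from the SPECT images
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 12))
y = rng.integers(0, 2, size=80)               # 0: normal EF, 1: abnormal EF

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # SMO-style SVM
clf.fit(X[:60], y[:60])
accuracy = clf.score(X[60:], y[60:])
```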

19.
This paper presents a new algorithm for denoising dynamic contrast-enhanced (DCE) MR images. It is a novel variation on the nonlocal means (NLM) algorithm. The algorithm, called dynamic nonlocal means (DNLM), exploits the redundancy of information in the temporal sequence of images. Empirical evaluations of the performance of the DNLM algorithm relative to seven other denoising methods (simple Gaussian filtering, the original NLM algorithm, a trivial extension of NLM to include the temporal dimension, bilateral filtering, anisotropic diffusion filtering, the wavelet adaptive multiscale products threshold, and traditional wavelet thresholding) are presented. The evaluations include quantitative evaluations using simulated data and real data (20 DCE-MRI data sets from routine clinical breast MRI examinations) as well as qualitative evaluations using the same real data (24 observers: 14 image/signal-processing specialists, 10 clinical breast MRI radiographers). The results of the quantitative evaluation using the simulated data show that the DNLM algorithm consistently yields the smallest MSE between the denoised image and its corresponding original noiseless version. The results of the quantitative evaluation using the real data provide evidence, at the α = 0.05 level of significance, that the DNLM algorithm yields the smallest MSE between the denoised image and its corresponding original noiseless version. The results of the qualitative evaluation provide evidence, at the α = 0.05 level of significance, that the DNLM algorithm performs visually better than all of the other algorithms. Collectively, the qualitative and quantitative results suggest that the DNLM algorithm attenuates noise in DCE MR images more effectively than any of the other algorithms.
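A naive per-pixel sketch of the temporal idea: NLM weights are accumulated over a search window in every frame of the sequence, not just the current one. This loop-based version is for illustration only and is slow; the DNLM weighting details differ, and all parameters are assumptions.

```python
import numpy as np

def temporal_nlm_pixel(seq, t, i, j, half=5, patch=1, h=0.5):
    """Denoise one pixel of frame t with nonlocal-means weights computed over a
    search window in *all* frames, exploiting temporal redundancy."""
    T, H, W = seq.shape
    ref = seq[t, i - patch:i + patch + 1, j - patch:j + patch + 1]
    num = den = 0.0
    for k in range(T):                                   # search every frame
        for m in range(max(patch, i - half), min(H - patch, i + half + 1)):
            for n in range(max(patch, j - half), min(W - patch, j + half + 1)):
                cand = seq[k, m - patch:m + patch + 1, n - patch:n + patch + 1]
                w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                num += w * seq[k, m, n]
                den += w
    return num / den

seq = np.random.rand(5, 32, 32)            # 5 frames of a toy DCE sequence
val = temporal_nlm_pixel(seq, t=2, i=16, j=16)
```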

20.
Detection of the left ventricular (LV) endocardial (inner) and epicardial (outer) boundaries in cardiac images, provided by fast computed tomography (cine CT), magnetic resonance (MR), or ultrasound (echocardiography), is addressed. The automatic detection of the LV boundaries is difficult due to background noise, poor contrast, and often unclear differentiation of the tissue characteristics of the ventricles, papillary muscles, and surrounding tissues. An approach to automatic ventricular boundary detection is presented that employs set-theoretic techniques and incorporates a priori knowledge of the heart's geometry, brightness, spatial structure, and temporal dynamics into the boundary detection algorithm. The available knowledge is interpreted as constraint sets in the functional space, and consistent boundaries are considered to belong to the intersection of all the introduced sets, thus satisfying the a priori information. An algorithm is also suggested for the simultaneous detection of the endocardial and epicardial boundaries of the LV. The procedure is demonstrated using cine CT images of the human heart.
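The set-theoretic mechanism can be sketched as alternating projections onto constraint sets (POCS): each operator enforces one piece of a priori knowledge, and the iterate is driven toward the intersection of the sets. The "smoothness" operator below is a crude stand-in rather than a true convex projection, and all constraints are toy assumptions.

```python
import numpy as np

def pocs(x0, projections, iters=50):
    """Set-theoretic estimation by alternating application of constraint
    operators; with true convex projections this is the classical POCS scheme."""
    x = x0.copy()
    for _ in range(iters):
        for proj in projections:
            x = proj(x)
    return x

# Example constraints for a 1-D "boundary profile": bounded amplitude and smoothness
clip = lambda x: np.clip(x, 0.0, 1.0)                           # amplitude prior
smooth = lambda x: np.convolve(x, np.ones(5) / 5, mode="same")  # crude smoothness prior
x = pocs(np.random.rand(128), [clip, smooth])
```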
