Similar Documents
10 similar documents found (search time: 171 ms)
1.
In industrial X-ray CT, the conventional linearization (CL) correction method amplifies projection noise while removing beam-hardening effects, which lowers the signal-to-noise ratio (SNR) of the CT image. To overcome this drawback, an improved linearization (IL) correction method is proposed. Based on the characteristics of projection noise, an adaptive filter is designed; this filter extracts the noise from the raw projection data to obtain the correction error caused by the noise, and the correction result is then adjusted according to this error to suppress the amplification of projection noise. The IL correction method was validated with different tube voltages and inspected materials. Experimental results show that the IL method removes beam-hardening effects while effectively suppressing the amplification of projection noise during correction. Compared with the CL method, the SNR of IL-corrected CT images is improved by a factor of about 6.
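The linearization idea can be sketched as a polynomial calibration that maps the measured polychromatic projection back onto a scale that is linear in material thickness. The calibration data, the quadratic hardening model, and the fit degree below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative calibration: projections of known thicknesses of a step
# wedge. Beam hardening makes the polychromatic projection grow
# sublinearly with thickness (modeled here by a toy quadratic term).
thickness = np.linspace(0.0, 4.0, 20)        # cm of calibration material
p_poly = thickness - 0.02 * thickness ** 2   # measured polychromatic projection

# Conventional linearization: fit the inverse mapping p_poly -> thickness,
# then apply it to measured projections so they grow linearly again.
coeffs = np.polyfit(p_poly, thickness, deg=3)

def linearize(p):
    """Map a polychromatic projection value to its linearized equivalent."""
    return np.polyval(coeffs, p)

p_corr = linearize(p_poly)
# After correction, the residual beam-hardening error is far smaller
# than the uncorrected error.
```

The IL method of the abstract additionally extracts the projection noise with an adaptive filter and corrects for the noise-induced error; the sketch above covers only the conventional (CL) step.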

2.
In single photon emission computed tomography (SPECT), the nonstationary Poisson noise in the projection data (sinogram) is a major cause of degraded quality in the reconstructed images. To improve quality, the Poisson noise in the sinogram must be suppressed before or during image reconstruction. However, conventional space- or frequency-domain denoising methods are likely to remove information that is important for accurate image reconstruction, especially for analytical SPECT reconstruction with compensation for nonuniform attenuation. As a time-frequency analysis tool, the wavelet transform has been widely used in signal and image processing and has demonstrated its power in denoising applications. In this article, we studied the denoising ability of wavelet-based methods and the impact of denoising on analytical SPECT reconstruction with nonuniform attenuation. Six popular wavelet-based denoising methods were tested. The reconstruction results showed that the Revised BivaShrink method with a complex wavelet outperforms the others in analytical SPECT reconstruction with nonuniform attenuation compensation. Meanwhile, we found that the Anscombe transform has no significant effect on the wavelet-based denoising methods, which obtain good denoising results even without it. Wavelet-based denoising methods are a good choice for analytical SPECT reconstruction with compensation for nonuniform attenuation. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 36–43, 2013.
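As one minimal sketch of the pipeline the abstract studies, the following applies an Anscombe transform (to stabilize the Poisson variance), a one-level Haar soft-thresholding step (a simple stand-in for the six shrinkage methods tested), and the inverse transform. The signal shape, threshold, and random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sinogram row with Poisson noise (counts), as in SPECT projections.
clean = 50.0 + 40.0 * np.sin(np.linspace(0.0, np.pi, 256)) ** 2
noisy = rng.poisson(clean).astype(float)

def anscombe(x):
    """Variance-stabilizing transform: Poisson -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def haar_soft_denoise(x, thresh):
    """One-level orthonormal Haar transform with soft-thresholded details."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)        # inverse Haar
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

denoised = inv_anscombe(haar_soft_denoise(anscombe(noisy), thresh=3.0))
```

After the Anscombe transform the detail coefficients carry roughly unit-variance noise, so a threshold of about 3 removes most of it while leaving the slowly varying signal (held in the approximation coefficients) intact.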

3.
Modern photon radiotherapy optimization methods require a number of nonuniform dose distributions incident on the tumor. From this point of view, radiotherapy optimization has strong similarities with the reconstruction problem in tomographic imaging. In general, the image reconstruction problem is simpler because, in the absence of noise and with sufficiently many projections, an exact solution always exists. However, it is in general impossible to produce an arbitrary desired dose distribution in the patient by external beam irradiation. This is primarily because the order of events from physical collection of projection data to reconstruction theory is reversed in therapy optimization, which starts with the theory and ends with physical irradiation, where negative dose delivery is impossible. Despite this fundamental problem, many approximate image reconstruction methods work quite well for therapy optimization, even though strict optimization requires radiobiological models and the finest external beam radiation tool available: the pencil beam.

4.
It is a significant challenge to accurately reconstruct medical computed tomography (CT) images with their important details and features. Reconstructed images suffer from noise and artifacts because the acquired projection data may be insufficient or undersampled. In practice, some "isolated noise points" (similar to impulse noise) exist in low-dose CT projection measurements. Statistical iterative reconstruction (SIR) methods have shown greater potential than the conventional filtered back-projection (FBP) algorithm to significantly reduce quantum noise while maintaining reconstruction quality. Although typical total variation-based SIR algorithms can obtain reconstructed images of relatively good quality, noticeable patchy artifacts remain unavoidable. To address impulse-noise and patchy-artifact pollution, this work proposes, for the first time, a joint-regularization-constrained SIR algorithm for sparse-view CT image reconstruction, named "SIR-JR" for simplicity. The new joint regularization consists of two components: total generalized variation, which can process images with many directional features and yields high-order smoothness, and the neighborhood median prior, which is a powerful filtering tool for impulse noise. A new alternating iterative algorithm is then used to solve the objective function. Experiments on different head phantoms show that the reconstructed images are of superior quality and that the presented method is feasible and effective.
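The neighborhood median prior builds on the 3×3 median, a classic remover of impulse-like "isolated noise points". The sketch below shows the filter itself; note that the paper embeds it as a regularization term inside the SIR objective rather than as a post-reconstruction filter:

```python
import numpy as np

def neighborhood_median(img):
    """3x3 neighborhood median of a 2D image (edge-padded)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views of the image and take the pixelwise median.
    stacked = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(stacked, axis=0)

# An isolated noise point in an otherwise flat projection is removed,
# while the flat background is left untouched.
proj = np.zeros((5, 5))
proj[2, 2] = 100.0
filtered = neighborhood_median(proj)
```

Every 3×3 window containing the spike holds eight background values and one outlier, so its median equals the background value and the impulse vanishes.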

5.

In the summation convolution backprojection method of image reconstruction in computed tomography, the final image accuracy depends on the convolution filter used. Filters are designed to attenuate high spatial frequencies when noisy projection data are used. This paper explores the differences between the images reconstructed using a range of filters, and compares the results with the case of the ramp filter that provides the “best” image for ideal, noise-free, projection data. It is shown that systematic errors between these images and the best image exist, and that these errors are related to the second differential of the reconstruction filter with respect to spatial frequency. This error determination may be used to correct computed tomography images that have been reconstructed using inappropriate filters, and this theory is tested using noise-free projection data from two computer simulated images. It is shown that the corrected images are far closer to the original images.
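The quantities involved can be sketched numerically: the ramp filter, one apodized filter (a Hann window, an illustrative choice rather than one the paper specifies), and a finite-difference estimate of the second derivative of the filter with respect to spatial frequency, which is the quantity the systematic error is related to:

```python
import numpy as np

# Frequency responses of two convolution-backprojection filters. The ramp
# filter |f| yields the "best" image for ideal noise-free projections; an
# apodized filter attenuates high spatial frequencies to control noise.
f = np.linspace(-0.5, 0.5, 513)                         # cycles per sample
ramp = np.abs(f)
hann = ramp * 0.5 * (1.0 + np.cos(2.0 * np.pi * f))     # Hann-apodized ramp

# Finite-difference estimate of the filter's second derivative with
# respect to spatial frequency, the quantity tied to the systematic error.
df = f[1] - f[0]
filter_second_diff = np.gradient(np.gradient(hann, df), df)
```

The apodized response never exceeds the ramp and falls to zero at the Nyquist frequency, which is exactly the high-frequency attenuation the abstract describes.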

7.
Application of ultrasonic reflection CT to imaging of underwater objects   (cited: 1; self-citations: 1; other: 0)
This paper focuses on plane-scanning imaging, proposes a method for correcting the scan starting position and the influence of the transducer aperture, and obtains good experimental results.

8.
Recently, the potential harm of the electromagnetic radiation used in computed tomography (CT) scanning has attracted much attention. This makes few-view CT reconstruction an important issue in medical imaging. In this article, an adaptive dynamic combined energy minimization model is proposed for few-view CT reconstruction based on compressed sensing theory. The L2 energy of the image gradient and the total variation (TV) energy are combined, and their working regions are separated adaptively with a dynamic threshold. The proposed model overcomes the TV model's disadvantageous tendency to penalize the image gradient uniformly, irrespective of the underlying image structures. Numerical experiments with various insufficient-data problems in fan-beam CT suggest that both the reconstruction speed and the quality of the results are generally improved. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 44–52, 2013.
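The combined L2/TV idea, quadratic penalty where the image gradient is small and linear (TV-like) penalty where it is large, is in spirit a Huber-type penalty. A minimal sketch with a fixed illustrative threshold `tau` follows; the paper's model selects its working regions adaptively with a dynamic threshold, which this sketch does not attempt:

```python
import numpy as np

def combined_energy(u, tau):
    """Huber-style combined gradient penalty over a 2D image u:
    quadratic (L2) where the gradient magnitude is below tau (smooth
    regions), linear (TV) where it is above (edges). Illustrative only."""
    # Forward differences; duplicating the last row/column gives zero
    # gradient at the boundary.
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return np.sum(np.where(mag <= tau,
                           0.5 * mag ** 2 / tau,   # L2 region: smooth areas
                           mag - 0.5 * tau))       # TV region: edges
```

A constant image has zero energy, while a sharp step is charged linearly in its height rather than quadratically, which is what lets edges survive the regularization.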

9.
Background and Objective: The constrained total variation (TV) minimization algorithm has been applied in computed tomography (CT) for more than 10 years to reconstruct images accurately from sparse-view projections. The Chambolle-Pock (CP) algorithm framework has been used to derive the algorithm instance for the constrained TV minimization programme. However, the ordinary CP (OCP) algorithm has a slow convergence rate, and each iteration is time-consuming. We therefore investigate acceleration approaches for achieving fast convergence and high-speed reconstruction. Methods: To achieve a fast convergence rate, we propose a new parameter-setting approach for OCP. To achieve high-speed reconstruction in each iteration, we use a graphics processing unit (GPU) to speed up the two time-consuming operations, projection and backprojection. Results: We evaluate and validate our acceleration approaches via two-dimensional (2D) reconstructions of a low-resolution Shepp–Logan phantom from noise-free data and a high-resolution Shepp–Logan phantom from noise-free and noisy data. For the three specific imaging cases considered here, convergence is accelerated by factors of 89, 9 and 81, and each iteration is accelerated by factors of 120, 340 and 340, respectively. In total, the whole reconstructions for the three cases are accelerated by factors of about 10,000, 3060 and 27,540, respectively. Conclusions: The OCP algorithm may be greatly accelerated by the improved parameter setting and the use of a GPU. The integrated acceleration techniques make the OCP algorithm more practical for CT reconstruction.
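The CP primal-dual scheme can be sketched on the simpler ROF denoising problem min_u lam·TV(u) + 0.5‖u − f‖²; the paper's CT instance additionally involves the projection operator and a tuned parameter setting, neither of which is reproduced here. The step sizes must satisfy sigma·tau·L² ≤ 1, where L² = ‖grad‖² ≤ 8 for the forward-difference gradient:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def cp_tv_denoise(f, lam, n_iter=200):
    """Chambolle-Pock for the ROF model min_u lam*TV(u) + 0.5*||u - f||^2.
    sigma = tau = 1/sqrt(8) is one admissible setting (sigma*tau*L^2 = 1),
    not the paper's accelerated parameter choice."""
    sigma = tau = 1.0 / np.sqrt(8.0)
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent, then projection onto the pointwise ball |p| <= lam.
        gx, gy = grad(u_bar)
        px = px + sigma * gx; py = py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / lam)
        px /= norm; py /= norm
        # Primal step: prox of 0.5*||u - f||^2, then over-relaxation.
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u - u_old
    return u

# Illustrative use on a noisy piecewise-constant phantom.
rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.3 * rng.standard_normal((32, 32))
denoised = cp_tv_denoise(noisy, lam=0.2)
```

In the CT setting the gradient operator above is augmented with the system (projection/backprojection) matrix, which is precisely the pair of operations the paper offloads to the GPU.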


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号