Similar literature: 20 records found.
1.
Fast maximum entropy approximation in SPECT using the RBI-MAP algorithm
In this work, we present a method for approximating constrained maximum entropy (ME) reconstructions of SPECT data with modifications to a block-iterative maximum a posteriori (MAP) algorithm. Maximum likelihood (ML)-based reconstruction algorithms require some form of noise smoothing. Constrained ME provides a more formal method of noise smoothing without requiring the user to select parameters. In the context of SPECT, constrained ME seeks the minimum-information image estimate among those whose projections are a given distance from the noisy measured data, with that distance determined by the magnitude of the Poisson noise. Images that meet the distance criterion are referred to as feasible images. We find that modeling of all principal degrading factors (attenuation, detector response, and scatter) in the reconstruction is critical because feasibility is not meaningful unless the projection model is as accurate as possible. Because the constrained ME solution is the same as a MAP solution for a particular value of the MAP weighting parameter, beta, the constrained ME solution can be found with a MAP algorithm if the correct value of beta is found. We show that the RBI-MAP algorithm, if used with a dynamic scheme for estimating beta, can approximate constrained ME solutions in 20 or fewer iterations. We compare results for various methods of achieving feasible images on a simulation of Tl-201 cardiac SPECT data. Results show that the RBI-MAP ME approximation provides images and quantitative estimates close to those from a slower algorithm that gives the true ME solution. We also find that the ME results have higher spatial resolution and greater high-frequency noise content than those obtained with a feasibility-based stopping rule, with feasibility-based low-pass filtering, or with a quadratic Gibbs prior whose beta is selected according to the feasibility criterion. We conclude that fast ME approximation is possible using either RBI-MAP with the dynamic procedure or a feasibility-based stopping rule, and that such reconstructions may be particularly useful in applications where resolution is critical.
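For illustration only, the feasibility-driven dynamic-beta idea can be sketched with a one-step-late (OSL) MAP-EM update and an entropy penalty. This is not the authors' RBI-MAP: the system matrix `A`, the OSL form, and the multiplicative beta rule below are all illustrative assumptions. The beta update pushes the Poisson chi-square toward the number of measurement bins, mirroring the abstract's notion of feasibility.

```python
import numpy as np

def osl_map_em_entropy(A, y, beta0=0.1, n_iter=20, eps=1e-8):
    """Schematic OSL MAP-EM with an entropy penalty and a dynamic beta driven
    by a Poisson feasibility criterion (illustration, not the authors' RBI-MAP)."""
    m, n = A.shape
    x = np.full(n, y.sum() / max(A.sum(), eps))  # flat initial estimate
    sens = A.T @ np.ones(m)                      # sensitivity image A^T 1
    beta = beta0
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)
        ratio = A.T @ (y / proj)                 # EM backprojection of the data ratio
        # OSL denominator: gradient of the penalty sum(x log x) is log x + 1
        denom = np.maximum(sens + beta * (np.log(np.maximum(x, eps)) + 1.0), eps)
        x = np.maximum(x * ratio / denom, eps)
        # feasibility: the Poisson chi-square should be about m; a smaller value
        # means the fit is chasing noise, so beta is raised (and vice versa)
        chi2 = np.sum((proj - y) ** 2 / proj)
        beta *= float(np.clip(m / max(chi2, eps), 0.5, 2.0))
    return x
```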

2.
To improve the visual quality and sharpness of medical images, make them better suited both to automated analysis and to the characteristics of human vision, and highlight lesions so as to give pathological and clinical diagnosis a reliable basis, we designed an image enhancement system targeted specifically at medical images. For the electronic noise in CT images, a wavelet-packet denoising algorithm based on modified Wiener filtering is proposed; for the speckle noise in B-mode ultrasound images, an adaptive wavelet speckle-suppression algorithm based on the pulse-coupled neural network (PCNN) model is proposed; and for the low contrast and blurred edge detail typical of medical images, a medical image enhancement algorithm based on the wavelet transform is proposed. At a noise variance of 0.01, the PCNN-based adaptive wavelet speckle filter achieves a PSNR 9 dB higher than Wiener filtering. The system can quickly locate and remove noise points, effectively raise the contrast of medical images, enhance edge detail, and highlight lesion locations, giving medical staff a clearer and more accurate basis for examining pathologies.
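As a rough point of reference for the wavelet-domain processing described above (the modified-Wiener wavelet-packet filter and the PCNN speckle filter themselves are not reproduced here), a minimal wavelet soft-threshold denoiser with PyWavelets might look like the sketch below; the wavelet choice and the universal threshold are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2):
    """Baseline wavelet soft-threshold denoiser for a 2D image."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # robust noise estimate from the finest diagonal detail subband
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))   # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```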

3.
We investigate an image recovery method for sparse-view computed tomography (CT) using an iterative shrinkage algorithm based on a second-order approach. The two-step iterative shrinkage/thresholding (TwIST) algorithm with a total variation regularization term is shown to be more robust than other first-order methods; it enables perfect restoration of an original image even when given only a few projection views of a parallel-beam geometry. We find that the incoherency of the projection system matrix in CT geometry satisfies the exact reconstruction principle even when the matrix itself has a large condition number. Image reconstruction from fan-beam CT can also be carried out, but the retrieval performance is much lower than for a parallel-beam geometry; we attribute this to the matrix complexity of the projection geometry. We also evaluate the image retrieval performance of the TwIST algorithm on measured projection data.
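A minimal sketch of the TwIST recursion follows, with plain soft-thresholding standing in for the paper's total variation denoising step; the step parameters `alpha` and `beta` and the threshold `lam` are illustrative assumptions rather than the tuned values of the TwIST paper.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (stand-in for the TV denoising step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def twist(A, y, lam=0.1, alpha=1.9, beta=1.0, n_iter=100):
    """Two-step IST: each update mixes the two previous iterates with a
    thresholded gradient step, which is what gives the second-order behavior."""
    x_prev = A.T @ y
    x = soft(x_prev + A.T @ (y - A @ x_prev), lam)
    for _ in range(n_iter):
        grad_step = x + A.T @ (y - A @ x)
        x_new = (1 - alpha) * x_prev + (alpha - beta) * x + beta * soft(grad_step, lam)
        x_prev, x = x, x_new
    return x
```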

4.
Edge-preserving tomographic reconstruction with nonlocal regularization
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill-posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented using level-set methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the positron emission tomography transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a "conventional" penalized-likelihood algorithm with a local Huber roughness penalty.
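For concreteness, the Huber roughness penalty used in the "conventional" comparison method is quadratic for small neighbor differences and linear for large ones, which is what makes it edge-preserving; a minimal sketch (the threshold `delta` is an assumption):

```python
import numpy as np

def huber(t, delta):
    """Huber penalty: quadratic near zero (smooths noise), linear in the tails
    (penalizes large neighbor differences less, so edges are preserved)."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)
```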

5.
Analysis of noise effects in the reconstruction of nonuniform transmission line characteristics
Using time-domain reflectometry (TDR), characteristic parameters of a nonuniform transmission line can be reconstructed from the measured time-domain reflection signal. Noise mixed into the reflection signal affects this reconstruction. Using a numerical inversion algorithm for the Zakharov-Shabat inverse scattering problem, and taking an exponentially tapered nonuniform line as an example, we ran numerical experiments on reconstructing the line's characteristic parameters when the reflection signal is corrupted by white Gaussian noise. The results show that the algorithm itself is stable under noise and can effectively reconstruct the characteristic parameters over a wide range of signal-to-noise ratios.
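The noise experiment described here is easy to reproduce in outline: corrupt the reflection signal with white Gaussian noise at a prescribed SNR and rerun the inversion. A small helper for the corruption step (the Zakharov-Shabat inversion itself is not sketched):

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise to a TDR reflection signal at a given SNR (dB)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise_power = np.mean(signal ** 2) / 10.0 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(noise_power), signal.shape)
```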

6.
Multislice helical CT: image temporal resolution
A multislice helical computed tomography (CT) halfscan (HS) reconstruction algorithm is proposed for cardiac applications. The imaging performance of the HS algorithm (temporal resolution, z-axis resolution, image noise, and image artifacts) is compared with existing algorithms using theoretical models and clinical data. A theoretical model of temporal resolution (the temporal sensitivity profile) is established for helical CT in general, i.e., for any number of detector rows and any reconstruction algorithm. It is concluded that HS reconstruction yields better image temporal resolution than the corresponding 180 degrees LI (linear interpolation) reconstruction and is more immune to the inconsistent-data problem induced by cardiac motion. The temporal resolution of multislice helical CT with the HS algorithm is comparable to that of single-slice helical CT with the HS algorithm. In practice, the 180 degrees LI and HS-LI algorithms can be used in parallel to generate two image sets from the same scan acquisition, one (180 degrees LI) for improved z-resolution and noise, and the other (HS-LI) for improved image temporal resolution.

7.
Adaptive AR modeling in white Gaussian noise
Autoregressive (AR) modeling is widely used in signal processing. The coefficients of an AR model can easily be obtained with a least mean square (LMS) prediction error filter. However, this filter is known to give a biased solution when the input signal is corrupted by white Gaussian noise. Treichler (1979) suggested the γ-LMS algorithm to remedy this problem and proved that the mean weight vector converges to the Wiener solution. In this paper, we develop a new algorithm, extending the work of Vijayan et al. (1990), for adaptive AR modeling in the presence of white Gaussian noise. Theoretical analysis shows that the new algorithm outperforms the γ-LMS filter, and simulations support the theoretical results.
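The bias problem and its compensation can be sketched as follows: plain LMS identifies the AR coefficients from noisy observations with a bias toward zero, while adding a correction term proportional to the noise variance removes much of it. This is a generic bias-compensated update in the spirit of γ-LMS, assuming the noise variance is known; Treichler's exact recursion and the paper's new algorithm differ in detail, and the AR coefficients and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = np.array([1.2, -0.72])              # stable AR(2) coefficients
s = np.zeros(50000)
for n in range(2, len(s)):                   # generate the clean AR process
    s[n] = a_true @ s[n-2:n][::-1] + rng.normal()
sigma2 = 0.5                                 # variance of the additive white noise
x = s + rng.normal(0.0, np.sqrt(sigma2), s.shape)

mu = 1e-4
w_lms = np.zeros(2)                          # plain LMS: converges to a biased solution
w_bc = np.zeros(2)                           # bias-compensated update (gamma-LMS spirit)
for n in range(2, len(x)):
    u = x[n-2:n][::-1]                       # regressor [x(n-1), x(n-2)]
    w_lms += mu * (x[n] - w_lms @ u) * u
    w_bc += mu * ((x[n] - w_bc @ u) * u + sigma2 * w_bc)  # +mu*sigma2*w cancels the bias

print("true:", a_true, "LMS:", w_lms, "bias-compensated:", w_bc)
```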

8.
Multiscale morphological operators have been studied extensively in the literature for image processing and feature extraction. In this paper, we model a nonlinear regularization method based on multiscale morphology for edge-preserving super resolution (SR) image reconstruction. We formulate SR image reconstruction as a deblurring problem and then solve the inverse problem using Bregman iterations. The proposed algorithm efficiently suppresses noise generated both during low-resolution image formation and during SR image estimation. Experimental results show the effectiveness of the proposed regularization and reconstruction method for SR images.
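The outer Bregman loop such methods build on can be sketched generically: solve the regularized deblurring problem, add the unexplained residual back to the data, and repeat. The inner `solve` callback (where the morphological regularization would live) is left abstract here as an assumption.

```python
import numpy as np

def bregman_deblur(A, y, solve, n_outer=10):
    """Generic Bregman iteration: x solves the regularized problem for the
    current data y_k, and the unrecovered residual y - A x is added back."""
    y_k = y.astype(float).copy()
    x = None
    for _ in range(n_outer):
        x = solve(A, y_k)          # inner regularized deblurring step (assumed given)
        y_k = y_k + (y - A @ x)    # Bregman "add back the residual" update
    return x
```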

9.
10.
Measurements of various image quality parameters were carried out with two different detector systems in an otherwise unchanged medical computed tomography (CT) scanner. As all other components of the scanner and the image reconstruction system remained identical, we were able to quantify the difference in performance between a Xenon gas ionization detector and a new solid-state scintillation detector in an isolated fashion. We determined noise, spatial resolution, and artifact behavior and assessed the potential for dose reduction. No significant impact of the detector change on absolute CT values of a calibration phantom was observed. Spatial resolution was improved by more than 10% for the solid-state system. As the system's modulation transfer functions were measured with a wire phantom and otherwise unchanged scanner geometry and image reconstruction algorithm, the increase in resolution is explained by the improved temporal response of the solid-state detector. At the same time, noise was reduced by 12% for a 20-cm diameter water phantom. The noise reduction corresponds to a possible reduction of patient dose by 23% for constant image quality, which is in good agreement with our prediction from estimates of both systems' total detective quantum efficiency. A significant improvement in scatter rejection was also found for the solid-state system.

11.
杨彪, 胡以华. 《红外与激光工程》, 2019, 48(7): 726002-0726002(7)
To improve the image quality of target reconstruction in laser reflective tomography, iterative reconstruction algorithms common in CT imaging are introduced into the reconstruction process, complementing the backprojection algorithms currently in general use. The performance of direct backprojection, R-L and S-L filtered backprojection, and iterative reconstruction is analyzed. Simulations and field experiments show that adding a filter to direct backprojection clearly improves both error reduction and noise suppression, and that, compared with backprojection, algebraic iterative reconstruction achieves better reconstruction quality and stronger noise immunity.
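A minimal sketch of the algebraic reconstruction technique (ART, a Kaczmarz-style row-action method) referred to above; the system matrix `A`, projection data `p`, and relaxation factor are illustrative assumptions.

```python
import numpy as np

def art(A, p, n_sweeps=10, relax=0.5):
    """Kaczmarz-style ART: project the estimate onto each measurement
    hyperplane in turn, with under-relaxation for noise robustness."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```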

12.
We show that the problem of signal reconstruction from missing samples can be handled using reconstruction algorithms similar to Reed-Solomon (RS) decoding techniques. The RS algorithm is usually used for error detection and correction of samples in finite fields; for missing samples of a speech signal, we instead work with samples in the field of real or complex numbers and can use the FFT or some new transforms in the reconstruction algorithm. DSP implementation and simulation results show that the proposed methods outperform previously published ones in the quality of the recovered speech signal for a given complexity. The burst error recovery method using the FFT kernel is sensitive to quantization and additive noise, like the other techniques, but the other proposed transform kernels are very robust in correcting bursts of errors in the presence of quantization and additive noise.
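A sketch of the real/complex-field analogue of RS erasure decoding that the abstract describes: if the signal is known to have zero DFT coefficients on a set of "syndrome" bins (here, because it is low-pass), the erased samples satisfy a small linear system. The bandwidth, erasure positions, and least-squares solve are illustrative assumptions.

```python
import numpy as np

def recover_erasures(x_obs, missing, zero_bins):
    """Recover erased samples of a length-N signal whose DFT vanishes on
    zero_bins: the known samples fix the right-hand side of the syndromes."""
    N = len(x_obs)
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(zero_bins, n) / N)  # DFT rows on the zero bins
    known = np.setdiff1d(n, missing)
    rhs = -F[:, known] @ x_obs[known]
    vals, *_ = np.linalg.lstsq(F[:, missing], rhs, rcond=None)
    x = x_obs.astype(complex).copy()
    x[missing] = vals
    return x

# demo: only 8 low-frequency DFT bins active, 4 erased samples
N, B = 64, 8
rng = np.random.default_rng(1)
X = np.zeros(N, complex)
X[:B] = rng.normal(size=B) + 1j * rng.normal(size=B)
x = np.fft.ifft(X)
missing = np.array([5, 17, 40, 41])
x_obs = x.copy()
x_obs[missing] = 0.0
x_rec = recover_erasures(x_obs, missing, np.arange(B, N))
print(np.max(np.abs(x_rec - x)))  # near machine precision in the noiseless case
```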

13.
In sparse signal representation, conventional sparse approximation over fixed transform bases cannot adaptively capture image texture, while sparse approximation over an overcomplete dictionary is computationally expensive. To address this, we propose an image sparse representation method based on an optimized wavelet-transform sparse dictionary. The algorithm builds an overcomplete dictionary on top of the image's wavelet transform and, exploiting the internal and external texture self-similarity of wavelet transforms of images of the same scene, classifies the dictionary atoms by grey relational degree, effectively improving the sparsity of the representation. Applied to sparse representation of image signals and to compressed-sensing image sampling and reconstruction experiments, the new algorithm improves the overall peak signal-to-noise ratio and structural similarity of the reconstructed images and effectively shortens reconstruction time.
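The paper's grey-relational dictionary grouping is not reproduced here, but the baseline it accelerates, greedy sparse coding over an overcomplete dictionary, can be sketched with orthogonal matching pursuit (OMP); the sparsity level `k` is an assumption and the atoms are assumed roughly unit-norm.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then re-fit the coefficients over all chosen atoms."""
    residual, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in idx:
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```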

14.
Most present-day computerized tomography (CT) systems are based on reconstruction algorithms that produce only approximate deterministic solutions of the image reconstruction problem. These algorithms yield reasonable results for low measurement noise and regular measurement geometry, and are considered acceptable because they require far less computation and storage than more powerful algorithms that can yield near-optimal results. However, the special geometry of the CT image reconstruction problem can be used to reduce by orders of magnitude the computation required for optimal reconstruction methods, such as the minimum variance estimator. These simplifications can make the minimum variance technique very competitive with well-known approximate techniques such as the algebraic reconstruction technique (ART) and convolution-backprojection. The general minimum variance estimator for CT is first presented, and then a fast algorithm is described that uses Fourier transform techniques to implement the estimator for either fan-beam or parallel-beam geometries. The computational requirements of these estimators are examined and compared with other techniques. To allow further comparison with the commonly used convolution-backprojection method, a representation of the fast algorithm is derived that allows its equivalent convolving function to be examined. Several examples are presented.
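The minimum variance (linear MMSE) estimator the paper makes fast has a standard closed form; a direct, deliberately naive sketch is below. The paper's point is that CT geometry lets the dense solve be replaced by FFT-domain operations, which this sketch does not attempt; the covariance models `C_x` and `C_n` are assumptions.

```python
import numpy as np

def lmmse_reconstruct(A, y, C_x, C_n):
    """Minimum-variance linear estimate: x_hat = C_x A^T (A C_x A^T + C_n)^{-1} y."""
    return C_x @ A.T @ np.linalg.solve(A @ C_x @ A.T + C_n, y)
```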

15.
白键  冯象初 《电子学报》2007,35(1):123-126
Decomposing an image into a cartoon part (the bounded variation component) and an oscillatory part (the texture component) has become an important problem in image processing in recent years. The cartoon part is characterized by a bounded variation (BV) function, and incorporating the corresponding BV penalty into a variational functional requires solving partial differential equations. Daubechies replaced the BV penalty with a B^1_1(L^1) term and solved the variational problem with wavelets. Following her approach, we design a digital curvelet algorithm and a scale-dependent thresholding rule, obtaining an effective image decomposition algorithm based on the digital curvelet transform. The algorithm is highly robust to noise and keeps image edges stable.

16.
Denoising by singularity detection
A new algorithm for noise reduction using the wavelet transform is proposed. As in Mallat's (1992) wavelet transform modulus maxima denoising approach, we estimate the regularity of a signal from the evolution of its wavelet transform coefficients across scales. However, we do not perform maxima detection and processing, so complicated reconstruction is avoided. Instead, the local regularity of a signal is estimated by computing the sum of the modulus of its wavelet coefficients inside the corresponding "cone of influence", and the coefficients corresponding to the regular part of the signal are selected for reconstruction. The algorithm improves on previous approaches in both mean squared error and visual quality, is invariant to translation, does not introduce spurious oscillations, and requires very little a priori information about the signal or noise. We also extend the method to two dimensions, estimating the regularity of an image by computing the sum of the modulus of its wavelet coefficients inside the so-called "directional cone of influence". The denoising technique is applied to tomographic image reconstruction, where the improved performance of the new approach can clearly be observed.
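A crude proxy for the approach, using an undecimated wavelet transform: sum the coefficient modulus across scales at each position (a stand-in for integrating over the cone of influence) and keep only detail coefficients where that sum is large. The wavelet, depth, and threshold rule below are assumptions, not the paper's exact procedure.

```python
import numpy as np
import pywt

def denoise_by_regularity(x, wavelet="db2", level=3, k=3.0):
    """Keep detail coefficients only where the cross-scale modulus sum is
    large (regular signal features persist across scales; noise does not)."""
    assert len(x) % 2 ** level == 0, "pywt.swt needs length divisible by 2**level"
    coeffs = pywt.swt(x, wavelet, level=level)          # [(cA_j, cD_j), ...]
    score = np.sum([np.abs(cD) for _, cD in coeffs], axis=0)
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise level, finest scale
    keep = score > k * sigma * len(coeffs)              # heuristic regularity test
    kept = [(cA, np.where(keep, cD, 0.0)) for cA, cD in coeffs]
    return pywt.iswt(kept, wavelet)
```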

17.
An efficient code-timing estimator for DS-CDMA signals
We present an efficient algorithm for estimating the code timing of a known training sequence in an asynchronous direct-sequence code-division multiple access (DS-CDMA) system. The algorithm is a large-sample maximum likelihood (LSML) estimator derived by modeling the known training sequence as the desired signal and all other signals, including interference and thermal noise, as unknown colored Gaussian noise uncorrelated with the desired signal. The LSML estimator is shown to be robust against the near-far problem and is compared with several other code-timing estimators via numerical examples. The LSML approach is found to offer a noticeable performance improvement, especially when the loading of the system is heavy.

18.
Exact image deconvolution from multiple FIR blurs
We address the problem of restoring an image from its noisy convolutions with two or more blur functions (channels). Deconvolution from multiple blurs is, in general, better conditioned than from a single blur, and can be performed without regularization for moderate noise levels. We characterize the problem of missing data at the image boundaries, and show that perfect reconstruction is impossible (even in the no-noise case) almost surely unless there are at least three channels. Conversely, when there are at least three channels, we show that perfect reconstruction is not only possible almost surely in the absence of noise, but also that it can be accomplished by finite impulse response (FIR) filtering. Such FIR reconstruction is vastly more efficient computationally than the least-squares solution, and is suitable for low noise levels. Even in the high-noise case, the estimates obtained by FIR filtering provide useful starting points for iterative least-squares algorithms. We present results on the minimum possible sizes of such deconvolver filters. We derive expressions for the mean-square errors in the FIR reconstructions, and show that performance comparable to that of the least-squares reconstruction may be obtained with relatively small deconvolver filters. Finally, we demonstrate the FIR reconstruction on synthetic and real data.
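A sketch of a least-squares design of such FIR deconvolvers (1D for simplicity): stack the channels' convolution matrices and ask the filter bank to synthesize a unit impulse. Equal-length blur kernels, the filter length `m`, and the zero-delay target are illustrative assumptions; the paper's minimum-size results are not reproduced.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, m):
    """(len(h)+m-1) x m matrix T with T @ g == np.convolve(h, g)."""
    col = np.r_[h, np.zeros(m - 1)]
    row = np.zeros(m)
    row[0] = h[0]
    return toeplitz(col, row)

def fir_deconvolvers(channels, m):
    """Least-squares FIR filters g_i such that sum_i g_i * h_i is close to a
    unit impulse; x is then estimated as sum_i convolve(g_i, y_i)."""
    H = np.hstack([conv_matrix(h, m) for h in channels])  # assumes equal-length h_i
    d = np.zeros(H.shape[0])
    d[0] = 1.0                                            # target: delta at zero delay
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return np.split(g, len(channels))
```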

19.
High-resolution image reconstruction is the process of reconstructing a high-resolution image from a set of blurred, degraded, and shifted low-resolution images. In this paper, the reconstruction problem is treated as function approximation. We use linear interpolation to build an algorithm relating the detail coefficients in wavelet subbands to the set of low-resolution images. Using the Haar wavelet as an example, we establish the connection between the Haar wavelet subbands and the low-resolution images. Experiments show that just 3 low-resolution images suffice to obtain a high-resolution image of better quality than the Tikhonov least-squares approach and Algorithm 3 of Chan et al. in low-noise cases. We also propose an error-correction extension of our method that yields very good results even in noisy cases. Moreover, our approach is very simple to implement and very efficient.
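The Haar connection the paper exploits can be checked numerically: for the orthonormal 2D Haar transform, each approximation coefficient is half the sum of a 2x2 block of the high-resolution image, i.e., a scaled low-resolution frame. A small verification sketch (the 8x8 test image is an assumption):

```python
import numpy as np
import pywt

# Haar approximation band of an HR image equals its 2x2 block sums / 2,
# i.e. (up to scale) a 2x-downsampled low-resolution version of the image.
hr = np.random.default_rng(0).random((8, 8))
cA, (cH, cV, cD) = pywt.dwt2(hr, "haar")
lr = hr.reshape(4, 2, 4, 2).sum(axis=(1, 3)) / 2.0
print(np.allclose(cA, lr))  # True
```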

20.
Block-based compressed sensing algorithms lose a great deal of fine texture detail when smoothing block artifacts, which degrades reconstruction quality. To address this, a compressed sensing reconstruction algorithm for block-sparse signals is proposed. The algorithm first makes a rough estimate of the signal's block sparsity and uses it to initialize the stage length, selects support blocks by best match between block matrices and the residual signal, and then reconstructs the block-sparse signal by adaptive iteration, largely avoiding wasted storage and heavy computation. Experimental results show that, compared with common compressed sensing methods, the proposed algorithm markedly reduces computation time and effectively improves image reconstruction quality.
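A sketch of a block-greedy reconstruction in the spirit described above: pick the block of columns best matched to the residual, re-fit over all chosen blocks, and stop adaptively on the residual norm (a stand-in for the paper's block-sparsity estimation). The block size and tolerance are assumptions.

```python
import numpy as np

def block_omp(A, y, block_size, tol=1e-3):
    """Block-OMP: select whole blocks of columns by residual correlation
    energy, re-fit by least squares, stop when the residual is small."""
    m, n = A.shape
    blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
    residual, chosen = y.copy(), []
    idx, coef = np.array([], int), np.zeros(0)
    while np.linalg.norm(residual) > tol * np.linalg.norm(y) and len(chosen) < len(blocks):
        scores = [np.linalg.norm(A[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best in chosen:
            break
        chosen.append(best)
        idx = np.concatenate([blocks[b] for b in chosen])
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(n)
    x[idx] = coef
    return x
```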
