Similar Documents
1.
洪贤勇  乔志伟 《电视技术》2014,(7):35-38,29
To achieve region-of-interest (ROI) reconstruction, a backprojection filtration (BPF) algorithm is used that exploits the local properties of differentiation and the finite Hilbert transform. The algorithm is theoretically exact for ROI reconstruction: it first differentiates the projection data covering only the ROI, then backprojects onto the ROI, and finally applies a finite Hilbert filter along the PI-lines covering the ROI to obtain the ROI image. Simulation experiments show that the BPF algorithm matches the accuracy of the classical filtered backprojection (FBP) algorithm for global reconstruction, but can additionally perform exact ROI reconstruction, which FBP cannot because it lacks locality. In implementing the finite Hilbert transform, a weighted Hilbert transform is adopted, which effectively avoids the bright-band artifacts on both sides of the image.
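
The final step above, the finite Hilbert filtering along PI-lines, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the weighted variant multiplies by an endpoint-vanishing weight sqrt((t-a)(b-t)) before the discrete Hilbert transform and divides afterwards, which is the kind of weighting that suppresses the bright bands at the image borders.

```python
import numpy as np
from scipy.signal import hilbert

def weighted_finite_hilbert(g, a, b):
    """Weighted finite Hilbert filter applied to samples g on a PI-line
    segment [a, b] (illustrative weighting scheme, not the paper's exact one)."""
    t = np.linspace(a, b, g.size)
    w = np.sqrt(np.clip((t - a) * (b - t), 0.0, None))  # vanishes at endpoints
    # Discrete Hilbert transform: imaginary part of the analytic signal.
    ht = np.imag(hilbert(w * g))
    eps = 1e-12
    return ht / (w + eps)  # unweight; the endpoints stay ill-conditioned
```

In a full BPF chain this filter would be applied to the backprojected derivative data along each PI-line covering the ROI.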

2.
To achieve region-of-interest (ROI) reconstruction, this paper uses a backprojection filtration (BPF) algorithm that exploits the local properties of differentiation and the finite Hilbert transform. The algorithm is theoretically exact for ROI reconstruction: it first differentiates the projection data covering only the ROI, then backprojects onto the ROI, and finally applies a finite Hilbert filter along the PI-lines covering the ROI to obtain the ROI image. Simulation experiments show that the BPF algorithm matches the accuracy of the classical filtered backprojection (FBP) algorithm for global reconstruction, but the present algorithm can additionally perform exact ROI reconstruction, which FBP cannot because it lacks locality. In implementing the finite Hilbert transform, this paper adopts a weighted Hilbert transform, which effectively avoids the bright-band artifacts on both sides of the image.

3.
杨彪  胡以华 《红外与激光工程》2019,48(7):726002-0726002(7)
To improve the image quality of target reconstruction in laser reflection tomography, which currently relies mostly on backprojection algorithms for image reconstruction, iterative reconstruction algorithms commonly used in CT imaging are introduced into its reconstruction process. The performance of direct backprojection, R-L and S-L filtered backprojection, and iterative reconstruction algorithms in image reconstruction is analyzed. Simulation and field experiments show that adding a filter to direct backprojection markedly improves both error reduction and noise suppression; moreover, compared with backprojection algorithms, the algebraic iterative reconstruction algorithm delivers better reconstruction quality and stronger noise suppression.
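
The algebraic iterative reconstruction favored here is, in its simplest form, Kaczmarz's method (ART). A minimal sketch on a toy system matrix standing in for the projector:

```python
import numpy as np

def art_reconstruct(A, y, n_iters=10, lam=0.5):
    """Algebraic reconstruction technique (Kaczmarz sweeps): cycle through the
    projection equations a_i . x = y_i and project the current estimate onto
    each hyperplane with relaxation factor lam."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += lam * (y[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy system standing in for a (rays x pixels) projection matrix.
rng = np.random.default_rng(0)
A = rng.random((60, 40))
x_true = rng.random(40)
print(np.linalg.norm(art_reconstruct(A, A @ x_true, n_iters=50) - x_true))
```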

4.
A new filter function for the CBP algorithm and its properties   Cited by 4 (0 self-citations, 4 by others)
In CT reconstruction with the convolution backprojection (CBP) algorithm, the filter function is the key. This paper proposes a new filter function, gives an error estimate for CT reconstruction with it, analyzes its time-frequency characteristics, and applies it to local reconstruction. Numerical experiments on simulated and measured data show that it suppresses the Gibbs effect well while preserving spatial resolution.
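
The abstract does not specify the paper's new filter, so for reference here is the classical R-L (Ram-Lak) convolution kernel such filters are measured against; a smoother frequency response trades spatial resolution against the Gibbs ringing the paper targets.

```python
import numpy as np

def ram_lak_kernel(n_taps, d=1.0):
    """Classical R-L (Ram-Lak) spatial-domain kernel for CBP:
    h(0) = 1/(4 d^2), h(n) = 0 for even n, h(n) = -1/(pi^2 n^2 d^2) for odd n,
    where d is the detector sampling interval."""
    n = np.arange(-(n_taps // 2), n_taps // 2 + 1)
    h = np.zeros(n.shape, dtype=float)
    h[n == 0] = 1.0 / (4.0 * d * d)
    odd = (n % 2) != 0
    h[odd] = -1.0 / (np.pi ** 2 * n[odd].astype(float) ** 2 * d * d)
    return h
```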

5.
过传卫  胡福乔 《电讯技术》2007,47(1):182-184
A fast filtered backprojection algorithm for fan-beam CT (computed tomography) reconstruction is proposed. The algorithm is an accelerated form of the standard filtered backprojection (FBP) reconstruction algorithm, achieved mainly by reducing the number of projections and then reconstructing sub-images. Experimental results show that for a 512×512 image, the algorithm accelerates reconstruction by more than 40 times without introducing noticeable image error. The algorithm also applies to multi-slice helical 3D reconstruction and can be extended to 3D cone-beam reconstruction.

6.
Research on a fast convolution backprojection imaging algorithm for spotlight SAR   Cited by 1 (0 self-citations, 1 by others)
许猛  张平 《现代雷达》2006,28(8):54-57
The convolution backprojection (CBP) algorithm for spotlight synthetic aperture radar imaging is studied, and a new fast backprojection algorithm is proposed. Based on image partitioning and angular downsampling, a recursive scheme reduces the computational load of CBP: relative to the O(N³) cost of direct backprojection, the fast algorithm's complexity drops to O(N²log₂N). The influence of the recursion and angular-oversampling parameters on the algorithm is discussed, four interpolation methods are simulated for their effect on performance, and the tradeoff between computational complexity and accuracy is analyzed. Comparison with simulations of direct backprojection verifies that the fast CBP algorithm significantly reduces computation at a small root-mean-square error level.
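
The claimed complexity drop follows the divide-and-conquer recurrence C(N) = 4·C(N/2) + O(N²). A small counting model (illustrative only, not the SAR implementation) makes the O(N²log₂N) growth visible:

```python
# Counting model: direct backprojection of ~N views onto an N x N grid costs
# ~N^3; the fast scheme splits the image into four N/2 sub-images, halves the
# number of views per sub-image, and recurses.
def fast_bp_cost(N):
    if N <= 1:
        return 1
    return 4 * fast_bp_cost(N // 2) + N * N  # four recursive calls + merge work

for N in (64, 128, 256, 512):
    print(N, fast_bp_cost(N) / (N * N))  # ratio grows like log2(N)
```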

7.
王晓鹏 《电视技术》2014,38(7):32-34,26
To address the image oscillation, poor noise resistance, and blurred image detail caused by traditional filter functions in the convolution backprojection (CBP) CT image reconstruction algorithm, a new filter function, the R-L-MS-L filter, is proposed by combining a hybrid filter with multi-point weighted averaging. This filter effectively suppresses oscillation during image reconstruction, yields smoother reconstructed images, and resists noise better than traditional filter functions. Simulation results show that the R-L-MS-L filter achieves better reconstruction quality than traditional filter functions.
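
The abstract does not define the R-L-MS-L construction precisely; as a hedged guess at the ingredients, the sketch below generates the classical S-L (Shepp-Logan) kernel and applies a three-point weighted average, the kind of multi-point smoothing stage the name suggests.

```python
import numpy as np

def shepp_logan_kernel(n_taps, d=1.0):
    """Classical S-L (Shepp-Logan) kernel: h(n) = -2 / (pi^2 d^2 (4 n^2 - 1))."""
    n = np.arange(-(n_taps // 2), n_taps // 2 + 1).astype(float)
    return -2.0 / (np.pi ** 2 * d * d * (4.0 * n ** 2 - 1.0))

def smoothed_kernel(h, weights=(0.25, 0.5, 0.25)):
    """Three-point weighted average of the kernel taps (hypothetical stand-in
    for the paper's 'MS' stage; its exact definition is not given here)."""
    return np.convolve(h, np.asarray(weights), mode='same')
```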

8.
Accelerating the filtering step of the filtered backprojection algorithm with the fast Hadamard transform   Cited by 1 (0 self-citations, 1 by others)
To accelerate the filtering step of the filtered backprojection algorithm, a fast algorithm for linear convolution based on the fast Hadamard transform (FHT) is proposed. The characteristics of the Hadamard transform and the time complexity of the fast algorithm are analyzed, a matrix expression for computing linear convolution with the FHT is designed, a formula for the gain matrix of the Hadamard-domain filter is derived, and the acceleration principle and applicability conditions of the method are analyzed. Theoretical analysis shows the method is twice as fast as FFT-based linear convolution. Simulations show that, without affecting image reconstruction accuracy, it nearly doubles the speed of the filtering step relative to FFT-based linear convolution.
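
The building block of the method is the fast Walsh-Hadamard transform itself, shown below; the paper's gain-matrix construction on top of it is not reproduced here.

```python
import numpy as np

def fwht(x):
    """Iterative fast Walsh-Hadamard transform with O(n log n) butterflies,
    for input length a power of two."""
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b        # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x
```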

9.
缪辉  赵会娟  高峰  周仲兴 《中国激光》2012,39(1):104002-134
As a new medical imaging technique, digital tomosynthesis (DTS) helps separate overlapping tissue in the image and precisely localize lesions. DTS reconstructs coronal tomographic slices with the filtered backprojection algorithm, which offers fast reconstruction and high image quality. However, because the DTS projection data set is incomplete, images reconstructed by filtered backprojection exhibit a hat-like intensity attenuation along the rotation axis. The influence of the X-ray source cone angle on this axial intensity attenuation in DTS images is analyzed, and on this basis an inverse-cosine-weighted correction for the intensity attenuation is introduced. To verify the correction's effectiveness, experiments were performed on a breast phantom with the constructed DTS system under different X-ray source cone angles. The results show that the correction effectively reduces the axial intensity attenuation in reconstructed DTS images.
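
A minimal sketch of an inverse-cosine weighting of the kind described, assuming a per-slice cone angle computed from the axial offset and the source distance (both parameter names are assumptions, not from the paper):

```python
import numpy as np

def cone_angle_correction(volume, z, source_dist):
    """Reweight each axial slice by 1/cos(theta(z)), where theta is the cone
    angle subtended at axial offset z; volume is indexed (z, y, x)."""
    theta = np.arctan2(z, source_dist)            # cone angle per slice
    return volume / np.cos(theta)[:, None, None]  # broadcast over (y, x)
```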

10.
王旭  杨明川  郭庆 《通信技术》2011,44(5):146-147,150
The original image provided by the Shepp-Logan head phantom is subjected to low-dose X-ray computed tomography (CT) translate/rotate scanning, and the resulting data are used to simulate a statistical image reconstruction algorithm. The simulation results are compared with those of the traditional filtered backprojection reconstruction algorithm, and the images reconstructed by the two algorithms are each compared with the original image. The comparison shows that filtered backprojection cannot effectively reconstruct low-dose CT images, whereas the statistical reconstruction algorithm outperforms it and recovers the original image well, making it suitable for CT image reconstruction under low-dose conditions.
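
The abstract does not name the specific statistical algorithm, so the standard ML-EM update for Poisson-distributed data is shown as a stand-in:

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Maximum-likelihood EM for Poisson data:
    x <- x * A^T(y / Ax) / A^T 1 (multiplicative, nonnegative by construction)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```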

11.
O(N²log₂N) filtered backprojection reconstruction algorithm for tomography   Cited by 2 (0 self-citations, 2 by others)
We present a new fast reconstruction algorithm for parallel beam tomography. The new algorithm is an accelerated version of the standard filtered backprojection (FBP) reconstruction, and uses a hierarchical decomposition of the backprojection operation to reduce the computational cost from O(N³) to O(N²log₂N). We discuss the choice of the various parameters that affect the behavior of the algorithm, and present numerical studies that demonstrate the cost versus distortion tradeoff. Comparisons with Fourier reconstruction algorithms and a multilevel inversion algorithm by Brandt et al., both of which also have O(N²log₂N) cost, suggest that the proposed hierarchical scheme has a superior cost versus distortion performance. It offers RMS reconstruction errors comparable to the FBP with considerable speedup. For an example with a 512×512-pixel image and 1024 views, the speedup achieved with a particular implementation is over 40-fold, with reconstructions visually indistinguishable from the FBP.
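
For contrast, here is a compact O(N³) parallel-beam FBP baseline, the algorithm whose backprojection step the hierarchical decomposition accelerates (a sketch, not the paper's optimized code):

```python
import numpy as np

def fbp(sinogram, angles_deg, n):
    """Plain parallel-beam FBP: ramp-filter each view in the Fourier domain,
    then backproject with linear interpolation onto an n x n grid.
    sinogram has shape (n_views, n_detectors)."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n, n))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + n_det / 2.0
        image += np.interp(t, np.arange(n_det), proj)
    return image * np.pi / len(angles_deg)
```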

12.
This paper presents a new type of filtered backprojection (FBP) algorithm for fan-beam full- and partial-scans. The filtering is shift-invariant with respect to the angular variable. The backprojection does not include position-dependent weights through the Hilbert transform and the one-dimensional transformation between the fan- and parallel-beam coordinates. The strong symmetry of the filtered projections directly leads to an exact reconstruction for partial data. The use of the Hilbert transform avoids the approximation introduced by the nonuniform cutoff frequency required in the ramp filter-based FBP algorithm. Variance analysis indicates that the algorithm might lead to a better uniformity of resolution and noise in the reconstructed image. Numerical simulations are provided to evaluate the algorithm with noise-free and noisy projections. Our simulation results indicate that the algorithm does have better stability over the ramp-filter-based FBP and circular harmonic reconstruction algorithms. This may help improve the image quality for in place computed tomography scanners with single-row detectors.
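
The identity behind Hilbert-based filtering is that the ramp filter factors into a derivative followed by a Hilbert transform, which is what lets such algorithms sidestep the ramp filter's explicit cutoff. A quick numerical check of the frequency-domain identity |nu| = (i·2·pi·nu)·(-i·sgn(nu))/(2·pi):

```python
import numpy as np

n = 256
nu = np.fft.fftfreq(n)
ramp = np.abs(nu)                                  # ramp filter response
deriv = 1j * 2 * np.pi * nu                        # derivative response
hilb = -1j * np.sign(nu)                           # Hilbert transform response
combined = deriv * hilb / (2 * np.pi)
print(np.allclose(ramp, combined.real))            # True: the factorization holds
print(np.max(np.abs(combined.imag)))               # 0: purely real response
```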

13.
The filtered backprojection (FBP) algorithm is widely used in computed tomography for inverting the two-dimensional Radon transform. In this paper, we analyze the processing of an inconsistent data function by the FBP algorithm (in its continuous form). Specifically, we demonstrate that an image reconstructed using the FBP algorithm can be represented as the sum of a pseudoinverse solution and a residual image generated from an inconsistent component of the measured data. This reveals that, when the original data function is in the range of the Radon transform, the image reconstructed using the FBP algorithm corresponds to the pseudoinverse solution. When the data function is inconsistent, we demonstrate that the FBP algorithm makes use of a nonorthogonal projection of the data function to the range of the Radon transform.
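
The range-projection structure is easy to see on a toy linear model standing in for the Radon transform. The pseudoinverse yields the orthogonal projection onto the range; the paper's point is that FBP applies a nonorthogonal one.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((30, 12))            # overdetermined model, like tomographic data
y = rng.random(30)                  # generic data: generally NOT in range(A)
x_pinv = np.linalg.pinv(A) @ y      # pseudoinverse (least-squares) solution
y_consistent = A @ x_pinv           # orthogonal projection of y onto range(A)
residual = y - y_consistent         # the "inconsistent" component of the data
print(np.allclose(A.T @ residual, 0))  # True: residual is orthogonal to range(A)
```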

14.
The problem of negative artifacts in emission tomography reconstructions computed by filtered backprojection (FBP) is of practical concern particularly in low count studies. Statistical reconstruction methods based on maximum likelihood (ML) are automatically constrained to be non-negative but their excessive computational overhead (orders of magnitude greater than FBP) has limited their use in operational settings. Motivated by the statistical character of the negativity artifact, the authors develop a simple post-processing technique that iteratively adjusts negative values by cancellation with positive values in a surrounding local neighborhood. The compute time of this approach is roughly equivalent to 2 applications of FBP. The approach was evaluated by numerical simulation in 1- and 2-dimensional (2D) settings. In 2D, the source distributions included the Hoffman, the Shepp and Vardi, and a digitized version of the Jaszczak cold spheres phantoms. The authors' studies compared smoothed versions of FBP, the post-processed FBP, and ML implemented by the expectation-maximization algorithm. The root mean square (RMS) error between the true and estimated source distribution was used to evaluate performance; in 2D, additional region-of-interest-based measures of reconstruction accuracy were also employed. In making comparisons between the different methods, the amount of smoothing applied to each reconstruction method was adapted to minimize the RMS error; this was found to be critical.
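
A hedged sketch of such local cancellation post-processing; the 3×3 neighborhood and simple scaling rule are assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np

def cancel_negatives(img, n_iters=10):
    """Fold each negative pixel into its positive 3x3 neighbours, preserving
    the local sum, and repeat until no negatives remain (or iterations run out)."""
    img = img.astype(float).copy()
    for _ in range(n_iters):
        neg = np.argwhere(img < 0)
        if neg.size == 0:
            break
        for r, c in neg:
            r0, r1 = max(r - 1, 0), min(r + 2, img.shape[0])
            c0, c1 = max(c - 1, 0), min(c + 2, img.shape[1])
            patch = img[r0:r1, c0:c1]
            pos_sum = patch[patch > 0].sum()
            if pos_sum > 0:
                # Scale positives down to absorb the negative mass, then zero it.
                patch[patch > 0] *= max(1 + img[r, c] / pos_sum, 0)
                img[r, c] = 0.0
    return img
```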

15.
The low signal-to-noise ratio (SNR) in emission data has stimulated the development of statistical image reconstruction methods based on the maximum a posteriori (MAP) principle. Experimental examples have shown that statistical methods improve image quality compared to the conventional filtered backprojection (FBP) method. However, these results depend on isolated data sets. Here we study the lesion detectability of MAP reconstruction theoretically, using computer observers. These theoretical results can be applied to different object structures. They show that for a quadratic smoothing prior, the lesion detectability using the prewhitening observer is independent of the smoothing parameter and the neighborhood of the prior, while the nonprewhitening observer exhibits an optimum smoothing point. We also compare the results to those of FBP reconstruction. The comparison shows that for ideal positron emission tomography (PET) systems (where data are true line integrals of the tracer distribution) the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise. For realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.
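
The two computer observers compared here have standard SNR definitions, which can be evaluated directly from a lesion profile and a noise covariance:

```python
import numpy as np

def observer_snrs(delta_mu, K):
    """Detection SNRs for a known lesion profile delta_mu under noise
    covariance K: the prewhitening (PW) observer has SNR^2 = dmu^T K^-1 dmu;
    the nonprewhitening (NPW) observer uses the matched filter w = dmu."""
    Kinv_d = np.linalg.solve(K, delta_mu)
    snr_pw = np.sqrt(delta_mu @ Kinv_d)
    snr_npw = (delta_mu @ delta_mu) / np.sqrt(delta_mu @ K @ delta_mu)
    return snr_pw, snr_npw
```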

16.
Presents an image reconstruction method for positron-emission tomography (PET) based on a penalized, weighted least-squares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, the author argues statistically that a least-squares objective function is as appropriate, if not more so, than the popular Poisson likelihood objective. The author proposes a simple data-based method for determining the weights that accounts for attenuation and detector efficiency. A nonnegative successive over-relaxation (+SOR) algorithm converges rapidly to the global minimum of the PWLS objective. Quantitative simulation results demonstrate that the bias/variance tradeoff of the PWLS+SOR method is comparable to the maximum-likelihood expectation-maximization (ML-EM) method (but with fewer iterations), and is improved relative to the conventional filtered backprojection (FBP) method. Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminated by the PWLS+SOR method, and indicate that the proposed method for weighting the measurements is a significant factor in the improvement over FBP.
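
A PWLS objective with a quadratic roughness penalty can be written down and solved directly on small problems. The sketch below uses a dense normal-equations solve with a crude nonnegativity clamp, whereas the paper's +SOR iteration enforces nonnegativity inside the solver:

```python
import numpy as np

def pwls(A, y, weights, beta):
    """Minimize (y - Ax)^T W (y - Ax) + beta * ||D x||^2 with W = diag(weights)
    and a first-difference roughness matrix D (direct solve, illustrative only)."""
    n = A.shape[1]
    W = np.diag(weights)                 # e.g. inverse variance estimates
    D = np.eye(n) - np.eye(n, k=1)       # simple 1-D first differences
    H = A.T @ W @ A + beta * (D.T @ D)
    x = np.linalg.solve(H, A.T @ W @ y)
    return np.maximum(x, 0)              # crude nonnegativity, unlike true +SOR
```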

17.
Emission computerised tomography images reconstructed using a maximum likelihood-expectation maximization (ML)-based method with different reconstruction kernels and 1-200 iterations are compared to images reconstructed using filtered backprojection (FBP). ML-based reconstructions using a single pixel (SP) kernel with or without a sieve filter show no quantitative advantage over FBP except in the background where a reduction of noise is possible if the number of iterations is kept small (<50). ML-based reconstructions using a Gaussian kernel with a multipixel full-width-at-half-maximum (FWHM) and a large number of iterations (200) require a sieve filtering step to reduce the noise and contrast overshoot in the final images. These images have some small quantitative advantages over FBP depending on the structures being imaged. It is demonstrated that a feasibility stopping criterion controls the noise in a reconstructed image, but is insensitive to quantitation errors, and that the use of an appropriate overrelaxation parameter can accelerate the convergence of the ML-based method during the iterative process without quantitative instabilities.

18.
Edge-preserving tomographic reconstruction with nonlocal regularization   Cited by 4 (0 self-citations, 4 by others)
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill-posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented using level-set methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the positron emission tomography transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a "conventional" penalized-likelihood algorithm with local Huber roughness penalty.
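
The local Huber roughness penalty used by the comparison method is simple to state: quadratic for small neighbor differences (smoothing), linear for large ones (edge preservation).

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber penalty: 0.5 t^2 for |t| <= delta, delta (|t| - 0.5 delta) otherwise."""
    t = np.asarray(t, dtype=float)
    quad = np.abs(t) <= delta
    return np.where(quad, 0.5 * t ** 2, delta * (np.abs(t) - 0.5 * delta))
```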

19.
The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 3,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6%±2%, which can be reduced to 0%±3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15%±4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
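
The rim removal that eliminated the negative bias can be mimicked with a binary erosion of the ROI mask (a pixel-based stand-in for the study's 1-mm rim):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def roi_mean_without_rim(image, roi_mask, rim_px=1):
    """Mean ROI value after eroding the mask by rim_px pixels, discarding the
    outer rim where partial-volume effects bias the estimate."""
    core = binary_erosion(roi_mask, iterations=rim_px)
    return image[core].mean()
```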

20.
We quantitatively compare filtered backprojection (FBP), expectation-maximization (EM), and Bayesian reconstruction algorithms as applied to the IndyPET scanner, a dedicated research scanner developed for small and intermediate field of view imaging applications. In contrast to previous approaches that rely on Monte Carlo simulations, a key feature of our investigation is the use of an empirical system kernel determined from scans of line source phantoms. This kernel is incorporated into the forward model of the EM and Bayesian algorithms to achieve resolution recovery. Three data sets are used: data collected on the IndyPET scanner using a bar phantom and a Hoffman three-dimensional brain phantom, and simulated data containing a hot lesion added to a uniform background. Reconstruction quality is analyzed quantitatively in terms of bias-variance measures (bar phantom) and mean square error (lesion phantom). We observe that without use of the empirical system kernel, the FBP, EM, and Bayesian algorithms give similar performance. However, with the inclusion of the empirical kernel, the iterative algorithms provide superior reconstructions compared with FBP, both in terms of visual quality and quantitative measures. Furthermore, Bayesian methods outperform EM. We conclude that significant improvements in reconstruction quality can be realized by combining accurate models of the system response with Bayesian reconstruction algorithms.

