Similar Documents
20 similar documents found.
1.
In this paper, we propose a new wavelet-based reconstruction method suited to three-dimensional (3-D) cone-beam (CB) tomography. It is derived from the Feldkamp algorithm and is valid under the same geometrical conditions. The derivation is carried out in the framework of nonseparable wavelets and ideally requires radial wavelets. The proposed inversion formula yields a filtered backprojection algorithm, but the filtering step is implemented using quincunx wavelet filters. The proposed algorithm reconstructs, slice by slice, both the wavelet and approximation coefficients of the 3-D image directly from the CB projection data. The validity of this multiresolution approach is demonstrated on simulations from both mathematical phantoms and 3-D rotational angiography clinical data. The same image quality is achieved as with the standard Feldkamp algorithm, but in addition, the multiresolution decomposition allows image processing techniques to be applied directly in the wavelet domain during the inversion process. As an example, a fast low-resolution reconstruction of the 3-D arterial vessels with the progressive addition of details in a region of interest is demonstrated. Other promising applications are the improvement of image quality by denoising techniques and the reduction of computing time using the spatial localization of wavelets.
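The conventional filtering step that the quincunx wavelet filters replace can be illustrated with a minimal sketch of the frequency-domain ramp filter applied to one projection row. This is a generic FBP building block, not the paper's wavelet implementation; the function name and sizes are illustrative.

```python
import numpy as np

def ramp_filter(projection):
    """Apply the frequency-domain ramp filter |w| to one projection row.

    This is the standard filtering step of a filtered-backprojection (FBP)
    algorithm; the paper replaces this step with quincunx wavelet filters,
    which is not reproduced here.
    """
    n = len(projection)
    freqs = np.fft.fftfreq(n)  # normalized frequencies in [-0.5, 0.5)
    filtered = np.fft.ifft(np.fft.fft(projection) * np.abs(freqs))
    return np.real(filtered)

# A constant projection has all its energy at zero frequency,
# so the ramp filter suppresses it entirely.
flat = ramp_filter(np.ones(64))
```

The ramp filter zeroes the DC component, which is why FBP images are sensitive to low-frequency noise and why multiresolution alternatives are attractive.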

2.
Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity-encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., ℓ1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and the state-of-the-art MFISTA.
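The variable-splitting idea can be sketched on a generic ℓ1-regularized least-squares problem: split x = z, then alternate easy subproblem solves in an augmented-Lagrangian (ADMM-style) loop. This is a toy sketch of the splitting mechanism, not the paper's SENSE-specific operators or algorithm; all names and sizes are illustrative.

```python
import numpy as np

def soft(v, t):
    # Proximal operator of t*||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def al_l1_least_squares(A, b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5||Ax-b||^2 + lam*||x||_1 by variable splitting.

    Introducing z with the constraint x = z decouples the smooth data
    term from the nonsmooth penalty; each AL iteration then alternates
    a closed-form quadratic solve and a thresholding step.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled multiplier
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))       # fine for small demos
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))   # quadratic subproblem (closed form)
        z = soft(x + u, lam / rho)      # l1 subproblem (thresholding)
        u = u + x - z                   # multiplier update
    return z

# With A = I the minimizer is simply soft(b, lam).
x_hat = al_l1_least_squares(np.eye(3), np.array([3.0, -0.5, 2.0]), lam=1.0)
```

For SENSE, A would be the coil-sensitivity-weighted undersampled Fourier operator, and the quadratic subproblem would be solved with FFT-based tricks rather than a dense inverse.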

3.
Despite the tremendous success of wavelet-based image regularization, we still lack a comprehensive understanding of the exact factor that controls edge preservation and a principled method to determine the wavelet decomposition structure for dimensions greater than 1. We address these issues from a machine learning perspective by using tree classifiers to underpin a new image regularizer that measures the complexity of an image based on the complexity of the dyadic-tree representations of its sublevel sets. By penalizing unbalanced dyadic trees less, the regularizer preserves sharp edges. The main contribution of this paper is the connection of concepts from structured dyadic-tree complexity measures, wavelet shrinkage, morphological wavelets, and smoothness regularization in Besov space into a single coherent image regularization framework. Using the new regularizer, we also provide a theoretical basis for the data-driven selection of an optimal dyadic wavelet decomposition structure. As a specific application example, we give a practical regularized image denoising algorithm that uses this regularizer and the optimal dyadic wavelet decomposition structure.

4.
Wavelet domain image restoration with adaptive edge-preserving regularization (cited by 21)
In this paper, we consider a wavelet-based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible scheme for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge-preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.

5.
Coronary magnetic resonance imaging (MRI) is a noninvasive imaging modality for diagnosis of coronary artery disease. One of the limitations of coronary MRI is its long acquisition time, due to the need for imaging with high spatial resolution and the constraints imposed by respiratory and cardiac motion. Compressed sensing (CS) has recently been utilized to accelerate image acquisition in MRI. In this paper, we develop an improved CS reconstruction method, Bayesian least squares-Gaussian scale mixture (BLS-GSM), that uses dependencies of wavelet domain coefficients to reduce the blurring and reconstruction artifacts observed in coronary MRI with traditional ℓ1 regularization. Left and right coronary MR images were acquired in 7 healthy subjects with fully sampled k-space data. The data were retrospectively undersampled at acceleration rates of 2, 4, 6, and 8 and reconstructed using ℓ1 thresholding, ℓ1 minimization, and BLS-GSM thresholding. Reconstructed right and left coronary images were compared with fully sampled reconstructions in terms of vessel sharpness and subjective image quality (1-4, poor to excellent). Mean square error (MSE) was also calculated for each reconstruction. There were no significant differences between the fully sampled image scores and those at rates 2, 4, or 6 for BLS-GSM, for both right and left coronaries. However, for ℓ1 thresholding, significant differences were observed for rates higher than 2 and 4 for the right and left coronaries, respectively. ℓ1 minimization also yielded images with lower scores than the reference for rates higher than 4 for both coronaries. These results were consistent with the quantitative vessel sharpness readings. BLS-GSM thus allows acceleration of coronary MRI at rates beyond what can be achieved with ℓ1 regularization.
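The ℓ1-thresholding baseline compared in this study amounts to elementwise shrinkage of wavelet coefficients. A minimal sketch of that shrinkage operator (pure Python, illustrative names) follows; BLS-GSM instead replaces this pointwise rule with a Bayesian estimate that models dependencies between neighboring coefficients.

```python
def soft_threshold(coeffs, t):
    """Soft-threshold a list of (wavelet) coefficients at level t.

    This pointwise shrinkage is the core of l1-thresholding
    reconstruction; coefficients below t in magnitude are zeroed,
    the rest are pulled toward zero by t.
    """
    out = []
    for c in coeffs:
        mag = abs(c) - t
        out.append(0.0 if mag <= 0 else mag * (1 if c > 0 else -1))
    return out

shrunk = soft_threshold([3.0, -0.5, 2.0, -4.0], 1.0)
```

Because the rule ignores inter-coefficient dependencies, it tends to blur fine vessel detail at high acceleration, which is the gap BLS-GSM targets.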

6.
We propose multi-parameter regularization methods for high-resolution image reconstruction, which is an ill-posed problem. The regularization operator for the ill-posed problem is decomposed in a multiscale manner using bi-orthogonal wavelets or tight frames. In this multiscale framework, we introduce a different regularization parameter for each scale of the operator. These methods are analyzed under certain reasonable hypotheses. Numerical examples are presented to demonstrate the efficiency and accuracy of these methods.

7.
This paper presents a new approach for the estimation of 2-channel nonseparable wavelets matched to images in the statistical sense. To estimate a matched wavelet system, first, we estimate the analysis wavelet filter of a 2-channel nonseparable filterbank using the minimum mean square error (MMSE) criterion. The MMSE criterion requires statistical characterization of the given image. Because wavelet basis expansion behaves as a Karhunen-Loève-type expansion for fractional Brownian processes, we assume that the given image belongs to a first-order or second-order isotropic fractional Brownian field (IFBF). Next, we present a method for the design of a 2-channel two-dimensional finite-impulse response (FIR) biorthogonal perfect reconstruction filterbank (PRFB) leading to the estimation of a compactly supported statistically matched wavelet. The important contribution of the paper lies in the fact that all filters are estimated from the given image itself. Several design examples are presented using the proposed theory. Because matched wavelets will have better energy compaction, the performance of the estimated wavelets is evaluated by computing the transform coding gain. It is seen that nonseparable matched wavelets give a better coding gain than nonseparable non-matched orthogonal and biorthogonal wavelets.

8.
We describe a matched subspace detection algorithm to assist in the detection of small tumors in dynamic positron emission tomography (PET) images. The algorithm is designed to differentiate tumors from background using the time activity curves (TACs) that characterize the uptake of PET tracers. TACs are modeled using linear subspaces with additive Gaussian noise. Using TACs from a primary tumor region of interest (ROI) and one or more background ROIs, each identified by a human observer, two linear subspaces are identified. Applying a matched subspace detector to these identified subspaces on a voxel-by-voxel basis throughout the dynamic image produces a test statistic at each voxel which, on thresholding, indicates potential locations of secondary or metastatic tumors. The detector is derived for three cases: using a single TAC with white noise of unknown variance, using a single TAC with known noise covariance, and detection using multiple TACs within a small ROI with known noise covariance. The noise covariance is estimated for the reconstructed image from the observed sinogram data. To evaluate the proposed method, a simulation-based receiver operating characteristic (ROC) study for dynamic PET tumor detection is designed. The detector uses a dynamic sequence of frame-by-frame 2-D reconstructions as input. We compare the performance of the subspace detectors with that of a Hotelling observer applied to a single frame image and of the Patlak method applied to the dynamic data. We also show examples of the application of each detection approach to clinical PET data from a breast cancer patient with metastatic disease.
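One simple form of a matched subspace statistic is the fraction of a voxel's TAC energy captured by the tumor subspace, computed via an orthogonal projector. This is a hedged sketch of the general idea only; the paper's GLRT detectors also involve a background subspace and the three noise models listed above, and all vectors and bases below are illustrative.

```python
import numpy as np

def subspace_projector(basis_vectors):
    # Orthogonal projector onto span(columns), via a thin QR factorization.
    Q, _ = np.linalg.qr(np.column_stack(basis_vectors))
    return Q @ Q.T

def matched_subspace_statistic(y, tumor_basis):
    """Fraction of a time-activity curve's energy lying in the tumor
    subspace. A TAC well explained by the subspace scores near 1;
    thresholding this map flags candidate tumor voxels.
    """
    P = subspace_projector(tumor_basis)
    return float(y @ P @ y) / float(y @ y)

# A TAC lying entirely in the subspace scores 1; an orthogonal one scores 0.
b1 = np.array([1.0, 1.0, 0.0, 0.0])
b2 = np.array([0.0, 0.0, 1.0, 0.0])
s_in = matched_subspace_statistic(np.array([2.0, 2.0, 3.0, 0.0]), [b1, b2])
s_out = matched_subspace_statistic(np.array([0.0, 0.0, 0.0, 5.0]), [b1, b2])
```

In the colored-noise cases, the data would first be prewhitened with the estimated covariance before projecting.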

9.
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

10.
Traditional space-invariant regularization methods in tomographic image reconstruction using penalized-likelihood estimators produce images with nonuniform spatial resolution properties. The local point spread functions that quantify the smoothing properties of such estimators are space-variant, asymmetric, and object-dependent even for space-invariant imaging systems. We propose a new quadratic regularization scheme for tomographic imaging systems that yields increased spatial uniformity and is motivated by the least-squares fitting of a parameterized local impulse response to a desired global response. We have developed computationally efficient methods for PET systems with shift-invariant geometric responses. We demonstrate the increased spatial uniformity of this new method versus conventional quadratic regularization schemes in simulated PET thorax scans.
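For a quadratic penalty the estimator is linear, so its smoothing behavior at a pixel is fully described by a local impulse response, which is the quantity such regularization designs try to make uniform. A small dense-matrix sketch (illustrative matrices, not the paper's efficient PET-specific method):

```python
import numpy as np

def local_impulse_response(A, R, beta, j):
    """Local impulse response of the quadratically penalized
    least-squares estimator x_hat = (A'A + beta R'R)^{-1} A' y
    at pixel j: how a unit impulse there is smeared by the estimator.
    Designing R so these responses look alike across j is the goal
    of uniform-resolution regularization design.
    """
    n = A.shape[1]
    F = A.T @ A                                  # Fisher-information-like term
    H = np.linalg.inv(F + beta * (R.T @ R))
    e = np.zeros(n); e[j] = 1.0
    return H @ (F @ e)

# With beta = 0 and an invertible model, the response is a perfect delta.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
R = np.eye(2)
lir = local_impulse_response(A, R, 0.0, 0)
```

Increasing beta widens these responses, and for realistic system matrices their width varies with j, which is the nonuniformity the paper addresses.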

11.
We present accurate and efficient methods for estimating the spatial resolution and noise properties of nonquadratically regularized image reconstruction for positron emission tomography (PET). It is well known that quadratic regularization tends to over-smooth sharp edges. Many types of edge-preserving nonquadratic penalties have been proposed to overcome this problem. However, there has been little research on the quantitative analysis of nonquadratic regularization due to its nonlinearity. In contrast, quadratically regularized estimators are approximately linear and are well understood in terms of resolution and variance properties. We derive new approximate expressions for the linearized local perturbation response (LLPR) and variance using the Taylor expansion with the remainder term. Although the expressions are implicit, we can use them to accurately predict resolution and variance for nonquadratic regularization where the conventional expressions based on the first-order Taylor truncation fail. They also motivate us to extend the use of a certainty-based modified penalty to nonquadratic regularization cases in order to achieve spatially uniform perturbation responses, analogous to uniform spatial resolution in quadratic regularization. Finally, we develop computationally efficient methods for predicting resolution and variance of nonquadratically regularized reconstruction and present simulations that illustrate the validity of these methods.

12.
The low signal-to-noise ratio (SNR) in emission data has stimulated the development of statistical image reconstruction methods based on the maximum a posteriori (MAP) principle. Experimental examples have shown that statistical methods improve image quality compared to the conventional filtered backprojection (FBP) method. However, these results are based on isolated data sets. Here we study the lesion detectability of MAP reconstruction theoretically, using computer observers. These theoretical results can be applied to different object structures. They show that for a quadratic smoothing prior, the lesion detectability using the prewhitening observer is independent of the smoothing parameter and the neighborhood of the prior, while the nonprewhitening observer exhibits an optimum smoothing point. We also compare the results to those of FBP reconstruction. The comparison shows that for ideal positron emission tomography (PET) systems (where data are true line integrals of the tracer distribution) the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise. For realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.
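The prewhitening observer's figure of merit has a compact textbook form, SNR² = sᵀK⁻¹s for lesion signal s and noise covariance K, which is why it is insensitive to a quadratic smoothing parameter (smoothing transforms s and K together). A minimal sketch of this standard quantity (not code from the paper):

```python
import numpy as np

def prewhitening_snr(delta_mu, K):
    """Detection SNR of the prewhitening observer: SNR^2 = s' K^{-1} s,
    where s is the mean lesion signal and K the image noise covariance.
    This is the standard computer-observer figure of merit underlying
    this kind of detectability analysis.
    """
    return float(np.sqrt(delta_mu @ np.linalg.solve(K, delta_mu)))

# With identity covariance the SNR reduces to the signal's l2 norm.
snr = prewhitening_snr(np.array([3.0, 4.0]), np.eye(2))
```

The nonprewhitening observer omits the K⁻¹ weighting, which is what reintroduces a dependence on the smoothing parameter and yields its optimum smoothing point.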

13.
Recently, compressed sensing (CS)-based techniques have been used for reconstructing magnetic resonance (MR) images from partially sampled k-space data. CS-based reconstruction techniques can be categorized into three classes according to the objective function: (i) synthesis prior, (ii) analysis prior, and (iii) mixed (analysis + synthesis) prior. Each of these can be further subdivided into convex and non-convex forms. There is also a wide choice of sparsifying transforms, viz. Daubechies wavelets (orthogonal and redundant), fractional spline wavelets (orthogonal), complex dual-tree wavelets (redundant), contourlets (redundant), and finite differences (redundant). Previous studies in MR image reconstruction have used various combinations of objective functions (priors) and sparsifying transforms, and each of these studies claimed the superiority of its method over others. In this work, we review and evaluate the popular MR image reconstruction techniques and show that the analysis prior with complex dual-tree wavelets yields the best reconstruction results. We have evaluated our experimental results on real data. The metric for quantitative evaluation is the normalized mean squared error; our qualitative evaluation is based on both the reconstructed and the difference images. The other significant contribution of this paper is the development of convex and non-convex versions of the synthesis, analysis, and mixed prior algorithms from a uniform majorization-minimization framework. The algorithms are compared with state-of-the-art CS-based techniques; the proposed ones have better reconstruction accuracy and are only marginally slower. The algorithms derived in this paper are all efficient first-order algorithms that are easy to implement.
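In the convex synthesis-prior case, the majorization-minimization recipe reduces to iterative soft thresholding (ISTA): majorize the data term by a separable quadratic with curvature L, and each MM step becomes a gradient step followed by shrinkage. A generic sketch under that assumption (not the paper's specific algorithms; A, b, and sizes are illustrative):

```python
import numpy as np

def ista(A, b, lam, iters=300):
    """Iterative soft thresholding for min_x 0.5||Ax-b||^2 + lam||x||_1.

    L is a Lipschitz bound on the gradient of the data term (squared
    spectral norm of A); the separable quadratic majorizer with this
    curvature makes each MM step a thresholded gradient step.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - b)) / L      # gradient step on data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# With A = I (L = 1) the fixed point is soft(b, lam).
x_hat = ista(np.eye(3), np.array([3.0, -0.5, 2.0]), lam=1.0)
```

The non-convex variants in the paper replace the soft-thresholding prox with the prox of a non-convex penalty within the same MM skeleton.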

14.
Image compression coding techniques based on the wavelet transform (cited by 12)
Since 1988, when Mallat applied the wavelet transform to signal processing, proposed the concept of multiresolution analysis, and gave algorithms for decomposing a signal or image into different frequency channels and for reconstructing it, the wavelet transform has found wide application in image processing. Because the wavelet transform has good localization properties in both the time and frequency domains, it has become a major technique for video image compression coding; at the time of writing, experts predicted that a wavelet-transform-based video compression technique for the high-definition television standard would be released around 1996. This paper surveys the current progress of wavelet-transform-based image compression coding.

15.
Wavelet-based multiresolution local tomography (cited by 8)
We develop an algorithm to reconstruct the wavelet coefficients of an image from the Radon transform data. The proposed method uses the properties of wavelets to localize the Radon transform and can be used to reconstruct a local region of the cross section of a body, using almost completely local data that significantly reduces the amount of exposure and computations in X-ray tomography. The property that distinguishes our algorithm from the previous algorithms is based on the observation that for some wavelet bases with sufficiently many vanishing moments, the ramp-filtered version of the scaling function as well as the wavelet function has extremely rapid decay. We show that the variance of the elements of the null-space is negligible in the locally reconstructed image. Also, we find an upper bound for the reconstruction error in terms of the amount of data used in the algorithm. To reconstruct a local region 16 pixels in radius in a 256×256 image, we require 22% of full exposure data.
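To make concrete what "scaling and wavelet coefficients" means here, a minimal one-level orthonormal Haar analysis/synthesis pair is sketched below. The algorithm in the paper reconstructs such coefficients directly from ramp-filtered projection data; here, for illustration only, they are computed from a signal.

```python
def haar_level(signal):
    """One level of an orthonormal Haar wavelet analysis: scaling
    (approximation) coefficients from pairwise sums, wavelet (detail)
    coefficients from pairwise differences. Assumes even length.
    """
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    # Perfect reconstruction from the two coefficient bands.
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

a, d = haar_level([4.0, 2.0, 1.0, 3.0])
x = haar_inverse(a, d)
```

The detail band vanishes on locally constant data, which is the localization property that lets mostly local projection data suffice for a region of interest.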

16.
The use of time-of-flight (TOF) information during positron emission tomography (PET) reconstruction has been found to improve the image quality. In this work we quantified this improvement using two existing methods: 1) a very simple analytical expression only valid for a central point in a large uniform disk source and 2) efficient analytical approximations for postfiltered maximum likelihood expectation maximization (MLEM) reconstruction with a fixed target resolution, predicting the image quality in a pixel or in a small region of interest based on the Fisher information matrix. Using this latter method the weighting function for filtered backprojection reconstruction of TOF PET data proposed by C. Watson can be derived. The image quality was investigated at different locations in various software phantoms. Simplified as well as realistic phantoms, measured both with TOF PET systems and with a conventional PET system, were simulated. Since the time resolution of the system is not always accurately known, the effect on the image quality of using an inaccurate kernel during reconstruction was also examined with the Fisher information-based method. First, we confirmed with this method that the variance improvement in the center of a large uniform disk source is proportional to the disk diameter and inversely proportional to the time resolution. Next, image quality improvement was observed in all pixels, but in eccentric and high-count regions the contrast-to-noise ratio (CNR) increased less than in central and low- or medium-count regions. Finally, the CNR was seen to decrease when the time resolution was inaccurately modeled (too narrow or too wide) during reconstruction. Although the maximum CNR is not very sensitive to the time resolution error, using an inaccurate TOF kernel tends to introduce artifacts in the reconstructed image.

17.
Compressive sensing (CS) theory, which has been widely used in magnetic resonance (MR) image processing, indicates that a sparse signal can be reconstructed through an optimization procedure from non-adaptive linear projections. Since MR images commonly possess a blocky structure and have sparse representations under certain wavelet bases, total variation (TV) and wavelet-domain ℓ1-norm regularization are enforced together (the TV-wavelet L1 method) to improve recovery accuracy. However, the wavelet coefficients are not all alike: the low-frequency components of an image, which carry the main energy of the MR image, have a decisive impact on reconstruction quality. In this paper, we propose a TV and wavelet L2-L1 model (TVWL2-L1) that measures the low-frequency wavelet coefficients with the ℓ2 norm and the high-frequency wavelet coefficients with the ℓ1 norm. We present two methods to approach this problem, based on an operator splitting algorithm and a proximal gradient algorithm. Experimental results demonstrate that our method noticeably improves the quality of MR image recovery compared with the original TV-wavelet method.
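The band-splitting idea can be sketched through the proximal operators the two penalties induce: an ℓ2-squared penalty (λ/2)c² on low-frequency coefficients has the scaling prox c/(1+λ), while an ℓ1 penalty on high-frequency coefficients has the soft-thresholding prox. This sketch shows only that band-dependent step, not the TV term or the full TVWL2-L1 algorithms; all names are illustrative.

```python
def prox_mixed(low, high, lam_l2, lam_l1):
    """Band-dependent proximal step: shrink low-frequency coefficients
    multiplicatively (prox of (lam/2)*c^2) and soft-threshold
    high-frequency coefficients (prox of lam*|c|). Low-frequency
    energy is thus attenuated but never zeroed.
    """
    low_out = [c / (1.0 + lam_l2) for c in low]
    high_out = []
    for c in high:
        m = abs(c) - lam_l1
        high_out.append(0.0 if m <= 0 else m * (1 if c > 0 else -1))
    return low_out, high_out

lo, hi = prox_mixed([4.0, -2.0], [3.0, -0.5], lam_l2=1.0, lam_l1=1.0)
```

Keeping the energy-carrying low band away from hard sparsification is exactly the motivation the abstract gives for the mixed norm.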

18.
A binary wavelet decomposition of binary images (cited by 7)
We construct a theory of binary wavelet decompositions of finite binary images. The new binary wavelet transform uses simple modulo-2 operations. It shares many of the important characteristics of the real wavelet transform. In particular, it yields an output similar to the thresholded output of a real wavelet transform operating on the underlying binary image. We begin by introducing a new binary field transform to use as an alternative to the discrete Fourier transform over GF(2). The corresponding concept of sequence spectra over GF(2) is defined. Using this transform, a theory of binary wavelets is developed in terms of two-band perfect reconstruction filter banks in GF(2). By generalizing the corresponding real field constraints of bandwidth, vanishing moments, and spectral content in the filters, we construct a perfect reconstruction wavelet decomposition. We also demonstrate the potential use of the binary wavelet decomposition in lossless image coding.
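The flavor of a modulo-2 two-band decomposition can be conveyed with a lifting-style sketch: keep even samples as the coarse band and record where adjacent bits differ (XOR) as the detail band. This is an illustrative construction in the spirit of the paper, not its specific filter bank; since XOR is its own inverse, reconstruction is exact.

```python
def binary_haar_analysis(bits):
    """One level of a binary (GF(2)) wavelet-style decomposition using
    only modulo-2 arithmetic: even samples form the coarse band, and
    the detail band flags transitions between adjacent bits.
    Assumes an even-length bit list.
    """
    coarse = [bits[i] for i in range(0, len(bits), 2)]
    detail = [bits[i] ^ bits[i + 1] for i in range(0, len(bits), 2)]
    return coarse, detail

def binary_haar_synthesis(coarse, detail):
    # Perfect reconstruction: the odd sample is recovered by XOR.
    out = []
    for c, d in zip(coarse, detail):
        out += [c, c ^ d]
    return out

c, d = binary_haar_analysis([1, 0, 1, 1, 0, 0])
x = binary_haar_synthesis(c, d)
```

Runs of constant bits produce all-zero detail bands, which is what makes such decompositions useful for lossless coding of binary images.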

19.
We conducted positron emission tomography (PET) image reconstruction experiments using the wavelet transform. The wavelet-vaguelette decomposition was used as a framework from which expressions for the necessary wavelet coefficients could be derived, and wavelet shrinkage was then applied to those coefficients for the reconstruction (WVS). The performance of WVS was evaluated and compared with that of filtered back-projection (FBP) using software phantoms, physical phantoms, and human PET studies. The results demonstrated that WVS gave stable reconstructions over the range of shrinkage parameters and provided better noise and spatial resolution characteristics than FBP.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号