Similar Articles
20 similar articles found (search time: 31 ms)
1.
In tomographic medical devices such as single photon emission computed tomography or positron emission tomography cameras, image reconstruction is an unstable inverse problem, due to the presence of additive noise. A new family of regularization methods for reconstruction, based on a thresholding procedure in wavelet and wavelet packet (WP) decompositions, is studied. This approach is based on the fact that the decompositions provide a near-diagonalization of the inverse Radon transform and of prior information in medical images. A WP decomposition is adaptively chosen for the specific image to be restored. Corresponding algorithms have been developed for both two-dimensional and full three-dimensional reconstruction. These procedures are fast, noniterative, and flexible. Numerical results suggest that they outperform filtered back-projection and iterative procedures such as ordered-subsets expectation-maximization.
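
As a toy illustration of the wavelet-domain thresholding idea (the paper's actual method thresholds wavelet-packet coefficients of a near-diagonalized inverse Radon transform, which is not reproduced here), a minimal numpy sketch using a one-level Haar transform; all function names are illustrative:

```python
import numpy as np

def haar_1level(x):
    """One-level 1-D Haar analysis: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_1level_inv(a, d):
    """One-level 1-D Haar synthesis (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Soft-thresholding shrinks coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t):
    """Threshold only the detail band; the approximation is kept intact."""
    a, d = haar_1level(x)
    return haar_1level_inv(a, soft_threshold(d, t))
```

With threshold zero the round trip is exact; a smooth signal passes through unchanged because its detail coefficients are already zero.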

2.
We describe the efficient algebraic reconstruction (EAR) method, which applies to cone-beam tomographic reconstruction problems with a circular symmetry. Three independent steps are presented, which use two symmetries and a factorization of the point spread functions (PSFs), each reducing computing times and eventually storage in memory or hard drive. In the case of pinhole single photon emission computed tomography (SPECT), we show how the EAR method can incorporate most of the physical and geometrical effects which change the PSF compared to the Dirac function assumed in analytical methods, thus showing improvements on reconstructed images. We also compare results obtained by the EAR method with a cubic grid implementation of an algebraic method and modeling of the PSF, and we show that there is no significant loss of quality, despite the use of a noncubic grid for voxels in the EAR method. Data from a phantom, reconstructed with the EAR method, demonstrate 1.08-mm spatial tomographic resolution despite the use of a 1.5-mm pinhole SPECT device, and several applications in rat and mouse imaging are shown. Finally, we discuss the conditions of application of the method when symmetries are broken, by considering the different parameters of the calibration and nonsymmetric physical effects such as attenuation.

3.
A method is presented to estimate the acquisition geometry of a pinhole single photon emission computed tomography (SPECT) camera with a circular detector orbit. This information is needed for the reconstruction of tomographic images. The calibration uses the point source projection locations of a tomographic acquisition of three point sources located at known distances from each other. It is shown that this simple phantom provides the necessary and sufficient information for the proposed calibration method. The knowledge of two of the distances between the point sources proves to be essential. The geometry is estimated by fitting analytically calculated projections to the measured ones, using a simple least squares Powell algorithm. Some mild a priori knowledge is used to constrain the solutions of the fit. Several of the geometrical parameters are however highly correlated. The effect of these correlations on the reconstructed images is evaluated in simulation studies and related to the estimation accuracy. The highly correlated detector tilt and electrical shift are shown to be the critical parameters for accurate image reconstruction. The performance of the algorithm is finally demonstrated by phantom measurements. The method is based on a single SPECT scan of a simple calibration phantom, executed immediately after the actual SPECT acquisition. The method is also applicable to cone-beam SPECT and X-ray CT.
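
A minimal sketch of the fitting idea under heavy simplifying assumptions: a hypothetical 1-D pinhole model u = focal * x / z + shift stands in for the analytic projections, and a brute-force grid search stands in for the least-squares Powell routine; `project` and `calibrate` are invented names, not the paper's:

```python
import numpy as np

def project(points_xz, focal, shift):
    """Toy 1-D pinhole model (hypothetical): u = focal * x / z + shift."""
    x, z = points_xz[:, 0], points_xz[:, 1]
    return focal * x / z + shift

def calibrate(points_xz, measured, focals, shifts):
    """Exhaustive least-squares search over a parameter grid.

    The paper fits analytic projections with a Powell routine; a grid
    search is used here only to keep the sketch dependency-free.
    """
    best, best_err = None, np.inf
    for f in focals:
        for s in shifts:
            err = np.sum((project(points_xz, f, s) - measured) ** 2)
            if err < best_err:
                best, best_err = (f, s), err
    return best
```

Three point sources at known positions overdetermine the two toy parameters, mirroring the "necessary and sufficient" role of the three-source phantom.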

4.
Quantitative accuracy of single photon emission computed tomography (SPECT) images is highly dependent on the photon scatter model used for image reconstruction. Monte Carlo simulation (MCS) is the most general method for detailed modeling of scatter, but to date, fully three-dimensional (3-D) MCS-based statistical SPECT reconstruction approaches have not been realized, due to prohibitively long computation times and excessive computer memory requirements. MCS-based reconstruction has previously been restricted to two-dimensional approaches that are vastly inferior to fully 3-D reconstruction. Instead of MCS, scatter calculations based on simplified but less accurate models are sometimes incorporated in fully 3-D SPECT reconstruction algorithms. We developed a computationally efficient fully 3-D MCS-based reconstruction architecture by combining the following methods: 1) a dual matrix ordered subset (DM-OS) reconstruction algorithm to accelerate the reconstruction and avoid massive transition matrix precalculation and storage; 2) a stochastic photon transport calculation in MCS is combined with an analytic detector modeling step to reduce noise in the Monte Carlo (MC)-based reprojection after only a small number of photon histories have been tracked; and 3) the number of photon histories simulated is reduced by an order of magnitude in early iterations, or photon histories calculated in an early iteration are reused. For a 64 x 64 x 64 image array, the reconstruction time required for ten DM-OS iterations is approximately 30 min on a dual processor (AMD 1.4 GHz) PC, in which case the stochastic nature of MCS modeling is found to have a negligible effect on noise in reconstructions. Since MCS can calculate photon transport for any clinically used photon energy and patient attenuation distribution, the proposed methodology is expected to be useful for obtaining highly accurate quantitative SPECT images within clinically acceptable computation times.
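
The dual-matrix idea of step 1) can be sketched as a generic ML-EM iteration with separate forward- and back-projection matrices (a toy numpy version; the paper's Monte Carlo reprojection and ordered subsets are omitted, and `dm_em` is an invented name):

```python
import numpy as np

def dm_em(y, A_fwd, A_back, n_iter=50):
    """Dual-matrix ML-EM sketch: an accurate model A_fwd is used for
    reprojection, while a simpler A_back is used for backprojection,
    so the accurate transition matrix never needs to be stored."""
    x = np.ones(A_fwd.shape[1])
    norm = A_back.T @ np.ones(A_fwd.shape[0])   # backprojection of ones
    for _ in range(n_iter):
        ratio = y / np.maximum(A_fwd @ x, 1e-12)  # measured / reprojected
        x = x * (A_back.T @ ratio) / norm         # multiplicative update
    return x
```

With matched matrices and consistent noiseless data this reduces to ordinary ML-EM and converges to the true activity; the multiplicative form keeps the estimate nonnegative.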

5.
An evaluation of maximum likelihood reconstruction for SPECT
A reconstruction method for SPECT (single photon emission computerized tomography) that uses the maximum likelihood (ML) criterion and an iterative expectation-maximization (EM) algorithm solution is examined. The method is based on a model that incorporates the physical effects of photon statistics, nonuniform photon attenuation, and a camera-dependent point-spread response function. Reconstructions from simulation experiments are presented which illustrate the ability of the ML algorithm to correct for attenuation and point-spread. Standard filtered backprojection method reconstructions, using experimental and simulated data, are included for reference. Three studies were designed to focus on the effects of noise and point-spread, on the effect of nonuniform attenuation, and on the combined effects of all three. The last study uses a chest phantom and simulates Tl-201 imaging of the myocardium. A quantitative analysis of the reconstructed images is used to support the conclusion that the ML algorithm produces reconstructions that exhibit improved signal-to-noise ratios, improved image resolution, and image quantifiability.
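
For reference, the ML-EM iteration referred to in this family of abstracts has a standard closed form (with system matrix elements $a_{ij}$ linking pixel $j$ to detector bin $i$, measured counts $y_i$, and current estimate $x^{(k)}$):

```latex
x_j^{(k+1)} \;=\; \frac{x_j^{(k)}}{\sum_i a_{ij}}
\sum_i a_{ij}\,\frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k)}}
```

Attenuation and the camera point-spread response enter through the $a_{ij}$, which is how the model described above corrects for both.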

6.
A wavelet-based method for multiscale tomographic reconstruction
The authors represent the standard ramp filter operator of the filtered-back-projection (FBP) reconstruction in different bases composed of Haar and Daubechies compactly supported wavelets. The resulting multiscale representation of the ramp-filter matrix operator is approximately diagonal. The accuracy of this diagonal approximation becomes better as wavelets with larger numbers of vanishing moments are used. This wavelet-based representation enables the authors to formulate a multiscale tomographic reconstruction technique in which the object is reconstructed at multiple scales or resolutions. A complete reconstruction is obtained by combining the reconstructions at different scales. The authors' multiscale reconstruction technique has the same computational complexity as the FBP reconstruction method. It differs from other multiscale reconstruction techniques in that (1) the object is defined through a one-dimensional multiscale transformation of the projection domain, and (2) the authors explicitly account for noise in the projection data by calculating maximum a posteriori probability (MAP) multiscale reconstruction estimates based on a chosen fractal prior on the multiscale object coefficients. The computational complexity of this MAP solution is also the same as that of the FBP reconstruction. This result is in contrast to commonly used methods of statistical regularization, which result in computationally intensive optimization algorithms.
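
For contrast with the wavelet-domain representation studied here, a minimal numpy sketch of the standard frequency-domain ramp filter that it approximately diagonalizes (the function name is illustrative):

```python
import numpy as np

def ramp_filter(projection):
    """Apply the FBP ramp filter |w| to one projection via the FFT.
    The paper replaces this frequency-domain operator with a nearly
    diagonal wavelet-domain matrix; this is the baseline operator."""
    n = len(projection)
    freqs = np.fft.fftfreq(n)                       # normalized frequencies
    spectrum = np.fft.fft(projection) * np.abs(freqs)  # multiply by |w|
    return np.real(np.fft.ifft(spectrum))
```

The filter is linear and kills the DC component, which is why a constant projection maps to zero.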

7.
8.
Deformable template models for emission tomography
The reconstruction of emission tomography data is an ill-posed inverse problem and, as such, requires some form of regularization. Previous efforts to regularize the restoration process have incorporated rather general assumptions about the isotope distribution within a patient's body. A theoretical and algorithmic framework is presented in which the notion of a deformable template can be used to identify and quantify brain tumors in pediatric patients. Patient data and computer simulation experiments are presented which illustrate the performance of the deformable template approach to single photon emission computed tomography.

9.
Regularization is desirable for image reconstruction in emission tomography. A powerful regularization method is the penalized-likelihood (PL) reconstruction algorithm (or equivalently, maximum a posteriori reconstruction), where the sum of the likelihood and a noise suppressing penalty term (or Bayesian prior) is optimized. Usually, this approach yields position-dependent resolution and bias. However, for some applications in emission tomography, a shift-invariant point spread function would be advantageous. Recently, a new method has been proposed, in which the penalty term is tuned in every pixel to impose a uniform local impulse response. In this paper, an alternative way to tune the penalty term is presented. We performed positron emission tomography and single photon emission computed tomography simulations to compare the performance of the new method to that of the postsmoothed maximum-likelihood (ML) approach, using the impulse response of the former method as the postsmoothing filter for the latter. For this experiment, the noise properties of the PL algorithm were not superior to those of postsmoothed ML reconstruction.
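
As a simplified stand-in for the penalized-likelihood objective (a quadratic data term replaces the Poisson likelihood, and no pixel-wise tuning of the penalty is attempted), a closed-form penalized least-squares sketch with an invented function name:

```python
import numpy as np

def pwls(A, y, D, beta):
    """Penalized least squares: argmin ||Ax - y||^2 + beta * ||Dx||^2.
    D is a roughness operator (e.g., finite differences); beta trades
    data fidelity against the noise-suppressing penalty."""
    H = A.T @ A + beta * (D.T @ D)
    return np.linalg.solve(H, A.T @ y)
```

With beta = 0 the data are fit exactly (when A is invertible); increasing beta strictly reduces the roughness ||Dx||, which is the position-dependent smoothing behavior the abstract discusses.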

10.
Reconstruction algorithms for transmission tomography have generally assumed that the photons reaching a particular detector bin at a particular angle originate from a single point source. In this paper, we highlight several cases of extended transmission sources, in which it may be useful to approach the estimation of attenuation coefficients as a problem involving multiple transmission point sources. Examined in detail is the case of a fixed transmission line source with a fan-beam collimator. This geometry can result in attenuation images that have significant axial blur. Herein it is also shown, empirically, that extended transmission sources can result in biased estimates of the average attenuation, and an explanation is proposed. The finite axial resolution of the transmission line source configuration is modeled within iterative reconstruction using an expectation-maximization algorithm that was previously derived for estimating attenuation coefficients from single photon emission computed tomography (SPECT) emission data. The same algorithm is applicable to both problems because both can be thought of as involving multiple transmission sources. It is shown that modeling axial blur within reconstruction removes the bias in the average estimated attenuation and substantially improves the axial resolution of attenuation images.

11.
In this paper we discuss the potentialities of a time-coded single photon emissive imaging system for thyroid tomography. We have performed three-dimensional simulation studies in order to answer two questions: 1) does this coded aperture device produce good quality reconstructions, and 2) can the reconstruction be carried out sufficiently fast on a microcomputer. Our study leads to the conclusion that both questions can be answered affirmatively. Hence, time-coded emission tomography is a potentially useful imaging technique for diagnostic clinical practice.

12.
In tomographic imaging, dynamic images are typically obtained by reconstructing the frames of a time sequence independently, one by one. A disadvantage of this frame-by-frame reconstruction approach is that it fails to account for temporal correlations in the signal. Ideally, one should treat the entire image sequence as a single spatio-temporal signal. However, the resulting reconstruction task becomes computationally intensive. Fortunately, as the authors show in this paper, the spatio-temporal reconstruction problem can be greatly simplified by first applying a temporal Karhunen-Loeve (KL) transformation to the imaging equation. The authors show that if the regularization operator is chosen to be separable into space and time components, penalized weighted least squares reconstruction of the entire image sequence is approximately equivalent to frame-by-frame reconstruction in the space-KL domain. By this approach, spatio-temporal reconstruction can be achieved at reasonable computational cost. One can achieve further computational savings by discarding high-order KL components to avoid reconstructing them. Performance of the method is demonstrated through statistical evaluations of the bias-variance tradeoff obtained by computer simulation reconstruction.
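
A minimal numpy sketch of the temporal KL step, assuming frames are stored as a T x H x W array; the tomographic reconstruction operator itself is omitted, so this only illustrates the transform and the truncation of high-order components (function names are illustrative):

```python
import numpy as np

def temporal_kl(frames):
    """Temporal KL basis: eigenvectors of the T x T covariance of the
    time-centered frame sequence, eigenvalues sorted descending."""
    T = frames.shape[0]
    X = frames.reshape(T, -1)
    Xc = X - X.mean(axis=0, keepdims=True)   # remove the temporal mean
    C = Xc @ Xc.T                            # T x T temporal covariance
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]
    return w[order], V[:, order]

def kl_compress(frames, k):
    """Project onto the leading k KL components and reconstruct,
    i.e., discard the high-order components mentioned in the abstract."""
    T = frames.shape[0]
    X = frames.reshape(T, -1)
    _, V = temporal_kl(frames)
    Vk = V[:, :k]
    mean = X.mean(axis=0, keepdims=True)
    Xr = Vk @ (Vk.T @ (X - mean)) + mean
    return Xr.reshape(frames.shape)
```

A sequence whose frames are a single spatial pattern scaled over time has temporal rank one, so keeping one KL component loses nothing; that is the regime in which component truncation is cheap.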

13.
Respiratory motion during the collection of computed tomography (CT) projections generates structured artifacts and a loss of resolution that can render the scans unusable. This motion is problematic in scans of those patients who cannot suspend respiration, such as the very young or intubated patients. Here, the authors present an algorithm that can be used to reduce motion artifacts in CT scans caused by respiration. An approximate model for the effect of respiration is that the object cross section under interrogation experiences time-varying magnification and displacement along two axes. Using this model an exact filtered backprojection algorithm is derived for the case of parallel projections. The result is extended to generate an approximate reconstruction formula for fan-beam projections. Computer simulations and scans of phantoms on a commercial CT scanner validate the new reconstruction algorithms for parallel and fan-beam projections. Significant reduction in respiratory artifacts is demonstrated clinically when the motion model is satisfied. The method can be applied to projection data used in CT, single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI).

14.
Over the past years there has been considerable interest in statistically optimal reconstruction of cross-sectional images from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation maximization (EM) algorithm. We propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is greedy pixel-wise computations known as iterative coordinate descent (ICD). We propose a novel method for computing the ICD updates, which we call ICD/Newton-Raphson. We show that ICD/Newton-Raphson requires approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments, typically five to ten iterations). Other advantages of the ICD/Newton-Raphson method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are easily incorporated.
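
A toy sketch of ICD with exact one-dimensional Newton updates, using a quadratic data term 0.5||Ax - y||^2 plus a positivity constraint in place of the full MAP objective (for a quadratic, the Newton step solves each 1-D subproblem exactly; `icd_newton` is an invented name):

```python
import numpy as np

def icd_newton(A, y, n_sweeps=100):
    """Greedy pixel-wise coordinate descent with Newton updates for
    0.5*||Ax - y||^2 subject to x >= 0.  A running residual makes each
    coordinate update O(m) instead of recomputing Ax from scratch."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - y                       # running residual Ax - y
    col_norms = np.sum(A * A, axis=0)   # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(n):
            g = A[:, j] @ r             # partial derivative wrt x_j
            step = g / col_norms[j]     # Newton step: exact in 1-D
            new_xj = max(0.0, x[j] - step)   # clip to the constraint
            r += A[:, j] * (new_xj - x[j])
            x[j] = new_xj
    return x
```

For a diagonal system one sweep already lands on the exact solution, which illustrates why these greedy updates converge in few iterations compared to EM's simultaneous updates.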

15.
A three-dimensional image reconstruction for fluorescence-enhanced frequency-domain photon migration (FDPM) measurements in turbid media is developed and investigated for three different simulated measurement types: 1) absolute emission measurements, or emission measurements of phase and amplitude attenuation made for a given incident point source of excitation light; 2) referenced emission measurements made relative to an excitation measurement conducted at a single reference point away from the incident source; and 3) referenced emission measurements made relative to the excitation measurement conducted at identical points of detection. The image reconstruction algorithm employs a gradient-based constrained truncated Newton (CONTN) method which implements a bounding parameter, which can be used to govern the level of contrast used to discriminate tissue volumes from heterogeneous background tissues. A reverse differentiation technique is used to calculate the gradients. Using simulated data with superimposed noise to achieve signal-to-noise ratios of 55 and 35 dB to mimic experimental excitation and emission FDPM measurements, respectively, we show the robustness of emission measurements referenced to excitation light. We investigate the performance of the CONTN algorithm using these measurement techniques and show that the absorption coefficients due to the fluorophore are reconstructed by CONTN accurately and efficiently. Furthermore, we demonstrate the performance of the bounding parameter for rejection of background artifacts owing to background tissue heterogeneity.

16.
The authors present the fusion of anatomical data as a method for improving the reconstruction in single photon emission computed tomography (SPECT). Anatomical data is used to deduce a parameterized model of organs in a reconstructed slice using spline curves. This model allows the authors to define the imaging process, i.e., the direct problem, more adequately, and furthermore to restrict the reconstruction to the emitting zones. Instead of the usual square pixels, the authors use a new kind of discretization pixel, which fits the contour in the region of interest. In the reconstruction phase, the authors estimate the activity in the emitting zones and also the optimum parameters of their model. Concentrating on the left ventricular (LV) wall activity, the simulation and phantom results show an accurate estimation of both the myocardial shape and the radioactive emission.

17.
This paper investigates an accurate reconstruction method to invert the attenuated Radon transform in nonparallel beam (NPB) geometries. The reconstruction method contains three major steps: 1) performing one-dimensional phase-shift rebinning; 2) implementing nonuniform Hilbert transform; and 3) applying Novikov's explicit inversion formula. The method seems to be adaptive to different settings of fan-beam geometry from very long to very short focal lengths without sacrificing reconstruction accuracy. Compared to the conventional bilinear rebinning technique, the presented method showed a better spatial resolution, as measured by modulation transfer function. Numerical experiments demonstrated its computational efficiency and stability to different levels of Poisson noise. Even with complicated geometries such as varying focal-length and asymmetrical fan-beam collimation, the presented method achieved nearly the same reconstruction quality of parallel-beam geometry. This effort can facilitate quantitative reconstruction of single photon emission computed tomography for cardiac imaging, which may need NPB collimation geometries and require high computational efficiency.

18.
To address the staircase effect caused by the gradient operator of the Total Variation (TV) algorithm and the environmental noise in single-pixel imaging systems, an image reconstruction method based on a Gaussian-Smoothed compressed-sensing Fractional-Order Total Variation algorithm (FOTVGS) is proposed. The fractional-order differential attenuates the low-frequency components of the image while amplifying its high-frequency components, thereby enhancing image detail. A Gaussian smoothing filter operator updates the Lagrangian gradient operator to filter out the additive white Gaussian noise caused by the differential operator. Simulation results show that, compared with four similar algorithms, the proposed algorithm achieves the highest Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) at the same sampling rate and noise level. At a sampling rate of 0.2, the maximum PSNR and SSIM gains over the Fractional-Order Total Variation (FOTV) algorithm are 1.39 dB (0.035) and 3.91 dB (0.098), respectively. These results indicate that the algorithm improves reconstruction quality both without and with noise, with the largest gains in the noisy case. The proposed algorithm provides a feasible solution for reconstructing images corrupted by environmental noise in single-pixel imaging and other computational imaging systems.
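
A much-simplified sketch of the TV side of this approach: integer-order 1-D total variation minimized by gradient descent on a smoothed objective. The fractional-order operator and the Gaussian-smoothed Lagrangian update of FOTVGS are not reproduced here, and the function name and parameters are illustrative:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=0.1, step=0.1, n_iter=500):
    """Gradient descent on the smoothed TV objective
    0.5*||x - y||^2 + lam * sum sqrt((diff x)^2 + eps).
    eps smooths |.| at zero so the gradient is well defined."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)    # derivative of smoothed |d|
        # Gradient of the TV term wrt each sample (boundary-aware):
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x
```

On a piecewise-constant signal with an isolated spike, the iteration pulls the spike toward its neighbors, reducing the total variation while staying close to the data.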

19.
Deterministic edge-preserving regularization in computed imaging
Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic. The optimization is then easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of 2D single photon emission tomography, but this method can be applied in a large number of applications in image processing.
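
The alternate-minimization scheme can be sketched in 1-D as follows, assuming phi(t) = sqrt(t^2 + delta^2) as the edge-preserving potential: the auxiliary variable b is fixed from the current gradients, the resulting half-quadratic (here fully quadratic) problem is solved for the image, and the two steps alternate. This is a toy version in the spirit of ARTUR, not the authors' implementation:

```python
import numpy as np

def artur_denoise(y, lam=1.0, delta=0.1, n_iter=20):
    """Half-quadratic edge-preserving 1-D denoising by alternate
    minimization: b marks discontinuities (small b = strong edge,
    little smoothing), then a weighted quadratic is solved for x."""
    n = len(y)
    D = np.zeros((n - 1, n))            # first-difference operator
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = D @ x
        b = 1.0 / (2.0 * np.sqrt(d * d + delta * delta))  # auxiliary variable
        # Quadratic step: argmin 0.5*||x - y||^2 + lam * sum b_i (Dx)_i^2
        H = np.eye(n) + 2.0 * lam * D.T @ (b[:, None] * D)
        x = np.linalg.solve(H, y)
    return x
```

A constant signal is a fixed point (its gradients are zero, so the quadratic step returns it unchanged), while small fluctuations are smoothed; large jumps get small weights b and are smoothed much less, which is the edge-preservation property.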

20.
In single photon emission computed tomography (SPECT), every reconstruction algorithm must use some model for the response of the gamma camera to emitted gamma-rays. The true camera response is both spatially variant and object dependent. These two properties result from the effects of scatter, septal penetration, and attenuation, and they forestall determination of the true response with any precision. This motivates the investigation of the performance of reconstruction algorithms when there are errors between the camera response used in the reconstruction algorithm and the true response of the gamma camera. In this regard, the authors compare the filtered backprojection algorithm, the expectation-maximization maximum likelihood algorithm, and the generalized expectation maximization (GEM) maximum a posteriori (MAP) algorithm, a Bayesian algorithm which uses a Markov random field prior.
