Similar Articles (20 results)
1.
The problem of reconstruction in positron emission tomography (PET) is essentially one of estimating the number of photon pairs emitted from the source. Under the maximum-likelihood (ML) criterion, reconstruction reduces to finding the estimate of the emitter density that maximizes the probability of observing the actual detector count data over all possible emitter-density distributions. A solution using this type of expectation-maximization (EM) algorithm with a fixed grid size is severely handicapped by its slow convergence rate, large computation time, and the nonuniform correction efficiency of each iteration, which makes the algorithm very sensitive to the image pattern. An efficient knowledge-based multigrid reconstruction algorithm based on the ML approach is presented to overcome these problems.
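For orientation, here is a minimal Python sketch of the plain fixed-grid ML-EM update that this abstract builds on; the dense system matrix A, count vector y, and iteration count are illustrative assumptions, not the authors' knowledge-based multigrid method.

import numpy as np

def mlem(A, y, n_iter=50):
    """Fixed-grid ML-EM: find the emitter density x that maximizes the
    Poisson likelihood of counts y given system matrix A (bins x voxels).
    A minimal sketch, not the knowledge-based multigrid algorithm."""
    x = np.ones(A.shape[1])                   # uniform initial density
    sens = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel sensitivity
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection
        x *= (A.T @ (y / proj)) / sens        # multiplicative EM update
    return x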

2.
A maximum-likelihood (ML) expectation-maximization (EM) algorithm (called EM-IntraSPECT) is presented for simultaneously estimating single photon emission computed tomography (SPECT) emission and attenuation parameters from emission data alone. The algorithm uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. For this initial study, EM-IntraSPECT was tested on computer-simulated attenuation and emission maps representing a simplified human thorax, as well as on SPECT data obtained from a physical phantom. Two evaluations were performed. First, to corroborate the idea of reconstructing attenuation parameters from emission data, the attenuation parameters (mu) were estimated with the emission intensities (lambda) fixed at their true values. Accurate reconstructions of the attenuation parameters were obtained. Second, the emission parameters lambda and attenuation parameters mu were simultaneously estimated from the emission data alone. In this case there was crosstalk between the estimates of lambda and mu, and the final estimates depended on the initial values. Estimates degraded significantly as the support extended out farther from the body, and an explanation for this is proposed. In the EM-IntraSPECT reconstructed attenuation images, the lungs, spine, and soft tissue were readily distinguished and had approximately correct shapes and sizes. Compared with standard EM reconstruction assuming a fixed uniform attenuation map, EM-IntraSPECT provided more uniform estimates of cardiac activity in the physical phantom study and in the simulation study with tight support, but less uniform estimates with a broad support. The new EM algorithm derived here has additional applications, including reconstructing emission and transmission projection data under a unified statistical model.

3.
Rotating multisegment slant-hole (RMSSH) single photon emission computed tomography (SPECT) is suitable for detecting small and low-contrast breast lesions, since it has much higher detection efficiency than conventional SPECT with a parallel-hole collimator and can image the breast at a closer distance. Our RMSSH SPECT reconstruction extends a previous rotation-shear transformation-based method to include nonuniform attenuation and collimator-detector response (CDR) compensation. To evaluate the reconstruction method, we performed two phantom simulation studies with 1) an isolated breast and 2) a breast phantom attached to the body torso. The reconstructed RMSSH SPECT images with attenuation and CDR compensation showed improved quantitative accuracy and fewer image artifacts than those without compensation. To evaluate the clinical efficacy of RMSSH SPECT mammography, we used a simulation study to compare it with planar scintimammography in terms of the signal-to-noise ratio (SNR) of a breast lesion. The RMSSH SPECT reconstructions showed a higher SNR than the planar scintimammography images, and even more so when compensation for attenuation and collimator-detector response was applied. We conclude that attenuation and CDR compensation provide RMSSH SPECT mammography images with improved quality and quantitative accuracy.

4.
The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF, and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing, and region-size reduction were assessed. ML estimates at 3,000 iterations (0.6 s per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had a 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation, with noise increasing to levels similar to ML. Smoothing of the ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.

5.
Scatter correction is an important factor in single photon emission computed tomography (SPECT). Many scatter correction techniques, such as multiple-window subtraction and intrinsic modeling with iterative algorithms, have been under study for many years. Previously, we developed an efficient slice-to-slice blurring technique to model attenuation and system geometric response in a projector/backprojector pair, which was used in an ML-EM algorithm to reconstruct SPECT data. This paper proposes a projector/backprojector that models the three-dimensional (3-D) first-order scatter in SPECT, also using an efficient slice-to-slice blurring technique. The scatter response is estimated from a known nonuniform attenuation distribution map. It is assumed that the probability of detection of a first-order scattered photon from a photon that is emitted in a given source voxel and scattered in a given scatter voxel is proportional to the attenuation coefficient value at that voxel. Monte Carlo simulations of point sources and an MCAT torso phantom were used to verify the accuracy of the proposed projector/backprojector model. An experimental Jaszczak torso/cardiac phantom SPECT study was also performed. For a 64 x 64 x 64 image volume, it took 8.7 s to perform each iteration per slice on a Sun ULTRA Enterprise 3000 (167 MHz, 1 Gbyte RAM) computer, when modeling 3-D scatter, attenuation, and system geometric response functions. The main advantage of the proposed method is its easy implementation and the possibility of performing reconstruction in clinically acceptable time.
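A rough sketch of the slice-to-slice blurring idea for one projection view: march from the slice farthest from the detector toward it, attenuating and incrementally blurring the accumulated signal at each step. The per-slice blur width k, slice thickness dz, and the Gaussian response model are illustrative assumptions; the paper's scatter term, which weights scatter probability by the local attenuation coefficient, is omitted here.

import numpy as np
from scipy.ndimage import gaussian_filter

def slice_blur_project(activity, mu, dz=1.0, k=0.3):
    """One projection via slice-to-slice blurring: activity and mu are
    (nz, ny, nx) volumes ordered with slice 0 nearest the detector."""
    acc = np.zeros_like(activity[0], dtype=float)
    for z in range(activity.shape[0] - 1, -1, -1):  # farthest slice first
        acc = gaussian_filter(acc, sigma=k)   # geometric response grows with depth
        acc *= np.exp(-mu[z] * dz)            # attenuation through slice z
        acc += activity[z]                    # add this slice's emissions
    return acc                                # 2-D projection at the detector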

6.
7.
Iterative maximum likelihood (ML) transmission computed tomography algorithms have distinct advantages over Fourier-based reconstruction, but unfortunately require increased computation time. The convex algorithm [1] is a relatively fast iterative ML algorithm but it is nevertheless too slow for many applications. Therefore, an acceleration of this algorithm by using ordered subsets of projections is proposed [ordered subsets convex algorithm (OSC)]. OSC applies the convex algorithm sequentially to subsets of projections. OSC was compared with the convex algorithm using simulated and physical thorax phantom data. Reconstructions were performed for OSC using eight and 16 subsets (eight and four projections/subset, respectively). Global errors, image noise, contrast recovery, and likelihood increase were calculated. Results show that OSC is faster than the convex algorithm, the amount of acceleration being approximately proportional to the number of subsets in OSC, and it causes only a slight increase of noise and global errors in the reconstructions. Images and image profiles of the reconstructions were in good agreement. In conclusion, OSC and the convex algorithm result in similar image quality but OSC is more than an order of magnitude faster.
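A sketch of the OSC iteration under the Poisson transmission model y_i ~ Poisson(b_i · exp(-[Aμ]_i)): the convex update is applied sequentially to subsets of projection rows, which is where the subset-proportional speedup comes from. The stride-based subset partition, initial μ, and dense matrix A are assumptions for illustration.

import numpy as np

def osc(A, y, b, n_subsets=8, n_iter=5):
    """Ordered-subsets convex algorithm: the convex ML update for the
    attenuation map mu is applied subset by subset."""
    mu = np.full(A.shape[1], 0.01)            # initial attenuation map
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            l = As @ mu                       # line integrals on this subset
            yhat = b[rows] * np.exp(-l)       # expected transmission counts
            num = As.T @ (yhat - y[rows])     # convex-algorithm numerator
            den = np.maximum(As.T @ (l * yhat), 1e-12)
            mu = np.maximum(mu + mu * num / den, 0.0)
    return mu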

8.
Quantitative accuracy of single photon emission computed tomography (SPECT) images is highly dependent on the photon scatter model used for image reconstruction. Monte Carlo simulation (MCS) is the most general method for detailed modeling of scatter, but to date, fully three-dimensional (3-D) MCS-based statistical SPECT reconstruction approaches have not been realized, due to prohibitively long computation times and excessive computer memory requirements. MCS-based reconstruction has previously been restricted to two-dimensional approaches that are vastly inferior to fully 3-D reconstruction. Instead of MCS, scatter calculations based on simplified but less accurate models are sometimes incorporated in fully 3-D SPECT reconstruction algorithms. We developed a computationally efficient fully 3-D MCS-based reconstruction architecture by combining the following methods: 1) a dual matrix ordered subset (DM-OS) reconstruction algorithm to accelerate the reconstruction and avoid massive transition matrix precalculation and storage; 2) a stochastic photon transport calculation in MCS is combined with an analytic detector modeling step to reduce noise in the Monte Carlo (MC)-based reprojection after only a small number of photon histories have been tracked; and 3) the number of photon histories simulated is reduced by an order of magnitude in early iterations, or photon histories calculated in an early iteration are reused. For a 64 x 64 x 64 image array, the reconstruction time required for ten DM-OS iterations is approximately 30 min on a dual processor (AMD 1.4 GHz) PC, in which case the stochastic nature of MCS modeling is found to have a negligible effect on noise in reconstructions. Since MCS can calculate photon transport for any clinically used photon energy and patient attenuation distribution, the proposed methodology is expected to be useful for obtaining highly accurate quantitative SPECT images within clinically acceptable computation times.
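The dual-matrix idea in schematic form: an accurate Monte Carlo based projector in the forward step and a cheap analytic matrix in the backprojection step, so no full transition matrix is ever precalculated or stored. The callables forward_mc and back_simple are placeholders standing in for those two models, not the authors' implementation.

import numpy as np

def dmos_pass(x, y, forward_mc, back_simple, subsets):
    """One dual-matrix ordered-subsets (DM-OS) pass over all subsets."""
    for rows in subsets:
        proj = np.maximum(forward_mc(x, rows), 1e-12)  # accurate MC reprojection
        corr = back_simple(y[rows] / proj, rows)       # simplified backprojection
        norm = np.maximum(back_simple(np.ones(len(rows)), rows), 1e-12)
        x = x * corr / norm                            # OSEM-style update
    return x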

9.
Although real imaging problems involve objects that have variations in three dimensions, a majority of work examining inverse scattering methods for ultrasonic tomography considers 2-D imaging problems. Therefore, the study of 3-D inverse scattering methods is necessary for future applications of ultrasonic tomography. In this work, 3-D reconstructions using different arrays of rectangular elements focused on elevation were studied when reconstructing spherical imaging targets by producing a series of 2-D image slices using the 2-D distorted Born iterative method (DBIM). The effects of focal number f/#, speed-of-sound contrast Δc, and scatterer size were considered. For comparison, the 3-D wave equation was also inverted using point-like transducers to produce fully 3-D DBIM image reconstructions. In 2-D slicing, blurring in the vertical direction was highly correlated with the transmit/receive elevation point-spread function of the transducers for low Δc. The eventual appearance of overshoot artifacts in the vertical direction was observed with increasing Δc. These diffraction-related artifacts were less severe for smaller focal number values and larger spherical target sizes. When using 3-D DBIM, the overshoot artifacts were not observed and spatial resolution was improved. However, results indicate that array configuration in 3-D reconstructions is important for good image reconstruction. Practical arrays were designed and assessed for image reconstruction using 3-D DBIM.
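The DBIM loop in generic form: linearize the scattering about the current contrast estimate and solve a Tikhonov-regularized normal equation for the update. The callbacks forward(f) (predicted scattered field) and jacobian(f) (distorted-Born Fréchet derivative) stand in for a full wave solver, and the regularization weight gamma is an assumption.

import numpy as np

def dbim(d_meas, forward, jacobian, n_unknowns, n_iter=10, gamma=1e-2):
    """Distorted Born iterative method for the contrast f (e.g., a Δc map)."""
    f = np.zeros(n_unknowns, dtype=complex)
    for _ in range(n_iter):
        r = d_meas - forward(f)               # residual scattered data
        J = jacobian(f)                       # Jacobian about current estimate
        # Tikhonov-regularized update: (J^H J + gamma I) df = J^H r
        f += np.linalg.solve(J.conj().T @ J + gamma * np.eye(n_unknowns),
                             J.conj().T @ r)
    return f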

10.
Most methods that have been proposed for attenuation compensation in single-photon emission computed tomography (SPECT) either rely on simplifying assumptions, or use slow iteration to achieve accuracy. Recently, hybrid methods which combine iteration with simple multiplicative correction have been proposed by Chang and by Moore et al. In this study we evaluated these methods using both simulated and real phantom data from a rotating gamma camera. Of special concern were the effects of assuming constant attenuation distributions for correction and of using only 180 degrees of projection data in the reconstructions. Results were compared by means of image contrast, %RMS error, and a chi-square error statistic. Simulations showed the hybrid methods to converge after 1-2 iterations when 360 degrees data were used, less rapidly for 180 degrees data. The Moore method was more accurate than our modified Chang method for 180 degrees data. Phantom data indicated the importance of using an accurate attenuation map for both methods. The speed of convergence of the hybrid algorithms compared to traditional iterative techniques, and their accuracy in reconstructing photon activity, even with 180 degrees data, makes them attractive for use in quantitative analysis of SPECT reconstructions.
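In outline, a hybrid correction of this kind applies a multiplicative zeroth-order (Chang-style) correction and then a few error-projection refinements; consistent with the convergence noted above, one or two passes usually suffice. fbp() and forward() are assumed reconstruction/projection operators, and corr_map is the reciprocal of the angle-averaged attenuation factor at each pixel.

import numpy as np

def chang_hybrid(proj, fbp, forward, corr_map, n_iter=2):
    """Hybrid attenuation compensation: multiplicative correction plus
    iterative refinement of the projection-space error."""
    x = fbp(proj) * corr_map                  # zeroth-order Chang correction
    for _ in range(n_iter):                   # 1-2 iterations typically converge
        err = proj - forward(x)               # error in projection space
        x = x + fbp(err) * corr_map           # reconstruct and correct the error
    return x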

11.
Local shape function imaging uses far-field microwave scattering data to reconstruct the presence or absence of small metal cylinders throughout space, in order to model arbitrary metallic objects. The reconstructed images represent the scattering amplitude at discrete locations in space with multiple-scattering effects incorporated. Super-resolution is demonstrated for monochromatic image reconstructions. Even better reconstructions are obtained with multiple-frequency data. The speed of computation is increased with a fast forward solver algorithm. Also, measured data are used in the local shape function imaging algorithm, and the resolution is improved over diffraction tomography.

12.
Myopic deconvolution of multiframe short-exposure images
邵慧, 汪建业, 徐鹏, 杨明翰, 周春. 《电子学报》 (Acta Electronica Sinica), 2014, 42(10): 2110-2116
This paper proposes a frequency-domain compressed multiframe myopic deconvolution algorithm. First, an initial point spread function for the short-exposure images is estimated from the atmospheric-turbulence phase acquired by a wavefront sensor, and the point spread function is then progressively refined toward its accurate form. A conjugate gradient (CG) algorithm alternately minimizes a frequency-domain cost function to estimate the target image and the point spread function. To compress the information content effectively, an image spectral-ratio term is added to the cost function, and basic constraints are imposed so that the iteration converges quickly to the global minimum; a structure-adaptive anisotropic filter (SAAF) suppresses noise while preserving detail in the reconstructed image. Experimental results show that the algorithm yields high-quality reconstructions and outperforms the multiframe Tikhonov-regularized deconvolution and multiframe Richardson-Lucy blind deconvolution algorithms used for comparison.
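A compact sketch of the alternating frequency-domain estimation just described; here each half-step is a closed-form Tikhonov update standing in for the paper's CG minimization, and the spectral-ratio term and SAAF denoising are omitted. frames holds the short-exposure images and psf0 the wavefront-sensor-derived initial PSFs, both assumed (n_frames, ny, nx) arrays.

import numpy as np

def myopic_deconv(frames, psf0, n_iter=20, gamma=1e-3):
    """Alternate between the object spectrum F and per-frame OTFs H that
    minimize a regularized frequency-domain least-squares cost."""
    Y = np.fft.fft2(frames)                   # per-frame image spectra
    H = np.fft.fft2(psf0)                     # initial optical transfer functions
    for _ in range(n_iter):
        # object step: fuse all frames at fixed OTFs
        F = (np.conj(H) * Y).sum(0) / ((np.abs(H) ** 2).sum(0) + gamma)
        # PSF step: refine each frame's OTF at the fixed object
        H = np.conj(F) * Y / (np.abs(F) ** 2 + gamma)
    return np.real(np.fft.ifft2(F)), np.real(np.fft.ifft2(H))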

13.
Imaging systems that form estimates using a statistical approach generally yield images with nonuniform resolution properties. That is, the reconstructed images possess resolution properties marked by space-variant and/or anisotropic responses. We have previously developed a space-variant penalty for penalized-likelihood (PL) reconstruction that yields nearly uniform resolution properties. We demonstrated how to calculate this penalty efficiently and apply it to an idealized positron emission tomography (PET) system whose geometric response is space-invariant. In this paper, we demonstrate the efficient calculation and application of this penalty to space-variant systems. (The method is most appropriate when the system matrix has been precalculated.) We apply the penalty to a large field-of-view PET system, where crystal penetration effects make the geometric response space-variant, and to a two-dimensional single photon emission computed tomography system whose detector responses are modeled by a depth-dependent Gaussian with a linearly varying full-width at half-maximum. We perform a simulation study comparing reconstructions using the proposed PL approach with other reconstruction methods, demonstrate the relative resolution uniformity, and discuss tradeoffs among estimators that yield nearly uniform resolution. We observe similar noise performance for the PL and post-smoothed maximum-likelihood (ML) approaches with carefully matched resolution, so the choice between these estimators should rest on other factors, such as computational complexity and the convergence rate of the iterative reconstruction. Additionally, because the post-smoothed ML and the proposed PL approach can outperform one another in terms of resolution uniformity depending on the desired reconstruction resolution, we present and discuss a hybrid approach adopting both a penalty and post-smoothing.

14.
Regularization is desirable for image reconstruction in emission tomography. A powerful regularization method is the penalized-likelihood (PL) reconstruction algorithm (or equivalently, maximum a posteriori reconstruction), where the sum of the likelihood and a noise suppressing penalty term (or Bayesian prior) is optimized. Usually, this approach yields position-dependent resolution and bias. However, for some applications in emission tomography, a shift-invariant point spread function would be advantageous. Recently, a new method has been proposed, in which the penalty term is tuned in every pixel to impose a uniform local impulse response. In this paper, an alternative way to tune the penalty term is presented. We performed positron emission tomography and single photon emission computed tomography simulations to compare the performance of the new method to that of the postsmoothed maximum-likelihood (ML) approach, using the impulse response of the former method as the postsmoothing filter for the latter. For this experiment, the noise properties of the PL algorithm were not superior to those of postsmoothed ML reconstruction.
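The post-smoothed ML comparator, in sketch form: reconstruct with unregularized ML, then apply a shift-invariant filter matched to the target resolution. A Gaussian kernel with a precomputed FWHM is an assumption here; the paper itself uses the PL method's impulse response as the post-smoothing filter.

import numpy as np
from scipy.ndimage import gaussian_filter

def postsmoothed_ml(ml_image, fwhm_px):
    """Smooth a (near-)converged ML reconstruction with a shift-invariant
    Gaussian whose width is matched to the desired resolution."""
    sigma = fwhm_px / 2.3548                  # FWHM -> Gaussian sigma
    return gaussian_filter(ml_image, sigma)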

15.
A novel adaptive neural network is proposed for image restoration with a nuclear medicine gamma camera, based on the point spread function of the measured system. The objective is to restore image degradation due to photon scattering and collimator photon penetration in the gamma camera, and thereby allow improved quantitative external measurements of radionuclides in vivo. The specific clinical model proposed is the imaging of bremsstrahlung radiation from 32P and 90Y, because of the enhanced image degradation caused by photon scattering, photon penetration, and the poor signal/noise ratio in measurements of this type with the gamma camera. This algorithm avoids the common inverse problem associated with other image restoration filters such as the Wiener filter. The performance of the adaptive NN for image restoration is compared with an order-statistic neural network hybrid (OSNNH) filter previously reported by these investigators, a traditional Wiener filter, and a modified Hopfield neural network, using simulated degraded images with different noise levels. Quantitative metrics such as the change in signal-to-noise ratio (SNR) are used to compare filter performance. The adaptive NN yields comparable results for image restoration, with slightly better performance on images with higher noise levels, as often encountered in bremsstrahlung detection with the gamma camera. Experimental attenuation measurements were also performed in a water tank using two radionuclides, 32P and 90Y, typically used for antibody therapy. Similar values of the effective attenuation coefficient were observed for the images restored with the OSNNH filter and the adaptive NN, demonstrating that the restoration filters preserve the total counts in the image, as required for quantitative in-vivo measurements. The adaptive NN was computationally more efficient than the OSNNH filter by a factor of 4–6. The filter architecture is also well suited to parallel processing or VLSI implementation, as required for planar and particularly tomographic detection with the gamma camera. The proposed adaptive NN method should also prove useful for quantitative imaging of single photon emitters and for other nuclear medicine tomographic imaging applications using positron emitters and direct X-ray photon detection.

16.
An analysis of a passive seismic method for subsurface imaging is presented, in which ambient seismic noise is employed as the source of illumination of underground scatterers. The imaging algorithm can incorporate new data into the image in a recursive fashion, which causes image background noise to diminish over time. Under the assumption of spatially incoherent ambient noise, an analytical expression for the point-spread function (PSF) of the imaging algorithm is derived. The PSF characterizes the resolution of the image, which is a function of the receiving array length and the ambient noise bandwidth. Results of a Monte Carlo simulation are presented to illustrate the theory.

17.
Simultaneous reconstruction of activity and attenuation for PET/MR
Medical investigations targeting a quantitative analysis of positron emission tomography (PET) images require the incorporation of additional knowledge about the photon attenuation distribution in the patient. Today, energy-range-adapted attenuation maps derived from computed tomography (CT) scans are used to effectively compensate for image-quality-degrading effects such as attenuation and scatter. Replacing CT by magnetic resonance (MR) is considered the next evolutionary step in the field of hybrid imaging systems. However, unlike CT, MR does not measure photon attenuation and thus does not provide easy access to this valuable information. Hence, many research groups are currently investigating different technologies for MR-based attenuation correction (MR-AC). Typically, these approaches are based on techniques such as special acquisition sequences (alone or in combination with subsequent image processing), anatomical atlas registration, or pattern recognition techniques using a database of MR and corresponding CT images. We propose a generic iterative reconstruction approach to simultaneously estimate the local tracer concentration and the attenuation distribution using the segmented MR image as an anatomical reference. Instead of applying predefined attenuation values to specific anatomical regions or tissue types, the gamma attenuation at 511 keV is determined from the PET emission data. In particular, our approach uses a maximum-likelihood estimation for the activity and a gradient-ascent-based algorithm for the attenuation distribution. The adverse effects of scattered and accidental gamma coincidences on the quantitative accuracy of PET, as well as artifacts caused by the inherent crosstalk between activity and attenuation estimation, are efficiently reduced using enhanced decay-event localization provided by time-of-flight PET, accurate correction for accidental coincidences, and a reduced number of unknown attenuation coefficients. First results achieved with measured whole-body PET data and reference segmentation from CT showed an absolute mean difference of 0.005 cm⁻¹ (< 20%) in the lungs, 0.0009 cm⁻¹ (< 2%) for fat, and 0.0015 cm⁻¹ (< 2%) for muscle and blood. The proposed method is a robust and reliable alternative to other MR-AC approaches targeting patient-specific quantitative analysis in time-of-flight PET/MR.
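Schematically, the alternation described above looks like the following: an ML-EM step for the activity lam and a gradient-ascent step for the attenuation mu. TOF weighting, randoms/scatter terms, and the MR-segmentation constraints that reduce the number of unknown attenuation coefficients are omitted; A (geometric projector), L (intersection lengths), and the step size are illustrative assumptions.

import numpy as np

def mlaa_step(lam, mu, A, L, y, step=1e-4):
    """One alternation of simultaneous activity/attenuation estimation
    from emission data under a Poisson model y ~ exp(-L mu) * (A lam)."""
    att = np.exp(-(L @ mu))                   # survival probability per LOR
    yhat = np.maximum(att * (A @ lam), 1e-12)
    # ML-EM activity update with attenuation-weighted sensitivity
    lam = lam * (A.T @ (att * y / yhat)) / np.maximum(A.T @ att, 1e-12)
    # gradient ascent on the Poisson log-likelihood w.r.t. attenuation
    yhat = att * (A @ lam)                    # reprojection with updated activity
    mu = np.maximum(mu + step * (L.T @ (yhat - y)), 0.0)
    return lam, mu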

18.
Fast maximum entropy approximation in SPECT using the RBI-MAP algorithm
In this work, we present a method for approximating constrained maximum entropy (ME) reconstructions of SPECT data with modifications to a block-iterative maximum a posteriori (MAP) algorithm. Maximum likelihood (ML)-based reconstruction algorithms require some form of noise smoothing. Constrained ME provides a more formal method of noise smoothing without requiring the user to select parameters. In the context of SPECT, constrained ME seeks the minimum-information image estimate among those whose projections are a given distance from the noisy measured data, with that distance determined by the magnitude of the Poisson noise. Images that meet the distance criterion are referred to as feasible images. We find that modeling all principal degrading factors (attenuation, detector response, and scatter) in the reconstruction is critical, because feasibility is not meaningful unless the projection model is as accurate as possible. Because the constrained ME solution is the same as a MAP solution for a particular value of the MAP weighting parameter beta, the constrained ME solution can be found with a MAP algorithm if the correct value of beta is found. We show that the RBI-MAP algorithm, if used with a dynamic scheme for estimating beta, can approximate constrained ME solutions in 20 or fewer iterations. We compare results for various methods of achieving feasible images on a simulation of Tl-201 cardiac SPECT data. Results show that the RBI-MAP ME approximation provides images and quantitative estimates close to those from a slower algorithm that gives the true ME solution. Also, we find that the ME results have higher spatial resolution and greater high-frequency noise content than a feasibility-based stopping rule, feasibility-based low-pass filtering, and a quadratic Gibbs prior with beta selected according to the feasibility criterion. We conclude that fast ME approximation is possible using either RBI-MAP with the dynamic procedure or a feasibility-based stopping rule, and that such reconstructions may be particularly useful in applications where resolution is critical.
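A sketch of the feasibility test and of a dynamic beta adjustment of the kind described above; the multiplicative control law and its gain are illustrative assumptions, not the paper's scheme.

import numpy as np

def chi2_per_bin(y, yhat):
    """Poisson-weighted distance between measured and reprojected data;
    an estimate is 'feasible' when this is about 1 per projection bin."""
    return np.sum((y - yhat) ** 2 / np.maximum(yhat, 1.0)) / y.size

def update_beta(beta, y, yhat, gain=0.1):
    """Raise the MAP weight while the reprojection tracks the noise
    (chi-square per bin below 1), lower it when the fit is too far off."""
    return beta * (1.0 + gain * (1.0 - chi2_per_bin(y, yhat)))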

19.
In order to perform attenuation correction in emission tomography, an attenuation map is required. We propose a new method to compute this map directly from the emission sinogram, eliminating the transmission scan from the acquisition protocol. The problem is formulated as an optimization task where the objective function is a combination of the likelihood and an a priori probability. The latter uses a Gibbs prior distribution to encourage local smoothness and a multimodal distribution for the attenuation coefficients. Since the attenuation process is different in positron emission tomography (PET) and single photon emission computed tomography (SPECT), a separate algorithm for each case is derived. The method has been tested on mathematical phantoms and on a few clinical studies. For PET, good agreement was found between the images obtained with transmission measurements and those produced by the new algorithm in an abdominal study. For SPECT, promising simulation results have been obtained for nonhomogeneous attenuation due to the presence of the lungs.
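The prior described above combines local smoothness with a pull toward known tissue classes. A sketch of such an energy, with quadratic potentials, a simple nearest-neighbor structure, and an illustrative mode list standing in for the paper's Gibbs and multimodal distributions:

import numpy as np

def prior_energy(mu, modes=(0.0, 0.04, 0.15), w_smooth=1.0, w_mode=1.0):
    """Gibbs smoothness over horizontal/vertical neighbors of a 2-D map mu,
    plus a multimodal term pulling each coefficient toward a tissue value
    (here roughly air, lung, soft tissue in cm^-1 -- illustrative numbers)."""
    smooth = np.sum(np.diff(mu, axis=0) ** 2) + np.sum(np.diff(mu, axis=1) ** 2)
    mode = np.sum(np.min((mu[..., None] - np.asarray(modes)) ** 2, axis=-1))
    return w_smooth * smooth + w_mode * mode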

20.
Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al. (1997), this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) detector. The authors compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta.
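A list-mode EM sketch following the formulation referenced above: the data enter only as rows for detected events, so the measurement space is never binned. The dense event matrix is an illustrative simplification (real systems compute each row on the fly), and sens[j] is the total detection sensitivity of voxel j.

import numpy as np

def listmode_em(a_events, sens, n_iter=20):
    """List-mode ML-EM: a_events[k, j] is the probability that recorded
    event k originated in voxel j; sens[j] is voxel j's sensitivity."""
    lam = np.ones(a_events.shape[1])
    for _ in range(n_iter):
        denom = np.maximum(a_events @ lam, 1e-12)  # expected rate per event
        lam *= (a_events.T @ (1.0 / denom)) / np.maximum(sens, 1e-12)
    return lam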
