Similar literature (20 results)
1.
The maximum likelihood (ML) approach to estimating the radioactive distribution in the body cross section has become very popular among researchers in emission computed tomography (ECT) since it has been shown to provide very good images compared to those produced with the conventional filtered backprojection (FBP) algorithm. The expectation maximization (EM) algorithm is an often-used iterative approach for maximizing the Poisson likelihood in ECT because of its attractive theoretical and practical properties. Its major disadvantage is that, due to its slow rate of convergence, a large amount of computation is often required to achieve an acceptable image. Here, the authors present a row-action maximum likelihood algorithm (RAMLA) as an alternative to the EM algorithm for maximizing the Poisson likelihood in ECT. The authors deduce the convergence properties of this algorithm and demonstrate by way of computer simulations that the early iterates of RAMLA increase the Poisson likelihood in ECT an order of magnitude faster than the standard EM algorithm. Specifically, the authors show that, from the point of view of measuring total radionuclide uptake in simulated brain phantoms, iterations 1, 2, 3, and 4 of RAMLA perform at least as well as iterations 45, 60, 70, and 80, respectively, of EM. Moreover, the authors show that iterations 1, 2, 3, and 4 of RAMLA achieve likelihood values comparable to those of iterations 45, 60, 70, and 80, respectively, of EM. The authors also present a modified version of a recent fast ordered subsets EM (OS-EM) algorithm and show that RAMLA is a special case of this modified OS-EM. Furthermore, the authors show that their modification converges to an ML solution whereas the standard OS-EM does not.
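A row-action update of this kind can be sketched as follows. This is a minimal illustration of the idea (one measurement row used per update, with a relaxation parameter that shrinks over epochs), not the authors' implementation; the `ramla` name, the toy interface, and the `lam0 / (1 + k)` relaxation schedule are our own choices.

```python
import numpy as np

def ramla(A, y, n_epochs=20, lam0=1.0, eps=1e-12):
    """Row-action maximum-likelihood-style sketch for y ~ Poisson(A @ x).

    Each update uses a single row A[i] of the system matrix; the
    relaxation parameter lam diminishes with the epoch index, which
    is one common way to obtain convergent row-action iterations.
    """
    m, n = A.shape
    x = np.ones(n)
    for k in range(n_epochs):
        lam = lam0 / (1 + k)               # diminishing relaxation (illustrative choice)
        for i in range(m):
            proj = max(A[i] @ x, eps)      # forward projection along one row
            x = x + lam * x * A[i] * (y[i] / proj - 1.0)
            x = np.maximum(x, 0.0)         # keep the estimate nonnegative
    return x
```

On consistent toy data the iteration reaches the exact solution within a few epochs, which gives a feel for the fast early progress the abstract reports.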

2.
Accelerated image reconstruction using ordered subsets of projection data
The authors define ordered subset processing for standard algorithms (such as expectation maximization, EM) for image restoration from projections. Ordered subsets methods group projection data into an ordered sequence of subsets (or blocks). An iteration of ordered subsets EM is defined as a single pass through all the subsets, in each subset using the current estimate to initialize application of EM with that data subset. This approach is similar in concept to block-Kaczmarz methods introduced by Eggermont et al. (1981) for iterative reconstruction. Simultaneous iterative reconstruction (SIRT) and multiplicative algebraic reconstruction (MART) techniques are well known special cases. Ordered subsets EM (OS-EM) provides a restoration imposing a natural positivity condition and with close links to the EM algorithm. OS-EM is applicable in both single photon (SPECT) and positron emission tomography (PET). In simulation studies in SPECT, the OS-EM algorithm provides an order-of-magnitude acceleration over EM, with restoration quality maintained.
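The subset update described here can be sketched in a few lines: one OS-EM iteration applies the multiplicative EM update once per subset, cycling through all subsets. The `os_em` name, the dense-matrix interface, and the subset partition below are illustrative stand-ins, not the authors' code.

```python
import numpy as np

def os_em(A, y, subsets, n_iters=10, eps=1e-12):
    """Ordered-subsets EM sketch for y ~ Poisson(A @ x).

    A: (m, n) system matrix; y: (m,) measured counts;
    subsets: list of index arrays partitioning range(m).
    One pass over all subsets counts as one OS-EM iteration.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for s in subsets:
            As = A[s]                              # rows belonging to this subset
            proj = As @ x                          # forward projection on the subset
            ratio = y[s] / np.maximum(proj, eps)
            sens = np.maximum(As.sum(axis=0), eps) # subset sensitivity image
            x *= (As.T @ ratio) / sens             # multiplicative EM update
    return x
```

With a single subset containing all rows, the loop reduces to the classical ML-EM update, which is the sense in which OS-EM generalizes EM.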

3.
The maximum-likelihood expectation-maximization (ML-EM) algorithm is being widely used for image reconstruction in positron emission tomography. The algorithm is strictly valid if the data are Poisson distributed. However, it is also often applied to processed sinograms that do not meet this requirement. This may sometimes lead to suboptimal results: streak artifacts appear and the algorithm converges toward a lower likelihood value. As a remedy, we propose two simple pixel-by-pixel methods [noise equivalent counts (NEC)-scaling and NEC-shifting] in order to transform arbitrary sinogram noise into noise which is approximately Poisson distributed (the first and second moments of the distribution match those of the Poisson distribution). The convergence speed associated with both transformation methods is compared, and the NEC-scaling method is validated with both simulations and clinical data. These new methods extend the ML-EM algorithm to a general purpose nonnegative reconstruction algorithm.
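The moment-matching idea behind a scaling transform of this kind can be made concrete: choosing a per-bin scale factor s = mean/variance makes the scaled value's mean equal its variance, as for a Poisson variable. The function name, the use of the measured value as a plug-in estimate of the mean, and the interface are our assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def nec_scale(y, var, eps=1e-12):
    """Moment-matching sketch: rescale each sinogram bin so that its
    first two moments match those of a Poisson variable.

    y:   measured (non-Poisson) sinogram values, used here as a
         plug-in estimate of the per-bin mean (our assumption).
    var: estimated per-bin variances.
    Returns the scaled data s * y and the scale factors s = mean/var,
    which would also be needed to rescale the system model.
    """
    s = y / np.maximum(var, eps)
    return s * y, s
```

After scaling, mean(s*y) = s*mean and var(s*y) = s^2*var are equal by construction, so the scaled bins can be treated as approximately Poisson by an ML-EM-type algorithm.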

4.
The problem of reconstruction in positron emission tomography (PET) is basically estimating the number of photon pairs emitted from the source. Using the concept of the maximum-likelihood (ML) algorithm, the problem of reconstruction is reduced to determining an estimate of the emitter density that maximizes the probability of observing the actual detector count data over all possible emitter density distributions. A solution using this type of expectation maximization (EM) algorithm with a fixed grid size is severely handicapped by the slow convergence rate, the large computation time, and the nonuniform correction efficiency of each iteration, which makes the algorithm very sensitive to the image pattern. An efficient knowledge-based multigrid reconstruction algorithm based on the ML approach is presented to overcome these problems.

5.
This paper describes a statistical multiscale modeling and analysis framework for linear inverse problems involving Poisson data. The framework itself is founded upon a multiscale analysis associated with recursive partitioning of the underlying intensity, a corresponding multiscale factorization of the likelihood (induced by this analysis), and a choice of prior probability distribution made to match this factorization by modeling the "splits" in the underlying partition. The class of priors used here has the interesting feature that the "noninformative" member yields the traditional maximum-likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity. Adopting the expectation-maximization (EM) algorithm for use in computing the maximum a posteriori (MAP) estimate corresponding to our model, we find that our model permits remarkably simple, closed-form expressions for the EM update equations. The behavior of our EM algorithm is examined, and it is shown that convergence to the global MAP estimate can be guaranteed. Applications in emission computed tomography and astronomical energy spectral analysis demonstrate the potential of the new approach.

6.
We give a recursive algorithm to calculate submatrices of the Cramer-Rao (CR) matrix bound on the covariance of any unbiased estimator of a vector parameter θ. Our algorithm computes a sequence of lower bounds that converges monotonically to the CR bound with exponential speed of convergence. The recursive algorithm uses an invertible "splitting matrix" to successively approximate the inverse Fisher information matrix. We present a statistical approach to selecting the splitting matrix based on a "complete-data-incomplete-data" formulation similar to that of the well-known EM parameter estimation algorithm. As a concrete illustration we consider image reconstruction from projections for emission computed tomography.

7.
The maximum a posteriori (MAP) Bayesian iterative algorithm using priors that are gamma distributed, due to Lange, Bahn and Little, is extended to include parameter choices that fall outside the gamma distribution model. Special cases of the resulting iterative method include the expectation maximization maximum likelihood (EMML) method based on the Poisson model in emission tomography, as well as algorithms obtained by Parra and Barrett and by Huesman et al. that converge to maximum likelihood and maximum conditional likelihood estimates of radionuclide intensities for list-mode emission tomography. The approach taken here is optimization-theoretic and does not rely on the usual expectation maximization (EM) formalism. Block-iterative variants of the algorithms are presented. A self-contained, elementary proof of convergence of the algorithm is included.

8.
The maximum likelihood (ML) expectation maximization (EM) approach in emission tomography has been very popular in medical imaging for several years. In spite of this, no satisfactory convergent modifications have been proposed for the regularized approach. Here, a modification of the EM algorithm is presented. The new method is a natural extension of the EM for maximizing likelihood with concave priors. Convergence proofs are given.

9.
Multicast-based loss inference with missing data
Network tomography using multicast probes enables inference of loss characteristics of internal network links from reports of end-to-end loss seen at multicast receivers. We develop estimators for internal loss rates when reports are not available on all probes or from all receivers. This problem is motivated by the use of unreliable transport protocols to transmit loss reports to a collector for inference. We use a maximum-likelihood (ML) approach in which we apply the expectation maximization (EM) algorithm to provide an approximating solution to the ML estimator for the incomplete data problem. We present a concrete realization of the algorithm that can be applied to measured data. For classes of models, we establish identifiability of the probe and report loss parameters, and convergence of the EM sequence to the maximum-likelihood estimator (MLE). Numerical results suggest that these properties hold more generally. We derive convergence rates for the EM iterates, and the estimation error of the MLE. Finally, we evaluate the accuracy and convergence rate through extensive simulations.

10.
A maximum-likelihood (ML) expectation-maximization (EM) algorithm (called EM-IntraSPECT) is presented for simultaneously estimating single photon emission computed tomography (SPECT) emission and attenuation parameters from emission data alone. The algorithm uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. For this initial study, EM-IntraSPECT was tested on computer-simulated attenuation and emission maps representing a simplified human thorax as well as on SPECT data obtained from a physical phantom. Two evaluations were performed. First, to corroborate the idea of reconstructing attenuation parameters from emission data, attenuation parameters (mu) were estimated with the emission intensities (lambda) fixed at their true values. Accurate reconstructions of attenuation parameters were obtained. Second, emission parameters lambda and attenuation parameters mu were simultaneously estimated from the emission data alone. In this case there was crosstalk between estimates of lambda and mu, and final estimates of lambda and mu depended on initial values. Estimates degraded significantly as the support extended out farther from the body, and an explanation for this is proposed. In the EM-IntraSPECT reconstructed attenuation images, the lungs, spine, and soft tissue were readily distinguished and had approximately correct shapes and sizes. As compared with standard EM reconstruction assuming a fixed uniform attenuation map, EM-IntraSPECT provided more uniform estimates of cardiac activity in the physical phantom study and in the simulation study with tight support, but less uniform estimates with a broad support. The new EM algorithm derived here has additional applications, including reconstructing emission and transmission projection data under a unified statistical model.

11.
This paper reviews and compares three maximum likelihood algorithms for transmission tomography. One of these algorithms is the EM algorithm, one is based on a convexity argument devised by De Pierro (see IEEE Trans. Med. Imaging, vol.12, p.328-333, 1993) in the context of emission tomography, and one is an ad hoc gradient algorithm. The algorithms enjoy desirable local and global convergence properties and combine gracefully with Bayesian smoothing priors. Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm. This superiority stems from the larger number of exponentiations required by the EM algorithm. The convex and gradient algorithms are well adapted to parallel computing.

12.
An evaluation of maximum likelihood reconstruction for SPECT
A reconstruction method for SPECT (single photon emission computerized tomography) that uses the maximum likelihood (ML) criterion and an iterative expectation-maximization (EM) algorithm solution is examined. The method is based on a model that incorporates the physical effects of photon statistics, nonuniform photon attenuation, and a camera-dependent point-spread response function. Reconstructions from simulation experiments are presented which illustrate the ability of the ML algorithm to correct for attenuation and point-spread. Standard filtered backprojection method reconstructions, using experimental and simulated data, are included for reference. Three studies were designed to focus on the effects of noise and point-spread, on the effect of nonuniform attenuation, and on the combined effects of all three. The last study uses a chest phantom and simulates Tl-201 imaging of the myocardium. A quantitative analysis of the reconstructed images is used to support the conclusion that the ML algorithm produces reconstructions that exhibit improved signal-to-noise ratios, improved image resolution, and image quantifiability.

13.
Convergence of EM image reconstruction algorithms with Gibbs smoothing
P.J. Green has defined an OSL (one-step late) algorithm that retains the E-step of the EM algorithm (for image reconstruction in emission tomography) but provides an approximate solution to the M-step. Further modifications of the OSL algorithm guarantee convergence to the unique maximum of the log posterior function. Convergence is proved under a specific set of sufficient conditions. Several of these conditions concern the potential function of the Gibbs prior, and a number of candidate potential functions are identified. Generalization of the OSL algorithm to transmission tomography is also considered.
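The one-step-late idea can be sketched as follows: the gradient of the prior's potential is evaluated at the *current* estimate ("one step late") and added to the sensitivity term in the denominator of the EM update. This is a minimal sketch with illustrative names, assuming a generic penalized-likelihood setup; it is not Green's code, and with beta = 0 it reduces to plain ML-EM.

```python
import numpy as np

def osl_em(A, y, beta, grad_prior, n_iters=50, eps=1e-12):
    """One-step-late EM sketch for penalized ML with y ~ Poisson(A @ x).

    beta:       prior weight (beta = 0 gives unpenalized ML-EM).
    grad_prior: function x -> gradient of the prior potential U at x,
                evaluated at the current iterate ("one step late").
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, eps)     # data / forward projection
        denom = sens + beta * grad_prior(x)    # OSL-modified denominator
        x *= (A.T @ ratio) / np.maximum(denom, eps)
    return x
```

Note that a negative prior gradient can make the denominator nonpositive, which is exactly the kind of situation the abstract's convergence conditions on the potential function are meant to rule out.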

14.
For positron emission tomography (PET) imaging, different reconstruction methods can be applied, including maximum likelihood (ML) and maximum a posteriori (MAP) reconstruction. Postsmoothed ML images have approximately position and object independent spatial resolution, which is advantageous for (semi-) quantitative analysis. However, the complex object dependent smoothing obtained with MAP might yield improved noise characteristics, beneficial for lesion detection. In this contribution, MAP and postsmoothed ML are compared for hot spot detection by human observers and by the channelized Hotelling observer (CHO). The study design was based on the "multiple alternative forced choice" approach. For the MAP reconstruction, the relative difference prior was used. For postsmoothed ML, a Gaussian smoothing kernel was used. Both the human observers and the CHO performed slightly better on MAP images than on postsmoothed ML images. The average CHO performance was similar to the best human performance. The CHO was then applied to evaluate the performance of priors with reduced penalty for large differences. For these priors, a poorer detection performance was obtained.

15.
A major drawback of statistical iterative image reconstruction for emission computed tomography is its high computational cost. The ill-posed nature of tomography leads to slow convergence for standard gradient-based iterative approaches such as the steepest descent or the conjugate gradient algorithm. Here, new theory and methods for a class of preconditioners are developed for accelerating the convergence rate of iterative reconstruction. To demonstrate the potential of this class of preconditioners, a preconditioned conjugate gradient (PCG) iterative algorithm for weighted least squares reconstruction (WLS) was formulated for emission tomography. Using simulated positron emission tomography (PET) data of the Hoffman brain phantom, it was shown that the convergence rate of the PCG can reduce the number of iterations of the standard conjugate gradient algorithm by a factor of 2-8 times depending on the convergence criterion.
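As a concrete (toy) illustration of how preconditioning plugs into conjugate gradient for WLS, the sketch below solves the normal equations (A^T W A) x = A^T W y with a Jacobi (diagonal) preconditioner. The diagonal preconditioner is our simplification for illustration, not the class of preconditioners developed in the paper.

```python
import numpy as np

def pcg_wls(A, y, W, n_iters=50, tol=1e-10):
    """Diagonally preconditioned CG sketch for weighted least squares.

    Minimizes (y - A x)^T diag(W) (y - A x) by solving the normal
    equations H x = b with H = A^T diag(W) A and b = A^T diag(W) y.
    W is given as a vector of per-measurement weights.
    """
    H = A.T @ (W[:, None] * A)        # weighted Hessian A^T W A
    b = A.T @ (W * y)
    Minv = 1.0 / np.diag(H)           # Jacobi (diagonal) preconditioner
    x = np.zeros(A.shape[1])
    r = b - H @ x                     # residual
    z = Minv * r                      # preconditioned residual
    p = z.copy()                      # search direction
    for _ in range(n_iters):
        Hp = H @ p
        alpha = (r @ z) / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```

A good preconditioner clusters the eigenvalues of the preconditioned Hessian, which is what cuts the iteration count relative to unpreconditioned CG.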

16.
潘小飞, 刘爱军, 张邦宁, 方华. 《电视技术》 (Video Engineering), 2007, 31(4):12-14,18
This paper studies the convergence conditions of non-Bayesian iterative joint carrier synchronization and decoding. Taking the EM iterative estimation model as an example, it focuses on the estimation performance of several frequency-offset pre-estimation algorithms. Combined with a maximum-likelihood phase estimation algorithm, simulations yield the probability that the EM iterative estimator converges, and the paper analyzes how a failure to converge degrades the system's decoding performance and which pre-estimation scheme must be chosen to meet the final bit-error-rate requirement; several conclusions of practical reference value are obtained.

17.
In this work, the convergence rates of direction of arrival (DOA) estimates using the expectation-maximization (EM) and space alternating generalized EM (SAGE) algorithms are investigated. The EM algorithm is a well-known iterative method for locating modes of a likelihood function and is characterized by simple implementation and stability. Unfortunately, the slow convergence associated with EM makes it less attractive for practical applications. The SAGE algorithm proposed by Fessler and Hero (1994), based on the same idea of data augmentation, has the potential to speed up convergence and preserves the advantage of simple implementation. We study both algorithms within the framework of array processing. Theoretical analysis shows that SAGE has faster convergence speed than EM under certain conditions on observed and augmented information matrices. The analytical results are supported by numerical simulations carried out over a wide range of signal-to-noise ratios (SNRs) and various source locations.

18.
We provide a general form for many reconstruction estimators of emission tomography. These estimators include Shepp and Vardi's maximum likelihood (ML) estimator, the quadratic weighted least squares (WLS) estimator, Anderson's WLS estimator, Liu and Wang's multi-objective estimator, and others. We derive a generic update rule by constructing a surrogate function. This work is inspired by the ML-EM (EM, expectation maximization), where the latter naturally arises as a special case. A regularization with a specific form can also be incorporated by De Pierro's trick. We provide a general and quite different convergence proof compared with the proofs of the ML-EM and De Pierro. Theoretical analysis shows that the proposed algorithm monotonically decreases the cost function and automatically meets nonnegativity constraints. We have introduced a mechanism to provide monotonic, self-constraining, and convergent algorithms, from which some interesting existing and new algorithms can be derived. Simulation results illustrate the behavior of these algorithms in terms of image quality and resolution-noise tradeoff.

19.
The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 3,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.

20.
Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al. (1997), this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for the detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) detector. The authors compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta.
