Similar documents
20 similar documents found (search time: 9 ms)
1.
An evaluation of maximum likelihood reconstruction for SPECT
A reconstruction method for SPECT (single photon emission computerized tomography) that uses the maximum likelihood (ML) criterion and an iterative expectation-maximization (EM) algorithm solution is examined. The method is based on a model that incorporates the physical effects of photon statistics, nonuniform photon attenuation, and a camera-dependent point-spread response function. Reconstructions from simulation experiments are presented which illustrate the ability of the ML algorithm to correct for attenuation and point-spread. Standard filtered backprojection method reconstructions, using experimental and simulated data, are included for reference. Three studies were designed to focus on the effects of noise and point-spread, on the effect of nonuniform attenuation, and on the combined effects of all three. The last study uses a chest phantom and simulates Tl-201 imaging of the myocardium. A quantitative analysis of the reconstructed images is used to support the conclusion that the ML algorithm produces reconstructions that exhibit improved signal-to-noise ratios, improved image resolution, and image quantifiability.
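A minimal numeric sketch of the multiplicative ML-EM update described above, on a toy 2-pixel, 3-detector system (the system matrix `A` and the data here are invented for illustration; a real SPECT model would also encode attenuation and the point-spread response in `A`):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Minimal ML-EM iteration for emission tomography.

    A : (n_detectors, n_pixels) nonnegative system matrix
    y : (n_detectors,) measured counts
    Returns the ML estimate of pixel activities.
    """
    x = np.ones(A.shape[1])           # flat initial image
    sens = A.sum(axis=0)              # sensitivity of each pixel
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens     # multiplicative EM update
    return x

# Toy example: two pixels seen by three detectors, noiseless data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
x_hat = mlem(A, A @ x_true)
```

With consistent noiseless data the iteration converges to the true activities; with Poisson noise it converges to the ML estimate instead.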

2.
Algorithms that calculate maximum likelihood (ML) and maximum a posteriori solutions using expectation-maximization have been successfully applied to SPECT and PET. These algorithms are appealing because of their solid theoretical basis and their guaranteed convergence. A major drawback is their slow convergence, which results in long processing times. The authors present two new heuristic acceleration methods for maximum likelihood reconstruction of ECT images. The first method incorporates a frequency-dependent amplification in the calculations, to compensate for the low-pass filtering of the backprojection operation. In the second method, an amplification factor is incorporated that suppresses the effect of attenuation on the updating factors. Both methods are compared with the one-dimensional line search method proposed by Lewitt. All three methods accelerate the ML algorithm. On the authors' test images, Lewitt's method produced the strongest acceleration of the three individual methods. However, combining the frequency amplification with the line search method yields a new algorithm with still better performance. Under certain conditions, an effective frequency amplification can already be achieved by skipping some of the calculations required for ML.

3.
A predictive compression technique is examined, using maximum likelihood prediction of the image pixel based on the Markov mesh model, that encodes the differences via Gordon block-bit-plane (GBBP) encoding. The procedure is very efficient in that it requires a bit rate near the entropy of the source. For images with many quantization levels, maximum likelihood prediction can be cumbersome to implement. Thus, a suboptimal procedure called differential bit-plane coding (DBPC) is investigated. This is easily implemented, even for a large number of quantization levels, and is reasonably efficient.
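The core idea of predictive coding is to transmit residuals rather than pixels, then reconstruct losslessly at the decoder. The sketch below uses a crude previous-pixel predictor as a stand-in for the Markov-mesh maximum likelihood predictor of the abstract (the GBBP entropy coder itself is not reproduced):

```python
import numpy as np

def predict_diff(img):
    """Previous-pixel (horizontal) prediction: each pixel is predicted
    by its left neighbour and only the residual is kept.  Residuals of
    smooth images have far lower entropy than the raw pixels."""
    img = np.asarray(img, dtype=np.int16)
    diff = img.copy()
    diff[:, 1:] = img[:, 1:] - img[:, :-1]   # residual vs left neighbour
    return diff

def reconstruct(diff):
    """Exact (lossless) inverse of the prediction: a running sum."""
    return np.cumsum(diff, axis=1, dtype=np.int16)

img = np.array([[10, 12, 13],
                [40, 39, 41]])
residuals = predict_diff(img)
restored = reconstruct(residuals)
```

The round trip is exact, which is the property any lossless predictive scheme, including the ML/Markov-mesh predictor, must preserve.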

4.
The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 3,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm(2) gray matter ROIs showed negative biases of 6%+/-2% which can be reduced to 0%+/-3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15%+/-4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
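The figures of merit quoted above (e.g. a bias of -6% +/- 2%) are the mean and spread of ROI means across noisy realizations, expressed relative to the true value. A small sketch of that computation, on invented toy data:

```python
import numpy as np

def roi_bias_variability(recons, true_val, roi_mask):
    """Percent bias and variability of an ROI mean over a set of noisy
    reconstructions.  `recons` is a list of reconstructed images,
    `roi_mask` a boolean mask selecting the ROI pixels."""
    means = np.array([r[roi_mask].mean() for r in recons])
    bias = 100.0 * (means.mean() - true_val) / true_val
    variability = 100.0 * means.std(ddof=1) / true_val
    return bias, variability

# Two toy "noise realizations" of a 2x2 image whose true value is 10.
recons = [np.full((2, 2), 9.0), np.full((2, 2), 11.0)]
mask = np.ones((2, 2), dtype=bool)
bias, var = roi_bias_variability(recons, 10.0, mask)
```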

5.
In order to interpret ultrasound images, it is important to understand their formation and the properties that affect them, especially speckle noise. This image texture, or speckle, is a correlated and multiplicative noise that inherently occurs in all types of coherent imaging systems. Its statistics depend on the density and type of scatterers in the tissues. This paper presents a new method for echocardiographic image segmentation in a variational level set framework. A partial differential equation-based flow is designed locally in order to achieve a maximum likelihood segmentation of the region of interest. A Rayleigh probability distribution is used to model the local B-mode ultrasound image intensities. To better cope with speckle noise and local intensity changes, the proposed local region term is combined with a local phase-based geodesic active contours term. Comparison results on natural and simulated images show that the proposed model is robust to attenuation and captures low-contrast boundaries well.
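For the Rayleigh intensity model used above, the ML estimate of the scale parameter has a well-known closed form, sigma_hat = sqrt(mean(x^2)/2), which is what a local-region likelihood term would evaluate inside and outside the contour. A minimal sketch on synthetic speckle samples (the scale value 3.0 is an arbitrary example):

```python
import numpy as np

def rayleigh_mle(samples):
    """Closed-form ML estimate of the Rayleigh scale parameter sigma
    from intensity samples: sigma_hat = sqrt(mean(x^2) / 2)."""
    samples = np.asarray(samples, dtype=float)
    return np.sqrt(np.mean(samples ** 2) / 2.0)

rng = np.random.default_rng(1)
speckle = rng.rayleigh(scale=3.0, size=10000)  # synthetic speckle
sigma_hat = rayleigh_mle(speckle)
```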

6.
The authors describe the implementation of a maximum likelihood (ML) algorithm using expectation maximization (EM) for pinhole SPECT with a displaced center-of-rotation. A ray-tracing technique is used in implementing the ML-EM algorithm. The proposed ML-EM algorithm is able to correct the center of rotation displacement which can be characterized by two orthogonal components. The algorithm is tested using experimentally acquired data, and the results demonstrate that the pinhole ML-EM algorithm is able to correct artifacts associated with the center-of-rotation displacement.

7.
Multiple target tracking using maximum likelihood principle
Proposes a tracking algorithm (TAL) based on the maximum likelihood (ML) principle for multiple target tracking in the near field using outputs from a large uniform linear array of passive sensors. The targets are assumed to be narrowband signals and modeled as sample functions of a Gaussian stochastic process. The phase delays of these signals are expressed as functions of both range and bearing angle ("track parameters") of the respective targets. A new simplified likelihood function for ML estimation of these parameters is derived from a second-order approximation on the inverse of the data covariance matrix. Maximization of this likelihood function does not involve inversion of the M×M data covariance matrix, where M denotes the number of sensors in the array. Instead, inversion of only a D×D matrix is required, where D denotes the number of targets. In practice, D≪M and, hence, TAL is computationally efficient. Tracking is achieved by estimating track parameters at regular time intervals wherein targets move to new positions in the neighborhood of their previous positions. TAL preserves the ordering of track parameter estimates of the D targets over different time intervals. Performance results of TAL are presented, and it is compared with methods by Sword and by Swindlehurst and Kailath (1988). Almost exact asymptotic expressions for the Cramer-Rao bound (CRB) on the variance of angle and range estimates are derived, and their utility is discussed.

8.
This work estimates component reliability from masked series-system life data, viz., data where the exact component causing system failure might be unknown. The authors extend the results of Usher and Hodgson (1988) by deriving exact maximum likelihood estimators (MLE) for the general case of a series system of three exponential components with independent masking. That earlier work shows that closed-form MLE are intractable, and the authors propose an iterative method for solving the system of three nonlinear likelihood equations.
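For orientation, the unmasked special case does have a closed form: when every failure cause is observed, the MLE of each exponential component's rate is its failure count divided by the total time on test. The masked case of the paper requires the iterative scheme it describes. A sketch of the closed-form baseline (the data values are invented):

```python
import numpy as np

def exp_series_mle(times, causes, n_components=3):
    """Closed-form MLE for component failure rates of a series system
    of independent exponential components, unmasked case: every
    system failure time comes with the index of the failed component.
    lambda_j = (failures caused by j) / (total time on test)."""
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    total_time = times.sum()                       # total time on test
    counts = np.array([(causes == j).sum() for j in range(n_components)])
    return counts / total_time

# Four observed system failures; component 0 caused two of them.
rates = exp_series_mle([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 2])
```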

9.
We examined the application of an iterative penalized maximum likelihood (PML) reconstruction method for improved detectability of microcalcifications (MCs) in digital breast tomosynthesis (DBT). Localized receiver operating characteristic (LROC) psychophysical studies with human observers and 2-D image slices were conducted to evaluate the performance of this reconstruction method and to compare its performance against the commonly used Feldkamp FBP algorithm. DBT projections were generated using rigorous computer simulations that included accurate modeling of the noise and detector blur. Acquisition dose levels of 0.7, 1.0, and 1.5 mGy in a 5-cm-thick compressed breast were tested. The defined task was to localize and detect MC clusters consisting of seven MCs. The individual MC diameter was 150 μm. Compressed-breast phantoms derived from CT images of actual mastectomy specimens provided realistic background structures for the detection task. Four observers each read 98 test images for each combination of reconstruction method and acquisition dose. All observers performed better with the PML images than with the FBP images. With the acquisition dose of 0.7 mGy, the average areas under the LROC curve (A(L)) for the PML and FBP algorithms were 0.69 and 0.43, respectively. For the 1.0-mGy dose, the values of A(L) were 0.93 (PML) and 0.7 (FBP), while the 1.5-mGy dose resulted in areas of 1.0 and 0.9, respectively, for the PML and FBP algorithms. A 2-D analysis of variance applied to the individual observer areas showed statistically significant differences (at a significance level of 0.05) between the reconstruction strategies at all three dose levels. Observer performance did not differ significantly at any of the dose levels.

10.
A technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.

11.
The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions-of-interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.

12.
In this paper, we present techniques for deriving inversion algorithms in 3-D computer tomography. To this end, we introduce the mathematical model and apply a general strategy, the so-called approximate inverse, for deriving both exact and numerical inversion formulas. Using further approximations, we derive a 2-D shift-invariant filter for circular-orbit cone-beam imaging. Results from real data are presented.

13.
Reconstruction of SPECT images using generalized matrix inverses
Generalized matrix inverses are used to estimate source activity distributions from single photon emission computed tomography (SPECT) projection measurements. Image reconstructions for a numerical simulation and a clinical brain study are examined. The photon flux from the source region and photon detection by the gamma camera are modeled by matrices which are computed by Monte Carlo methods. The singular value decompositions (SVDs) of these matrices give considerable insight into the SPECT image reconstruction problem and the SVDs are used to form approximate generalized matrix inverses. Tradeoffs between resolution and error in estimating source voxel intensities are discussed, and estimates of these errors provide a robust means of stabilizing the solution to the ill-posed inverse problem. In addition to its quantitative clinical applications, the generalized matrix inverse method may be a useful research tool for tasks such as evaluating collimator design and optimizing gamma camera motion.
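The approximate generalized inverse amounts to inverting only the largest singular values of the system matrix and discarding the small ones, which amplify noise in the ill-posed problem. A sketch with an arbitrary toy matrix standing in for the Monte Carlo-computed SPECT system matrix:

```python
import numpy as np

def svd_reconstruct(H, y, k):
    """Truncated-SVD generalized-inverse reconstruction: keep only the
    k largest singular values of the system matrix H when inverting,
    discarding the small ones that amplify noise."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]          # truncated pseudo-inverse spectrum
    return Vt.T @ (inv_s * (U.T @ y))

H = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x_true = np.array([1.0, 3.0])
x_hat = svd_reconstruct(H, H @ x_true, k=2)   # full rank: exact recovery
```

In practice `k` is chosen from the resolution/noise tradeoff the abstract discusses: larger `k` sharpens the reconstruction but passes more noise.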

14.
Fan-beam collimators are designed to improve system sensitivity and resolution when imaging small objects, such as the human brain and breast, in single photon emission computed tomography (SPECT). Many reconstruction algorithms have been studied and applied to this geometry to deal with the various degradation factors. This paper presents a new reconstruction approach for SPECT with a circular orbit that demonstrates good performance in terms of both accuracy and efficiency. The new approach compensates for degradation factors including noise, scatter, attenuation, and spatially variant detector response. Its uniform-attenuation approximation avoids an additional transmission scan for the brain attenuation map, thereby reducing the patient radiation dose and simplifying the imaging procedure. We evaluate and compare this new approach with the well-established ordered-subset expectation-maximization iterative method, using Monte Carlo simulations. We perform quantitative analysis with regional bias-variance, receiver operating characteristic, and Hotelling trace studies for both methods. The results demonstrate that our reconstruction strategy has comparable performance with a significant reduction in computing time.

15.
With the rapid growth in data-acquisition speed of flat-panel X-ray detectors, researchers have begun using C-arm systems to acquire projection data and reconstruct tomographic images for surgical navigation or radiotherapy. However, reconstruction on an ordinary PC is slow and can hardly keep pace with the hardware acquisition speed, limiting its use in real-time clinical settings. This paper proposes an improved FDK algorithm based on the CUDA (Compute Unified Device Architecture) framework, which exploits the parallel computing power of the GPU (Graphics Processing Unit) to achieve real-time CT reconstruction and uses B-spline interpolation to improve the quality of the reconstructed images, making it well suited to real-time clinical environments.

16.
With the introduction of cone beam (CB) scanners, cardiac volumetric computed tomography (CT) imaging has the potential to become a noninvasive imaging tool in clinical routine for the diagnosis of various heart diseases. Heart rate adaptive reconstruction schemes enable the reconstruction of high-resolution volumetric data sets of the heart. Artifacts, caused by strong heart rate variations, high heart rates and obesity, decrease the image quality and the diagnostic value of the images. The image quality suffers from streak artifacts if suboptimal scan and reconstruction parameters are chosen, demanding improved gating techniques. In this paper, an artifact analysis is carried out which addresses the artifacts due to the gating when using a three-dimensional CB cardiac reconstruction technique. An automatic and patient specific cardiac weighting technique is presented in order to improve the image quality. Based on the properties of the reconstruction algorithm, several assessment techniques are introduced which enable the quantitative determination of the cycle-to-cycle transition smoothness and phase homogeneity of the image reconstruction. Projection data of four patients were acquired using a 16-slice CBCT system in low pitch helical mode with parallel electrocardiogram recording. For each patient, image results are presented and discussed in combination with the assessment criteria.

17.
Quantitative accuracy of single photon emission computed tomography (SPECT) images is highly dependent on the photon scatter model used for image reconstruction. Monte Carlo simulation (MCS) is the most general method for detailed modeling of scatter, but to date, fully three-dimensional (3-D) MCS-based statistical SPECT reconstruction approaches have not been realized, due to prohibitively long computation times and excessive computer memory requirements. MCS-based reconstruction has previously been restricted to two-dimensional approaches that are vastly inferior to fully 3-D reconstruction. Instead of MCS, scatter calculations based on simplified but less accurate models are sometimes incorporated in fully 3-D SPECT reconstruction algorithms. We developed a computationally efficient fully 3-D MCS-based reconstruction architecture by combining the following methods: 1) a dual matrix ordered subset (DM-OS) reconstruction algorithm to accelerate the reconstruction and avoid massive transition matrix precalculation and storage; 2) a stochastic photon transport calculation in MCS is combined with an analytic detector modeling step to reduce noise in the Monte Carlo (MC)-based reprojection after only a small number of photon histories have been tracked; and 3) the number of photon histories simulated is reduced by an order of magnitude in early iterations, or photon histories calculated in an early iteration are reused. For a 64 x 64 x 64 image array, the reconstruction time required for ten DM-OS iterations is approximately 30 min on a dual processor (AMD 1.4 GHz) PC, in which case the stochastic nature of MCS modeling is found to have a negligible effect on noise in reconstructions. Since MCS can calculate photon transport for any clinically used photon energy and patient attenuation distribution, the proposed methodology is expected to be useful for obtaining highly accurate quantitative SPECT images within clinically acceptable computation times.

18.
Multiple-input multiple-output (MIMO) wireless systems increase spectral efficiency by transmitting independent signals on multiple transmit antennas in the same channel bandwidth. The key to using MIMO is in building a receiver that can decorrelate the spatial signatures on the receiver antenna array. Original MIMO detection schemes such as the vertical Bell Labs layered space-time (VBLAST) detector use a nulling and cancellation process for detection that is sub-optimal compared to constrained maximum likelihood (ML) techniques. This paper presents a silicon complexity analysis of ML search techniques for MIMO as applied to the HSDPA extension of UMTS. For MIMO constellations of 4×4 QPSK or lower, it is possible to perform an exhaustive ML search in today's silicon technologies. When the search complexity exceeds technology limits for high-complexity MIMO constellations, it is possible to apply sphere decoding techniques to achieve near-ML performance. The paper presents an architecture for a 4×4 16QAM MIMO sphere decoder with soft outputs that achieves 38.8 Mb/s over a 5-MHz channel using only approximately 10 mm² in a 0.18-μm CMOS process.
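The exhaustive ML search mentioned above simply tries every candidate transmit vector and keeps the one minimising the Euclidean distance to the received vector; for 4x4 QPSK that is 4^4 = 256 hypotheses, which is why it fits in silicon while larger constellations need sphere decoding. A sketch on an invented 2x2 real channel with QPSK symbols:

```python
import itertools
import numpy as np

def ml_detect(H, y, constellation):
    """Exhaustive ML MIMO detection: enumerate every transmit vector s
    and return the one minimising ||y - H s||^2.  Feasible only for
    small constellations; sphere decoding prunes this search."""
    n_tx = H.shape[1]
    best, best_metric = None, np.inf
    for cand in itertools.product(constellation, repeat=n_tx):
        s = np.array(cand)
        metric = np.linalg.norm(y - H @ s) ** 2
        if metric < best_metric:
            best, best_metric = s, metric
    return best

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
H = np.array([[1.0, 0.2],
              [0.1, 0.9]])            # toy 2x2 channel matrix (assumed)
s_true = np.array([1 + 1j, -1 - 1j])
y = H @ s_true                        # noiseless received vector
s_hat = ml_detect(H, y, qpsk)
```

With noise added to `y`, the same search returns the ML decision rather than the exact transmitted vector.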

19.
This paper introduces and evaluates a block-iterative Fisher scoring (BFS) algorithm. The algorithm provides regularized estimation in tomographic models of projection data with Poisson variability. Regularization is achieved by penalized likelihood with a general quadratic penalty. Local convergence of the block-iterative algorithm is proven under conditions that do not require iteration dependent relaxation. We show that, when the algorithm converges, it converges to the unconstrained maximum penalized likelihood (MPL) solution. Simulation studies demonstrate that, with suitable choice of relaxation parameter and restriction of the algorithm to respect nonnegative constraints, the BFS algorithm provides convergence to the constrained MPL solution. Constrained BFS often attains a maximum penalized likelihood faster than other block-iterative algorithms which are designed for nonnegatively constrained penalized reconstruction.

20.
An alternative degradation reliability modeling approach is presented in this paper. This approach extends the graphical approach used by several authors by considering the natural ordering of performance degradation data using a truncated Weibull distribution. Maximum likelihood estimation is used to provide a one-step method to estimate the model's parameters. A closed form expression of the likelihood function is derived for a two-parameter truncated Weibull distribution with time-independent shape parameter. A semi-numerical method is presented for the truncated Weibull distribution with a time-dependent shape parameter. Numerical studies of generated data suggest that the proposed approach provides reasonable estimates even for small sample sizes. The analysis of fatigue data shows that the proposed approach yields a good match of the crack length mean value curve obtained using the path curve approach and better results than those obtained using the graphical approach.
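As a baseline for the truncated case treated in the paper, the ordinary two-parameter Weibull MLE is itself semi-numerical: the shape k solves a one-dimensional profile-likelihood equation, after which the scale has a closed form. A sketch solving that equation by bisection (untruncated special case only; the synthetic data and tolerances are illustrative):

```python
import numpy as np

def weibull_mle(t, k_lo=0.01, k_hi=50.0, tol=1e-8):
    """MLE for a two-parameter Weibull (untruncated case).  The shape
    k solves  sum(t^k ln t)/sum(t^k) - 1/k - mean(ln t) = 0, which is
    monotone increasing in k, so bisection suffices; the scale then
    follows in closed form as (mean(t^k))^(1/k)."""
    t = np.asarray(t, dtype=float)
    log_t = np.log(t)

    def g(k):
        tk = t ** k
        return np.sum(tk * log_t) / np.sum(tk) - 1.0 / k - log_t.mean()

    while k_hi - k_lo > tol:           # bisection on the profile equation
        mid = 0.5 * (k_lo + k_hi)
        if g(mid) > 0:
            k_hi = mid
        else:
            k_lo = mid
    k = 0.5 * (k_lo + k_hi)
    lam = np.mean(t ** k) ** (1.0 / k)  # closed-form scale MLE
    return k, lam

rng = np.random.default_rng(0)
sample = rng.weibull(2.0, size=5000)    # true shape 2, true scale 1
k_hat, lam_hat = weibull_mle(sample)
```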
