Similar Documents
20 similar documents retrieved.
1.
A collimator consisting of a series of highly attenuating parallel slats has been constructed and used in conjunction with a gamma camera to approximately measure planar projections of a given radionuclide distribution. The enlarged solid angle of acceptance afforded by the slat collimator gave rise to a geometric efficiency between 12 and 28 times that observed with a low-energy high-resolution (LEHR) parallel-hole collimator. When the slats rotated over the face of the detector and the camera gantry turned about the object, sufficient projections were acquired to reconstruct a three-dimensional (3-D) image by inversion of the 3-D Radon transform. The noise behavior of an algorithm implementing this inversion was studied analytically, and the resulting relationship was verified by computer simulation. The substantially improved geometric efficiency of the slat collimator translated into improvements in reconstructed signal-to-noise ratio (SNR) of, at best, up to a factor of 2.0 with respect to standard parallel-hole collimation. The spatial resolution achieved with the slat collimator was comparable to that obtained with a LEHR collimator, and no significant differences were observed in terms of scatter response. Accurate image quantification was hindered by the spatially variant response of the slat collimator.
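As a rough, hypothetical illustration of the plane-integral (3-D Radon) data such a slat collimator acquires, the numpy sketch below rotates a toy activity volume into alignment with a slat orientation and sums over the two in-plane axes; the axis conventions and angles are assumptions for illustration, not the authors' acquisition model.

```python
import numpy as np
from scipy.ndimage import rotate

def plane_integrals(volume, theta_deg, phi_deg):
    """Plane integrals for planes normal to a direction set by two
    rotation angles (illustrative axis convention)."""
    v = rotate(volume, theta_deg, axes=(0, 1), reshape=False, order=1)
    v = rotate(v, phi_deg, axes=(0, 2), reshape=False, order=1)
    return v.sum(axis=(1, 2))   # one plane integral per slice index

vol = np.zeros((64, 64, 64))
vol[24:40, 24:40, 24:40] = 1.0  # toy radionuclide distribution
p = plane_integrals(vol, theta_deg=30.0, phi_deg=15.0)
```

Inverting the 3-D Radon transform from such profiles, collected over a sufficient set of orientations, yields the reconstruction described above.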

2.
A restoration scheme for single photon emission computed tomography (SPECT) images is presented that performs restoration before reconstruction (preconstruction restoration) from planar (projection) images. A comparison is performed between the results obtained in this study and those obtained by a previously reported method in which restoration is performed after reconstruction (postreconstruction restoration). The filters investigated are the Wiener and power spectrum equalization filters. These filters are applied to SPECT images of a hollow cylinder phantom and a cardiac phantom acquired on a Siemens Rota camera. Quantitative analyses of the results are performed through measurements of contrast ratios and root mean squared errors. The preconstruction restored images show a significant decrease in root mean squared error and an increase in contrast over the postreconstruction restored images.
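For orientation, a minimal frequency-domain sketch of a Wiener restoration step applied to a planar projection is shown below; the PSF and the constant noise-to-signal ratio are placeholders, not the paper's measured system model, and the power-spectrum-equalization variant differs only in the filter magnitude.

```python
import numpy as np

def wiener_restore(projection, psf, nsr=0.01):
    """Wiener restoration of a planar (projection) image.
    nsr is an assumed constant noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=projection.shape)    # system transfer function
    G = np.fft.fft2(projection)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

Preconstruction restoration applies such a filter to every projection before tomographic reconstruction, rather than to the reconstructed slices.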

3.
An analysis is presented of the information transfer from emitter space to detector space in single photon emission computed tomography (SPECT) systems. The analysis takes into account the fact that count-loss side information is generally not available at the detector; this side information corresponds to the number of gamma-rays deleted (lost) due to lack of interaction with the detector. It is shown that the information transfer depends on the structure of the likelihood function of the emitter locations associated with the detector data. This likelihood function is the average of a set of ideal-detection likelihood functions, each matched to a particular set of possible deleted gamma-ray paths. A lower bound is derived for the information gain due to incorporating the count-loss side information at the detector. This gain is shown to be significant when the mean emission rate is small or when the gamma-ray deletion probability is strongly dependent on emitter location. Numerical evaluations of the mutual information, with and without side information, are presented for information-optimal apertures and for uniform parallel-hole collimators.

4.
In this work, a computer-based algorithm is proposed for the initial interpretation of human cardiac images. Reconstructed single photon emission computed tomography (SPECT) images are used to differentiate between subjects with normal and abnormal ejection-fraction values. The method analyses pixel intensities that correspond to blood flow in the left ventricular region. The algorithm proceeds through three main stages: the first stage performs pre-processing to reduce noise and blur in the image; the second stage extracts features from the images; classification is done in the final stage. The pre-processing stage consists of a de-noising part and a de-blurring part. Novel features are used for classification, extracted as three different sets based on the pixel intensity distribution in different regions, the spatial relationship of pixels, and multi-scale image information. Two supervised algorithms are proposed for classification: one based on a threshold value computed from the features extracted from the training images, and the other based on a sequential minimal optimization (SMO) support vector machine approach. Experimental studies were performed on real cardiac SPECT images obtained from a hospital. The classification results were verified by an expert nuclear medicine physician and against the ejection fraction value obtained from quantitative gated SPECT, the most widely used software package for quantifying gated SPECT images.
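As a hedged sketch of the second classifier variant (an SMO-trained support vector machine), the snippet below uses scikit-learn's SVC, whose libsvm backend is an SMO-type solver; the feature arrays are random stand-ins for the paper's intensity, spatial, and multi-scale feature sets.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((40, 12))            # 40 studies x 12 assumed features
y_train = np.array([0, 1] * 20)           # 0 = normal EF, 1 = abnormal EF

clf = SVC(kernel="rbf", C=1.0)            # SMO-style solver under the hood
clf.fit(X_train, y_train)
pred = clf.predict(rng.random((5, 12)))   # labels for unseen studies
```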

5.
A filtered backprojection reconstruction algorithm was developed for cardiac single photon emission computed tomography with cone-beam geometry. The algorithm reconstructs cone-beam projections collected from 'short scan' acquisitions of a detector traversing a noncircular planar orbit. Since the algorithm does not correct for photon attenuation, it is designed to reconstruct data collected over an angular range of slightly more than 180 degrees, with the range of angles oriented so as not to acquire the highly attenuated posterior projections of emissions from cardiac radiopharmaceuticals. This sampling scheme minimizes the attenuation artifacts that result from reconstructing posterior projections. From computer simulations, it is found that reconstruction of attenuated projections has a greater effect on quantitation and image quality than any potential cone-beam reconstruction artifacts resulting from insufficient sampling of cone-beam projections. With nonattenuated projection data, cone-beam reconstruction errors in the heart are shown to be small (at most 2%).
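For context, the "slightly more than 180 degrees" follows from the standard short-scan condition for divergent-beam geometries (a generic result, not restated in the abstract):

$$\beta_{\text{total}} \;\ge\; 180^\circ + 2\gamma_{\max},$$

where $\gamma_{\max}$ is the half fan angle; this retains complete sampling of each transaxial slice while the posterior, heavily attenuated views are avoided.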

6.
The maximum-likelihood (ML) approach in emission tomography provides images with superior noise characteristics compared to conventional filtered backprojection (FBP) algorithms. The expectation-maximization (EM) algorithm is an iterative algorithm for maximizing the Poisson likelihood in emission computed tomography that became very popular for solving the ML problem because of its attractive theoretical and practical properties. Recently, block-sequential versions of the EM algorithm that take advantage of the scanner's geometry have been proposed in order to accelerate its convergence (Browne and DePierro, 1996; Hudson and Larkin, 1994). In Hudson and Larkin (1994), the ordered-subsets EM (OS-EM) method was applied to the ML problem, together with a modification (OS-GP) for the maximum a posteriori (MAP) regularized approach, without a proof of convergence. In Browne and DePierro (1996), we presented a relaxed version of OS-EM (RAMLA) that converges to an ML solution. In this paper, we present an extension of RAMLA for MAP reconstruction. We show that, if the sequence generated by this method converges, then it must converge to the true MAP solution. Experimental evidence of this convergence is also shown. To illustrate this behavior, we apply the algorithm to simulated positron emission tomography data, comparing its performance to OS-GP.
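To make the ordered-subsets idea concrete, here is an illustrative OS-EM iteration (a sketch of the general method named above, not the authors' RAMLA code); the small dense system matrix stands in for a real scanner projector.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=10, eps=1e-12):
    """Ordered-subsets EM for Poisson data y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for S in subsets:                    # one sub-iteration per subset
            As, ys = A[S], y[S]
            x *= (As.T @ (ys / (As @ x + eps))) / (As.T @ np.ones(len(S)) + eps)
    return x

A = np.random.rand(64, 16)                   # toy: 64 detector bins x 16 pixels
y = np.random.poisson(A @ np.random.rand(16)).astype(float)
subsets = np.array_split(np.arange(64), 4)   # 4 ordered subsets
x_hat = os_em(A, y, subsets)
```

RAMLA differs in applying a relaxed, decaying step size in each sub-iteration, which is what restores convergence to an ML (and, in this paper's extension, MAP) solution.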

7.
The expectation maximization method for maximum likelihood image reconstruction in emission tomography, based on the Poisson distribution of the statistically independent components of the image and measurement vectors, is extended to maximum a posteriori image reconstruction using a multivariate Gaussian a priori probability distribution of the image vector. The approach is equivalent to penalized maximum likelihood estimation with a special choice of the penalty function. The expectation maximization method is applied to find the a posteriori probability maximizer. A simple iterative formula is derived for a penalty function that is a weighted sum of the squared deviations of image-vector components from their a priori mean values. The method is demonstrated to be superior to pure likelihood maximization, in that the penalty function prevents the occurrence of irregular high-amplitude patterns in the image as the number of iterations grows (the so-called "checkerboard effect" or "noise artifact").
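In rough symbols (ours, not the paper's notation), the estimator maximizes a Poisson log-likelihood minus the quadratic penalty described above:

$$\hat{\lambda} \;=\; \arg\max_{\lambda \geq 0} \Big[ \sum_i \big( y_i \log \bar{y}_i(\lambda) - \bar{y}_i(\lambda) \big) \;-\; \tfrac{1}{2} \sum_j w_j \, (\lambda_j - \mu_j)^2 \Big],$$

where $\bar{y}(\lambda)$ is the expected measurement, and $\mu_j$, $w_j$ are the a priori means and weights induced by the Gaussian prior.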

8.
The discrete filtered backprojection (DFBP) algorithm used for the reconstruction of single photon emission computed tomography (SPECT) images affects image quality because of the operations of filtering and discretization. The discretization of the filtered backprojection process can cause the modulation transfer function (MTF) of the SPECT imaging system to be anisotropic and nonstationary, especially near the edges of the camera's field of view. The use of shift-invariant restoration techniques fails to restore large images because these techniques do not account for such variations in the MTF. This study presents the application of a two-dimensional (2D) shift-variant Kalman filter for post-reconstruction restoration of SPECT slices. This filter was applied to SPECT images of a hollow cylinder phantom; a resolution phantom; and a large, truncated cone phantom containing two types of cold spots, a sphere, and a triangular prism. The images were acquired on an ADAC GENESYS camera. A comparison was performed between results obtained by the Kalman filter and those obtained by shift-invariant filters. Quantitative analysis of the restored images performed through measurement of root mean squared errors shows a considerable reduction in error of Kalman-filtered images over images restored using shift-invariant methods.
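For readers unfamiliar with the method, the standard Kalman measurement update has the form (generic textbook notation, not the paper's 2D shift-variant formulation):

$$K = P H^{\mathsf{T}} \big( H P H^{\mathsf{T}} + R \big)^{-1}, \qquad \hat{x} = \hat{x}^{-} + K \big( z - H \hat{x}^{-} \big),$$

where $H$ models the blur and $R$ the noise; a shift-variant filter lets these quantities, and hence the gain $K$, change with position, consistent with a nonstationary MTF.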

9.
A new class of fast maximum-likelihood estimation (MLE) algorithms for emission computed tomography (ECT) is developed. In these cyclic iterative algorithms, vector extrapolation techniques are integrated with the iterations in gradient-based MLE algorithms, with the objective of accelerating the convergence of the base iterations. This results in a substantial reduction in the effective number of base iterations required for obtaining an emission density estimate of specified quality. The mathematical theory behind the minimal polynomial and reduced rank vector extrapolation techniques, in the context of emission tomography, is presented. These extrapolation techniques are implemented in a positron emission tomography system. The new algorithms are evaluated using computer experiments, with measurements taken from simulated phantoms. It is shown that, with minimal additional computations, the proposed approach results in substantial improvement in reconstruction.
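A hedged sketch of one member of the vector-extrapolation family (a generic reduced-rank-style step, not necessarily the exact variant implemented in the paper) is given below; in the cyclic scheme, a few base MLE iterations are run, an extrapolated image is formed, and the base algorithm is restarted from it.

```python
import numpy as np

def extrapolate(iterates):
    """Extrapolate a fixed point from iterates x_0..x_k (rows) by
    minimizing the combined first difference under unit-sum weights."""
    X = np.asarray(iterates)                  # (k+1, n)
    U = np.diff(X, axis=0)                    # first differences, (k, n)
    G = U @ U.T + 1e-12 * np.eye(len(U))      # small Gram matrix
    gamma = np.linalg.solve(G, np.ones(len(U)))
    gamma /= gamma.sum()                      # weights summing to one
    return np.maximum(gamma @ X[:-1], 0.0)    # clamp: emission densities >= 0
```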

10.
Single photon emission computed tomography (SPECT) reconstructions performed using maximum a posteriori (penalized likelihood) estimation with the expectation maximization algorithm are discussed. Owing to the large number of computations, the algorithms were run on a massively parallel single-instruction multiple-data computer. Computation times for 200 iterations, using I.J. Good and R.A. Gaskins's (1971) roughness as a rotationally invariant roughness penalty, are shown to be on the order of 5 min for a 64x64 image with 96 view angles on an AMT-DAP 4096-processor machine and 1 min on a MasPar 4096-processor machine. Computer simulations performed using parameters for the Siemens gamma camera and clinical brain-scan parameters are presented to compare two regularization techniques, regularization by kernel sieves and penalized likelihood with Good's rotationally invariant roughness measure, against filtered backprojection. Twenty-five independent sets of data are reconstructed for the pie and Hoffman brain phantoms. The average variance and average deviation are examined in various areas of the brain phantom. It is shown that, while the geometry of the area examined greatly affects the observed results, in all cases the reconstructions using Good's roughness give superior variance and bias results relative to the two alternative methods.
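Good's roughness, the rotationally invariant penalty used here, can be written in continuous notation as

$$R(f) \;=\; \int \frac{\lVert \nabla f \rVert^{2}}{f}\, dx \;=\; 4 \int \lVert \nabla \sqrt{f} \rVert^{2}\, dx,$$

whose discrete analogue penalizes squared differences between neighbouring pixels normalized by local intensity; this normalization is what distinguishes it from a plain quadratic smoothness prior.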

11.
We have evaluated the performance of two three-dimensional (3-D) reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing three million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the 60 reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

12.
The EM method that was originally developed for maximum likelihood estimation in the context of mathematical statistics may be applied to a stochastic model of positron emission tomography (PET). The result is an iterative algorithm for image reconstruction that is finding increasing use in PET, due to its attractive theoretical and practical properties. Its major disadvantage is the large amount of computation that is often required, owing to the algorithm's slow rate of convergence. This paper presents an accelerated form of the EM algorithm for PET in which the changes to the image, as calculated by the standard algorithm, are multiplied at each iteration by an overrelaxation parameter. The accelerated algorithm retains two of the important practical properties of the standard algorithm, namely the self-normalization and nonnegativity of the reconstructed images. Experimental results are presented using measured data obtained from a hexagonal detector system for PET. The likelihood function and the norm of the data residual were monitored during the iterative process. According to both of these measures, the images reconstructed at iterations 7 and 11 of the accelerated algorithm are similar to those at iterations 15 and 30 of the standard algorithm, for two different sets of data. Important theoretical properties remain to be investigated, namely the convergence of the accelerated algorithm and its performance as a maximum likelihood estimator.
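A minimal sketch of the overrelaxation idea follows (illustrative constants; the paper's exact parameter choice is not reproduced, and the clamp below is our simplification for preserving nonnegativity):

```python
import numpy as np

def em_update(x, A, y, eps=1e-12):
    """One standard ML-EM update for Poisson emission data."""
    return x * (A.T @ (y / (A @ x + eps))) / (A.T @ np.ones(len(y)) + eps)

def accelerated_em(A, y, omega=1.5, n_iter=15):
    """Overrelaxed EM: amplify each EM step by omega > 1."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        x_em = em_update(x, A, y)
        x = np.maximum(x + omega * (x_em - x), 0.0)
    return x
```

With omega = 1 this reduces exactly to the standard algorithm.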

13.
A series of computer experiments was performed to determine the relative performance of simulated annealing, quenched annealing, and a least-squares iterative technique for image reconstruction in single photon emission computed tomography (SPECT). The simulated SPECT geometry was of the pinhole aperture type, with 32 pinholes and 128 or 512 detectors. To test the robustness of the reconstruction techniques against arbitrary geometries, a 360-detector geometry with a random pixel-detector-factor matrix was also tested. Eight computer-simulated, 10-cm-diameter planar phantoms were used, with 1961 2-mm² reconstruction bins and a range of 3000 to 50,000,000 detected photon counts. Reconstruction quality was measured by a normalized squared-error picture distance measure. Over a wide range of noise, the simulated annealing method gave slightly better reconstruction quality than the iterative method, although it required greater reconstruction time. Quenched annealing was faster than simulated annealing, with comparable reconstruction quality. Methods of efficiently controlling the simulated annealing algorithm are presented.
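A generic simulated-annealing loop of the kind compared here might look as follows (move set, cooling schedule, and constants are illustrative assumptions; quenching corresponds to cooling much faster, or accepting only downhill moves):

```python
import numpy as np

def anneal(cost, x0, T0=1.0, alpha=0.999, n_steps=20000, seed=0):
    """Simulated annealing over a nonnegative reconstruction vector."""
    rng = np.random.default_rng(seed)
    x, c, T = x0.copy(), cost(x0), T0
    for _ in range(n_steps):
        trial = x.copy()
        j = rng.integers(len(x))                      # perturb one bin
        trial[j] = max(trial[j] + rng.normal(0.0, 0.1), 0.0)
        dc = cost(trial) - c
        if dc < 0 or rng.random() < np.exp(-dc / T):  # Metropolis acceptance
            x, c = trial, c + dc
        T *= alpha                                    # geometric cooling
    return x
```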

14.
Computed tomography (CT) is trending towards higher resolution, which brings higher noise. This development has increased interest in anisotropic smoothing techniques for CT, which aim to reduce noise while preserving structures of interest. However, existing smoothing techniques are slow, which makes clinical application difficult. Furthermore, published methods have limitations with respect to preserving small details in CT data. This paper presents a widely applicable, speed-optimized framework for anisotropic smoothing techniques. A second contribution of this paper is an extension of an existing smoothing technique aimed at better preserving small structures of interest in CT data. Based on second-order image structure, the method first determines an importance map that indicates potentially relevant structures to preserve. Subsequently, an anisotropic diffusion process is started. The diffused data are used in most parts of the images, while structures with significant second-order information are preserved. The method is evaluated qualitatively, against an anisotropic diffusion method without structure preservation, in an observer study assessing the improvement of 3-D visualizations of CT series, and quantitatively, by measuring the reduction of the difference between low-dose and high-dose CT scans of in vitro carotid plaques.
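The base process being modulated in most such frameworks is a Perona-Malik-type anisotropic diffusion; the sketch below is a generic implementation (ours, with assumed constants), and a structure-preserving variant would blend the diffused result with the original wherever an importance map m flags significant second-order structure, e.g. out = m * img + (1 - m) * u.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik diffusion on a 2-D image."""
    u = img.astype(float).copy()
    c = lambda g: np.exp(-(g / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        gn = np.roll(u, -1, axis=0) - u       # neighbour differences
        gs = np.roll(u, 1, axis=0) - u
        ge = np.roll(u, -1, axis=1) - u
        gw = np.roll(u, 1, axis=1) - u
        u += dt * (c(gn) * gn + c(gs) * gs + c(ge) * ge + c(gw) * gw)
    return u
```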

15.
In order to perform attenuation correction in emission tomography, an attenuation map is required. We propose a new method to compute this map directly from the emission sinogram, eliminating the transmission scan from the acquisition protocol. The problem is formulated as an optimization task where the objective function is a combination of the likelihood and an a priori probability. The latter uses a Gibbs prior distribution to encourage local smoothness and a multimodal distribution for the attenuation coefficients. Since the attenuation process is different in positron emission tomography (PET) and single photon emission computed tomography (SPECT), a separate algorithm is derived for each case. The method has been tested on mathematical phantoms and on a few clinical studies. For PET, good agreement was found between the images obtained with transmission measurements and those produced by the new algorithm in an abdominal study. For SPECT, promising simulation results have been obtained for nonhomogeneous attenuation due to the presence of the lungs.
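In rough symbols (ours, not the paper's), the optimization task is

$$(\hat{\lambda}, \hat{\mu}) \;=\; \arg\max_{\lambda,\, \mu} \; \log L(y \mid \lambda, \mu) \;+\; \log P(\mu),$$

where $y$ is the emission sinogram, $\lambda$ the activity distribution, $\mu$ the attenuation map, and $P(\mu)$ combines the Gibbs smoothness prior with the multimodal distribution over attenuation coefficients.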

16.
Terahertz computed tomography exploits the penetrating power of terahertz waves to acquire projection data from a rotating sample; tomographic reconstruction algorithms then yield two-dimensional cross-sectional images and, from these, a reconstruction of the sample's three-dimensional internal structure. This paper presents a continuous-wave terahertz tomography system based on a focal-plane-array detector. The advantage of this system is that the array detector records two-dimensional projection data directly, which raises the projection acquisition rate compared with point-scanning tomography systems. To obtain high-accuracy two-dimensional projections, an angular-spectrum diffraction propagation algorithm is used to numerically propagate each projection back to the sample's rear surface, suppressing the diffraction that the terahertz waves undergo outside the object. A filtered backprojection algorithm then reconstructs a high-fidelity three-dimensional image of the sample's internal structure. The feasibility of array-detector-based continuous-wave terahertz tomography for nondestructive testing and security inspection is further explored.
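A generic angular-spectrum propagation step of the kind described is shown below (our sketch; wavelength, sampling, and distances are assumed system parameters); back-propagating the recorded projection to the sample's rear surface amounts to calling it with the appropriate negative distance.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex 2-D field over distance z (angular spectrum method)."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```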

17.
A retrieval technique for estimating rainfall rate and precipitating cloud parameters from spaceborne multifrequency microwave radiometers is described. The algorithm is based on the maximum a posteriori probability (MAP) criterion applied to a simulated database of cloud structures and related upward brightness temperatures. The cloud database is randomly generated by imposing the mean values, the variances, and the correlations among the hydrometeor contents at each layer of the cloud's vertical structure, derived from the outputs of a time-dependent microphysical cloud model. The simulated upward brightness temperatures are computed by applying a plane-parallel radiative transfer scheme. Given a multifrequency brightness temperature measurement, the MAP criterion is used to select the most probable cloud structure within the cloud-radiation database. The algorithm is computationally efficient and has been numerically tested and compared against other methods. Its potential to retrieve rainfall over land has been explored by means of Special Sensor Microwave/Imager measurements for a rainfall event over Central Italy. A comparison of estimated rain rates with available raingauge measurements is also shown.
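Concretely, MAP selection over a simulated database reduces to maximizing a log-posterior per stored profile; the sketch below assumes a Gaussian measurement-error model (an illustration, not the paper's exact formulation).

```python
import numpy as np

def map_select(tb_obs, tb_db, cov, prior):
    """Return the index of the most probable database cloud structure
    given an observed multifrequency brightness-temperature vector."""
    inv = np.linalg.inv(cov)
    d = tb_db - tb_obs                          # (n_profiles, n_channels)
    maha = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distances
    return np.argmax(np.log(prior) - 0.5 * maha)
```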

18.
Parallel Sub-Band HMM Maximum A Posteriori Adaptive Nonlinear Class Estimation Algorithm
Automatic speech recognition (ASR) systems currently achieve high recognition rates under laboratory conditions, but in real environments their performance deteriorates sharply under the influence of background noise and the transmission channel. Building on auditory experiments, this paper proposes a new nonlinear class estimation algorithm that applies maximum a posteriori (MAP) estimation in parallel over independent sub-bands, in order to improve the robustness of the recognition system. Exploiting the power-spectrum differences between various noise types and the recognition content, as well as the differing impact of noise on the HMM across frequency bands, the algorithm employs a multilayer perceptron (MLP) to perform a nonlinear mapping of the maximum a posteriori probability under noisy conditions, thereby reducing the recognition loss caused by environmental mismatch. Experiments show that the algorithm clearly outperforms the maximum a posteriori linear regression algorithm and the sub-band speech recognition algorithm proposed by Sangita.

19.
At present, the choice of bandwidth in emission computed tomography (ECT) reconstruction is done by subjective means. The authors develop an automated objective selection technique for linear reconstruction algorithms such as filtered backprojection. The approach is based on the method of unbiased risk estimation. A set of 2-D validation studies using computer simulated and physical phantom data from the Hoffman et al. (1990) brain phantom are carried out. These 2-D studies incorporate measured corrections for object attenuation and lack of uniformity in detector sensitivity. It is found that the unbiased risk approach works very well. Over a range of count rates and brain slice source distributions, the root mean square (RMS) error of the fully automated reconstruction, with the data-dependent choice of bandwidth, is around 5% greater than the RMS error for the reconstruction with an ideal choice of the bandwidth.
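Unbiased risk estimation in this setting has the flavor of SURE/Mallows-type criteria: for a linear reconstruction $\hat{f}_b = S_b y$ with bandwidth $b$, a generic unbiased risk estimate (our notation; the paper's criterion additionally handles the measured attenuation and sensitivity corrections) is

$$\widehat{R}(b) \;=\; \lVert y - S_b y \rVert^{2} \;+\; 2\sigma^{2}\,\mathrm{tr}(S_b) \;-\; n\sigma^{2},$$

and the automated method picks the $b$ minimizing $\widehat{R}(b)$.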

20.
The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, so no complex mechanical scanning or expensive optical equipment is needed. Millimetric depth accuracies can be reached thanks to the rangefinder's optical detectors, which enable picosecond time discrimination. The detectors, based on a single photon avalanche diode operating in Geiger mode, utilize avalanche multiplication to enhance light detection. On-pixel high-speed electrical amplification can therefore be eliminated, greatly simplifying the array and potentially reducing its power dissipation. Optical power requirements on the light source can also be significantly relaxed, due to the array's sensitivity to single-photon events. A number of standard performance measurements conducted on the imager are discussed in the paper. The 3-D imaging system was also tested on real 3-D subjects, including human facial models, demonstrating the suitability of the approach.
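The arithmetic behind the millimetric claim: for a round-trip time-of-flight $t$, the depth is $d = c\,t/2$, so 1 mm of depth corresponds to roughly $2 \times 10^{-3} / (3 \times 10^{8}) \approx 6.7$ ps of round-trip delay, which is why picosecond time discrimination in the SPAD pixels is essential.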
