Similar Articles (20 results)
1.
Reconstruction algorithms for transmission tomography have generally assumed that the photons reaching a particular detector bin at a particular angle originate from a single point source. In this paper, we highlight several cases of extended transmission sources, in which it may be useful to approach the estimation of attenuation coefficients as a problem involving multiple transmission point sources. Examined in detail is the case of a fixed transmission line source with a fan-beam collimator. This geometry can result in attenuation images that have significant axial blur. Herein it is also shown, empirically, that extended transmission sources can result in biased estimates of the average attenuation, and an explanation is proposed. The finite axial resolution of the transmission line source configuration is modeled within iterative reconstruction using an expectation-maximization algorithm that was previously derived for estimating attenuation coefficients from single photon emission computed tomography (SPECT) emission data. The same algorithm is applicable to both problems because both can be thought of as involving multiple transmission sources. It is shown that modeling axial blur within reconstruction removes the bias in the average estimated attenuation and substantially improves the axial resolution of attenuation images.

2.
A new algorithm for three-dimensional reconstruction of two-dimensional crystals from projections is presented, and its applicability to biological macromolecules imaged using transmission electron microscopy (TEM) is investigated. Its main departures from the traditional approach are that it works in real space, rather than in Fourier space, and that it is iterative. This has the advantage of making it convenient to introduce additional constraints (such as the support of the function to be reconstructed, which may be known from alternative measurements) and has the potential of more accurately modeling the TEM image formation process. Phantom experiments indicate the superiority of the new approach even without the introduction of constraints in addition to the projection data.

3.
Attenuation compensation for cone beam single-photon emission computed tomography (SPECT) imaging is performed by cone beam maximum likelihood reconstruction with attenuation included in the transition matrix. Since the transition matrix is too large to be stored in conventional computers, the E-M maximum likelihood estimator is implemented with a ray-tracing algorithm, efficiently recalculating each matrix element as needed. The method was applied and tested in both uniform and nonuniform density phantoms. Test projection sets were obtained from Monte Carlo simulations and experiments using a commercially available cone beam collimator. For representative regions of interest, reconstruction of a uniform sphere is accurate to within 3% throughout, in comparison to a reference image simulated and reconstructed without attenuation. High- and low-activity regions in a uniform density are reconstructed accurately, except that low-activity regions in a more active background have a small error. This error is explainable by the nonnegativity constraints of the E-M estimator and the image statistical noise.

4.
Iterative tomographic reconstruction methods have been developed which can enforce various physical constraints on the reconstructed image. An integral part of most of these methods is the reprojection of the reconstructed image. These estimated projections are compared to the original projection data and modified according to some criteria based on a priori constraints. In this paper, the errors generated by such reprojection schemes are investigated. Bounds for these errors are derived under simple signal energy assumptions and using probabilistic assumptions on the distribution of discontinuities. These bounds can be used in the enforcement of constraints, in the determination of convergence of the iterative methods, and in the detection of artifacts.
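The comparison step described in this abstract, reprojecting an estimated image and measuring its discrepancy against the original projection data, can be sketched in a few lines. The 2x2 image, the four-ray system matrix, and the function name below are invented for illustration; this is a minimal sketch of the reprojection residual, not the paper's error bounds.

```python
import numpy as np

def reprojection_error(A, x, p):
    """Norm of the difference between the reprojected estimate A @ x
    and the measured projection data p; usable as a convergence or
    artifact-detection criterion."""
    return np.linalg.norm(A @ x - p)

# Toy 2x2 image (flattened) probed by four rays: two row sums, two column sums
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
p = A @ x_true

err_true = reprojection_error(A, x_true, p)       # consistent image
err_zero = reprojection_error(A, np.zeros(4), p)  # grossly wrong image
```

A vanishing residual indicates consistency with the data; a large residual flags either non-convergence or an artifact, which is how such bounds are applied in practice.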

5.
Algorithms belonging to the class of pixel-based reconstruction (PBR) algorithms, which are similar to simultaneous iterative reconstruction techniques (SIRTs) for reconstruction of objects from their fan beam projections in X-ray transmission tomography, are discussed. The general logic of these algorithms is discussed. Simulation studies indicate that, contrary to previous results with parallel beam projections, the iterative algebraic algorithms do not diverge when a more logical technique of obtaining the pseudoprojections is used. These simulations were carried out under conditions in which the number of object pixels exceeded the number of detector pixel readings by a factor of two, i.e., the equations were highly underdetermined. The effect of the number of projections on the reconstruction and the empirical convergence to the exact solution is shown. For comparison, the reconstructions obtained by convolution backprojection are also given.

6.
We study the application of Fourier rebinning methods to dual-planar cone-beam SPECT. Dual-planar cone-beam SPECT involves the use of a pair of dissimilar cone-beam collimators on a dual-camera SPECT system. Each collimator has its focus in a different axial plane. While dual-planar data is best reconstructed with fully three-dimensional (3-D) iterative methods, these methods are slow and have prompted a search for faster reconstruction techniques. Fourier rebinning was developed to estimate equivalent parallel projections from 3-D PET data, but it simply expresses a relationship between oblique projections taken in planes not perpendicular to the axis of rotation and direct projections taken in those that are. We find that it is possible to put cone-beam data in this context as well. The rebinned data can then be reconstructed using either filtered backprojection (FBP) or parallel iterative algorithms such as OSEM. We compare the Feldkamp algorithm and fully 3-D OSEM reconstruction with Fourier-rebinned reconstructions on realistically simulated Tc-99m HMPAO brain SPECT data. We find that the Fourier-rebinned reconstructions exhibit much less image noise and lower variance in region-of-interest (ROI) estimates than Feldkamp. Also, Fourier rebinning followed by OSEM with nonuniform attenuation correction exhibits less bias in ROI estimates than Feldkamp with Chang attenuation correction. The Fourier-rebinned ROI estimates exhibit bias and variance comparable to those from fully 3-D OSEM and require considerably less processing time. However, in areas off the axis of rotation, the axial-direction resolution of FORE-reconstructed images is poorer than that of images reconstructed with 3-D OSEM. We conclude that Fourier rebinning is a practical and potentially useful approach to reconstructing data from dual-planar circular-orbit cone-beam systems.

7.
This paper introduces and evaluates a block-iterative Fisher scoring (BFS) algorithm. The algorithm provides regularized estimation in tomographic models of projection data with Poisson variability. Regularization is achieved by penalized likelihood with a general quadratic penalty. Local convergence of the block-iterative algorithm is proven under conditions that do not require iteration dependent relaxation. We show that, when the algorithm converges, it converges to the unconstrained maximum penalized likelihood (MPL) solution. Simulation studies demonstrate that, with suitable choice of relaxation parameter and restriction of the algorithm to respect nonnegativity constraints, the BFS algorithm provides convergence to the constrained MPL solution. Constrained BFS often attains a maximum penalized likelihood faster than other block-iterative algorithms which are designed for nonnegatively constrained penalized reconstruction.
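The effect of a quadratic penalty on an estimate can be seen even in a much simpler setting than the paper's Poisson model. The sketch below uses a Gaussian (least-squares) surrogate, for which the penalized estimate has a closed form; the function name, the first-difference penalty, and the toy data are all invented for illustration, and this is not the BFS algorithm itself.

```python
import numpy as np

def mpl_quadratic(A, b, R, beta):
    """Penalized estimate under a Gaussian (least-squares) surrogate:
    minimize ||Ax - b||^2 + beta * x'Rx, which has the closed form
    x = (A'A + beta*R)^{-1} A'b."""
    return np.linalg.solve(A.T @ A + beta * R, A.T @ b)

# First-difference roughness penalty R = D'D for a 1-D signal
n = 5
D = np.diff(np.eye(n), axis=0)           # (n-1) x n finite-difference operator
R = D.T @ D
A = np.eye(n)
b = np.array([0.0, 0.0, 5.0, 0.0, 0.0])  # isolated spike

x_unpen = mpl_quadratic(A, b, R, beta=0.0)  # no penalty: reproduces b
x_pen = mpl_quadratic(A, b, R, beta=1.0)    # quadratic penalty smooths the spike
```

Raising `beta` trades fidelity to the data for smoothness, which is the same role the quadratic penalty plays in the Poisson MPL objective.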

8.
In this paper, we examine the problem of robust power control in a downlink beamforming environment under uncertain channel state information (CSI). We suggest that the method of power control using the lower bounds of signal-to-interference-and-noise ratio (SINR) is too pessimistic and will require significantly higher power in transmission than is necessary in practice. Here, a new robust downlink power control solution based on worst-case performance optimization is developed. Our approach employs the explicit modeling of uncertainties in the downlink channel correlation (DCC) matrices and optimizes the amount of transmission power while guaranteeing the worst-case performance to satisfy the quality of service (QoS) constraints for all users. This optimization problem is non-convex and intractable. In order to arrive at an optimal solution to the problem, we propose an iterative algorithm to find the optimum power allocation and worst-case uncertainty matrices. The iterative algorithm is based on the efficient solving of the worst-case uncertainty matrices once the transmission power is given. This can be done by finding the solutions for two cases: (a) when the uncertainty on the DCC matrices is small, for which a closed-form optimum solution can be obtained and (b) when the uncertainty is substantial, for which the intractable problem is transformed into a convex optimization problem readily solvable by an interior point method. Simulation results show that the proposed robust downlink power control using the approach of worst-case performance optimization converges in a few iterations and reduces the transmission power effectively under imperfect knowledge of the channel condition.

9.
Developments with maximum likelihood X-ray computed tomography
An approach to the maximum-likelihood estimation of attenuation coefficients in transmission tomography is presented as an extension of earlier theoretical work by K. Lange and R. Carson (J. Comput. Assist. Tomography, vol. 8, pp. 306-316, 1984). The reconstruction algorithm is based on the expectation-maximization (EM) algorithm. Several simplifying approximations are introduced which make the maximization step of the algorithm tractable. Computer simulations are presented using noise-free and Poisson randomized projections. The images obtained with the EM-type method are compared to those reconstructed with the EM method of Lange and Carson and with filtered backprojection. Preliminary results show that there are potential advantages in using the maximum likelihood approaches in situations where a high-contrast object, such as bone, is embedded in low-contrast soft tissue.
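The likelihood being maximized here is the Poisson transmission model y_i ~ Poisson(d_i exp(-[A mu]_i)). The sketch below maximizes that same log-likelihood by plain gradient ascent rather than EM, on an invented two-pixel, three-ray phantom with noise-free counts; it illustrates the objective, not the Lange-Carson algorithm.

```python
import numpy as np

# Two attenuation pixels probed by three rays; noise-free transmission
# counts follow y = d * exp(-A @ mu) (monoenergetic Beer-Lambert model)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d = np.full(3, 100.0)                  # blank-scan counts
mu_true = np.array([0.2, 0.5])
y = d * np.exp(-A @ mu_true)

# Poisson transmission log-likelihood (up to constants):
#   L(mu) = sum_i [ -y_i * (A @ mu)_i - d_i * exp(-(A @ mu)_i) ]
# with gradient dL/dmu_j = sum_i A_ij * (d_i * exp(-(A @ mu)_i) - y_i)
mu = np.zeros(2)
for _ in range(1000):
    grad = A.T @ (d * np.exp(-A @ mu) - y)
    mu = mu + 0.005 * grad             # small fixed ascent step
```

Because this log-likelihood is concave in mu, the ascent recovers the true attenuation coefficients on noise-free data; EM-type methods target the same maximizer with different, usually better-behaved, update steps.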

10.
Reconstruction Algorithm for Fan Beam with a Displaced Center-of-Rotation
A convolutional backprojection algorithm is derived for a fan beam geometry that has its center-of-rotation displaced from the midline of the fan beam. In single photon emission computed tomography (SPECT), where a transaxial converging collimator is used with a rotating gamma camera, it is difficult to precisely align the collimator so that the mechanical center-of-rotation is collinear with the midline of the fan beam. A displacement of the center-of-rotation can also occur in X-ray CT when the X-ray source is mispositioned. Standard reconstruction algorithms which directly filter and backproject the fan beam data without rebinning into parallel beam geometry have been derived for a geometry having its center-of-rotation at the midline of the fan beam. However, in the case of a misalignment of the center-of-rotation, if these conventional reconstruction algorithms are used to reconstruct the fan beam projections, structured artifacts and a loss of resolution will result. We illustrate these artifacts with simulations and demonstrate how the new algorithm corrects for this misalignment. We also show a method to estimate the parameters of the fan beam geometry including the shift in the center-of-rotation.

11.
Quantitative positron emission tomography (PET) imaging relies on accurate attenuation correction. Predicting attenuation values from magnetic resonance (MR) images is difficult because MR signals are related to proton density and relaxation properties of tissues. Here, we propose a method to derive the attenuation map from a transmission scan. An annulus transmission source is positioned inside the field-of-view of the PET scanner. First a blank scan is acquired. The patient is injected with FDG and placed inside the scanner. 511-keV photons coming from the patient and the transmission source are acquired simultaneously. Time-of-flight information is used to extract the coincident photons originating from the annulus. The blank and transmission data are compared in an iterative reconstruction method to derive the attenuation map. Simulations with a digital phantom were performed to validate the method. The reconstructed attenuation coefficients differ less than 5% in volumes of interest inside the lungs, bone, and soft tissue. When applying attenuation correction in the reconstruction of the emission data a standardized uptake value error smaller than 9% was obtained for all tissues. In conclusion, our method can reconstruct the attenuation map and the emission data from a simultaneous scan without prior knowledge about the anatomy or the attenuation coefficients of the tissues.

12.
We present an algorithm to reconstruct helical cone beam computed tomography (CT) data acquired at variable pitch. The algorithm extracts a halfscan segment of projections using an extended version of the advanced single slice rebinning (ASSR) algorithm. ASSR rebins constant pitch cone beam data to fan beam projections that approximately lie on a plane that is tilted to optimally fit the source helix. For variable pitch, the error between the tilted plane chosen by ASSR and the source helix increases, resulting in increased image artifacts. To reduce the artifacts, we choose a reconstruction plane, which is tilted and shifted relative to the source trajectory. We then correct rebinned fan beam data using John's equation to virtually move the source into the tilted and shifted reconstruction plane. Results obtained from simulated phantom images and scanner images demonstrate the applicability of the proposed algorithm.

13.
Component averaging (CAV) was recently introduced by Censor, Gordon, and Gordon as a new iterative parallel technique suitable for large and sparse unstructured systems of linear equations. Based on earlier work of Byrne and Censor, it uses diagonal weighting matrices, with pixel-related weights determined by the sparsity of the system matrix. CAV is inherently parallel (similar to the very slowly converging Cimmino method) but its practical convergence on problems of image reconstruction from projections is similar to that of the algebraic reconstruction technique (ART). Parallel techniques are becoming more important for practical image reconstruction since they are relevant not only for supercomputers but also for the increasingly prevalent multiprocessor workstations. This paper reports on experimental results with a block-iterative version of component averaging (BICAV). When BICAV is optimized for block size and relaxation parameters, its very first iterates are far superior to those of CAV and more or less on a par with those of ART. Similar to CAV, BICAV is also inherently parallel. The fast convergence is demonstrated on problems of image reconstruction from projections, using the SNARK93 image reconstruction software package. Detailed plots of various measures of convergence, and reconstructed images are presented.
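A minimal simultaneous CAV-style sketch, assuming the sparsity-related weighting described by Censor, Gordon, and Gordon: each equation is normalized by the sum of its squared coefficients weighted by the nonzero counts of the columns they touch. The function name and the toy system are invented; the block-iterative BICAV variant is not shown.

```python
import numpy as np

def cav(A, b, iters, lam=1.0):
    """Simultaneous component-averaging-style iteration for Ax = b.

    Each equation i is normalized by sum_l s_l * a_il^2, where s_l
    is the number of nonzeros in column l (sparsity-related weights)."""
    s = np.count_nonzero(A, axis=0).astype(float)  # column nonzero counts
    row_w = (A ** 2 * s).sum(axis=1)               # per-equation normalizers
    row_w[row_w == 0] = 1.0                        # guard all-zero rows
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + lam * (A.T @ ((b - A @ x) / row_w))
    return x

# Invented consistent toy system with known solution [1, 2]
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 0.0]])
b = A @ np.array([1.0, 2.0])
x = cav(A, b, iters=500)
```

Every equation contributes in parallel at each sweep, which is what makes the scheme attractive on multiprocessor hardware; BICAV applies the same update over blocks of rows instead of all rows at once.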

14.
For the past few decades there has been tremendous innovation and development of Terahertz (THz) science and imaging. In particular, the technique of 3-D computed tomography has been adapted from the X-Ray to the THz range. However, the finite refractive index of materials in the THz range can severely refract probing THz beams during the acquisition of tomography data. Due to Fresnel reflection power losses at the boundaries as well as steering of the THz beam through the sample, refractive effects lead to anomalously high local attenuation coefficients near the material boundaries of a reconstructed image. These boundary phenomena can dominate the reconstructed THz-CT images making it difficult to distinguish structural defect(s) inside the material. In this paper an algorithm has been developed to remove the effects of refraction in THz-CT reconstructed images. The algorithm is successfully implemented on cylindrical shaped objects.

15.
A filtered backprojection reconstruction algorithm was developed for cardiac single photon emission computed tomography with cone-beam geometry. The algorithm reconstructs cone-beam projections collected from 'short scan' acquisitions of a detector traversing a noncircular planar orbit. Since the algorithm does not correct for photon attenuation, it is designed to reconstruct data collected over an angular range of slightly more than 180 degrees with the assumption that the range of angles is oriented so as not to acquire the highly attenuated posterior projections of emissions from cardiac radiopharmaceuticals. This sampling scheme is performed to minimize the attenuation artifacts that result from reconstructing posterior projections. From computer simulations, it is found that reconstruction of attenuated projections has a greater effect upon quantitation and image quality than any potential cone-beam reconstruction artifacts resulting from insufficient sampling of cone-beam projections. With nonattenuated projection data, cone beam reconstruction errors in the heart are shown to be small (errors of at most 2%).

16.
A novel image reconstruction algorithm has been developed and demonstrated for fluorescence-enhanced frequency-domain photon migration (FDPM) tomography from measurements of area illumination with modulated excitation light and area collection of emitted fluorescence light using a gain modulated image-intensified charge-coupled device (ICCD) camera. The image reconstruction problem was formulated as a nonlinear least-squares-type simple bounds constrained optimization problem based upon the penalty/modified barrier function (PMBF) method and the coupled diffusion equations. The simple bounds constraints are included in the objective function of the PMBF method and the gradient-based truncated Newton method with trust region is used to minimize the function for the large-scale problem (39919 unknowns, 2973 measurements). Three-dimensional (3-D) images of fluorescence absorption coefficients were reconstructed using the algorithm from experimental reflectance measurements under conditions of perfect and imperfect distribution of fluorophore within a single target. To our knowledge, this is the first time that targets have been reconstructed in three-dimensions from reflectance measurements with a clinically relevant phantom.

17.
We present an analytical scatter correction, based upon the Klein-Nishina formula, for singles-mode transmission data in positron emission tomography (PET) and its implementation as part of an iterative image reconstruction algorithm. We compared our analytically-calculated scatter sinogram data with previously validated simulation data for a small animal PET scanner with 68Ge (a positron emitter) and 57Co (approximately 122-keV photon emitter) transmission sources using four different phantom configurations (three uniform water cylinders with radii of 25, 30, and 45 mm and a nonuniform phantom consisting of water, Teflon, and air). Our scatter calculation correctly predicts the contribution from single-scattered (one incoherent scatter interaction) photons to the simulated sinogram data and provides good agreement for the percent scatter fraction (SF) per sinogram for all phantoms and both transmission sources. We then applied our scatter correction as part of an iterative reconstruction algorithm for PET transmission data for simulated and experimental data using uniform and nonuniform phantoms. For both simulated and experimental data, the reconstructed linear attenuation coefficients (μ-values) agreed with expected values to within 4% when scatter corrections were applied, for both the 68Ge and 57Co transmission sources. We also tested our reconstruction and scatter correction procedure for two experimental rodent studies (a mouse and rat). For the rodent studies, we found that the average μ-values for soft-tissue regions of interest agreed with expected values to within 4%. Using a 2.2-GHz processor, each scatter correction iteration required 6-27 min of CPU time (without any code optimization) depending on the phantom size and source used. This extra calculation time does not seem unreasonable considering that, without scatter corrections, errors in the reconstructed μ-values were between 18%-45% depending on the phantom size and transmission source used.

18.
Given an m×m image I and a smaller n×n image P, we consider the computation of the (m−n+1)×(m−n+1) matrix C whose entries are of the form C(i, j) = Σ_{k=0}^{n−1} Σ_{k'=0}^{n−1} f(I(i+k, j+k'), P(k, k')), for 0 ≤ i, j ≤ m−n.
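The matrix C above can be computed directly for any per-pixel function f. In the sketch below, the images and both choices of f (pointwise product, giving cross-correlation, and absolute difference, giving the sum of absolute differences) are illustrative examples, not taken from the paper.

```python
import numpy as np

def generalized_match(I, P, f):
    """C(i, j) = sum over k, k' of f(I[i+k, j+k'], P[k, k']),
    for an m x m image I and an n x n pattern P; C is (m-n+1) square."""
    m, n = I.shape[0], P.shape[0]
    C = np.empty((m - n + 1, m - n + 1))
    for i in range(m - n + 1):
        for j in range(m - n + 1):
            C[i, j] = f(I[i:i + n, j:j + n], P).sum()
    return C

# Toy image and pattern; f chosen as cross-correlation and as
# sum of absolute differences (both are instances of the template above)
I = np.arange(16.0).reshape(4, 4)
P = np.ones((2, 2))
corr = generalized_match(I, P, lambda a, b: a * b)
sad = generalized_match(I, P, lambda a, b: np.abs(a - b))
```

The direct evaluation costs O((m−n+1)² n²) operations; the point of faster algorithms for this family is to beat that bound for particular choices of f.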

19.
Both digital subtraction and recursive filtering schemes have been employed successfully for intravenous and intraarterial arteriography. Either processing method results in an image S that is a linear combination of discrete images I_j acquired during the flow of iodinated contrast material, i.e., S = Σ_{j=0}^{N} k_j I_j, where the k_j are the weighting coefficients for the N+1 samples. It is shown that for a given set of images {I_j} there exists a set of weighting coefficients {k_j} which maximizes the iodine signal-to-noise ratio and simultaneously removes stationary background anatomy. The k_j are related to the contrast dilution curve measured over an artery of interest: k_j = s_j − s̄, where {s_j} is the set of measured image variations due to the flow of contrast material and s̄ is their mean value. This choice of k_j defines a matched filter. Compared to subtraction angiography, matched filtering is 4-6 times more dose efficient.
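The weighting scheme above has a simple consequence worth making explicit: because the weights k_j = s_j − mean(s) sum to zero, any stationary background contributes nothing to the weighted combination. The dilution curve and the toy frames below are made up for illustration.

```python
import numpy as np

# Hypothetical contrast-dilution samples s_j measured over an artery
s = np.array([0.0, 0.2, 1.0, 0.6, 0.2])

# Matched-filter weights: k_j = s_j - mean(s); they sum to zero,
# so stationary background cancels in the weighted combination
k = s - s.mean()

# Toy frames: constant background anatomy plus iodine signal s_j at a vessel
background = np.full((3, 3), 100.0)
vessel = np.zeros((3, 3))
vessel[1, 1] = 1.0
frames = [background + sj * vessel for sj in s]

# Matched-filtered image S = sum_j k_j * I_j
S = sum(kj * Ij for kj, Ij in zip(k, frames))
```

In the result, pixels containing only background are driven to zero while the vessel pixel retains a signal proportional to the variance of the dilution curve, which is the matched-filter SNR advantage the abstract describes.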

20.
We investigate an image recovery method for sparse-view computed tomography (CT) using an iterative shrinkage algorithm based on a second-order approach. The two-step iterative shrinkage-thresholding (TwIST) algorithm including a total variation regularization technique is shown to be more robust than other first-order methods; it enables a perfect restoration of an original image even if given only a few projection views of a parallel-beam geometry. We find that the incoherency of a projection system matrix in CT geometry sufficiently satisfies the exact reconstruction principle even when the matrix itself has a large condition number. Image reconstruction from fan-beam CT can be well carried out, but the retrieval performance is very low when compared to a parallel-beam geometry. This is considered to be due to the matrix complexity of the projection geometry. We also evaluate the image retrieval performance of the TwIST algorithm using measured projection data.
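TwIST itself uses a two-step update and, in this paper, a total variation penalty; the sketch below shows only the simpler one-step iterative shrinkage (ISTA) on an l1-regularized toy problem, with an invented random sensing matrix standing in for an underdetermined (sparse-view) projection matrix. All parameter values are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters):
    """Minimize ||Ax - b||^2 / 2 + lam * ||x||_1 by one-step
    iterative shrinkage with a fixed 1/L gradient step."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Invented sensing matrix standing in for a sparse-view projection matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 40]] = [1.0, -0.7, 0.5]
b = A @ x_true                          # noise-free "projections"
x_hat = ista(A, b, lam=0.01, iters=20000)
```

Even with half as many measurements as unknowns, the shrinkage iteration recovers the sparse signal's support; two-step schemes such as TwIST accelerate exactly this kind of iteration.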


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号