Similar Documents
20 similar documents found.
1.
Respiratory motion during the collection of computed tomography (CT) projections generates structured artifacts and a loss of resolution that can render the scans unusable. This motion is problematic in scans of those patients who cannot suspend respiration, such as the very young or intubated patients. Here, the authors present an algorithm that can be used to reduce motion artifacts in CT scans caused by respiration. An approximate model for the effect of respiration is that the object cross section under interrogation experiences time-varying magnification and displacement along two axes. Using this model an exact filtered backprojection algorithm is derived for the case of parallel projections. The result is extended to generate an approximate reconstruction formula for fan-beam projections. Computer simulations and scans of phantoms on a commercial CT scanner validate the new reconstruction algorithms for parallel and fan-beam projections. Significant reduction in respiratory artifacts is demonstrated clinically when the motion model is satisfied. The method can be applied to projection data used in CT, single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI).
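The motion model above can be illustrated numerically: to first order, a cross section magnified by m(t) and displaced by d(t) yields parallel projections that are scaled and shifted copies of the static ones, so each view can be resampled back onto the static geometry before an ordinary filtered backprojection. The sketch below (Python with NumPy/SciPy/scikit-image) only illustrates the model under that assumption, with made-up breathing curves; it is not the exact reconstruction formula derived in the paper.

    import numpy as np
    from scipy.ndimage import map_coordinates
    from skimage.transform import iradon   # standard parallel-beam FBP

    def compensate_projections(sinogram, mags, shifts):
        """Resample each view (one column per view) onto the static geometry.
        Assumes f_t(x, y) = f_0(x/m, y/m) displaced by d along the detector,
        so the static projection is p_0(s) = p_t(m*s + d) / m (first order)."""
        n_det, n_views = sinogram.shape
        s = np.arange(n_det) - (n_det - 1) / 2.0          # detector coordinate
        out = np.empty_like(sinogram, dtype=float)
        for k in range(n_views):
            src = mags[k] * s + shifts[k] + (n_det - 1) / 2.0
            out[:, k] = map_coordinates(sinogram[:, k].astype(float), [src],
                                        order=1, mode='nearest') / mags[k]
        return out

    # usage with hypothetical breathing curves and a placeholder sinogram
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = np.random.default_rng(0).poisson(100, (129, 180)).astype(float)
    mags = 1.0 + 0.05 * np.sin(np.linspace(0, 2 * np.pi, 180))
    shifts = 2.0 * np.sin(np.linspace(0, 2 * np.pi, 180))
    recon = iradon(compensate_projections(sino, mags, shifts), theta=theta)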

2.
This paper reports on a new method in which spatially correlated magnetic resonance (MR) or X-ray computed tomography (CT) images are employed as a source of prior information in the Bayesian reconstruction of positron emission tomography (PET) images. The new method incorporates the correlated structural images as anatomic templates which can be used for extracting information about boundaries that separate regions exhibiting different tissue characteristics. In order to avoid the possible introduction of artifacts caused by discrepancies between functional and anatomic boundaries, the authors propose a "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction. This modified scheme is based on the joint probability of structural and functional boundaries. Of the structural information provided by CT or MR images, only boundaries that have high joint probability with the corresponding PET data are used, whereas boundary information that is not supported by the PET image is suppressed. The new method has been validated by computer simulation and phantom studies. The results of these validation studies indicate that the new method offers significant improvements in image quality when compared to other reconstruction algorithms, including the filtered backprojection method and the maximum likelihood approach, as well as the Bayesian method without the use of prior boundary information.

3.
Reconstruction Algorithm for Fan Beam with a Displaced Center-of-Rotation
A convolutional backprojection algorithm is derived for a fan beam geometry that has its center-of-rotation displaced from the midline of the fan beam. In single photon emission computed tomography (SPECT), where a transaxial converging collimator is used with a rotating gamma camera, it is difficult to align the collimator precisely so that the mechanical center-of-rotation is colinear with the midline of the fan beam. A displacement of the center-of-rotation can also occur in X-ray CT when the X-ray source is mispositioned. Standard reconstruction algorithms, which directly filter and backproject the fan beam data without rebinning into parallel beam geometry, have been derived for a geometry having its center-of-rotation at the midline of the fan beam. If these conventional algorithms are used to reconstruct fan beam projections acquired with a misaligned center-of-rotation, however, structured artifacts and a loss of resolution result. We illustrate these artifacts with simulations and demonstrate how the new algorithm corrects for the misalignment. We also show a method to estimate the parameters of the fan beam geometry, including the shift in the center-of-rotation.
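The shift-estimation idea can be sketched in the simpler parallel-beam setting, where views 180 degrees apart are mirror images about the true rotation center, so cross-correlating one view with the flipped opposing view reveals the offset. This toy sketch (Python/NumPy, assuming a full 360-degree parallel sinogram) only conveys the calibration idea and is not the fan-beam parameter estimation of the paper.

    import numpy as np

    def estimate_cor_shift(sinogram):
        """Estimate the center-of-rotation offset (in detector bins) of a
        parallel-beam sinogram (detector bins x views over 0..360 deg).
        Views 180 deg apart are mirror images about the true center, so the
        cross-correlation peak location is twice the center offset."""
        n_det, n_views = sinogram.shape
        half = n_views // 2
        p0 = sinogram[:, 0].astype(float)
        p180 = sinogram[::-1, half].astype(float)        # flipped opposing view
        corr = np.correlate(p0 - p0.mean(), p180 - p180.mean(), mode='full')
        lag = np.argmax(corr) - (n_det - 1)
        return lag / 2.0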

4.
We present a statistically principled sinogram smoothing approach for X-ray computed tomography (CT) with the intent of reducing noise-induced streak artifacts. These artifacts arise in CT when some subset of the transmission measurements capture relatively few photons because of high attenuation along the measurement lines. Attempts to reduce these artifacts have focused on the use of adaptive filters that strive to tailor the degree of smoothing to the local noise levels in the measurements. While these approaches take loose account of the measurement statistics when determining smoothing levels, they do not explicitly model the statistical distributions of the measurement data. In this paper, we present an explicitly statistical approach to sinogram smoothing in the presence of photon-starved measurements. It is an extension of a nonparametric sinogram smoothing approach using penalized Poisson-likelihood functions that we previously developed for emission tomography. Because the approach explicitly models the data statistics, it is naturally adaptive: it smooths more variable measurements more heavily than less variable ones. We find that it significantly reduces streak artifacts and noise levels without compromising image resolution.
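The adaptivity of a penalized Poisson-likelihood smoother can be seen in a toy one-dimensional version: bins with few counts constrain the estimate weakly and are therefore smoothed more strongly. The sketch below (Python/NumPy) minimizes a penalized Poisson negative log-likelihood with a quadratic roughness penalty by plain projected gradient descent; the penalty weight and step size are hypothetical, and the paper's actual optimization differs.

    import numpy as np

    def smooth_row(y, beta=5.0, n_iter=200, step=0.05):
        """Penalized Poisson-likelihood smoothing of one sinogram row y (counts).
        Minimizes sum(lam - y*log(lam)) + beta*sum((lam[i+1]-lam[i])**2).
        Low-count bins constrain lam weakly and so are smoothed more heavily,
        which is the 'natural adaptivity' mentioned above.  Step size may need
        tuning for large count levels."""
        lam = np.maximum(y.astype(float), 1.0)             # positive initial estimate
        for _ in range(n_iter):
            grad = 1.0 - y / lam                            # data-fit term
            d = np.diff(lam)
            grad[:-1] += -2.0 * beta * d                    # roughness penalty
            grad[1:] += 2.0 * beta * d
            lam = np.maximum(lam - step * grad, 1e-6)       # stay positive
        return lam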

5.
The circular scanning trajectory is one of the most widely adopted data-acquisition configurations in computed tomography (CT). The Feldkamp-Davis-Kress (FDK) algorithm and its various modifications have been developed for approximate reconstruction of three-dimensional images from circular cone-beam data. When data contain transverse truncations, however, these algorithms may reconstruct images with significant truncation artifacts. It is of practical significance to develop algorithms that can reconstruct region-of-interest (ROI) images from truncated circular cone-beam data that are free of truncation artifacts and have an accuracy comparable to that obtained from nontruncated cone-beam data. In this work, we have investigated and developed a backprojection-filtration (BPF) algorithm for ROI-image reconstruction from circular cone-beam data containing transverse truncations. Furthermore, we have developed a weighted BPF algorithm to exploit "redundant" information in the data for improving image quality. To validate and evaluate the proposed BPF algorithms for circular cone-beam CT, we have performed numerical studies using both computer-simulation data and experimental data acquired with a radiotherapy cone-beam CT system. Quantitative results in these studies demonstrate that the proposed BPF algorithms can reconstruct ROI images free of truncation artifacts.

6.
This article looks at reconstruction in 2-D and 3-D tomography. Some reconstruction issues, such as sampling and aliasing artifacts, finite detector aperture artifacts, and beam hardening artifacts, are not treated in detail since they are beyond the scope of an introductory tutorial. We examine the physical and mathematical concepts of the Radon (1917) transform and discuss the basic parallel-beam reconstruction algorithms. We also develop the algorithms for fan-beam CT and discuss the mathematical principles of cone-beam CT.
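For readers who want to experiment with the parallel-beam pipeline described here, the following sketch (assuming scikit-image is installed) forward-projects a Shepp-Logan phantom with the Radon transform and reconstructs it with filtered backprojection.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Parallel-beam projection (Radon transform) and filtered backprojection,
    # the basic reconstruction pipeline discussed in the tutorial.
    image = rescale(shepp_logan_phantom(), scale=0.5)       # 200x200 phantom
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sinogram = radon(image, theta=theta)                    # forward projection
    reconstruction = iradon(sinogram, theta=theta)          # ramp-filtered FBP
    print('RMS error:', np.sqrt(np.mean((reconstruction - image) ** 2)))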

7.
Tomographic reconstruction for tilted helical multislice CT
One of the most recent technical advancements in computed tomography (CT) is the introduction of multislice CT (MCT). Because multiple detector rows are used for data acquisition, MCT offers higher volume coverage, faster scan speed, and reduced X-ray tube loading. Several image reconstruction algorithms have been developed to account for its unique data-sampling pattern, and these algorithms have been shown to produce clinically acceptable images. Recent studies, however, have revealed that the image quality of MCT can be significantly degraded when helical data are acquired with a tilted gantry. The degraded image quality has rendered this feature unacceptable for clinical usage. In this paper, we first present a detailed investigation of the cause of the image quality degradation. An analytical model is derived to provide a mathematical basis for correction. Several compensation schemes are subsequently presented, and a detailed performance comparison is provided in terms of spatial resolution, noise, computation efficiency, and image artifacts.

8.
This paper has the dual purpose of introducing some new algorithms for emission and transmission tomography and proving mathematically that these algorithms and related antecedent algorithms converge. Like the EM algorithms for positron, single-photon, and transmission tomography, the algorithms provide maximum likelihood estimates of pixel concentration or linear attenuation parameters. One particular innovation we discuss is a computationally practical scheme for modifying the EM algorithms to include a Bayesian prior. The Bayesian versions of the EM algorithms are shown to have superior convergence properties in a vicinity of the maximum. We anticipate that some of the other algorithms will also converge faster than the EM algorithms.
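For reference, the classical ML-EM update for emission tomography that this work builds on has a compact form: multiply the current image by the backprojected ratio of measured to predicted counts. A minimal sketch (Python/NumPy, dense system matrix for clarity) follows; the Bayesian (MAP) variants discussed above modify this update with a prior term, which is not shown.

    import numpy as np

    def mlem(A, y, n_iter=50):
        """Classical ML-EM for emission tomography: A is the (n_rays x n_pixels)
        system matrix, y the measured counts.  Each iteration multiplies the
        current image by the backprojected ratio of measured to predicted
        counts, which preserves nonnegativity automatically."""
        x = np.ones(A.shape[1])                 # positive initial image
        sens = A.sum(axis=0)                    # sensitivity image A^T 1
        for _ in range(n_iter):
            ybar = A @ x                        # forward projection
            ratio = np.where(ybar > 0, y / ybar, 0.0)
            x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x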

9.
After a brief discussion of the algebraic reconstruction techniques (ART), we introduce the attenuation problem in positron emission tomography (PET). We anticipate that a generalization of ART, the so-called cyclic subgradient projection (CSP) method, may be useful for solving this problem. This has not yet been realized successfully, however, because data collected by our proposed stationary PET detector ring are too sparsely sampled. That sparse sampling is indeed a major problem is demonstrated by showing that ordinary ART produces reconstructions with unacceptably strong artifacts even on perfect (no attenuation) data collected according to the PET geometry. We demonstrate that the source of this artifact is the sparse sampling, and we propose the use of interpolated rays to overcome the problem. This approach is successful, as illustrated by reconstructions of sparsely sampled data by ART with interpolated rays.
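A basic ART (Kaczmarz) sweep, for context, projects the current estimate onto each ray's hyperplane in turn. The sketch below (Python/NumPy, dense system matrix, hypothetical relaxation factor) shows the plain update; the interpolated rays proposed above would correspond to additional, synthesized rows of the system matrix and are not generated here.

    import numpy as np

    def art(A, b, n_sweeps=10, relax=0.5):
        """Basic ART (Kaczmarz) reconstruction: sweep through the rays and
        project the current image estimate onto each ray's hyperplane,
        with a relaxation factor to temper the updates."""
        x = np.zeros(A.shape[1])
        row_norms = (A ** 2).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x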

10.
In tomographic medical devices such as single photon emission computed tomography (SPECT) or positron emission tomography (PET) cameras, image reconstruction is an unstable inverse problem, due to the presence of additive noise. A new family of regularization methods for reconstruction, based on a thresholding procedure in wavelet and wavelet packet (WP) decompositions, is studied. The approach exploits the fact that these decompositions provide a near-diagonalization of the inverse Radon transform and of prior information in medical images. A WP decomposition is adaptively chosen for the specific image to be restored. Corresponding algorithms have been developed for both two-dimensional and full three-dimensional reconstruction. These procedures are fast, noniterative, and flexible. Numerical results suggest that they outperform filtered backprojection and iterative procedures such as ordered-subsets expectation-maximization.
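As a rough illustration of wavelet-domain thresholding (though not of the paper's Radon-adapted wavelet-packet scheme), the sketch below soft-thresholds the detail coefficients of a 2-D wavelet decomposition using PyWavelets; the wavelet name, level, and threshold are arbitrary choices.

    import pywt

    def wavelet_denoise(image, wavelet='db4', level=3, threshold=0.1):
        """Soft-threshold the detail coefficients of a 2-D wavelet decomposition
        and reconstruct.  A toy stand-in for the regularization described above,
        applied to the plain image rather than to a Radon-adapted decomposition."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        new_coeffs = [coeffs[0]]                              # keep approximation
        for detail_level in coeffs[1:]:
            new_coeffs.append(tuple(pywt.threshold(c, threshold, mode='soft')
                                    for c in detail_level))
        return pywt.waverec2(new_coeffs, wavelet)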

11.
The authors explore the application of volume rendering in medical ultrasonic imaging. Several volume rendering methods have been developed for X-ray computed tomography (X-CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Limited research has been done on applications of volume rendering techniques in medical ultrasound imaging because of a general lack of adequate equipment for 3-D acquisitions. Severe noise sources and other limitations in the imaging system make volume rendering of ultrasonic data a challenge compared to rendering of MRI and X-CT data. Rendering algorithms that rely on an initial classification of the data into different tissue categories have been developed for high-quality X-CT and MR data. So far, there is a lack of general and reliable methods for tissue classification in ultrasonic imaging. The authors therefore focus on volume rendering methods that do not depend on any classification into tissue categories; instead, features are extracted from the original 3-D data set and projected onto the view plane. The authors found that some of these methods may give clinically useful information which is very difficult to obtain from ordinary 2-D ultrasonic images, and in some cases renderings with very fine structural details. The methods have been applied to 3-D ultrasound images from fetal examinations and are now in use as clinical tools at the National Center of Fetal Medicine in Trondheim, Norway.
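The simplest classification-free projection of the kind mentioned above is the maximum intensity projection, which keeps only the brightest sample along each ray. A minimal sketch (Python/NumPy, projecting along a volume axis rather than an arbitrary view direction, with a synthetic volume) follows.

    import numpy as np

    def maximum_intensity_projection(volume, axis=0):
        """Project a 3-D data set onto a view plane by taking the maximum sample
        along each ray (here, along one volume axis).  Real viewers resample
        along arbitrary view directions and may use other ray features
        (average, first surface above a threshold, etc.)."""
        return volume.max(axis=axis)

    # usage with a synthetic speckle-like volume
    rng = np.random.default_rng(1)
    vol = rng.gamma(shape=2.0, scale=10.0, size=(64, 64, 64))
    mip = maximum_intensity_projection(vol, axis=0)          # 64x64 view-plane image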

12.
Lung motion correction on respiratory gated 3-D PET/CT images
Motion is a source of degradation in positron emission tomography (PET)/computed tomography (CT) images. Because the PET images represent the sum of information over the whole respiratory cycle, attenuation correction with the help of CT images may lead to false staging or quantification of the radioactive uptake, especially in the case of small tumors. We present an approach that avoids these difficulties by respiratory-gating the PET data and correcting it for motion with optical flow algorithms. The resulting dataset contains all the PET information with minimal motion and thus allows more accurate attenuation correction and quantification.
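The gating-plus-flow idea can be sketched with off-the-shelf tools: estimate the flow from each gate to a reference gate, warp, and sum, so the full count statistics are retained with reduced blur. The sketch below uses scikit-image's TV-L1 optical flow on 2-D slices for brevity (the paper uses its own 3-D flow estimation), and the gate list is a hypothetical input.

    import numpy as np
    from skimage.registration import optical_flow_tvl1
    from skimage.transform import warp

    def motion_correct_gates(gates):
        """Register each respiratory gate (2-D slices here, for brevity) to the
        first gate with TV-L1 optical flow and sum the warped gates, keeping
        the full count statistics while reducing motion blur."""
        reference = gates[0].astype(float)
        nr, nc = reference.shape
        rows, cols = np.meshgrid(np.arange(nr), np.arange(nc), indexing='ij')
        corrected = reference.copy()
        for frame in gates[1:]:
            v, u = optical_flow_tvl1(reference, frame.astype(float))
            warped = warp(frame.astype(float),
                          np.array([rows + v, cols + u]), mode='edge')
            corrected += warped
        return corrected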

13.
Recently, there has been much progress in algorithm development for image reconstruction in cone-beam computed tomography (CT). Current algorithms, including the chord-based algorithms, now accept minimal data sets for obtaining images of volume regions of interest (ROIs), thereby potentially allowing for a reduction of X-ray dose in diagnostic CT. As these developments are relatively new, little effort has been directed at investigating the response of the resulting algorithm implementations to physical factors such as data noise. In this paper, we investigate the noise properties of ROI images reconstructed by using chord-based algorithms for different scanning configurations. We find that, for the cases under study, the chord-based algorithms yield images of comparable quality. Additionally, it is observed that, in many situations, large data sets contain extraneous data that may not reduce the ROI-image variances.

14.
Image registration
To demonstrate the growth of the medical image registration field over the past decades, this paper presents the number of journal publications on this topic from 1988 to 2002. In a similar manner, trends in topics within the field of medical image registration are detected. Publications on computed tomography (CT) and magnetic resonance imaging (MRI) are rather constant through the years. Positron emission tomography (PET) and single photon emission computed tomography (SPECT), on the other hand, seem to lose ground to newly emerging functional imaging techniques, such as functional MRI (fMRI), whereas an increase in interest in registration of ultrasound (US) images was observed. Two topics in image registration that are currently considered hot are intraoperative and elastic registration. Although interest in intraoperative registration strongly increased in the late 1990s, there appears to be a slight relative decrease in recent years. Elastic registration, on the other hand, has become a popular topic, reaching its highest publication count so far in 2002.

15.
16.
A new class of fast maximum-likelihood estimation (MLE) algorithms for emission computed tomography (ECT) is developed. In these cyclic iterative algorithms, vector extrapolation techniques are integrated with the iterations in gradient-based MLE algorithms, with the objective of accelerating the convergence of the base iterations. This results in a substantial reduction in the effective number of base iterations required for obtaining an emission density estimate of specified quality. The mathematical theory behind the minimal polynomial and reduced rank vector extrapolation techniques, in the context of emission tomography, is presented. These extrapolation techniques are implemented in a positron emission tomography system. The new algorithms are evaluated using computer experiments, with measurements taken from simulated phantoms. It is shown that, with minimal additional computations, the proposed approach results in substantial improvement in reconstruction.

17.
The problem of motion is well known in positron emission tomography (PET) studies. PET images are formed over an extended period of time, and because patients cannot hold their breath during the acquisition, spatial blurring and motion artifacts are the natural result. These may lead to incorrect quantification of the radioactive uptake. We present a solution to this problem by respiratory-gating the PET data and correcting the PET images for motion with optical flow algorithms. The algorithm is based on the combined local and global optical flow algorithm, with modifications that allow for discontinuity preservation across organ boundaries and for application to 3-D volume sets. The superiority of the algorithm over previous work is demonstrated on a software phantom and real patient data.

18.
A review of cardiac image registration methods
In this paper, the current status of cardiac image registration methods is reviewed. The combination of information from multiple cardiac image modalities, such as magnetic resonance imaging, computed tomography, positron emission tomography, single-photon emission computed tomography, and ultrasound, is of increasing interest in the medical community for physiologic understanding and diagnostic purposes. Registration of cardiac images is a more complex problem than brain image registration because the heart is a nonrigid moving organ inside a moving body. Moreover, compared to the brain, the heart exhibits far fewer accurate anatomical landmarks. In a clinical context, physicians often mentally integrate image information from different modalities. Automatic registration, based on computer programs, might, however, offer better accuracy and repeatability and save time.

19.
F-information measures in medical image registration
A measure for registration of medical images that currently draws much attention is mutual information. The measure originates from information theory, but has been shown to be successful for image registration as well. Information theory, however, offers many more measures that may be suitable for image registration. These all measure the divergence of the joint distribution of the images' grey values from the joint distribution that would have been found had the images been completely independent. This paper compares the performance of mutual information as a registration measure with that of other F-information measures. The measures are applied to rigid registration of positron emission tomography (PET)/magnetic resonance (MR) and MR/computed tomography (CT) images, for 35 and 41 image pairs, respectively. An accurate gold standard transformation is available for the images, based on implanted markers. The registration performance, robustness and accuracy of the measures are studied. Some of the measures are shown to perform poorly on all aspects. The majority of measures produces results similar to those of mutual information. An important finding, however, is that several measures, although slightly more difficult to optimize, can potentially yield significantly more accurate results than mutual information.
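Mutual information itself is computed from the joint grey-value histogram of the two images; registration then maximizes it over candidate transformations. A minimal sketch of the measure (Python/NumPy, with an arbitrary bin count) is given below; the other f-information measures studied in the paper are different functionals of the same joint and marginal distributions.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Mutual information of two aligned (or trial-aligned) images from
        their joint grey-value histogram.  In registration, this value is
        evaluated for candidate transformations of one image and maximized."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)                  # marginal of img_a
        py = pxy.sum(axis=0, keepdims=True)                  # marginal of img_b
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))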

20.
This paper presents a new class of algorithms for penalized-likelihood reconstruction of attenuation maps from low-count transmission scans. We derive the algorithms by applying to the transmission log-likelihood a version of the convexity technique developed by De Pierro for emission tomography. The new class includes the single-coordinate ascent (SCA) algorithm and Lange's convex algorithm for transmission tomography as special cases. The new grouped-coordinate ascent (GCA) algorithms in the class overcome several limitations associated with previous algorithms: (1) fewer exponentiations are required than in the transmission maximum likelihood-expectation maximization (ML-EM) algorithm or in the SCA algorithm; (2) the algorithms intrinsically accommodate nonnegativity constraints, unlike many gradient-based methods; and (3) the algorithms are easily parallelizable, unlike the SCA algorithm and perhaps line-search algorithms. We show that the GCA algorithms converge faster than the SCA algorithm, even on conventional workstations. An example from a low-count positron emission tomography (PET) transmission scan illustrates the method.
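To make the objective concrete, the sketch below (Python/NumPy) writes down the penalized transmission log-likelihood for y_i ~ Poisson(b_i exp(-[A mu]_i)) with a quadratic roughness penalty and maximizes it by plain projected gradient ascent with hypothetical step and penalty parameters. This only illustrates the objective function; the paper's grouped-coordinate-ascent updates are a different, much faster optimizer.

    import numpy as np

    def transmission_pl(A, y, b, beta=0.1, n_iter=200, step=1e-4):
        """Penalized-likelihood attenuation-map estimation for transmission data
        y_i ~ Poisson(b_i * exp(-[A mu]_i)), by projected gradient ascent on the
        penalized log-likelihood with a quadratic roughness penalty on
        neighboring pixels (1-D neighbors here for brevity)."""
        mu = np.zeros(A.shape[1])
        for _ in range(n_iter):
            ybar = b * np.exp(-(A @ mu))            # predicted mean counts
            grad = A.T @ (ybar - y)                 # gradient of log-likelihood
            d = np.diff(mu)                         # quadratic roughness penalty
            grad[:-1] += 2.0 * beta * d
            grad[1:] -= 2.0 * beta * d
            mu = np.maximum(mu + step * grad, 0.0)  # nonnegativity constraint
        return mu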
