20 similar documents found; search time: 0 ms
1.
Dynamic X-ray computed tomography
Bonnet S. Koenig A. Roux S. Hugonnard P. Guillemaud R. Grangeat P. 《Proceedings of the IEEE. Institute of Electrical and Electronics Engineers》2003,91(10):1574-1587
Dynamic computed tomography (CT) imaging aims at reconstructing image sequences in which the dynamic nature of the living human body is of primary interest. The main applications concerned are image-guided interventional procedures, functional studies, and cardiac imaging. The introduction of ultra-fast rotating gantries along with multi-row detectors and, in the near future, area detectors allows huge progress toward the imaging of moving organs with low-contrast resolution. This paper gives an overview of the different concepts used in dynamic CT. A new reconstruction algorithm based on a voxel-specific dynamic evolution compensation is also presented. It provides four-dimensional image sequences with accurate spatio-temporal information, where each frame is reconstructed using a long-scan acquisition mode over several half-turns. At the same time, this technique makes it possible to reduce the dose delivered per rotation while keeping the same signal-to-noise ratio for every frame, using an adaptive motion-compensated temporal averaging. Results are illustrated on simulated data.
2.
3.
Jorgensen SM Eaker DR Vercnocke AJ Ritman EL 《IEEE transactions on medical imaging》2008,27(4):569-575
Variation in computed tomography (CT) image grayscale and spatial geometry due to specimen orientation, magnification, voxel size, differences in X-ray photon energy, and limited field-of-view during the scan was evaluated in repeated micro-CT scans of iliac crest biopsies and test phantoms. Using the micro-CT scanner on beamline X2B at the Brookhaven National Laboratory's National Synchrotron Light Source, 3-D micro-CT images were generated. They consisted of up to 1024 × 2400², 4-μm cubic voxels, each with 16-bit grayscale. We also reconstructed the images at 16-, 32-, and 48-μm voxel resolution. Scan data were reconstructed from the complete profiles using filtered back-projection, and from truncated profiles using profile extension and with a local reconstruction algorithm. Three biopsies and one bonelike test phantom were each rescanned at three different times at annual intervals. For the full-data-set reconstructions, the reproducibility of the estimates of mineral content of bone at mean bone opacity value was ±28.8 mg/cm³, i.e., 2.56%, in a 4-μm cubic voxel at the 95% confidence level. The reproducibility decreased with increased voxel size. The interscan difference in imaged bone volume ranged from 0.86 ± 0.64% at 4-μm voxel resolution to 2.64 ± 2.48% at 48 μm.
4.
Most X-ray CT scanners require a few seconds to produce a single two-dimensional (2-D) image of a cross section of the body. The accuracy of full three-dimensional (3-D) images of the body synthesized from a contiguous set of 2-D images produced by sequential CT scanning of adjacent body slices is limited by 1) slice-to-slice registration (positioning of patient); 2) slice thickness; and 3) motion, both voluntary and involuntary, which occurs during the total time required to scan all slices. Therefore, this method is inadequate for true dynamic 3-D imaging of moving organs like the heart, lungs, and circulation. To circumvent these problems, the Dynamic Spatial Reconstructor (DSR) was designed by the Biodynamics Research Unit at the Mayo Clinic to provide synchronous volume imaging, that is, stop-action (1/100 s), high-repetition-rate (up to 60/s), simultaneous scanning of many parallel thin cross sections (up to 240, each 0.45 mm thick, 0.9 mm apart) spanning the entire anatomic extent of the bodily organ(s) of interest. These capabilities are achieved by using multiple X-ray sources and multiple 2-D fluoroscopic video camera assemblies on a continually rotating gantry. Desired tradeoffs between temporal, spatial, and density resolution can be achieved by retrospective selection and processing of appropriate subsets of the total data recorded during a continuous DSR scan sequence.
5.
The article looks at reconstruction in 2-D and 3-D tomography. We have not dealt in greater detail with some of the issues in reconstruction, such as sampling and aliasing artifacts, finite detector aperture artifacts, and beam-hardening artifacts, since these are beyond the scope of an introductory tutorial. We examine the physical and mathematical concepts of the Radon (1917) transform, and the basic parallel-beam reconstruction algorithms are discussed. We also develop the algorithms for fan-beam CT and discuss the mathematical principles of cone-beam CT.
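As an aside on the technique this tutorial abstract summarizes, parallel-beam filtered back-projection can be sketched in a few lines of NumPy. This is a minimal illustration (plain Ram-Lak ramp filter, nearest-neighbor interpolation, no apodization window), not any particular author's implementation:

```python
import numpy as np

def fbp(sinogram, thetas):
    # Parallel-beam filtered back-projection (sketch).
    # sinogram: (n_angles, n_detectors); thetas: projection angles in radians.
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))  # Ram-Lak ramp filter in frequency space
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, thetas):
        # Detector coordinate of each pixel for this view (nearest neighbor).
        t = X * np.cos(th) + Y * np.sin(th) + n_det / 2
        recon += proj[np.clip(t.astype(int), 0, n_det - 1)]
    return recon * np.pi / (2 * n_ang)
```

Back-projecting the filtered sinogram of a centered point source reproduces a peak at the image center; production implementations use linear interpolation and windowed filters instead of the choices above.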
6.
Extraction of the hepatic vasculature in rats using 3-D micro-CT images
High-resolution micro-computed tomography (CT) scanners now exist for imaging small animals. In particular, such a scanner can generate very large three-dimensional (3-D) digital images of the rat's hepatic vasculature. These images provide data on the overall structure and function of such complex vascular trees. Unfortunately, human operators have extreme difficulty in extracting the extensive vasculature contained in the images. Also, no suitable tree representation exists that permits straightforward structural analysis and information retrieval. This work proposes an automatic procedure for extracting and representing such a vascular tree. The procedure is both computation- and memory-efficient and runs on current PCs. As the results demonstrate, the procedure faithfully follows human-defined measurements and provides far more information than can be defined interactively.
7.
Developments with maximum likelihood X-ray computed tomography
An approach to the maximum-likelihood estimation of attenuation coefficients in transmission tomography is presented as an extension of earlier theoretical work by K. Lange and R. Carson (J. Comput. Assist. Tomography, vol. 8, pp. 306-316, 1984). The reconstruction algorithm is based on the expectation-maximization (EM) algorithm. Several simplifying approximations are introduced that make the maximization step of the algorithm tractable. Computer simulations are presented using noise-free and Poisson-randomized projections. The images obtained with the EM-type method are compared to those reconstructed with the EM method of Lange and Carson and with filtered backprojection. Preliminary results show that there are potential advantages in using maximum-likelihood approaches in situations where a high-contrast object, such as bone, is embedded in low-contrast soft tissue.
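The EM family of reconstruction algorithms referenced in this abstract is easiest to see in its emission-tomography form; the transmission variants of Lange and Carson build on the same expectation-maximization structure. Below is a minimal generic ML-EM sketch for Poisson data, not the approximations developed in this paper:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    # Classic ML-EM update for Poisson data: x <- x * A^T(y / Ax) / (A^T 1).
    # A: (n_rays, n_pixels) system matrix; y: measured counts.
    x = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)      # forward projection Ax
        x *= (A.T @ (y / proj)) / sens       # multiplicative update
    return x
```

Because each update is multiplicative, a nonnegative starting image stays nonnegative throughout, one reason EM-type methods are attractive for attenuation and activity estimates.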
8.
This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
9.
Schaap M Schilham AM Zuiderveld KJ Prokop M Vonken EJ Niessen WJ 《IEEE transactions on medical imaging》2008,27(8):1120-1129
10.
The authors propose a Bayesian approach with maximum-entropy (ME) priors to reconstruct an object from either the Fourier domain data (the Fourier transform of diffracted field measurements) in the case of diffraction tomography, or directly from the original projection data in the case of X-ray tomography. The objective function obtained, composed of a quadratic term resulting from χ² statistics and an entropy term, is minimized using variational techniques and a conjugate-gradient iterative method. The computational cost and practical implementation of the algorithm are discussed. Some simulated results in X-ray and diffraction tomography are given to compare this method to the classical ones.
11.
In this paper, we derive a monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence computed tomography (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown. In XFCT, a sample is irradiated with pencil beams of monochromatic synchrotron radiation that stimulate the emission of fluorescence X-rays from atoms of elements whose K- or L-edges lie below the energy of the stimulating beam. Scanning and rotating the object through the beam allows for acquisition of a tomographic dataset that can be used to reconstruct images of the distribution of the elements in question. XFCT is a stimulated emission tomography modality, and it is thus necessary to correct for attenuation of the incident and fluorescence photons. The attenuation map is, however, generally known only at the stimulating beam energy and not at the energies of the various fluorescence X-rays of interest. We have developed a penalized-likelihood image reconstruction strategy for this problem. The approach alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays. The approach is guaranteed to increase the penalized likelihood at each iteration. Because the joint objective function is not necessarily concave, the approach may drive the solution to a local maximum. To encourage the algorithm to seek out a reasonable local maximum, we include in the objective function a prior that encourages a relationship, based on physical considerations, between the fluorescence attenuation map and the distribution of the element being reconstructed.
12.
Roessl E Brendel B Engel KJ Schlomka JP Thran A Proksa R 《IEEE transactions on medical imaging》2011,30(9):1678-1690
The feasibility of K-edge imaging using energy-resolved, photon-counting transmission measurements in X-ray computed tomography (CT) has been demonstrated by simulations and experiments. The method is based on probing the discontinuities of the attenuation coefficient of heavy elements above and below the K-edge energy by using energy-sensitive, photon-counting X-ray detectors. In this paper, we investigate the dependence of the sensitivity of K-edge imaging on the atomic number Z of the contrast material, on the object diameter D, on the spectral response of the X-ray detector, and on the X-ray tube voltage. We assume a photon-counting detector equipped with six adjustable energy thresholds. Physical effects leading to a degradation of the energy resolution of the detector are taken into account using the concept of a spectral response function R(E,U), for which we assume four different models. As a validation of our analytical considerations, and in order to investigate the influence of elliptically shaped phantoms, we provide CT simulations of an anthropomorphic Forbild-Abdomen phantom containing a gold contrast agent. The dependence on the values of the energy thresholds is taken into account by optimizing the achievable signal-to-noise ratios (SNR) with respect to the threshold values. We find that for a given X-ray spectrum and object size, the SNR in the heavy element's basis-material image peaks for a certain atomic number Z. The dependence of the SNR in the high-Z basis-material image on the object diameter is the natural, exponential decrease, with particularly deteriorating effects in the case where the attenuation from the object itself causes a total signal loss below the K-edge. The influence of the energy response of the detector is very important. We observed that the optimal SNR values obtained with an ideal detector and with a CdTe pixel detector, whose response, showing significant tailing, has been determined at a synchrotron, differ by factors of about two to three. The potentially very important impact of scattered X-ray radiation and pulse pile-up occurring at high photon rates on the sensitivity of the technique is discussed qualitatively.
13.
《Proceedings of the IEEE. Institute of Electrical and Electronics Engineers》1979,67(9):1245-1272
This paper reviews the major developments that have taken place during the last three years in imaging with computed tomography (CT) using X-ray, emission, and ultrasound sources. Space limitations have resulted in some selection of topics by the author. There are four major sections, dealing with algorithms, X-ray CT, emission CT, and ultrasound CT. Since most of the currently used algorithms are of the filtered-backprojection type, we have concentrated on these in the section on algorithms (with emphasis on their implementation aspects). In X-ray CT, an important question raised during the last few years has concerned the parameter measured by a CT scanner, given the fact that the X-rays used in CT scanners are polychromatic and the fact that tissue attenuation coefficients are energy dependent. Answers to this question are reviewed in the section on X-ray CT, where we have also discussed the artifacts caused by the polychromaticity of the X-ray photons. Methods for the removal of these artifacts have also been reviewed. In emission CT, the biggest development of the last three years is the great interest in positron tomography, although space constraints have dictated an essentially introductory treatment, and not all aspects of single-photon and positron tomography have been surveyed. Finally, we have reviewed recent developments in ultrasound CT. We have pointed out that because of the sensitivity of this technique to refraction, it is currently limited to soft tissue structures, with ultrasonic detection of tumors in the female breast a significant application.
14.
We describe a new approach for the inversion of the generalized attenuated Radon transform in X-ray fluorescence computed tomography (XFCT). The approach consists of using the Radon inverse as an approximation to the actual one, followed by an iterative refinement. We also analyze the problem of retrieving the attenuation map directly from the emission data, giving rise to a novel alternating method for the solution. We applied our approach to real and simulated XFCT data and compared its performance to previous inversion algorithms for the problem, showing its main advantages: it produces better images than other analytic methods and is much faster than iterative methods in the discrete setting.
15.
We present a dual-energy (DE) transmission computed tomography (CT) reconstruction method. It is statistically motivated and features nonnegativity constraints in the density domain. A penalized weighted least squares (PWLS) objective function has been chosen to handle the non-Poisson noise added by amorphous silicon (aSi:H) detectors. A Gauss-Seidel algorithm has been used to minimize the objective function. The behavior of the method in terms of bias/standard deviation tradeoff has been compared to that of a DE method that is based on filtered back projection (FBP). The advantages of the DE PWLS method are largest for high noise and/or low flux cases. Qualitative results suggest this as well. Also, the reconstructed images of an object with opaque regions are presented. Possible applications of the method are: attenuation correction for positron emission tomography (PET) images, various quantitative computed tomography (QCT) methods such as bone mineral densitometry (BMD), and the removal of metal streak artifacts.
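The ingredients named in this abstract (a PWLS objective, Gauss-Seidel minimization, nonnegativity) can be sketched generically. The first-difference roughness penalty, the value of `beta`, and the 1-D setting below are assumptions for illustration, not the authors' dual-energy formulation:

```python
import numpy as np

def pwls_gs(A, y, w, beta=1e-4, n_iter=200):
    # Penalized weighted least squares via Gauss-Seidel coordinate descent
    # with a nonnegativity constraint (sketch). Minimizes
    #   0.5*(y - Ax)^T W (y - Ax) + 0.5*beta * x^T R x,  x >= 0,
    # where R = D^T D is a first-difference roughness penalty.
    n_pix = A.shape[1]
    D = np.diff(np.eye(n_pix), axis=0)
    R = D.T @ D
    x = np.zeros(n_pix)
    r = y - A @ x  # residual, kept up to date as coordinates change
    for _ in range(n_iter):
        for j in range(n_pix):
            aj = A[:, j]
            num = aj @ (w * r) - beta * (R[j] @ x)   # negative gradient wrt x_j
            den = aj @ (w * aj) + beta * R[j, j]     # curvature wrt x_j
            xj_new = max(0.0, x[j] + num / den)      # clipped exact coordinate min
            r -= aj * (xj_new - x[j])
            x[j] = xj_new
    return x
```

For a quadratic objective each coordinate update is an exact one-dimensional minimization, so the sweep decreases the objective monotonically; the clipping at zero enforces the density-domain nonnegativity mentioned in the abstract.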
16.
This work provides a comprehensive Monte Carlo study of an X-ray fluorescence computed tomography (XFCT) and K-edge imaging system, including the system design, the influence of various imaging components, and the sensitivity and resolution under various conditions. We modified the widely used EGSnrc/DOSXYZnrc code to simulate XFCT images of two acrylic phantoms loaded with various concentrations of gold nanoparticles and cisplatin for a number of XFCT geometries. In particular, the reconstructed signal as a function of the width of the detector ring, its angular coverage, and its energy resolution was studied. We found that the XFCT imaging sensitivity of the modeled system, consisting of a conventional X-ray tube and a full 2-cm-wide energy-resolving detector ring, was 0.061% and 0.042% for gold nanoparticles and cisplatin, respectively, for a dose of ~10 cGy. The contrast-to-noise ratio (CNR) of XFCT images of the simulated acrylic phantoms was higher than that of transmission K-edge images for contrast concentrations below 0.4%.
17.
General reconstruction theory for multislice X-ray computed tomography with a gantry tilt
This paper discusses image reconstruction with a tilted gantry in multislice computed tomography (CT) with helical (spiral) data acquisition. The reconstruction problem with gantry tilt is shown to be transformable into the problem of reconstructing a virtual object from multislice CT data with no gantry tilt, for which various algorithms exist in the literature. The virtual object is related to the real object by a simple affine transformation that transforms the tilted helical trajectory of the X-ray source into a nontilted helix, and the real object can be computed from the virtual object using one-dimensional interpolation. However, the interpolation may be skipped, since the reconstruction of the virtual object on a Cartesian grid directly provides nondistorted images of the real object on slices parallel to the tilted plane of the gantry. The theory is first presented without any specification of the detector geometry, then applied to the curved detector geometry of third-generation CT scanners, using Katsevich's formula as an example. Results from computer-simulated data of the FORBILD thorax phantom are given in support of the theory.
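One ingredient of the kind of affine mapping this abstract describes can be shown concretely: a shear along the rotation axis removes the tilt-induced wobble of a tilted helical source trajectory, so its axial coordinate again grows linearly with rotation angle (effective pitch h/cos α). The radius, pitch, and tilt values below are arbitrary, and the paper's full transformation also rescales the in-plane coordinates; this sketch only illustrates the shear step:

```python
import numpy as np

# A helix of radius R and axial advance h per radian, tilted by the
# gantry tilt angle alpha about the x-axis (illustrative values).
R, h, alpha = 570.0, 10.0, np.deg2rad(15.0)
theta = np.linspace(0.0, 4.0 * np.pi, 200)
x = R * np.cos(theta)
y = R * np.sin(theta) * np.cos(alpha) - h * theta * np.sin(alpha)
z = R * np.sin(theta) * np.sin(alpha) + h * theta * np.cos(alpha)

# Shearing z by -y*tan(alpha) cancels the oscillatory part exactly:
# z_virtual = h * theta / cos(alpha), the axial motion of a nontilted
# helix with pitch stretched by 1/cos(alpha).
z_virtual = z - y * np.tan(alpha)
```

Working through the algebra, the R·sin(θ)·sin(α) term cancels against tan(α) times the in-plane term, leaving hθ(cos α + sin²α/cos α) = hθ/cos α.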
18.
Salem KA Szymanski-Exner A Lazebnik RS Breen MS Gao J Wilson DL 《IEEE transactions on medical imaging》2002,21(10):1310-1316
Recent advances in drug delivery techniques have necessitated the development of tools for in vivo monitoring of drug distributions. Gamma emission imaging and magnetic resonance imaging suffer from problems of resolution and sensitivity, respectively. We propose that the combination of X-ray CT imaging and image analysis techniques provides an excellent method for the evaluation of the transport of platinum-containing drugs from a localized, controlled-release source. We correlated local carboplatin concentration with CT intensity, producing a linear relationship with a sensitivity of 62.6 μg/mL per Hounsfield unit. As an example application, we evaluated the differences in drug transport properties between normal and ablated rabbit liver from implanted polymer millirods. The use of three-dimensional visualization provided a method of evaluating the placement of the drug delivery device in relation to the surrounding anatomy, and registration and reformatting allowed the accurate comparison of the sequence of temporal CT volumes acquired over a period of 24 h. Taking averages over radial lines extending away from the center of the implanted millirods and integrating over clinically appropriate regions yielded information about drug release from the millirod and transport in biological tissues. Comparing implants in normal and ablated tissues, we found that ablation prior to millirod implantation greatly decreased the loss of drug from the immediate area, resulting in a higher average dose to the surrounding tissue. This work shows that X-ray CT imaging is a useful technique for the in vivo evaluation of the pharmacokinetics of platinated agents.
19.
Reduction of noise-induced streak artifacts in X-ray computed tomography through spline-based penalized-likelihood sinogram smoothing
We present a statistically principled sinogram smoothing approach for X-ray computed tomography (CT) with the intent of reducing noise-induced streak artifacts. These artifacts arise in CT when some subset of the transmission measurements capture relatively few photons because of high attenuation along the measurement lines. Attempts to reduce these artifacts have focused on the use of adaptive filters that strive to tailor the degree of smoothing to the local noise levels in the measurements. While these approaches involve loose consideration of the measurement statistics to determine smoothing levels, they do not explicitly model the statistical distributions of the measurement data. In this paper, we present an explicitly statistical approach to sinogram smoothing in the presence of photon-starved measurements. It is an extension of a nonparametric sinogram smoothing approach using penalized Poisson-likelihood functions that we have previously developed for emission tomography. Because the approach explicitly models the data statistics, it is naturally adaptive: it will smooth more variable measurements more heavily than it does less variable measurements. We find that it significantly reduces streak artifacts and noise levels without compromising image resolution.
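The flavor of penalized Poisson-likelihood sinogram smoothing can be conveyed by a toy one-dimensional version: maximize the Poisson log-likelihood of one detector row plus a quadratic roughness penalty, using coordinate-wise Newton steps. The penalty weight, the positivity floor, and the reflecting boundary handling are assumptions for this sketch; the paper's spline-based formulation is richer:

```python
import numpy as np

def smooth_row(y, beta=1.0, n_iter=100):
    # Maximize sum_i [y_i*log(l_i) - l_i] - beta*sum_i (l_i - l_{i+1})^2
    # over l > 0 by coordinate-wise Newton steps in Gauss-Seidel order.
    lam = np.maximum(y.astype(float), 0.1)  # strictly positive initialization
    n = len(lam)
    for _ in range(n_iter):
        for i in range(n):
            left = lam[i - 1] if i > 0 else lam[i]       # reflect at edges
            right = lam[i + 1] if i < n - 1 else lam[i]
            grad = y[i] / lam[i] - 1.0 - 2.0 * beta * (2.0 * lam[i] - left - right)
            hess = -y[i] / lam[i] ** 2 - 4.0 * beta      # concave: always < 0
            lam[i] = max(0.1, lam[i] - grad / hess)      # Newton step, floored
    return lam
```

Note how the adaptivity the abstract describes falls out of the model: the data term's curvature scales with the counts, so photon-starved bins are dominated by the penalty and smoothed most heavily, while high-count bins stay close to their measurements.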
20.
To address the complexity and poor intuitiveness of current MEMS design, this work proposes a feasible scheme for integrated, visualized process design, studies its key technologies, and, based on the proposed scheme, investigates implementation techniques for the integration and visualization of process design. First, MEMS surface micromachining processes are analyzed in detail, and a unified model of the surface process flow is established using a process-oriented approach. Based on this model, integration techniques for MEMS process design are studied, the integration of the various kinds of information involved in process design is realized, and an integrated software environment for process design is developed. Finally, algorithms for 3-D reconstruction from the 2-D layouts used in process design are studied in detail, and 3-D visualization of the process flow is implemented in the SolidWorks environment through development against the SolidWorks API.