Similar Documents
20 similar documents found (search time: 15 ms)
1.
We present an analytical scatter correction, based upon the Klein-Nishina formula, for singles-mode transmission data in positron emission tomography (PET) and its implementation as part of an iterative image reconstruction algorithm. We compared our analytically calculated scatter sinogram data with previously validated simulation data for a small animal PET scanner with 68Ge (a positron emitter) and 57Co (an approximately 122-keV photon emitter) transmission sources using four different phantom configurations (three uniform water cylinders with radii of 25, 30, and 45 mm and a nonuniform phantom consisting of water, Teflon, and air). Our scatter calculation correctly predicts the contribution from single-scattered (one incoherent scatter interaction) photons to the simulated sinogram data and provides good agreement for the percent scatter fraction (SF) per sinogram for all phantoms and both transmission sources. We then applied our scatter correction as part of an iterative reconstruction algorithm for PET transmission data, for simulated and experimental data using uniform and nonuniform phantoms. For both simulated and experimental data, the reconstructed linear attenuation coefficients (mu-values) agreed with expected values to within 4% when scatter corrections were applied, for both the 68Ge and 57Co transmission sources. We also tested our reconstruction and scatter correction procedure for two experimental rodent studies (a mouse and a rat). For the rodent studies, we found that the average mu-values for soft-tissue regions of interest agreed with expected values to within 4%. Using a 2.2-GHz processor, each scatter correction iteration required between 6 and 27 min of CPU time (without any code optimization), depending on the phantom size and source used.
This extra calculation time does not seem unreasonable considering that, without scatter corrections, errors in the reconstructed mu-values were between 18% and 45%, depending on the phantom size and transmission source used.
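The single-scatter calculation described above rests on the Klein-Nishina differential cross section for incoherent scattering. A direct transcription of that standard formula (the function name and unit choices are ours, not the paper's):

```python
import math

R_E = 2.8179403262e-15  # classical electron radius, meters

def klein_nishina(theta, energy_kev):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
    for a photon of the given energy scattering through angle theta."""
    alpha = energy_kev / 511.0                              # E / (m_e c^2)
    ratio = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))   # E'/E after scatter
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)
```

At theta = 0 the cross section reduces to the classical value R_E**2 regardless of energy, and at 511 keV forward scattering dominates backscattering, which is why the scatter contribution concentrates near the true line of response.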

2.
A collision-free trajectory planning method based on a speed alternation strategy for multijoint manipulators with overlapping working envelopes is proposed. Since the shape of a robot's link is usually approximately rectangular or cylindrical, the proposed method models each link mathematically with quadric primitives such as ellipsoids and spheres. The occurrence of collisions between links can be predicted easily by means of relative coordinate transformations and geometric deformations between those ellipsoids. Furthermore, the collision-trend index, defined by projecting the ellipsoids geometrically onto the Gaussian distribution, plays a significant role in searching for the optimal solution in the proposed collision-avoidance method. Experiments with two Motoman robots from YASKAWA are conducted to demonstrate the performance of the proposed methods.
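A minimal sketch of the kind of coarse collision predicate that quadric-primitive link modeling enables. The bounding-sphere shortcut below is our simplification for illustration; the paper's finer test uses coordinate transformations and geometric deformations between the ellipsoids themselves:

```python
import numpy as np

def may_collide(center1, semi_axes1, center2, semi_axes2):
    """Conservative collision predicate for two links modeled as ellipsoids.
    Each ellipsoid is enclosed in a sphere of radius equal to its largest
    semi-axis; True means the spheres overlap and a finer ellipsoid-level
    test (as in the paper) would be warranted."""
    dist = np.linalg.norm(np.asarray(center1) - np.asarray(center2))
    return dist < max(semi_axes1) + max(semi_axes2)
```

A False result safely prunes the pair from further checking, which is the usual role of such broad-phase tests in trajectory planning loops.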

3.
A model of object shape by nets of medial and boundary primitives is justified as richly capturing multiple aspects of shape while requiring representation space and image analysis work proportional to the number of primitives. Metrics are described that compute an object representation's prior probability of local geometry, by reflecting variabilities in the net's node and link parameter values, and that compute a likelihood function measuring the degree of match of an image to that object representation. A paradigm for image analysis that deforms such a model to optimize the a posteriori probability is described, and this paradigm is shown to be usable as a uniform approach for object definition, object-based registration between images of the same or different imaging modalities, and measurement of shape variation of an abnormal anatomical object compared with a normal one. Examples of applications of these methods in radiotherapy, surgery, and psychiatry are given.

4.
Scatter correction is an important factor in single photon emission computed tomography (SPECT). Many scatter correction techniques, such as multiple-window subtraction and intrinsic modeling with iterative algorithms, have been under study for many years. Previously, we developed an efficient slice-to-slice blurring technique to model attenuation and system geometric response in a projector/backprojector pair, which was used in an ML-EM algorithm to reconstruct SPECT data. This paper proposes a projector/backprojector that models the three-dimensional (3-D) first-order scatter in SPECT, also using an efficient slice-to-slice blurring technique. The scatter response is estimated from a known nonuniform attenuation distribution map. It is assumed that the probability of detection of a first-order scattered photon from a photon that is emitted in a given source voxel and scattered in a given scatter voxel is proportional to the attenuation coefficient value at that voxel. Monte Carlo simulations of point sources and an MCAT torso phantom were used to verify the accuracy of the proposed projector/backprojector model. An experimental Jaszczak torso/cardiac phantom SPECT study was also performed. For a 64 x 64 x 64 image volume, it took 8.7 s to perform each iteration per slice on a Sun ULTRA Enterprise 3000 (167 MHz, 1 Gbyte RAM) computer, when modeling 3-D scatter, attenuation, and system geometric response functions. The main advantage of the proposed method is its easy implementation and the possibility of performing reconstruction in clinically acceptable time.
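The slice-to-slice blurring idea can be sketched as an incremental projector in which each step blurs the accumulated projection before adding the next slice, so slices farther from the detector are blurred more often. This is a 1-D toy version under our own simplifications; the paper's projector works in 3-D and additionally models attenuation and scatter:

```python
import numpy as np

def blur_slice(projection, kernel):
    """Blur the running projection with a small 1-D kernel, modeling one
    increment of distance-dependent detector response."""
    pad = len(kernel) // 2
    padded = np.pad(projection, pad, mode="edge")
    return np.convolve(padded, kernel, mode="valid")  # same-length output

def project(volume, kernel):
    """Incremental projector: iterate slices from far to near, blurring the
    accumulated projection once per slice before adding the next slice."""
    proj = np.zeros(volume.shape[1])
    for s in volume:
        proj = blur_slice(proj, kernel) + s
    return proj
```

With a normalized kernel and activity away from the edges, the total projected counts are conserved, which is one quick sanity check for such a projector.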

5.
A maximum-likelihood (ML) expectation-maximization (EM) algorithm (called EM-IntraSPECT) is presented for simultaneously estimating single photon emission computed tomography (SPECT) emission and attenuation parameters from emission data alone. The algorithm uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. For this initial study, EM-IntraSPECT was tested on computer-simulated attenuation and emission maps representing a simplified human thorax as well as on SPECT data obtained from a physical phantom. Two evaluations were performed. First, to corroborate the idea of reconstructing attenuation parameters from emission data, attenuation parameters (mu) were estimated with the emission intensities (lambda) fixed at their true values. Accurate reconstructions of attenuation parameters were obtained. Second, emission parameters lambda and attenuation parameters mu were simultaneously estimated from the emission data alone. In this case there was crosstalk between estimates of lambda and mu, and final estimates of lambda and mu depended on initial values. Estimates degraded significantly as the support extended out farther from the body, and an explanation for this is proposed. In the EM-IntraSPECT reconstructed attenuation images, the lungs, spine, and soft tissue were readily distinguished and had approximately correct shapes and sizes. As compared with standard EM reconstruction assuming a fixed uniform attenuation map, EM-IntraSPECT provided more uniform estimates of cardiac activity in the physical phantom study and in the simulation study with tight support, but less uniform estimates with a broad support. The new EM algorithm derived here has additional applications, including reconstructing emission and transmission projection data under a unified statistical model.

6.
In this paper, we propose phantom cell analysis for dynamic channel assignment. This is an approximate analysis that can handle realistic planar systems with the three-cell channel-reuse pattern. To find the blocking probability of a particular cell, two phantom cells are used to represent its six neighboring cells. Then, by conditioning on the relative positions of the two phantom cells, the blocking probability of that particular cell can be found. We found that the phantom cell analysis is not only very accurate in predicting the blocking performance, but also very computationally efficient. Moreover, it is applicable to arbitrary traffic and channel-reuse patterns.
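For context, the blocking probability of a single cell with fixed channel assignment follows the classical Erlang-B recursion; the paper's phantom-cell analysis goes beyond this baseline by conditioning on neighbor states to handle dynamic assignment. A sketch of the standard recursion (not the paper's method):

```python
def erlang_b(channels, offered_load):
    """Erlang-B blocking probability via the standard stable recursion:
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1)),
    where a is the offered load in Erlangs and n the number of channels."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

The recursion avoids the overflow-prone factorials of the closed-form expression, which matters when a cell pools many channels.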

7.
This paper presents a critical analysis of the origin of majority and minority carrier substrate currents in tunneling MOS capacitors. For this purpose, a novel, physically based model, which is comprehensive in terms of impact ionization and hot carrier photon emission and re-absorption in the substrate, is presented. The model provides a better quantitative understanding of the relative importance of different physical mechanisms on the origin of substrate currents in tunneling MOS capacitors featuring different oxide thickness. The results indicate that for thick oxides, the majority carrier substrate current is dominated by anode hole injection, while the minority carrier current is consistent with a photon emission-absorption mechanism, at least in the range of oxide voltage and oxide thickness covered by the considered experiments. These two currents appear to be strictly correlated because of the relatively flat ratio between impact ionization and photon emission scattering rates and because of the weak dependence of hole transmission probability on oxide thickness and gate bias. Simulations also suggest that, for thinner oxides and smaller oxide voltage drop, the photon emission mechanism might become dominant in the generation of substrate holes.

8.
Attenuation correction for single-photon emission computed tomography (SPECT) usually assumes a uniform attenuation distribution within the body surface contour. Previous methods to estimate this contour have used thresholding of a reconstructed section image. This method is often very sensitive to the selection of a threshold value, especially for nonuniform activity distributions within the body. We have proposed the "fixed-point Hachimura-Kuwahara filter" to extract contour primitives from SPECT images. The Hachimura-Kuwahara filter, which preserves edges but smoothes nonedge regions, is applied repeatedly to identify the invariant set-the fixed-point image-which is unchanged by this nonlinear, two-dimensional filtering operation. This image usually becomes a piecewise constant array. In order to detect the contour, the tracing algorithm based on the minimum distance connection criterion is applied to the extracted contour primitives. This procedure does not require choice of a threshold value in determining the contour. SPECT data from a water-filled elliptical phantom containing three sources was obtained and scattered projections were reconstructed. The automatic edge detection procedure was applied to the scattered window reconstruction, resulting in a reasonable outline of the phantom.
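A sketch of one pass of a Kuwahara-type edge-preserving filter, the family the Hachimura-Kuwahara filter belongs to. The window size and looping implementation here are our own illustrative choices; the paper iterates its variant until a fixed-point image is reached:

```python
import numpy as np

def kuwahara_step(img, w=2):
    """One pass of a Kuwahara-type filter: each interior pixel takes the
    mean of whichever of its four overlapping (w+1)x(w+1) corner
    neighborhoods has the smallest variance. Flat regions are smoothed
    while edges are preserved; border pixels are left unchanged."""
    out = img.astype(float).copy()
    h, ww = img.shape
    for y in range(w, h - w):
        for x in range(w, ww - w):
            regions = [img[y - w:y + 1, x - w:x + 1], img[y - w:y + 1, x:x + w + 1],
                       img[y:y + w + 1, x - w:x + 1], img[y:y + w + 1, x:x + w + 1]]
            out[y, x] = min(regions, key=lambda r: r.var()).mean()
    return out
```

On an ideal step edge the filter is already at a fixed point: for every pixel, one corner neighborhood lies entirely on its own side of the edge and has zero variance, so the pixel value is reproduced exactly, which is the property the contour-primitive extraction exploits.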

9.
10.
The quality and quantitative accuracy of iteratively reconstructed SPECT images improves when better point spread function (PSF) models of the gamma camera are used during reconstruction. Here, inclusion in the PSF model of photon crosstalk between different slices caused by limited gamma camera resolution and scatter is examined. A three-dimensional (3-D) projector back-projector (proback) has been developed which models both the distance dependent detector point spread function and the object shape-dependent scatter point spread function of single photon emission computed tomography (SPECT). A table occupying only a few megabytes of memory is sufficient to represent this scatter model. The contents of this table are obtained by evaluating an analytical expression for object shape-dependent scatter. The proposed approach avoids the huge memory requirements of storing the full transition matrix needed for 3-D reconstruction including object shape-dependent scatter. In addition, the method avoids the need for lengthy Monte Carlo simulations to generate such a matrix. In order to assess the quantitative accuracy of the method, reconstructions of a water filled cylinder containing regions of different activity levels and of simulated 3-D brain projection data have been evaluated for technetium-99m. It is shown that fully 3-D reconstruction including complete detector response and object shape-dependent scatter modeling clearly outperforms simpler methods that lack a complete detector response and/or a complete scatter response model. Fully 3-D scatter correction yields the best quantitation of volumes of interest and the best contrast-to-noise curves.

11.
The authors present the fusion of anatomical data as a method for improving reconstruction in single photon emission computed tomography (SPECT). Anatomical data is used to deduce a parameterized model of organs in a reconstructed slice using spline curves. This model allows the authors to define the imaging process, i.e., the direct problem, more adequately, and furthermore to restrict the reconstruction to the emitting zones. Instead of the usual square pixels, the authors use a new kind of discretization pixel, which fits the contour of the region of interest. In the reconstruction phase, the authors estimate the activity in the emitting zones and also the optimum parameters of their model. Concentrating on the left ventricular (LV) wall activity, the simulation and phantom results show an accurate estimation of both the myocardial shape and the radioactive emission.

12.
Methods of quantitative emission computed tomography require compensation for linear photon attenuation. A current trend in single-photon emission computed tomography (SPECT) and positron emission tomography (PET) is to employ transmission scanning to reconstruct the attenuation map. Such an approach, however, considerably complicates both the scanner design and the data acquisition protocol. A dramatic simplification could be made if the attenuation map could be obtained directly from the emission projections, without the use of a transmission scan. This can be done by applying the consistency conditions that enable us to identify the operator of the problem and, thus, to reconstruct the attenuation map. In this paper, we propose a new approach based on the discrete consistency conditions. One of the main advantages of the suggested method over previously used continuous conditions is that it can easily be applied in various scanning configurations, including fully three-dimensional (3-D) data acquisition protocols. Also, it provides a stable numerical implementation, allowing us to avoid the crosstalk between the attenuation map and the source function. A computationally efficient algorithm is implemented by using the QR and Cholesky decompositions. Application of the algorithm to computer-generated and experimentally measured SPECT data is considered.
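The QR and Cholesky decompositions mentioned are the two standard routes for the least-squares subproblems such an algorithm solves. A generic illustration of both (this is textbook linear algebra, not the paper's actual consistency-condition operator):

```python
import numpy as np

def solve_ls_qr(A, b):
    """Least-squares solution of min ||Ax - b|| via the QR decomposition,
    the numerically stable route for ill-conditioned systems."""
    Q, R = np.linalg.qr(A)          # reduced QR: A = Q R, R upper triangular
    return np.linalg.solve(R, Q.T @ b)

def solve_ls_cholesky(A, b):
    """Same problem via Cholesky factorization of the normal equations
    A^T A x = A^T b; cheaper, but squares the condition number."""
    L = np.linalg.cholesky(A.T @ A)  # A^T A = L L^T
    y = np.linalg.solve(L, A.T @ b)
    return np.linalg.solve(L.T, y)
```

Both return the same solution for well-conditioned, full-column-rank systems; the QR route is usually preferred when stability matters, which is consistent with the abstract's emphasis on stable numerical implementation.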

13.
An evaluation of maximum likelihood reconstruction for SPECT
A reconstruction method for SPECT (single photon emission computerized tomography) that uses the maximum likelihood (ML) criterion and an iterative expectation-maximization (EM) algorithm solution is examined. The method is based on a model that incorporates the physical effects of photon statistics, nonuniform photon attenuation, and a camera-dependent point-spread response function. Reconstructions from simulation experiments are presented which illustrate the ability of the ML algorithm to correct for attenuation and point-spread. Standard filtered backprojection method reconstructions, using experimental and simulated data, are included for reference. Three studies were designed to focus on the effects of noise and point-spread, on the effect of nonuniform attenuation, and on the combined effects of all three. The last study uses a chest phantom and simulates Tl-201 imaging of the myocardium. A quantitative analysis of the reconstructed images is used to support the conclusion that the ML algorithm produces reconstructions that exhibit improved signal-to-noise ratios, improved image resolution, and image quantifiability.
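The ML-EM update at the core of such reconstructions can be sketched in its generic emission-tomography form. In the paper's setting, attenuation and the camera point-spread response are folded into the system matrix; the toy system below is ours:

```python
import numpy as np

def ml_em(A, counts, iterations=500):
    """Generic ML-EM iteration for Poisson emission data:
        x <- x / (A^T 1) * A^T ( counts / (A x) )
    A maps voxel activities to expected projection counts; the update
    preserves nonnegativity and monotonically increases the likelihood."""
    x = np.ones(A.shape[1])          # flat nonnegative initial estimate
    sensitivity = A.sum(axis=0)      # A^T 1, the per-voxel sensitivity
    for _ in range(iterations):
        forward = A @ x              # current expected projections
        x = x / sensitivity * (A.T @ (counts / forward))
    return x
```

For noise-free, consistent data the iteration converges to the exact activity, as the quick check below with a 3-projection, 2-voxel system shows.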

14.
In order to perform attenuation correction in emission tomography, an attenuation map is required. We propose a new method to compute this map directly from the emission sinogram, eliminating the transmission scan from the acquisition protocol. The problem is formulated as an optimization task where the objective function is a combination of the likelihood and an a priori probability. The latter uses a Gibbs prior distribution to encourage local smoothness and a multimodal distribution for the attenuation coefficients. Since the attenuation process is different in positron emission tomography (PET) and single photon emission computed tomography (SPECT), a separate algorithm for each case is derived. The method has been tested on mathematical phantoms and on a few clinical studies. For PET, good agreement was found between the images obtained with transmission measurements and those produced by the new algorithm in an abdominal study. For SPECT, promising simulation results have been obtained for nonhomogeneous attenuation due to the presence of the lungs.
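A hedged sketch of the local-smoothness term such a Gibbs prior typically contributes to the objective. The quadratic 4-neighbor form below is our assumption for illustration; the paper's prior additionally includes a multimodal term on the attenuation coefficient values, which is not reproduced here:

```python
import numpy as np

def gibbs_smoothness(mu, beta=1.0):
    """Quadratic Gibbs smoothness energy over 4-connected neighbor pairs of
    a 2-D attenuation map mu; subtracted (weighted by beta) from the
    log-likelihood in a MAP objective to penalize rough maps."""
    dx = np.diff(mu, axis=0)         # vertical neighbor differences
    dy = np.diff(mu, axis=1)         # horizontal neighbor differences
    return beta * (np.sum(dx**2) + np.sum(dy**2))
```

A perfectly uniform map incurs zero penalty, so the prior only acts where the map varies, nudging the estimate toward piecewise-smooth attenuation.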

15.
A key limitation for achieving deep imaging in biological structures lies in photon absorption and scattering leading to attenuation of fluorescence. In particular, neurotransmitter imaging is challenging in the biologically relevant context of the intact brain, for which photons must traverse the cranium, skin, and bone. Thus, fluorescence imaging is limited to the surface cortical layers of the brain, only achievable with craniotomy. Herein, this study describes optimal excitation and emission wavelengths for through-cranium imaging, and demonstrates that near-infrared emissive nanosensors can be photoexcited using a two-photon 1560 nm excitation source. Dopamine-sensitive nanosensors can undergo two-photon excitation, and provide chirality-dependent responses selective for dopamine with fluorescent turn-on responses varying between 20% and 350%. The two-photon absorption cross-section and quantum yield of dopamine nanosensors are further calculated, and a two-photon power law relationship for the nanosensor excitation process is confirmed. Finally, the improved image quality of the nanosensors embedded 2-mm-deep into a brain-mimetic tissue phantom is shown, whereby one-photon excitation yields 42% scattering, in contrast to 4% scattering when the same object is imaged under two-photon excitation. The approach overcomes traditional limitations in deep-tissue fluorescence microscopy, and can enable neurotransmitter imaging in the biologically relevant milieu of the intact and living brain.
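The two-photon power law mentioned above is conventionally confirmed by fitting the log-log slope of fluorescence signal versus excitation power and checking that it is close to 2. A minimal sketch with synthetic data (the numbers are illustrative, not from the study):

```python
import numpy as np

def fitted_exponent(power, signal):
    """Slope of log(signal) versus log(power); a value near 2 indicates a
    quadratic dependence characteristic of two-photon excitation."""
    slope, _intercept = np.polyfit(np.log(power), np.log(signal), 1)
    return slope

# Synthetic quadratic response, as expected for an ideal two-photon process.
powers = np.array([1.0, 2.0, 4.0, 8.0])
signal = 0.5 * powers**2
```

For real data the fitted exponent drifts below 2 at high power when saturation or photobleaching sets in, which is why the fit is done in the low-power regime.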

16.
SuperNEC: antenna and indoor-propagation simulation program
SuperNEC is a hybrid MoM-UTD antenna and electromagnetic simulation program, developed by Poynting Software (Pty) Ltd. The UTD primitives available in the code are dielectrically coated, multi-faceted plates and elliptical cylinders. The MoM primitives supported are wire segments. The program is capable of running in parallel on a heterogeneous network of processors. A Matlab-based, interactive graphical user interface is used to define the geometry to be simulated, as well as to view the simulation results. The program has been extensively verified using a multitude of test cases, which include comparison to published results and measurements.

17.
Accurate quantitation of small lesions with positron emission tomography (PET) requires correction for the partial volume effect. Traditional methods that use Gaussian models of the PET system were found to be insufficient. A new approach that models the non-Gaussian object-dependent scatter was developed. The model consists of eight simple functions with a total of 24 parameters. Images of line and disk sources in circular and elliptical cylinders, and an anthropomorphic chest phantom, were used to determine the parameter values. Empirical rules to determine these parameter values for various objects, based on those for a reference object (a 21.5-cm circular cylinder), were also proposed. For seven spheroids and a 3.4-cm cylinder, pixel values predicted by the model were compared with the measured values. The model-to-measurement ratio was 1.03±0.07 near the center of the spheroids and 0.99±0.03 near the center of the 3.4-cm cylinder. In comparison, the standard single Gaussian model had corresponding ratios of 1.27±0.09 and 1.24±0.03, respectively, and the corresponding ratios for a double Gaussian model were 1.13±0.09 and 1.05±0.01. The scatter fraction (28.5%) for a line source in the 21.5-cm cylinder was correctly estimated by our model. The authors found that, because of scatter, errors in the measurement of activity in spheroids with diameters from 0.6 to 3.4 cm were more significant than previously appreciated.

18.
Previously, we developed a method to determine the acquisition geometry of a pinhole camera. This information is needed for correct reconstruction of pinhole single photon emission computed tomography images. The method uses a calibration phantom consisting of three point sources, whose positions in the field of view (FOV) influence the accuracy of the geometry estimate. This paper proposes two particular configurations of point sources, with specific positions and orientations in the FOV, for optimal image reconstruction accuracy. For the proposed calibration setups, inaccuracies of the geometry estimate due to noise in the calibration data cause only subresolution inaccuracies in reconstructed images. The calibration method also uses a model of the point source configuration, which is only known with limited accuracy. The study demonstrates, however, that with the proposed calibration setups, the error in reconstructed images is comparable to the error in the phantom model.

19.
X-ray scanners are considered one of the best technologies for detecting illicit materials because of their ability to characterize a material at the molecular and atomic levels, and also because of their relatively inexpensive cost. Using X-ray technology, it is possible to determine a material's density and effective atomic number, or Zeff-related information. In theory, an illicit material can be identified using those two pieces of information. The R-L technology developed at Virginia Tech is the first true multisensing technology for explosive detection. It uses X-ray dual-energy transmission and X-ray scatter technologies to obtain characteristic values of an object, i.e., R and L. The material type of this object can then be determined using the R-L plane. R is related to Zeff and is obtained from dual-energy transmission signals. L is related to density, and is obtained using transmission and scatter signals. Compared to single-sensing technologies and pseudo-multisensing technologies, R-L technology should provide a much higher level of detection accuracy. However, the R and L values can only be computed from an object's true gray levels, which are defined as the measured gray levels of an object in different sensing modalities when it is not overlapped with any other objects. Because an object in a bag is always overlapped with many other objects, being able to identify the object of interest and remove the overlap effects becomes the key issue in determining the true gray levels of that object. This paper focuses on the development of an image processing system to determine an object's true gray levels in all the sensing modalities used in this work.
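The exact definitions of R and L are specific to the Virginia Tech method and are not reproduced here. Purely as an illustration of the dual-energy principle the abstract relies on, a standard Zeff-sensitive quantity is the ratio of low- to high-energy log-attenuations computed from the measured gray levels:

```python
import numpy as np

def log_attenuation(intensity, intensity0):
    """Beer-Lambert log-attenuation -ln(I/I0) from measured and
    unattenuated (open-beam) intensities."""
    return -np.log(intensity / intensity0)

def dual_energy_ratio(i_low, i_high, i0=1.0):
    """Ratio of low- to high-energy log-attenuation. Like the paper's R,
    this tracks the effective atomic number, since low-energy attenuation
    rises faster with Zeff than high-energy attenuation; the actual R
    definition is the paper's own and is not given here."""
    return log_attenuation(i_low, i0) / log_attenuation(i_high, i0)
```

This also makes concrete why true (non-overlapped) gray levels matter: any overlapping object changes both intensities and corrupts the ratio.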

20.
The theory of stochastic processes as applied to photon emission and absorption events is used to calculate the distribution of delay in switch-on from a sub-threshold condition in directly modulated semiconductor lasers, down to a probability of 10^-10. This involves the derivation of the relative probability distribution of photon number in the laser late enough in the switch-on process that deterministic relations can be applied thereafter. This distribution, assumed constant in some treatments, is found to change only a little from its initial form, which is a negative binomial. From this one deduces a delay distribution whose width is proportional to the period of the switch-on transient, relatively independent of the precise starting point, but which can be narrowed by injection of additional spontaneous emission. Experiment satisfactorily supports the theory.
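The two-stage picture in the abstract (a stochastic seed photon number, then deterministic gain) can be sketched numerically. Every quantity below, including the threshold, the exponential-gain delay model, and the parameter values, is an illustrative assumption, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def switchon_delays(r, p, gain_time, samples=100_000):
    """Monte Carlo sketch of switch-on delay: draw the initial photon
    number n from a negative binomial (the seed distribution named in the
    abstract), then map it through a deterministic exponential-gain phase,
    delay = gain_time * ln(threshold / n), to reach a fixed threshold."""
    n = rng.negative_binomial(r, p, samples) + 1   # +1 avoids log of zero
    threshold = 1e6                                # illustrative photon count
    return gain_time * np.log(threshold / n)
```

Because the delay depends on the seed only through ln(n), the spread of the delay distribution scales with gain_time (the transient period) rather than with the threshold, echoing the abstract's width result.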


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号