Similar Documents
20 similar documents found.
1.
Multipinhole single photon emission computed tomography (SPECT) imaging has several advantages over single pinhole SPECT imaging, including increased sensitivity and improved sampling. However, the quest for a good design is challenging due to the large number of design parameters. This paper examines the effect of one of these parameters, the amount of overlap in the projection images, on reconstruction image quality. The evaluation of quality is based on efficient approximations for the linearized local impulse response and the covariance in a voxel, and on the bias of the reconstruction of the noiseless projection data. Two methods are proposed that remove the overlap in the projection image by blocking certain projection rays with extra shielding between the pinhole plate and the detector. Two measures to quantify the amount of overlap are also suggested. First, the approximate method, which predicts the contrast-to-noise ratio (CNR), is validated using postsmoothed maximum likelihood expectation maximization (MLEM) reconstructions with an imposed target resolution. Second, designs with different amounts of overlap are evaluated to study the effect of multiplexing. In addition, the CNR of each pinhole design is compared with that of the same design with overlap removed. Third, the results are interpreted with the overlap quantification measures. Fourth, the two proposed overlap removal methods are compared. From the results we can conclude that, once the complete detector area has been used, the extra sensitivity due to multiplexing can only compensate for the loss of information, not improve the CNR. Removing the overlap, however, improves the CNR. The gain is most prominent in the central field of view, though often at the cost of the CNR of some voxels at the edges, since after overlap removal very little information is left for their reconstruction. The reconstruction images provide insight into the multiplexing and truncation artifacts.

2.
3.
In single photon emission computed tomography images, the differences between brains of different subjects require normalisation of the images with respect to a reference template. The general affine model with 12 parameters is usually chosen as a first normalisation procedure. Typically, the Levenberg-Marquardt method or, more often, the Gauss-Newton method is used to optimise a cost function that reaches an extremum when the image matches the template. In the work reported here, these optimisation algorithms are compared with two alternative versions of the Gauss-Newton method. Both proposed alternatives include an additional parameter that allows adaptive adjustment of the step length along the descent direction. Experimental and simulated results show that the inclusion of this parameter improves the convergence rate considerably.
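The adaptive step-length idea can be sketched with a damped Gauss-Newton iteration on a toy one-dimensional curve-fitting problem; the exponential model, data, and the name `lam` below are illustrative assumptions, not the paper's 12-parameter affine registration. The step length is halved until the cost decreases, which is the kind of adaptive control the compared variants add.

```python
import numpy as np

# Toy least-squares problem: fit f(t; a, b) = a * exp(b * t) to data.
def residual(p, t, y):
    return y - p[0] * np.exp(p[1] * t)

def jacobian(p, t):
    # Jacobian of the residual with respect to (a, b).
    J = np.empty((t.size, 2))
    J[:, 0] = -np.exp(p[1] * t)
    J[:, 1] = -p[0] * t * np.exp(p[1] * t)
    return J

t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t)          # noiseless data, true params (2, -1.5)

p = np.array([1.0, 0.0])            # initial guess
for _ in range(50):
    r = residual(p, t, y)
    J = jacobian(p, t)
    step = np.linalg.solve(J.T @ J, -J.T @ r)   # plain Gauss-Newton step
    lam = 1.0
    # Adaptive step length: shrink lam until the cost actually decreases.
    while (np.sum(residual(p + lam * step, t, y) ** 2) > np.sum(r ** 2)
           and lam > 1e-8):
        lam *= 0.5
    p = p + lam * step
```

With noiseless data the iteration recovers the true parameters; without the `lam` safeguard, a plain Gauss-Newton step can overshoot and increase the cost on poorly conditioned problems.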

4.
Reports on a new method in which spatially correlated magnetic resonance (MR) or X-ray computed tomography (CT) images are employed as a source of prior information in the Bayesian reconstruction of positron emission tomography (PET) images. This new method incorporates the correlated structural images as anatomic templates, which can be used to extract information about boundaries that separate regions exhibiting different tissue characteristics. To avoid introducing artifacts caused by discrepancies between functional and anatomic boundaries, the authors propose a new method called the "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction. This modified scheme is based on the joint probability of structural and functional boundaries. Of the structural information provided by CT or MR images, only boundaries that have high joint probability with the corresponding PET data are used, whereas boundary information not supported by the PET image is suppressed. The new method has been validated by computer simulation and phantom studies. The results of these validation studies indicate that the new method offers significant improvements in image quality when compared to other reconstruction algorithms, including the filtered backprojection method, the maximum likelihood approach, and the Bayesian method without the prior boundary information.

5.
It is shown that the noise level in EPI (echo planar imaging) can increase very significantly if the final image is deconvolved to remove ghost artifacts. This noise amplification is caused by divisions by small numbers, which scale the noise variance up. The scaling factor is a function of the frequency variable in the uniform sampling direction, and is small for low-frequency components and large for high-frequency components. A window function is proposed that provides an inverse scaling of the data so that the noise scaling factor is canceled exactly. Since the signal energy is concentrated in the low-frequency range, it is not reduced significantly when the windowing is applied. The window function is compared to the commonly used Hamming window and is shown to have a good frequency response. The algorithm has been tested with computer simulations and verified to raise the signal-to-noise ratio in the final image by 50%.
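The cancellation mechanism can be illustrated with a minimal 1-D simulation. The transfer function `H`, the signal spectrum, and the noise level below are all made-up assumptions (the actual EPI scaling factor comes from the ghost-correction deconvolution), but they reproduce the effect: dividing by small high-frequency values amplifies noise, and an inverse-scaling window cancels that gain while barely touching the low-frequency signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
k = np.arange(N)

# Hypothetical deconvolution transfer function: near 1 at low spatial
# frequencies, small at high ones, so division by H boosts high-frequency
# noise variance.
H = 0.2 + 0.8 * np.cos(np.pi * k / (2 * N)) ** 2

spectrum = np.exp(-(k ** 2) / (2 * 20.0 ** 2))   # low-frequency signal
noise = 0.01 * rng.standard_normal(N)
deconvolved = (spectrum * H + noise) / H          # noise gain is 1/H

# Inverse-scaling window: multiply by H so the 1/H noise gain cancels
# exactly; the signal sits where H is close to 1, so it is barely reduced.
windowed = deconvolved * H

def snr_db(x):
    err = x - spectrum
    return 10 * np.log10(np.sum(spectrum ** 2) / np.sum(err ** 2))
```

In this toy setup the windowed result has a higher SNR than the raw deconvolution, mirroring the abstract's claim in spirit (the quoted 50% figure is the paper's measured result, not reproduced here).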

6.
Resolution and covariance predictors have been derived previously for penalized-likelihood estimators. These predictors can provide accurate approximations to the local resolution properties and covariance functions of tomographic systems, given a good estimate of the mean measurements. Although these predictors may be evaluated iteratively, circulant approximations are often made to achieve practical computation times. However, when numerous evaluations are made repeatedly (as in penalty design or calculation of variance images), these predictors still require large amounts of computing time. In Stayman and Fessler (2000), we discussed methods for precomputing a large portion of the predictor for shift-invariant system geometries. In this paper, we generalize the efficient procedure discussed in Stayman and Fessler (2000) to shift-variant single photon emission computed tomography (SPECT) systems. This generalization relies on a new attenuation approximation and several observations on the symmetries in SPECT systems. These new general procedures apply to both two-dimensional and fully three-dimensional (3-D) SPECT models, which may be either precomputed and stored or written in procedural form. We demonstrate the high accuracy of the predictions based on these methods using a simulated anthropomorphic phantom and a fully 3-D SPECT system. The evaluation of these predictors requires significantly less computation time than traditional prediction techniques, once the system-geometry-specific precomputations have been made.

7.
Attenuation compensation for cone beam single-photon emission computed tomography (SPECT) imaging is performed by cone beam maximum likelihood reconstruction with attenuation included in the transition matrix. Since the transition matrix is too large to be stored in conventional computers, the E-M maximum likelihood estimator is implemented with a ray-tracing algorithm that efficiently recalculates each matrix element as needed. The method was applied and tested in both uniform and nonuniform density phantoms. Test projection sets were obtained from Monte Carlo simulations and from experiments using a commercially available cone beam collimator. For representative regions of interest, reconstruction of a uniform sphere is accurate to within 3% throughout, in comparison to a reference image simulated and reconstructed without attenuation. High- and low-activity regions in a uniform-density phantom are reconstructed accurately, except that low-activity regions in a more active background show a small error. This error is explained by the nonnegativity constraints of the E-M estimator and the statistical noise in the image.
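The E-M maximum likelihood update at the heart of such a reconstruction is compact. A minimal sketch follows, with a small random matrix standing in for the ray-traced transition matrix (all dimensions and values are illustrative, not the paper's system); the multiplicative form is what enforces the nonnegativity constraint mentioned above, and each update exactly preserves total counts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: 40 projection bins, 10 voxels. A plays the role of the
# transition matrix that the paper recomputes on the fly by ray tracing.
A = rng.uniform(0.0, 1.0, size=(40, 10))
x_true = rng.uniform(1.0, 5.0, size=10)
y = rng.poisson(A @ x_true)              # Poisson projection data

sens = A.T @ np.ones(A.shape[0])         # sensitivity image (column sums)
x = np.ones(10)                          # uniform nonnegative start
for _ in range(200):
    ybar = A @ x                                     # forward projection
    ratio = y / np.maximum(ybar, 1e-12)              # guard against /0
    x *= (A.T @ ratio) / sens                        # multiplicative update
```

Because the update multiplies a nonnegative image by nonnegative factors, `x` can never go negative, which is exactly why cold regions inside a hot background are biased slightly high, as the abstract notes.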

8.
The 11 papers in this special issue focus on 3-D reconstruction of medical images. Two main classes of reconstruction algorithms are covered in this issue: analytic and iterative. The papers are briefly summarized here.

9.
We present a method of performing fast and accurate three-dimensional (3-D) backprojection using only Fourier transform operations for line-integral data acquired by planar detector arrays in positron emission tomography. This approach is a 3-D extension of the two-dimensional (2-D) linogram technique of Edholm. By using a special choice of parameters to index a line of response (LOR) for a pair of planar detectors, rather than the conventional parameters used to index a LOR for a circular tomograph, all the LORs passing through a point in the field of view (FOV) lie on a 2-D plane in the four-dimensional (4-D) data space. Thus, backprojection of all the LORs passing through a point in the FOV corresponds to integration of a 2-D plane through the 4-D "planogram." The key step is that the integration along a set of parallel 2-D planes through the planogram, that is, backprojection of a plane of points, can be replaced by a 2-D section through the origin of the 4-D Fourier transform of the data. Backprojection can be performed as a sequence of Fourier transform operations, for faster implementation. In addition, we derive the central-section theorem for planogram format data, and also derive a reconstruction filter for both backprojection-filtering and filtered-backprojection reconstruction algorithms. With software-based Fourier transform calculations we provide preliminary comparisons of planogram backprojection to standard 3-D backprojection and demonstrate a reduction in computation time by a factor of approximately 15.
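The 2-D central-section (projection-slice) theorem that this planogram result generalizes can be verified numerically in a few lines: the 1-D FFT of a parallel projection equals the central line of the object's 2-D FFT. The image below is random test data, used only to check the identity.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))

proj = img.sum(axis=0)              # parallel projection along y
line = np.fft.fft2(img)[0, :]       # ky = 0 central line of the 2-D FFT
# np.fft.fft(proj) and line agree exactly (up to rounding), which is the
# central-section theorem in its simplest 2-D form.
```

Replacing integration through data space by a section through its Fourier transform is precisely what lets the backprojection be implemented as a sequence of FFTs.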

10.
The problem of recovering a high-resolution image from a sequence of low-resolution DCT-based compressed observations is considered in this paper. The introduction of compression complicates the recovery problem. We analyze the DCT quantization noise and propose to model it in the spatial domain as a colored Gaussian process. This allows us to estimate the quantization noise at low bit-rates without explicit knowledge of the original image frame, and we propose a method that simultaneously estimates the quantization noise along with the high-resolution data. We also incorporate a nonstationary image prior model to address blocking and ringing artifacts while still preserving edges. To facilitate the simultaneous estimation, we employ a regularization functional to determine the regularization parameter without any prior knowledge of the reconstruction procedure. The smoothing functional to be minimized is then formulated to have a global minimizer, in spite of its nonlinearity, by enforcing convergence and convexity requirements. Experiments illustrate the benefit of the proposed method when compared to traditional high-resolution image reconstruction methods. Quantitative and qualitative comparisons are provided.

11.
To better segment infrared images with the Otsu method on a two-dimensional histogram and to improve noise robustness, an improved method is proposed. First, the inaccuracy of segmentation on the 2-D gray-level/neighborhood-mean histogram is analyzed; a 2-D gray-level/gradient histogram is adopted instead, and the algorithm for computing the neighborhood mean is improved. The threshold function of the Otsu method is then studied: within-class separation information is introduced to improve the threshold function, which is then simplified to reduce computational complexity. Experimental comparisons show that the improved method segments targets better, runs faster, and is more robust to noise.
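For reference, the classical 1-D Otsu criterion that these 2-D variants extend chooses the threshold maximizing between-class variance. A minimal sketch on a synthetic bimodal histogram (peak positions, widths, and weights are illustrative values, not infrared data):

```python
import numpy as np

def otsu_threshold(hist):
    """Classical 1-D Otsu: pick the level maximizing between-class variance."""
    p = hist.astype(float) / hist.sum()
    levels = np.arange(len(hist))
    omega = np.cumsum(p)                  # class-0 probability mass
    mu = np.cumsum(p * levels)            # class-0 first moment
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # mask degenerate splits
    return int(np.argmax(sigma_b))

# Synthetic bimodal histogram: background peak near 40, target peak near 180.
x = np.arange(256)
hist = (1000.0 * np.exp(-(x - 40) ** 2 / 200.0) +
        400.0 * np.exp(-(x - 180) ** 2 / 400.0))
t = otsu_threshold(hist)
```

The 2-D versions discussed above apply the same between-class-variance idea to joint (gray level, gradient) histograms so that noisy pixels, which fall off the histogram's main diagonal, influence the threshold less.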

12.
The ML-EM algorithm for emission tomography reconstruction is unstable due to the ill-posed nature of the problem. Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer, in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model, the weak plate, which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the authors' application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with their weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indicators of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.

13.
For multipinhole single-photon emission computed tomography (SPECT), iterative reconstruction algorithms are preferred over analytical methods because of the often complex multipinhole geometries and the ability of iterative algorithms to compensate for effects such as spatially variant sensitivity and resolution. Ideally, such compensation methods are based on accurate knowledge of the position-dependent point spread functions (PSFs) specifying the response of the detectors to a point source at every position in the instrument. This paper describes a method for model-based generation of complete PSF lookup tables from a limited number of point-source measurements for stationary SPECT systems, and its application to a submillimeter-resolution stationary small-animal SPECT system containing 75 pinholes (U-SPECT-I). The method is based on generalizing, over the entire object to be reconstructed, a small number of properties of point-source responses obtained at a limited number of measurement positions. The full shape of measured point-source responses can be almost completely preserved in the newly created PSF tables. We show that these PSFs can be used to obtain high-resolution SPECT reconstructions: the reconstructed resolutions, judged by rod visibility in a micro-Derenzo phantom, are 0.45 mm with 0.6-mm pinholes and below 0.35 mm with 0.3-mm pinholes. In addition, we show that different approximations, such as truncating the PSF kernel, can significantly reduce reconstruction time while still leading to acceptable reconstructions.

14.
3-D Building Model Reconstruction from a Single High-Resolution SAR Image
A method for building extraction and 3-D reconstruction from a single high-resolution SAR image is proposed. First, the types of electromagnetic scattering produced by buildings in high-resolution SAR images are analyzed, methods for computing the backscatter of the different scattering regions are given, and on this basis a method is presented for simulating SAR images of building feature regions from a 3-D CAD model of the building. Second, a method is given for determining the position and orientation of the building footprint from the building's double-bounce scattering structure, and an iterative matching scheme between simulated and real images, based on differences of distribution density functions, is proposed for building-height inversion: the simulated backscatter coefficients are used to partition the building's scattering regions, the distribution-density-function differences between feature regions are computed, and the test height whose simulated image achieves the maximum match score is taken as the inverted building height. Finally, building extraction and 3-D reconstruction experiments are carried out on two real airborne high-resolution SAR images with different roof types; the results are satisfactory and verify the feasibility and effectiveness of the proposed method.

15.
A genetic algorithm has been applied to line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method treats topographical surface reconstruction as a combinatorial optimization problem. To extend this optimization approach to three-dimensional (3-D) surface topography, this paper considers the use of a string coding in which a 3-D surface topography is represented by a set of vertex coordinates. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data, to capture the fundamental features of the surface being probed by the electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing. Experimental results on SE images are presented.
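The simulated-annealing half of such a hybrid can be sketched on a toy 1-D height profile. The cost combines a data misfit with a roughness penalty, in the spirit of the minimum-roughness criterion; the target profile, penalty weight, and cooling schedule are all made-up illustrative choices.

```python
import math
import random

random.seed(0)

TARGET = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]  # hypothetical "true" heights

def cost(h):
    # Data misfit plus a roughness (smoothness) penalty on adjacent heights.
    misfit = sum((a - b) ** 2 for a, b in zip(h, TARGET))
    rough = sum((h[i + 1] - h[i]) ** 2 for i in range(len(h) - 1))
    return misfit + 0.1 * rough

h = [0.0] * len(TARGET)      # flat initial surface
T = 2.0                      # initial temperature
for _ in range(5000):
    i = random.randrange(len(h))
    trial = h[:]
    trial[i] += random.uniform(-0.5, 0.5)          # perturb one vertex
    dc = cost(trial) - cost(h)
    # Accept improvements always; accept worsenings with probability
    # exp(-dc / T), which lets the search escape local minima early on.
    if dc < 0 or random.random() < math.exp(-dc / T):
        h = trial
    T *= 0.999               # geometric cooling schedule
```

In the paper's hybrid, moves like this perturbation would be interleaved with genetic crossover and selection over a population of candidate surfaces; the sketch shows only the annealing acceptance rule.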

16.
To ensure recognition accuracy, an improved iris image quality assessment algorithm is proposed. A coarse assessment based on overall image clarity and visibility quickly and effectively rejects low-quality images, and a fine assessment of iris texture clarity and visibility quantifies the evaluation metrics. Experimental results show that the method judges iris image quality accurately and improves the efficiency of the system; its results agree with subjective human assessment, indicating practical value.

17.
The authors demonstrate the feasibility of dual-frequency subtraction imaging, an approach for suppressing artifacts produced by reverberation of strong echoes among specular reflectors. The method is based on the principle that specularly reflected echoes from flat boundaries are frequency-independent, whereas diffusely scattered echoes from small scatterers are frequency-dependent. The approach was assessed, using a prototype experimental system, on phantoms including one consisting of two parallel plastic plates between layers of foam sponge. Preliminary results show that this method is superior to simple thresholding or signal compression techniques and holds great promise for suppressing reverberation artifacts in ultrasonic images.

18.
The EM algorithm for PET image reconstruction has two major drawbacks that have impeded its routine use: long computation time due to slow convergence, and the large memory required for the image, projection, and probability matrices. An attempt is made to solve these two problems by parallelizing the EM algorithm on multiprocessor systems. An efficient data and task partitioning scheme, called partition-by-box and based on the message passing model, is proposed. The partition-by-box scheme and a modified version of it have been implemented on a message passing system, the Intel iPSC/2, and a shared memory system, the BBN Butterfly GP1000. The implementation results show that, with the partition-by-box scheme, a message passing system with a complete-binary-tree interconnection and a fixed connectivity of three at each node can achieve performance similar to that of the hypercube topology, which has a connectivity of log2 N for N processing elements (PEs). It is shown that the EM algorithm can be efficiently parallelized using the (modified) partition-by-box scheme with the message passing model.
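The core of any such partitioning is that each PE holds a block ("box") of the projection data, computes a partial backprojection, and the partials are reduced (summed), which is what maps naturally onto a tree interconnect. A sequential emulation of one EM iteration over data blocks (toy dimensions, random toy system matrix, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(40, 8))         # toy system matrix
y = rng.poisson(A @ np.full(8, 3.0)).astype(float)

def em_update(x, blocks):
    # Each (Ab, yb) block stands in for the data owned by one PE; the
    # += accumulations play the role of the message-passing reduction.
    partial = np.zeros_like(x)
    sens = np.zeros_like(x)
    for Ab, yb in blocks:
        yb_bar = Ab @ x                          # local forward projection
        partial += Ab.T @ (yb / np.maximum(yb_bar, 1e-12))
        sens += Ab.T @ np.ones(len(yb))
    return x * partial / sens                    # standard EM update

# Partition the 40 projection bins into four "boxes" of 10 rows each.
blocks = [(A[i:i + 10], y[i:i + 10]) for i in range(0, 40, 10)]
x = np.ones(8)
for _ in range(100):
    x = em_update(x, blocks)
```

Because the EM update is a sum over projection bins, the partitioned computation is mathematically identical to the unpartitioned one; only communication cost depends on how the boxes and the reduction tree are laid out.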

19.
Synthetic aperture radar technology has matured steadily in recent years, pushing acquired image resolution to the decimeter level. At high resolution, buildings exhibit richer spatial information and more distinct structural features in SAR images. A decomposition model is first proposed to analyze in detail the characteristics of rectangular buildings in high-resolution SAR images; in this model, scattering effects are subdivided according to their contribution sources, so as to resolve the geometric structure and spatial distribution of building image features under different SAR imaging conditions. Then, based on structural priors of building image signatures, a new algorithm for building detection and 3-D reconstruction from a single high-resolution SAR image is proposed, including model-matched extraction of image features and a prior-guided reconstruction process. Finally, building detection and 3-D reconstruction experiments are conducted on real high-resolution SAR images, and the reconstruction results are discussed.

20.
This paper presents an integrated method to identify an object pattern in an image and track its movement over a sequence of images. The sequence of images comes from a single perspective video source capturing data from a precalibrated scene. This information is used to reconstruct the scene in three dimensions (3-D) within a virtual environment where a user can interact with and manipulate the system. The steps performed are the following: i) Identify an object pattern from a two-dimensional perspective video source; the user outlines the region of interest (ROI) in the initial frame, and the procedure builds a refined mask of the dominant object within the ROI using the morphological watershed algorithm. ii) The object pattern is tracked between frames using object matching within the masks provided by the previous and next frames, computing the motion parameters. iii) The identified object pattern is matched against a library of shapes to identify a corresponding 3-D object. iv) A virtual environment is created to reconstruct the scene in 3-D using the 3-D object and the motion parameters. This method can be applied to real-life problems such as traffic management and material flow congestion analysis.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号