Similar articles
20 similar articles found (search time: 31 ms)
1.
A Bayesian method is presented for simultaneously segmenting and reconstructing emission computed tomography (ECT) images and for incorporating high-resolution, anatomical information into those reconstructions. The anatomical information is often available from other imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI). The Bayesian procedure models the ECT radiopharmaceutical distribution as consisting of regions, such that radiopharmaceutical activity is similar throughout each region. It estimates the number of regions, the mean activity of each region, and the region classification and mean activity of each voxel. Anatomical information is incorporated by assigning higher prior probabilities to ECT segmentations in which each ECT region stays within a single anatomical region. This approach is effective because anatomical tissue type often strongly influences radiopharmaceutical uptake. The Bayesian procedure is evaluated using physically acquired single-photon emission computed tomography (SPECT) projection data and MRI for the three-dimensional (3-D) Hoffman brain phantom. A clinically realistic count level is used. A cold lesion within the brain phantom is created during the SPECT scan but not during the MRI to demonstrate that the estimation procedure can detect ECT structure that is not present anatomically.
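As a toy illustration of the region-based model described above, the sketch below alternates between classifying each pixel into an activity region and re-estimating region means, with a penalty term (`beta`) favoring segmentations in which each region stays within a single anatomical label. All names and parameters are invented for illustration (and the demo is 2D for brevity); this is not the paper's estimator, which also infers the number of regions.

```python
import numpy as np

def segment_activity(img, anat, n_regions=3, beta=0.5, iters=20):
    """Alternate between (1) classifying each pixel into the activity
    region with the closest mean, plus a penalty when the pixel's
    anatomical label differs from the region's dominant one, and
    (2) re-estimating region mean activities."""
    means = np.quantile(img, np.linspace(0.1, 0.9, n_regions))
    labels = np.zeros(img.shape, dtype=int)
    for _ in range(iters):
        # data term: squared distance of each pixel to each region mean
        cost = (img[..., None] - means[None, None, :]) ** 2
        # anatomical term: penalize leaving the region's dominant anatomy
        for r in range(n_regions):
            mask = labels == r
            if mask.any():
                dom = np.bincount(anat[mask]).argmax()
                cost[..., r] += beta * (anat != dom)
        labels = cost.argmin(axis=-1)
        for r in range(n_regions):
            if (labels == r).any():
                means[r] = img[labels == r].mean()
    return labels, means
```

On a synthetic two-region image whose anatomy agrees with the activity, the labeling separates the two halves and the means recover the true activities.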

2.
Reports on a new method in which spatially correlated magnetic resonance (MR) or X-ray computed tomography (CT) images are employed as a source of prior information in the Bayesian reconstruction of positron emission tomography (PET) images. This new method incorporates the correlated structural images as anatomic templates which can be used for extracting information about boundaries that separate regions exhibiting different tissue characteristics. In order to avoid the possible introduction of artifacts caused by discrepancies between functional and anatomic boundaries, the authors propose a new method called the "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction. This modified scheme is based on the joint probability of structural and functional boundaries. Of the structural information provided by CT or MR images, only the boundaries that have high joint probability with the corresponding PET data are used, whereas boundary information that is not supported by the PET image is suppressed. The new method has been validated by computer simulation and phantom studies. The results of these validation studies indicate that this new method offers significant improvements in image quality when compared to other reconstruction algorithms, including the filtered backprojection method and the maximum likelihood approach, as well as the Bayesian method without the use of the prior boundary information.

3.
The multiframe super-resolution (SR) technique aims to obtain a high-resolution (HR) image from a set of observed low-resolution (LR) images. In the reconstruction process, artifacts may be produced by noise, especially when the noise is strong. In order to suppress artifacts while preserving image discontinuities, this paper proposes a multiframe SR method that combines the reconstruction properties of a half-quadratic prior model and a quadratic prior model through a convex combination. Moreover, by analyzing local features of the underlying HR image, the two prior models are combined using an automatically calculated weight function, so that both smooth and discontinuous pixels are handled properly. A variational Bayesian inference (VBI) based algorithm is designed to efficiently and effectively seek the solution of the proposed method. Within the VBI framework, motion parameters and hyperparameters are all determined automatically, leading to an unsupervised SR method. The efficiency of the hybrid prior model is demonstrated theoretically and practically, showing that the proposed SR method can obtain better results from LR images even under strong noise. Extensive experiments on several visual data sets demonstrate the efficacy and superior performance of the proposed algorithm, which not only preserves image details but also suppresses artifacts.
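The convex combination of a quadratic prior and a half-quadratic (here Huber-type) prior, mixed per pixel by a gradient-driven weight, can be sketched as an energy function. The exact weight function and parameters in the paper differ; the Gaussian weight `w = exp(-(|∇x|/T)^2)` used below is only an assumed form.

```python
import numpy as np

def hybrid_prior_energy(x, T=1.0, delta=1.0):
    """Energy of a convex combination of a quadratic prior and a
    Huber-type half-quadratic prior, mixed per pixel by a weight w
    computed from the local gradient magnitude: w ~ 1 in smooth areas
    (quadratic dominates), w ~ 0 at discontinuities (Huber dominates)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    g = np.hypot(gx, gy)
    w = np.exp(-(g / T) ** 2)
    quad = g ** 2                       # quadratic penalty
    a = np.abs(g)
    huber = np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))
    return float(np.sum(w * quad + (1.0 - w) * huber))
```

At a strong step edge the weight shuts off the quadratic term, so the edge is charged the (much smaller) linear Huber cost rather than the squared cost.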

4.
The use of anatomical information to improve the quality of reconstructed images in positron emission tomography (PET) has been extensively studied. A common strategy has been to include spatial smoothing within boundaries defined from the anatomical data. The authors present an alternative method for the incorporation of anatomical information into PET image reconstruction, in which they use segmented magnetic resonance (MR) images to assign tissue composition to PET image pixels. The authors model the image as a sum of activities for each tissue type, weighted by the assigned tissue composition. The reconstruction is performed as a maximum a posteriori (MAP) estimation of the activities of each tissue type. Two prior functions, defined for tissue-type activities, are considered. The algorithm is tested in realistic simulations employing a full physical model of the PET scanner.

5.
Inverse halftoning via MAP estimation (cited 11 times: 0 self-citations, 11 by others)
There has been a tremendous amount of research in the area of image halftoning, where the goal has been to find the most visually accurate representation given a limited palette of gray levels (often just two, black and white). This paper focuses on the inverse problem, that of finding efficient techniques for reconstructing high-quality continuous-tone images from their halftoned versions. The proposed algorithms are based on a maximum a posteriori (MAP) estimation criterion using a Markov random field (MRF) model for the prior image distribution. Image estimates obtained with the proposed model accurately reconstruct both the smooth regions of the image and the discontinuities along image edges. Algorithms are developed and example gray-level reconstructions are presented, generated from both dithered and error-diffused halftone originals. Applications of the technique to the problems of rescreening and the processing of halftone images are also shown.
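A minimal sketch of the MAP-with-MRF-prior idea: gradient descent on a quadratic data term plus a Gaussian MRF smoothness prior. The paper's actual data term enforces consistency with the halftoning process; a simple least-squares fidelity term stands in for it here, so this shows only the generic MAP machinery, not the inverse-halftoning model itself.

```python
import numpy as np

def map_smooth(y, lam=1.0, step=0.05, iters=200):
    """Gradient descent on ||x - y||^2 + lam * sum of squared neighbor
    differences (Gaussian MRF prior, periodic boundaries)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        # gradient of the quadratic MRF prior is the graph Laplacian of x
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        grad = 2 * (x - y) - 2 * lam * lap
        x -= step * grad
    return x
```

Run on a binary checkerboard (a crude stand-in for a halftone), the estimate keeps the mean gray level while strongly attenuating the binary oscillation, which is the qualitative behavior a continuous-tone MAP estimate should have.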

6.
We introduce a probabilistic computer vision technique to track monotonically advancing boundaries of objects within image sequences. Our method incorporates a novel technique for including statistical prior shape information into graph-cut based segmentation, with the aid of a majorization-minimization algorithm. Extension of segmentation from single images to image sequences then follows naturally using sequential Bayesian estimation. Our methodology is applied to two unrelated sets of real biomedical imaging data, and a set of synthetic images. Our results are shown to be superior to manual segmentation.

7.
A 3D reconstruction algorithm for a stereo image pair is proposed for realizing mutual occlusion and interaction between the real and virtual worlds in image synthesis. A two-stage algorithm, consisting of disparity estimation and regularization, is used to locate a smooth and precise disparity vector field. The hierarchical disparity estimation technique increases the efficiency and reliability of the estimation process, and edge-preserving disparity field regularization produces smooth disparity fields while preserving discontinuities that result from object boundaries. Depth information concerning the real scene is then recovered from the estimated disparity fields by stereo camera geometry. Simulation results show that the proposed algorithm provides accurate and spatially correlated disparity vector fields in various types of images, and the reconstructed 3D model produces a natural space in which the real world and virtual objects interact with each other as if they were in the same world.

8.
Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted ℓ¹-prior and a total variation (TV) prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
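The MAP estimate with a smoothed TV prior and a positivity constraint can be sketched as projected gradient descent (here in 1D, with a small dense matrix standing in for the sparse-angle X-ray projector). Function names, the smoothing constant `eps`, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def map_tv_positive(A, b, lam=0.1, step=0.005, iters=2000, eps=1e-4):
    """Projected gradient descent on the smoothed MAP objective
    ||Ax - b||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    with the positivity prior enforced by projection onto x >= 0."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        d = np.diff(x)
        s = d / np.sqrt(d * d + eps)     # gradient of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= s
        tv_grad[1:] += s
        grad = 2.0 * A.T @ (A @ x - b) + lam * tv_grad
        x = np.maximum(x - step * grad, 0.0)  # positivity projection
    return x

def objective(A, b, x, lam=0.1, eps=1e-4):
    d = np.diff(x)
    return float(np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(d * d + eps)))
```

With an underdetermined system (fewer measurements than unknowns, as with a few projection directions), the prior selects a nonnegative, piecewise-flat solution among the many that fit the data.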

9.
Generation of anisotropic-smoothness regularization filters for EIT (cited 3 times: 0 self-citations, 3 by others)
In the inverse conductivity problem, as in any ill-posed inverse problem, regularization techniques are necessary in order to stabilize inversion. A common way to implement regularization in electrical impedance tomography is to use Tikhonov regularization. The inverse problem is formulated as a minimization of two terms: the mismatch of the measurements against the model, and the regularization functional. Most commonly, differential operators are used as regularization functionals, leading to smooth solutions. Whenever the imaged region presents discontinuities in the conductivity distribution, such as interorgan boundaries, the smoothness prior is not consistent with the actual situation. In these cases, the reconstruction is enhanced by relaxing the smoothness constraints in the direction normal to the discontinuity. In this paper, we derive a method for generating Gaussian anisotropic regularization filters. The filters are generated on the basis of the prior structural information, allowing a better reconstruction of conductivity profiles matching these priors. When incorporating prior information into a reconstruction algorithm, the risk is that the inverse solutions become biased toward the assumed distributions. Simulations show that, with a careful selection of the regularization parameters, the reconstruction algorithm is still able to detect conductivity patterns that violate the prior information. A generalized singular-value decomposition analysis of the effects of the anisotropic filters on regularization is presented in the last sections of the paper.
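Standard (isotropic) Tikhonov regularization with a difference-operator functional can be written via the normal equations; the paper's anisotropic filters would replace `L` below by direction-dependent weighted differences. This is a generic sketch, not the paper's EIT-specific construction.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Tikhonov-regularized least squares:
    x = argmin ||Ax - b||^2 + lam * ||Lx||^2, via the normal equations
    (A^T A + lam L^T L) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

def first_diff(n):
    """First-order difference operator, the usual smoothness functional."""
    L = np.zeros((n - 1, n))
    for i in range(n - 1):
        L[i, i], L[i, i + 1] = -1.0, 1.0
    return L
```

For an underdetermined `A` the plain normal equations are singular, but `A^T A + lam L^T L` is positive definite (the null space of the difference operator, constants, is seen by `A`), so a smooth, constant true solution is recovered exactly.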

10.
Positron emission tomography (PET) of the cerebral glucose metabolism has been shown to be useful in the presurgical evaluation of patients with epilepsy. Between seizures, PET images using fluorodeoxyglucose (FDG) show a decreased glucose metabolism in areas of the gray matter (GM) tissue that are associated with the epileptogenic region. However, detection of subtle hypo-metabolic regions is limited by noise in the projection data and the relatively small thickness of the GM tissue compared to the spatial resolution of the PET system. Therefore, we present an iterative maximum-a-posteriori-based reconstruction algorithm, dedicated to the detection of hypo-metabolic regions in FDG-PET images of the brain of epilepsy patients. Anatomical information, derived from magnetic resonance imaging data, and pathophysiological knowledge were included in the reconstruction algorithm. Two Monte Carlo based brain software phantom experiments were used to examine the performance of the algorithm. In the first experiment, we used perfect, and in the second, imperfect anatomical knowledge during the reconstruction process. In both experiments, we measured signal-to-noise ratio (SNR), root mean squared (rms) bias and rms standard deviation. For both experiments, bias was reduced at matched noise levels, when compared to post-smoothed maximum-likelihood expectation-maximization (ML-EM) and maximum a posteriori reconstruction without anatomical priors. The SNR was similar to that of ML-EM with optimal post-smoothing, although the parameters of the prior distributions were not optimized. We can conclude that the use of anatomical information combined with prior information about the underlying pathology is very promising for the detection of subtle hypo-metabolic regions in the brain of patients with epilepsy.

11.
The ML-EM algorithm for emission tomography reconstruction is unstable due to the ill-posed nature of the problem. Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model-the weak plate-which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the authors' application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.
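The contrast between the membrane (piecewise constant) and weak plate (piecewise linear) priors is easy to see from 1D clique energies: first differences penalize ramps, second differences do not. The full weak plate model also includes line variables for edges, omitted in this sketch.

```python
import numpy as np

def membrane_energy(x):
    """First-difference (membrane) prior: favors piecewise constant
    signals, so ramps are charged at every sample."""
    return float(np.sum(np.diff(x) ** 2))

def plate_energy(x):
    """Second-difference (thin plate) prior: favors piecewise linear
    signals, so ramp-like regions incur no penalty."""
    return float(np.sum(np.diff(x, n=2) ** 2))
```

A linear ramp has zero plate energy but positive membrane energy, which is exactly why the weak plate prior admits the ramplike rCBF regions the membrane prior flattens.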

12.
A Bayesian filtering technique for SAR interferometric phase fields (cited 1 time: 0 self-citations, 1 by others)
SAR interferograms are affected by a strong noise component which often prevents correct phase unwrapping and always impairs the phase reconstruction accuracy. To obtain satisfactory performance, most filtering techniques exploit prior information by means of ad hoc, empirical strategies. In this paper, we recast phase filtering as a Bayesian estimation problem in which the image prior is modeled as a suitable Markov random field, and the filtered phase field is the configuration with maximum a posteriori probability. Assuming the image to be residue free and generally smooth, a two-component MRF model is adopted, where the first component penalizes residues, while the second one penalizes discontinuities. Constrained simulated annealing is then used to find the optimal solution. The experimental analysis shows that, by gradually adjusting the MRF parameters, the algorithm filters out most of the high-frequency noise and, in the limit, eliminates all residues, allowing for a trivial phase unwrapping. Given a limited processing time, the algorithm is still able to eliminate most residues, paving the way for the successful use of any subsequent phase unwrapping technique.
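The residues mentioned above are nonzero sums of wrapped phase differences around elementary 2x2 loops; a residue-free field can be unwrapped trivially by integration. A minimal residue counter (a standard InSAR construction, not the paper's filter itself):

```python
import numpy as np

def wrap(p):
    """Wrap phase values to (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def residues(phase):
    """Count phase residues: loops of wrapped first differences around
    each 2x2 cell whose sum is nonzero (i.e. +-2*pi)."""
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # a -> b (right)
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # b -> c (down)
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # c -> d (left)
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # d -> a (up)
    loop = d1 + d2 + d3 + d4
    return int(np.sum(np.abs(loop) > 1e-6))
```

A smooth wrapped ramp carries no residues, while a phase vortex (as produced by noise or layover) carries at least one, which is what the first MRF component of the model penalizes.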

13.
A Markov model for blind image separation by a mean-field EM algorithm (cited 1 time: 0 self-citations, 1 by others)
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.

14.
Stochastic modeling and estimation of multispectral image data (cited 1 time: 0 self-citations, 1 by others)
Multispectral images consist of multiple channels, each containing data acquired from a different band within the frequency spectrum. Since most objects emit or reflect energy over a large spectral bandwidth, there usually exists a significant correlation between channels. Due to often harsh imaging environments, the acquired data may be degraded by both blur and noise. Simply applying a monochromatic restoration algorithm to each frequency band ignores the cross-channel correlation present within a multispectral image. A Gibbs prior is proposed for multispectral data modeled as a Markov random field, containing both spatial and spectral cliques. Spatial components of the model use a nonlinear operator to preserve discontinuities within each frequency band, while spectral components incorporate nonstationary cross-channel correlations. The multispectral model is used in a Bayesian algorithm for the restoration of color images, in which the resulting nonlinear estimates are shown to be quantitatively and visually superior to linear estimates generated by multichannel Wiener and least squares restoration.
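A sketch of a Gibbs energy with both spatial cliques (an edge-preserving Huber potential per channel) and spectral cliques (quadratic across channels). The actual nonlinear operator and the nonstationary cross-channel weighting of the paper differ; the potentials and weights here are assumed for illustration.

```python
import numpy as np

def multispectral_gibbs_energy(x, alpha=1.0, beta=0.5, delta=1.0):
    """Gibbs energy for a (channels, H, W) image: Huber-penalized
    spatial differences within each band (discontinuity-preserving)
    plus quadratic differences between adjacent bands (spectral cliques
    rewarding cross-channel correlation)."""
    def huber(d):
        a = np.abs(d)
        return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))
    e = 0.0
    for c in range(x.shape[0]):
        e += alpha * (huber(np.diff(x[c], axis=0)).sum()
                      + huber(np.diff(x[c], axis=1)).sum())
    e += beta * np.sum(np.diff(x, axis=0) ** 2)   # cross-channel cliques
    return float(e)
```

Identical channels incur no spectral cost, while decorrelated channels are penalized, which is the property a multispectral prior adds over running a monochrome prior per band.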

15.
Using statistical methods the reconstruction of positron emission tomography (PET) images can be improved by high-resolution anatomical information obtained from magnetic resonance (MR) images. The authors implemented two approaches that utilize MR data for PET reconstruction. The anatomical MR information is modeled as a priori distribution of the PET image and combined with the distribution of the measured PET data to generate the a posteriori function from which the expectation maximization (EM)-type algorithm with a maximum a posteriori (MAP) estimator is derived. One algorithm (Markov-GEM) uses a Gibbs function to model interactions between neighboring pixels within the anatomical regions. The other (Gauss-EM) applies a Gauss function with the same mean for all pixels in a given anatomical region. A basic assumption of these methods is that the radioactivity is homogeneously distributed inside anatomical regions. Simulated and phantom data are investigated under the following aspects: count density, object size, missing anatomical information, and misregistration of the anatomical information. Compared with the maximum likelihood-expectation maximization (ML-EM) algorithm the results of both algorithms show a large reduction of noise with a better delineation of borders. Of the two algorithms tested, the Gauss-EM method is superior in noise reduction (up to 50%). With regard to incorrect a priori information, the Gauss-EM algorithm is very sensitive, whereas the Markov-GEM algorithm proved to be stable, with a small change of recovery coefficients between 0.5 and 3%.
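Both algorithms build on the classic ML-EM update for the Poisson emission model, which the MAP variants then modify with a prior term. The baseline update, as a sketch (the MAP/GEM extensions of the paper are not shown):

```python
import numpy as np

def ml_em(A, y, iters=50):
    """Classic ML-EM for emission tomography with system matrix A and
    measured counts y:  lambda <- lambda / (A^T 1) * A^T (y / (A lambda))."""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    for _ in range(iters):
        proj = np.maximum(A @ lam, 1e-12)  # forward projection, guarded
        lam = lam / sens * (A.T @ (y / proj))
    return lam
```

Two well-known properties are easy to verify: the iterates stay positive, and each update preserves the total projected counts (`sum(A @ lam) == sum(y)` after every iteration).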

16.
A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model. For the M-step of the algorithm, a form of coordinate gradient ascent is derived. The algorithm reduces to the EM maximum-likelihood algorithm as the Markov random-field prior tends towards a uniform distribution. Three different Gibbs function priors are examined. Reconstructions of 3-D images obtained from the Poisson model of single-photon-emission computed tomography are presented.

17.
In emission tomography, image reconstruction and therefore also tracer development and diagnosis may benefit from the use of anatomical side information obtained with other imaging modalities in the same subject, as it helps to correct for the partial volume effect. One way to implement this is to use the anatomical image for defining the a priori distribution in a maximum-a-posteriori (MAP) reconstruction algorithm. In this contribution, we use the PET-SORTEO Monte Carlo simulator to evaluate the quantitative accuracy reached by three different anatomical priors when reconstructing positron emission tomography (PET) brain images, using volumetric magnetic resonance imaging (MRI) to provide the anatomical information. The priors are: 1) a prior especially developed for FDG PET brain imaging, which relies on a segmentation of the MR image (Baete, 2004); 2) the joint entropy prior (Nuyts, 2007); 3) a prior that encourages smoothness within a position-dependent neighborhood, computed from the MR image. The latter prior was recently proposed by our group in (Vunckx and Nuyts, 2010), and was based on the prior presented by Bowsher (2004). The latter two priors do not rely on an explicit segmentation, which makes them more generally applicable than a segmentation-based prior. All three priors produced a compromise between noise and bias that was clearly better than that obtained with postsmoothed maximum likelihood expectation maximization (MLEM) or MAP with a relative difference prior. The performance of the joint entropy prior was slightly worse than that of the other two priors. The performance of the segmentation-based prior is quite sensitive to the accuracy of the segmentation. In contrast to the joint entropy prior, the Bowsher prior is easily tuned and does not suffer from convergence problems.

18.
In this paper, we propose a class of image restoration algorithms based on the Bayesian approach and a new hierarchical spatially adaptive image prior. The proposed prior has the following two desirable features. First, it models the local image discontinuities in different directions with a model which is continuous valued. Thus, it preserves edges and generalizes the on/off (binary) line process idea used in previous image priors within the context of Markov random fields (MRFs). Second, it is Gaussian in nature and provides estimates that are easy to compute. Using this new hierarchical prior, two restoration algorithms are derived. The first is based on the maximum a posteriori principle and the second on the Bayesian methodology. Numerical experiments are presented that compare the proposed algorithms among themselves and with previous stationary and nonstationary MRF-based algorithms with line processes. These experiments demonstrate the advantages of the proposed prior.

19.
We introduce an adaptive wavelet graph image model applicable to Bayesian tomographic reconstruction and other problems with nonlocal observations. The proposed model captures coarse-to-fine scale dependencies in the wavelet tree by modeling the conditional distribution of wavelet coefficients given overlapping windows of scaling coefficients containing coarse scale information. This results in a graph dependency structure which is more general than a quadtree, enabling the model to produce smooth estimates even for simple wavelet bases such as the Haar basis. The inter-scale dependencies of the wavelet graph model are specified using a spatially nonhomogeneous Gaussian distribution with parameters at each scale and location. The parameters of this distribution are selected adaptively using nonlinear classification of coarse scale data. The nonlinear adaptation mechanism is based on a set of training images. In conjunction with the wavelet graph model, we present a computationally efficient multiresolution image reconstruction algorithm. This algorithm is based on iterative Bayesian space domain optimization using scale recursive updates of the wavelet graph prior model. In contrast to performing the optimization over the wavelet coefficients, the space domain formulation facilitates enforcement of pixel positivity constraints. Results indicate that the proposed framework can improve reconstruction quality over fixed resolution Bayesian methods.

20.
Inspired by the probability of boundary (Pb) algorithm, a simplified texture gradient method has been developed to locate texture boundaries within grayscale images. Despite considerable simplification, the proposed algorithm's ability to locate texture boundaries is comparable with Pb's texture boundary method. The proposed texture gradient method is also integrated with a biologically inspired model, to enable boundaries defined by discontinuities in both intensity and texture to be located. The combined algorithm outperforms the current state-of-the-art image segmentation method (Pb) when that method is also restricted to using only local cues of intensity and texture at a single scale.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号