Similar Articles
16 similar articles found (search time: 15 ms)
1.
This work presents a novel computed tomography reconstruction method for the few-view problem based on a compound method. To overcome the disadvantages of the total variation (TV) minimization method, we couple a high-order norm with TV, and the numerical scheme for our method is given. We use the root mean square error as the figure of merit. Numerical experiments demonstrate that our method achieves better performance than existing reconstruction methods, including filtered back projection, expectation maximization, and TV with projection onto convex sets. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 249–255, 2013
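The TV-plus-high-order-norm idea above can be sketched in one dimension. The following is a minimal illustration, not the authors' algorithm: it denoises a piecewise-constant signal by gradient descent on a smoothed TV term plus a squared second-difference term, and uses the root mean square error as the referee, as in the paper. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
clean = np.zeros(n)
clean[40:] = 1.0                               # piecewise-constant phantom
noisy = clean + 0.1 * rng.normal(size=n)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# First- and second-difference operators
D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
L = np.eye(n - 2, n) - 2 * np.eye(n - 2, n, 1) + np.eye(n - 2, n, 2)

def denoise(y, lam1=0.2, lam2=0.05, eps=1e-2, step=0.05, iters=400):
    """Gradient descent on 0.5||x-y||^2 + lam1*sum sqrt((Dx)^2 + eps)
    + lam2*||Lx||^2, i.e. smoothed TV coupled with a high-order quadratic term."""
    x = y.copy()
    for _ in range(iters):
        d1 = D @ x
        grad = (x - y) + lam1 * (D.T @ (d1 / np.sqrt(d1 ** 2 + eps))) \
               + 2.0 * lam2 * (L.T @ (L @ x))
        x = x - step * grad
    return x

x_hat = denoise(noisy)
```

The second-order term damps the staircase artifacts that pure TV tends to produce on smooth regions.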

2.
Magnetic resonance imaging (MRI) reconstruction models based on total variation (TV) regularization suffer from problems such as incomplete reconstruction, blurred boundaries, and residual noise. In this article, a non-convex isotropic TV regularization reconstruction model is proposed to overcome these drawbacks. The Moreau envelope and the minimax-concave penalty are first used to construct a non-convex regularization of the L2 norm, which is then incorporated into the TV regularization to build the sparse reconstruction model. The proposed model extracts the edge contours of the target effectively, since it avoids the underestimation of large nonzero elements that convex regularization incurs. In addition, global convexity of the cost function can be guaranteed under certain conditions. An efficient algorithm based on the alternating direction method of multipliers is then proposed to minimize the new cost function. Experimental results show that, compared with several typical image reconstruction methods, the proposed model performs better: both the relative error and the peak signal-to-noise ratio are significantly improved, and the reconstructed images show better visual quality. The competitive experimental results indicate that the proposed approach is not limited to MRI reconstruction but is general enough to be used in other fields with natural images.
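The minimax-concave (MC) penalty mentioned above is designed to avoid the underestimation of large nonzero elements that soft thresholding, the proximal operator of the convex L1 penalty, introduces. A minimal sketch of this effect, assuming the standard firm-thresholding form of the MC proximal operator (this is not the authors' full reconstruction model):

```python
import numpy as np

def soft(t, lam):
    """Soft thresholding: proximal operator of the convex L1 penalty.
    Every surviving entry is shrunk by lam, biasing large values downward."""
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def firm(t, lam, gamma):
    """Firm thresholding (gamma > 1): proximal operator associated with the
    minimax-concave penalty. Entries above gamma*lam pass through unshrunk,
    so large nonzero elements are not underestimated."""
    a = np.abs(t)
    return np.where(a <= lam, 0.0,
                    np.where(a <= gamma * lam,
                             np.sign(t) * gamma * (a - lam) / (gamma - 1.0),
                             t))
```

For a large coefficient such as 5.0 with lam = 1.0, soft thresholding returns 4.0 while firm thresholding returns 5.0 exactly; small coefficients below lam are zeroed by both.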

3.
连祥媛, 孔慧华, 潘晋孝, 高文波, 王攀. 《光电工程》 (Opto-Electronic Engineering), 2021, 48(9): 210211-1–210211-9
Spectral CT based on photon-counting detectors has great potential in applications such as material decomposition, tissue characterization, and lesion detection. During reconstruction, however, increasing the number of energy channels reduces the photon count in each channel, which degrades the quality of the reconstructed images and makes it difficult to meet practical requirements. Approaching the problem from the perspective of spectral CT reconstruction, this paper extends total generalized variation to the vector-valued case and, by exploiting the sparsity of singular values to promote linear dependence among the image gradients, proposes a nuclear-norm-based multi-channel joint total generalized variation reconstruction algorithm for spectral CT. During image reconstruction, the channels share structural information while their individual differences are preserved. Experimental results show that the proposed algorithm suppresses noise while recovering image details and edge information more effectively.
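The nuclear-norm coupling across energy channels described above is typically enforced through singular value thresholding, the proximal operator of the nuclear norm: shrinking the singular values of the stacked per-channel data promotes linear dependence among the channel gradients. A minimal sketch of that operator alone, not the authors' full multi-channel TGV algorithm:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm).
    Soft-thresholds the singular values of A, encouraging low rank, i.e.
    linear dependence among the rows (imagined here as stacked per-channel
    gradient images flattened into rows)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

The output of `svt(A, tau)` has singular values `max(s_i - tau, 0)`, so sufficiently small singular directions are removed entirely.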

4.
Recently, the potential harm of the electromagnetic radiation used in computed tomography (CT) scanning has attracted much attention. This makes few-view CT reconstruction an important issue in medical imaging. In this article, an adaptive dynamic combined energy minimization model is proposed for few-view CT reconstruction based on compressed sensing theory. The L2 energy of the image gradient and the total variation (TV) energy are combined, and their working regions are separated adaptively with a dynamic threshold. The proposed model overcomes the TV model's disadvantageous tendency to penalize the image gradient uniformly, irrespective of the underlying image structures. Numerical experiments with various insufficient-data problems in fan-beam CT suggest that both the reconstruction speed and the quality of the results are generally improved. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 44–52, 2013.
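The adaptive separation of working regions described above can be illustrated by a simple rule: points whose gradient magnitude exceeds a dynamically chosen threshold are assigned to the edge-preserving TV term, the rest to the L2 smoothing term. The quantile-based threshold below is an assumption for illustration only; the paper's actual threshold rule may differ.

```python
import numpy as np

def working_regions(x, q=0.9):
    """Split sample points into an edge region (to be handled by the TV
    energy) and a smooth region (to be handled by the L2 energy) using a
    dynamic threshold on the gradient magnitude. The quantile rule for
    choosing the threshold is a hypothetical choice."""
    g = np.abs(np.gradient(x))
    tau = np.quantile(g, q)        # dynamic threshold, re-evaluated per call
    edge = g > tau
    return edge, ~edge, tau
```

On a step signal, only the samples adjacent to the jump land in the edge region, so the TV penalty acts where structure exists and the L2 penalty smooths everywhere else.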

5.
In the medical computed tomography field, total variation (TV), the L1-norm of the gradient-magnitude image, is widely used as the regularization term under compressive sensing theory. To overcome the TV model's disadvantageous tendency to penalize the image gradient uniformly and oversmooth low-contrast structures, an iterative algorithm based on L0-norm optimization of the finite differences is proposed. To meet the challenges introduced by L0-norm minimization, the algorithm uses the alternating direction method to solve the unconstrained augmented Lagrangian function, involving a hard-thresholding method and a linearization-and-proximal-points technique for the subproblems. Simulations indicate that the proposed algorithm can markedly improve reconstruction quality. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 215–223, 2014

6.
It is well known that cone‐beam data acquired with a circular orbit are insufficient for exact image reconstruction. Despite this, because a cone‐beam scanning configuration with a circular orbit is easy to implement in practice, it has been widely employed for data acquisition in, e.g., micro‐CT and CT imaging in radiation therapy. The algorithm developed by Feldkamp, Davis, and Kress (FDK) and its modifications, such as the Tent–FDK (T‐FDK) algorithm, have been used for image reconstruction from circular cone‐beam data. In this work, we present an algorithm with spatially shift‐variant filtration for image reconstruction in circular cone‐beam CT. We performed computer‐simulation studies to compare the proposed and existing algorithms. Numerical results in these studies demonstrated that the proposed algorithm has resolution properties comparable to, and noise properties better than, the FDK algorithm. As compared to the T‐FDK algorithm, our proposed algorithm reconstructs images with an improved in‐plane spatial resolution. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 14, 213–221, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20026

7.
Quantitative parameter mapping in MRI is typically performed as a two‐step procedure where serial imaging is followed by pixelwise model fitting. In contrast, model‐based reconstructions directly reconstruct parameter maps from raw data without explicit image reconstruction. Here, we propose a method that determines T1 maps directly from multi‐channel raw data as obtained by a single‐shot inversion‐recovery radial FLASH acquisition with a Golden Angle view order. Joint reconstruction of a T1, spin‐density and flip‐angle map is formulated as a nonlinear inverse problem and solved by the iteratively regularized Gauss‐Newton method. Coil sensitivity profiles are determined from the same data in a preparatory step of the reconstruction. Validations included numerical simulations, in vitro MRI studies of an experimental T1 phantom, and in vivo studies of brain and abdomen of healthy subjects at a field strength of 3 T. The results obtained for a numerical and experimental phantom demonstrated excellent accuracy and precision of model‐based T1 mapping. In vivo studies allowed for high‐resolution T1 mapping of human brain (0.5–0.75 mm in‐plane, 4 mm section thickness) and liver (1.0 mm, 5 mm section) within 3.6–5 s. In conclusion, the proposed method for model‐based T1 mapping may become an alternative to two‐step techniques, which rely on model fitting after serial image reconstruction. More extensive clinical trials now require accelerated computation and online implementation of the algorithm. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 254–263, 2016
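The pixelwise model fitting that the two-step techniques rely on, and that the model-based method above replaces, can be sketched for a single pixel. The toy fit below uses the three-parameter inversion-recovery model s(TI) = A + B·exp(−TI/T1), with a grid search over T1 and linear least squares for A and B; the tissue value, inversion times, and noise level are all hypothetical, and this is not the authors' Gauss-Newton reconstruction.

```python
import numpy as np

rng = np.random.default_rng(2)
T1_true, A_true, B_true = 0.9, 1.0, -2.0       # s = A + B*exp(-TI/T1), T1 in seconds
TI = np.linspace(0.05, 3.0, 12)                # assumed inversion times
signal = A_true + B_true * np.exp(-TI / T1_true) + 0.01 * rng.normal(size=TI.size)

# Grid search over T1; for each candidate, A and B follow from linear least squares
best_sse, T1_hat = np.inf, None
for T1 in np.linspace(0.2, 2.0, 361):          # 5 ms grid spacing
    G = np.column_stack([np.ones_like(TI), np.exp(-TI / T1)])
    coef, *_ = np.linalg.lstsq(G, signal, rcond=None)
    r = signal - G @ coef
    sse = float(r @ r)
    if sse < best_sse:
        best_sse, T1_hat = sse, T1
```

Repeating this fit for every pixel of a serially reconstructed image series is exactly the per-pixel workload that a model-based reconstruction folds into a single inverse problem.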

8.
Described herein are the advantages of using sub‐sinograms for single photon emission computed tomography image reconstruction. A sub‐sinogram is a sinogram acquired with an entire data acquisition protocol, but in a fraction of the total acquisition time. A total‐sinogram is the summation of all sub‐sinograms. Images can be reconstructed from the total‐sinogram or from sub‐sinograms and then be summed to produce the final image. For a linear reconstruction method such as the filtered backprojection algorithm, there is no advantage of using sub‐sinograms. However, for nonlinear methods such as the maximum likelihood (ML) expectation maximization algorithm, the use of sub‐sinograms can produce better results. The ML estimator is a random variable, and one ML reconstruction is one realization of the random variable. The ML solution is better obtained via the mean value of the random variable of the ML estimator. Sub‐sinograms can provide many realizations of the ML estimator. We show that the use of sub‐sinograms can produce better estimations for the ML solution than can the total‐sinogram and can also reduce the statistical noise within iteratively reconstructed images. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 247–252, 2011.
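The ML-EM reconstruction underlying the sub-sinogram idea is the standard multiplicative update x ← (x / Aᵀ1) · Aᵀ(y / Ax). A toy sketch follows, with an assumed 3×2 system matrix: K sub-sinograms are drawn by Poisson splitting of the noiseless total-sinogram, each is reconstructed separately, and the reconstructions are summed, as the abstract describes. This is an illustration, not the authors' SPECT setup.

```python
import numpy as np

def mlem(A, y, iters=2000):
    """ML-EM multiplicative update for Poisson data y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(iters):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x

rng = np.random.default_rng(0)
A = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])   # assumed tiny system matrix
x_true = np.array([40.0, 10.0])
ybar = A @ x_true                               # noiseless total-sinogram

K = 8                                           # number of sub-sinograms
subs = rng.poisson(ybar / K, size=(K, ybar.size))    # each holds 1/K of the counts
x_total = mlem(A, subs.sum(axis=0))             # one ML realization (total-sinogram)
x_sub = np.sum([mlem(A, s) for s in subs], axis=0)   # sum of sub-sinogram recons
```

Each sub-sinogram reconstruction is an independent realization of the (nonlinear) ML estimator at low counts, so their sum approximates the estimator's mean rather than a single noisy realization.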

9.
Multimodal medical image fusion has been widely reported in recent years, but images fused by existing methods suffer from low contrast and loss of detail. To overcome this problem, a new image fusion method based on mutual-structure joint filtering and sparse representation is proposed in this article. First, the source images are decomposed into a series of detail images and coarse images by mutual-structure joint filtering. Second, sparse representation is adopted to fuse the coarse images, and local contrast is applied to fuse the detail images. Finally, the fused image is reconstructed by adding the fused coarse images and the fused detail images. Experimental results show that the proposed method performs best at preserving detail and contrast under both subjective and objective evaluation.

10.
The Poisson distribution is commonly used to describe count data for a control chart. However, it may not be appropriate under overdispersion or underdispersion, so it is necessary to generalize the control chart to work well in such situations. This paper proposes a strategy for monitoring dispersed count data with multicollinearity between input variables by combining a generalized linear model with principal component analysis. In the strategy, a generalized linear model using flexible distributions is fitted to the principal component scores from principal component analysis, and the deviance residuals from the fitted model are used to monitor the process. Simulations are conducted to assess performance under various situations, and a real dataset that is not suitable for a classical control chart is used as an example. The results from both the simulated data and the real-data example support the proposed method.
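The proposed monitoring strategy, as described, chains three standard steps: principal component scores from the centered inputs, a generalized linear model fitted to the scores, and deviance residuals from the fit. A minimal sketch with simulated collinear inputs follows; the Poisson/log-link model fitted by IRLS is one concrete instance of the "flexible distributions" the abstract allows, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
z = rng.normal(size=n)
X = np.column_stack([z + 0.01 * rng.normal(size=n),   # two nearly collinear inputs
                     z + 0.01 * rng.normal(size=n),
                     rng.normal(size=n)])

# Step 1: principal component scores of the centered inputs
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T[:, :2]                      # keep two components

# Synthetic counts driven by the first score
y = rng.poisson(np.exp(1.0 + 0.5 * scores[:, 0]))

# Step 2: Poisson GLM (log link) on the scores, fitted by IRLS
Z = np.column_stack([np.ones(n), scores])
beta = np.zeros(Z.shape[1])
for _ in range(50):
    mu = np.exp(Z @ beta)
    w = mu                                     # IRLS weights for Poisson/log link
    work = Z @ beta + (y - mu) / mu            # working response
    beta = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * work))

# Step 3: deviance residuals, the quantity plotted on the control chart
mu = np.exp(Z @ beta)
with np.errstate(divide="ignore", invalid="ignore"):
    term = np.where(y > 0, y * np.log(y / mu), 0.0)
dev = np.maximum(2.0 * (term - (y - mu)), 0.0)
dev_res = np.sign(y - mu) * np.sqrt(dev)
```

Fitting on orthogonal scores rather than the raw inputs sidesteps the near-singular normal equations the collinear columns would otherwise produce.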

11.
We consider the joint economic-statistical design of X̄ and R control charts under the assumption that the quality measurement and the in-control time have Johnson and Weibull distributions, respectively. The Johnson distribution is general in that it can be made to fit all possible values of skewness and kurtosis. The four parameters (the sample size n, the time h between successive samples, and the control factors k1 and k2 for the X̄ and R charts) are determined so that the mean hourly loss-cost is minimized under constraints on the Type I and Type II error probabilities. We generalize the Costa model to accommodate the Johnson and Weibull distributions. Sensitivity to nonnormality, shift, and the Weibull scale parameter is considered in our analysis; it shows that the optimal design parameters are sensitive to nonnormality. Comparisons of the fully economic and economic-statistical designs are given. Copyright © 2010 John Wiley & Sons, Ltd.

12.
In this article, for the reconstruction of positron emission tomography (PET) images, an iterative maximum a posteriori (MAP) algorithm is combined with an adaptive neuro-fuzzy inference system (ANFIS) based image segmentation technique, yielding what we call the ANFIS-based expectation maximization algorithm (ANFIS-EM). This expectation-maximization (EM) variant provides better image quality than traditional methodologies. Unlike the usual EM algorithm, ANFIS-EM minimizes the EM objective function using the MAP method, with a neural-network-based segmentation step incorporated into the image reconstruction. Using peak signal-to-noise ratio (PSNR) as the image quality metric, the adaptive neuro-fuzzy MAP algorithm is compared with a denoising algorithm, and the PET input image is reconstructed and simulated in the MATLAB/Simulink package. ANFIS-EM provides 40% better PSNR than the MAP algorithm. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 1–6, 2015

13.
Problems of the form Z(σ)u(σ) = f(σ), where Z is a given matrix, f is a given vector, and σ is a circular frequency or circular-frequency-related parameter, arise in many applications including computational structural and fluid dynamics, and computational acoustics and electromagnetics. The straightforward solution of such problems for fine increments of σ is computationally prohibitive, particularly when Z is a large-scale matrix. This paper discusses an alternative solution approach based on the efficient computation of u and its successive derivatives with respect to σ at a few sample values of this parameter, and the reconstruction of the solution u(σ) in the frequency band of interest using multi-point Padé approximants. This computational methodology is illustrated with applications from structural dynamics and underwater acoustic scattering. In each case, it is shown to reduce the CPU time required by the straightforward approach to frequency sweep computations by two orders of magnitude. Copyright © 2006 John Wiley & Sons, Ltd.
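The derivatives-plus-Padé idea can be sketched on a single-DOF model problem with Z(σ) = k − σ²m, where successive derivatives of u follow from repeatedly differentiating Z(σ)u(σ) = f (Leibniz rule), and a [2/2] Padé approximant is then assembled from the Taylor coefficients. The scalar model, parameter values, and single expansion point are assumptions for illustration; the paper uses multi-point Padé approximants on large matrix problems.

```python
import math
import numpy as np

# Single-DOF model problem: Z(sigma) u(sigma) = f with Z = k - sigma^2 m,
# so the exact sweep is u(sigma) = f / (k - sigma^2 m).
k, m, f = 1.0, 1.0, 1.0
s0 = 0.0                                       # expansion point

# Successive derivatives of u at s0 from Leibniz on Z u = f:
#   Z u^(n) = -n Z' u^(n-1) - n(n-1)/2 Z'' u^(n-2),  Z' = -2 s m, Z'' = -2 m
Z, Zp, Zpp = k - s0 ** 2 * m, -2.0 * s0 * m, -2.0 * m
u = [f / Z]
for nn in range(1, 5):
    rhs = -nn * Zp * u[nn - 1]
    if nn >= 2:
        rhs -= nn * (nn - 1) / 2.0 * Zpp * u[nn - 2]
    u.append(rhs / Z)
c = [u[nn] / math.factorial(nn) for nn in range(5)]   # Taylor coefficients c0..c4

# [2/2] Pade approximant: denominator coefficients from a 2x2 linear system,
# numerator from matching the series through order 2
Bmat = np.array([[c[2], c[1]], [c[3], c[2]]])
b1, b2 = np.linalg.solve(Bmat, np.array([-c[3], -c[4]]))
a = [c[0], c[1] + c[0] * b1, c[2] + c[1] * b1 + c[0] * b2]

def pade(s):
    d = s - s0
    return (a[0] + a[1] * d + a[2] * d ** 2) / (1.0 + b1 * d + b2 * d ** 2)
```

For this rational u(σ) the [2/2] Padé approximant reproduces the sweep essentially exactly across the band, while the truncated Taylor series degrades toward the resonance, which is precisely why Padé reconstruction extends the usable frequency band per expansion point.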

14.
This article addresses the problem of reconstructing a magnetic resonance image from highly undersampled data, which frequently arises in accelerated magnetic resonance imaging. We propose to impose sparsity on the first- and second-order difference coefficients within the complement of the known support. Second-order variation is involved to overcome blocky effects, and support information is used to reduce the sampling rate further. The resulting optimization problem consists of a data fidelity term and first- and second-order variation terms penalizing entries within the complement of the known support. The efficient split Bregman algorithm is used to solve the problem. Reconstruction results from magnetic resonance imaging data at different sampling rates illustrate the performance of the proposed method, and we also briefly assess its tolerance to noise. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 277–284, 2015
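Penalizing only entries in the complement of the known support reduces, inside a split Bregman iteration, to a masked shrinkage step: soft-threshold the difference coefficients outside the support and leave the known-support entries untouched. A minimal sketch of that single step, not the full reconstruction:

```python
import numpy as np

def shrink_outside_support(coeffs, lam, support):
    """Soft-threshold only the difference coefficients in the complement of
    the known support; coefficients on the known support carry no penalty
    and pass through unchanged."""
    out = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
    out[support] = coeffs[support]
    return out
```

Exempting the known support from the penalty is what allows the sampling rate to be pushed lower: the sparsity budget is spent only where the signal's support is genuinely unknown.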

15.
Several algorithms have been proposed in the literature for image denoising, but none exhibits optimal performance for all ranges and types of noise and for all image acquisition modes. We describe a new general framework, built from a four-neighborhood clique system, for denoising medical images. The kernel quantifies the smoothness energy of spatially continuous anatomical structures. Scalar- and vector-valued quantifications of the smoothness energy configure images for Bayesian and variational denoising modes, respectively. Within the variational mode, the choice of norm adapts images for either the total variation or the Tikhonov technique. Our proposal makes three significant contributions. First, it demonstrates that the four-neighborhood clique kernel is a basic filter, in the same class as Gaussian and wavelet filters, from which state-of-the-art denoising algorithms are derived. Second, we formulate a theoretical analysis that connects and integrates the Bayesian and variational techniques into a two-layer structured denoising system. Third, our proposal reveals that the first layer of the new denoising system is a hitherto unknown form of Markov random field model, referred to as the single-layer Markov random field (SLMRF). The new model denoises a specific type of medical image by minimizing energy subject to knowledge of a mathematical model that describes the relationship between the image smoothness energy and the noise level, but without reference to a classical prior model. SLMRF was applied to and evaluated on two real brain magnetic resonance imaging datasets acquired with different protocols. Comparative performance evaluation shows that our proposal is comparable to state-of-the-art algorithms. SLMRF is simple and computationally efficient because it does not require a regularization parameter. Furthermore, it preserves edges, and its output is free of the blurring and ringing artifacts associated with Gaussian-based and wavelet-based algorithms.
The denoising system is potentially applicable to speckle reduction in ultrasound images and extendable to a three-layer structure that accounts for texture features in medical images. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 224–238, 2014
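A scalar smoothness energy over a four-neighborhood clique system can be written as a sum of squared intensity differences across horizontal and vertical neighbor pairs; this quadratic, Tikhonov-like clique potential is one of the choices such a framework admits (the paper's kernel may use a different potential). A minimal sketch:

```python
import numpy as np

def clique_smoothness_energy(img):
    """Scalar smoothness energy over the four-neighborhood clique system:
    the sum of squared intensity differences between each pixel and its
    horizontal and vertical neighbors (quadratic clique potential)."""
    dh = np.diff(img, axis=1)      # horizontal neighbor cliques
    dv = np.diff(img, axis=0)      # vertical neighbor cliques
    return float((dh ** 2).sum() + (dv ** 2).sum())
```

A constant image has zero energy, and the energy grows with local intensity variation, which is the quantity an energy-minimizing denoiser drives down.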

16.
The aim of image denoising is to recover a visually acceptable image from its noisy observation with as much detail as possible. Noise exists in computed tomography images due to hardware errors, software faults, and/or low radiation dose. Because of noise, the analysis and extraction of accurate medical information is a challenging task for specialists. Therefore, a novel modification of the total variation denoising algorithm is proposed in this article to attenuate the noise in CT images and provide better visual quality. The newly developed algorithm can properly distinguish noise from the other image components using four new noise-distinguishing coefficients and reduce it using a novel minimization function. Moreover, the proposed algorithm has a fast computation speed, a simple structure, and a relatively low computational cost, and it preserves small image details while reducing the noise efficiently. The performance of the proposed algorithm is evaluated using synthetic and real noisy images; the synthetic images are appraised with three advanced accuracy metrics: Gradient Magnitude Similarity Deviation (GMSD), Structural Similarity (SSIM), and Weighted Signal-to-Noise Ratio (WSNR). The empirical results exhibited significant improvement not only in noise reduction but also in preserving minor image details. Finally, the proposed algorithm provided satisfying results that outperformed all the comparative methods.

