Similar Articles
20 similar articles found (search time: 562 ms)
1.
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT images is investigated and measured. With the measured parameters, such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to OCT images of a volunteer's hand skin, where it showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.

2.
Leksell Gamma Knife is a minimally invasive technique that achieves complete destruction of cerebral lesions by delivering a single high-dose radiation beam. Positron emission tomography (PET) imaging is increasingly utilized for radiation treatment planning. Nevertheless, lesion volume delineation in PET datasets is challenging because of the low spatial resolution and high noise level of PET images. Nowadays, the biological target volume (BTV) is manually contoured on PET studies, a procedure that is time-consuming and operator-dependent. In this article, a fully automatic algorithm for BTV delineation based on random walks (RW) on graphs is proposed. The results are compared with the outcomes of the original RW method, the 40% thresholding method, the region growing method, and the fuzzy c-means clustering method. To validate the effectiveness of the proposed approach in a clinical environment, BTV segmentation was performed on 18 patients with cerebral metastases. Experimental results show that the segmentation algorithm is accurate and has real-time performance satisfying physician requirements in a radiotherapy environment.
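As a point of reference, the 40% thresholding baseline that the random-walk method is compared against can be sketched in a few lines (the function name and toy volume are illustrative):

```python
import numpy as np

def btv_threshold(pet, fraction=0.4):
    """The 40% thresholding baseline: keep every voxel whose uptake
    is at least `fraction` of the lesion's peak value."""
    return pet >= fraction * pet.max()

# toy uptake volume: a bright 3x3x3 "lesion" in a cold background
vol = np.zeros((8, 8, 8)); vol[2:5, 2:5, 2:5] = 10.0
mask = btv_threshold(vol)
```

This baseline is simple but sensitive to the peak value and to background uptake, which is part of why graph-based methods are investigated.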

3.
Poisson noise (also known as shot or photon noise) arises from the limited number of photons collected during the image acquisition phase; it is quite common in microscopic and astronomical imaging applications. In this paper, we propose a non-local total variation regularization framework with a p-norm driven data fidelity for denoising Poissonian images. More precisely, the energy functional is derived using a maximum a posteriori (MAP) estimator of the Poisson probability density function. The diffusion amounts to a non-local total variation minimization process, which preserves fine structures while denoising the data. The numerical solution is sought under a fast-converging split-Bregman iterative scheme. The proposed model is compared visually and statistically with state-of-the-art Poisson denoising models.
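A MAP estimate under a Poisson likelihood leads, up to constants, to a Kullback-Leibler-type data fidelity of the form sum(u - f*log(u)). The sketch below shows only this fidelity term, not the paper's full non-local TV model with p-norm driving:

```python
import numpy as np

def poisson_fidelity(u, f):
    """Data-fidelity term from the Poisson negative log-likelihood,
    sum(u - f*log(u)) up to constants; it is minimised at u = f."""
    u = np.clip(np.asarray(u, dtype=float), 1e-12, None)
    return float(np.sum(u - f * np.log(u)))

f = np.array([4.0, 9.0, 1.0])          # observed photon counts
at_f = poisson_fidelity(f, f)          # fidelity at the observation itself
off = poisson_fidelity(f + 1.0, f)     # any other estimate scores worse
```

In the full model this term is balanced against the non-local TV regularizer inside the split-Bregman iterations.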

4.
Magnetic resonance imaging (MRI) reconstruction models based on total variation (TV) regularization can suffer from problems such as incomplete reconstruction, blurred boundaries, and residual noise. In this article, a non-convex isotropic TV regularization reconstruction model is proposed to overcome these drawbacks. The Moreau envelope and the minimax-concave penalty are first used to construct a non-convex regularization of the L2 norm, which is then applied to the TV regularization to construct the sparse reconstruction model. The proposed model can extract the edge contour of the target effectively, since it avoids the underestimation of larger nonzero elements that occurs with convex regularization. In addition, the global convexity of the cost function can be guaranteed under certain conditions. An efficient algorithm based on the alternating direction method of multipliers (ADMM) is then proposed to solve the new cost function. Experimental results show that, compared with several typical image reconstruction methods, the proposed model performs better: both the relative error and the peak signal-to-noise ratio are significantly improved, and the reconstructed images also show better visual quality. The competitive experimental results indicate that the proposed approach is not limited to MRI reconstruction but is general enough to be used in other fields with natural images.
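The minimax-concave penalty (MCP) mentioned above has a standard two-parameter closed form; the sketch below (parameter values illustrative) shows how it saturates for large arguments, which is exactly why it avoids underestimating large nonzero elements:

```python
import numpy as np

def mcp(x, lam=1.0, gamma=3.0):
    """Minimax-concave penalty: close to lam*|x| near zero, but it
    saturates at gamma*lam**2/2, so large nonzero elements are not
    shrunk the way they are under a plain convex L1 penalty."""
    ax = np.abs(x)
    return np.where(ax <= gamma * lam,
                    lam * ax - ax**2 / (2.0 * gamma),
                    gamma * lam**2 / 2.0)

vals = mcp(np.array([0.0, 1.0, 10.0]))
```

By contrast, the L1 penalty grows without bound, so its proximal step keeps shrinking even the largest coefficients.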

5.
An algorithm to solve the inverse problem of detecting inclusion interfaces in a piezoelectric structure is proposed. The material interfaces are implicitly represented by level sets, which are identified by applying regularization using total variation penalty terms. The inverse problem is solved iteratively, and the extended finite element method is used for the analysis of the structure in each iteration. The formulation is presented for three-dimensional structures, and inclusions made of different materials are detected by using multiple level sets. The results obtained show that the proposed iterative procedure can determine the location and approximate shape of material sub-domains even in the presence of higher noise levels.

6.
Former studies by Hoeschen and Buhr indicated a higher total noise in a thorax image than expected from technical noise, i.e. quantum and detector noise. This difference results from the overlay of many small anatomical structures along the X-ray beam, which leads to a noise-like appearance without distinguishable structures in the projected image. A method is proposed to quantitatively determine this 'anatomical noise' component, which is not to be confused with the anatomical background (e.g. ribs). This specific anatomical noise pattern in a radiograph changes completely when the imaging geometry changes, because different small anatomical structures then contribute to the projected image. Therefore, two images are taken using slightly different exposure geometries, and a correlation analysis based on wavelet transforms allows determination of the uncorrelated noise components. Since the technical noise also differs from image to image, which makes it difficult to separate the anatomical noise, images of a lung phantom were produced on a low-sensitivity industrial X-ray film using high exposure levels. From these results, the anatomical noise level in real clinical thorax radiographs at realistic exposure levels is predicted using the general dose dependence described in the paper and compared with the quantum and detector noise level of an indirect flat-panel detector system. For consistency testing, the same lung phantom was imaged with the same digital flat-panel detector and the total image noise, including anatomical noise, was determined. The results show that the relative portion of anatomical noise may exceed the technical noise level. Anatomical noise is an important contributor to the total image noise and therefore impedes the recognition of anatomical structures.
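A drastically simplified stand-in for the two-exposure idea (ignoring the wavelet-based correlation analysis the paper actually uses): content shared by both images cancels in their difference, so the variance of the difference image estimates the power of the uncorrelated noise components. All names and numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
structure = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # shared "anatomy" stand-in
sigma = 0.05                                              # per-image noise std
img_a = structure + sigma * rng.standard_normal((64, 64))
img_b = structure + sigma * rng.standard_normal((64, 64))

# correlated content cancels in the difference; the remainder is the sum of
# two independent noise fields, so var(diff) / 2 estimates sigma**2
sigma_est = float(np.sqrt(np.var(img_a - img_b) / 2.0))
```

The wavelet-domain analysis in the paper refines this idea by separating the correlated and uncorrelated components per scale rather than globally.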

7.
A novel finite element-based system identification procedure is proposed to detect defects in existing frame structures excited by blast loadings. The procedure is a linear time-domain system identification technique in which the structure is represented by finite elements and the input excitation is not required to identify the structure. It identifies the stiffness parameter (EI/L) of all the elements and tracks changes in them to locate defect spots. A similar procedure can also be used to monitor the health of structures just after natural events such as strong earthquakes and high winds. With the help of several numerical examples, it is shown that the algorithm can identify defect-free and defective structures even in the presence of noise in the output response information. The accuracy of the method is better than that of other currently available methods, even when those methods use input excitation information for identification. The method not only detects defective elements but also locates the defect spot more accurately within the defective element. The structures can be excited by single or multiple blast loadings, and the defect can be relatively small or large. With the help of several examples, it is established that the proposed method can be used as a nondestructive defect evaluation procedure for the health assessment of existing structures.

8.
Despeckling of medical ultrasound images
Speckle noise is an inherent property of medical ultrasound imaging, and it generally tends to reduce image resolution and contrast, thereby reducing the diagnostic value of this imaging modality. As a result, speckle noise reduction is an important prerequisite whenever ultrasound imaging is used for tissue characterization. Among the many methods that have been proposed to perform this task, there exists a class of approaches that use a multiplicative model of speckled image formation and take advantage of the logarithmic transformation to convert multiplicative speckle noise into additive noise. The common assumption made in most of these studies is that the samples of the additive noise are mutually uncorrelated and obey a Gaussian distribution. The present study shows conceptually and experimentally that this assumption is oversimplified and unnatural, and that it may lead to inadequate performance of speckle reduction methods. The study introduces a simple preprocessing procedure, which modifies the acquired radio-frequency images (without affecting the anatomical information they contain) so that the noise in the log-transform domain becomes very close in its behavior to white Gaussian noise. As a result, the preprocessing allows filtering methods that assume white Gaussian noise to perform in nearly optimal conditions. The study evaluates the performance of three different nonlinear filters (wavelet denoising, total variation filtering, and anisotropic diffusion) and demonstrates that, in all these cases, the proposed preprocessing significantly improves the quality of the resultant images. Our numerical tests include a series of computer-simulated and in vivo experiments.
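The multiplicative model and the log transformation it motivates can be illustrated in a few lines (the gamma speckle model and all numbers below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
tissue = np.full(256, 50.0)                                    # constant reflectivity
speckle = rng.gamma(shape=4.0, scale=0.25, size=tissue.shape)  # unit-mean speckle
observed = tissue * speckle                                    # multiplicative model

# the log turns the product into a sum:
# log(observed) = log(tissue) + log(speckle)
additive_noise = np.log(observed) - np.log(tissue)
```

The paper's point is that this additive term is, in practice, neither white nor Gaussian, which is what its preprocessing step corrects.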

9.
This paper presents a new method for 2-D blind homomorphic deconvolution of medical B-scan ultrasound images. The method is based on noise-robust 2-D phase unwrapping and a noise-robust procedure to estimate the pulse in the complex cepstrum domain. Ordinary Wiener filtering is used in the subsequent deconvolution. The resulting images became much sharper, with better defined tissue structures compared with the ordinary images. The deconvolved images had a resolution gain on the order of 3 to 7, and the signal-to-noise ratio (SNR) doubled for many of the images used in our experiments. The method gave stable results with respect to noise and gray levels through several image sequences.
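The Wiener deconvolution step can be sketched in 1-D (a simplification of the paper's 2-D processing; the pulse and the noise-to-signal ratio below are illustrative):

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=1e-4):
    """Frequency-domain Wiener deconvolution with filter
    conj(H) / (|H|^2 + nsr); nsr is an assumed noise-to-signal ratio."""
    n = len(y)
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# blur a sparse "reflectivity" sequence with a short pulse, then deconvolve
x = np.zeros(64); x[10] = 1.0; x[40] = 0.5
h = np.array([0.25, 0.5, 0.25])                              # illustrative pulse
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))  # circular blur
x_hat = wiener_deconvolve(y, h)
```

In the blind setting of the paper, the pulse `h` is not known and must first be estimated in the cepstrum domain.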

10.
A sub-pixel image matching algorithm based on quadric surface fitting
Hou Chenggang, Zhao Mingtao. Acta Metrologica Sinica, 1997, 18(3): 227-231
In image measurement systems, accurate localization of the target is a key problem and the basis for applying other image processing techniques. Traditional image matching algorithms can only localize at the pixel level. Based on quadric surface fitting of the correlation function, this paper proposes a matching algorithm with sub-pixel accuracy, whose absolute error for noise-free image matching is less than 0.01 pixel. Simulation experiments show that the algorithm retains only a small bias even in the presence of noise.
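A 1-D analogue of the quadric-surface fit (a parabola through the correlation peak and its two neighbours) shows how sub-pixel accuracy is obtained; the 2-D method fits a quadric surface to a neighbourhood of the correlation peak instead:

```python
import numpy as np

def subpixel_peak_1d(corr):
    """Parabolic (quadratic) fit through the discrete correlation peak
    and its two neighbours; the vertex gives the sub-pixel location."""
    i = int(np.argmax(corr))
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# correlation samples from a parabola whose true peak is at x = 10.3
x = np.arange(20, dtype=float)
corr = -(x - 10.3) ** 2
est = subpixel_peak_1d(corr)
```

When the correlation surface is locally well approximated by a quadratic, the fit recovers the fractional peak position far below one-pixel resolution.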

11.
M. Grediac, F. Sur. Strain, 2014, 50(1): 1-27
This paper deals with noise propagation from the camera sensor to displacement and strain maps when the grid method is employed to estimate these quantities. It is shown that closed-form equations can be employed to predict the link between metrological characteristics such as resolution and spatial resolution in displacement and strain maps on the one hand, and various quantities characterising grid images such as brightness, contrast, and standard deviation of noise on the other hand. Various numerical simulations first confirm the relevance of this approach in the case of an idealised camera sensor impaired by homoscedastic Gaussian white noise. Actual CCD or CMOS sensors exhibit, however, heteroscedastic noise. A pre-processing step is therefore proposed to stabilise the noise variance prior to employing the predictive equations, which provide the resolution in strain and displacement maps due to sensor noise. This step is based on both a model of the sensor noise and the use of the generalised Anscombe transform to stabilise the noise variance. Applying this procedure to a translation test confirms that it is possible to model noise propagation from sensor to displacement and strain maps correctly, and thus also to predict the actual link between resolution, spatial resolution, and the standard deviation of noise in grid images.
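The variance-stabilisation idea can be illustrated with the classical Anscombe transform, the pure-Poisson special case of the generalised transform used for the mixed Poisson-Gaussian sensor noise above:

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe transform: maps Poisson(lam) data to values
    with variance close to 1, independent of lam (for lam not too small)."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(3)
counts = rng.poisson(50.0, size=200_000)
raw_var = float(np.var(counts))              # grows with the mean (~50 here)
stab_var = float(np.var(anscombe(counts)))   # stabilised to ~1
```

After stabilisation, predictive equations derived for homoscedastic Gaussian noise can be applied to the transformed data.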

12.
The aim of image denoising is to recover a visually acceptable image from its noisy observation, with as much detail as possible. Noise exists in computed tomography (CT) images due to hardware errors, software faults, and/or low radiation dose, and because of it the analysis and extraction of accurate medical information is a challenging task for specialists. Therefore, a novel modification of the total variation denoising algorithm is proposed in this article to attenuate the noise in CT images and provide better visual quality. The newly developed algorithm can properly distinguish noise from the other image components using four new noise-distinguishing coefficients, and reduces it using a novel minimization function. Moreover, the proposed algorithm has a fast computation speed, a simple structure, and a relatively low computational cost, and it preserves small image details while reducing the noise efficiently. The performance of the proposed algorithm is evaluated using synthetic and real noisy images. The synthetic images are appraised by three advanced accuracy metrics: Gradient Magnitude Similarity Deviation (GMSD), Structural Similarity (SSIM), and Weighted Signal-to-Noise Ratio (WSNR). The empirical results exhibited significant improvement not only in noise reduction but also in preserving minor image details. Finally, the proposed algorithm provided satisfying results that outperformed all the comparative methods.

13.
Positron emission tomography (PET) is becoming increasingly important in the fields of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation for image reconstruction in emission tomography place conditions on which types of images are accepted as solutions. The recently introduced median root prior (MRP) favors locally monotonic images. MRP can preserve sharp edges, but a step-like streaking effect and considerable noise are still observed in the reconstructed image, both of which are undesirable. An MRP tomography reconstruction combined with nonlinear anisotropic diffusion inter-filtering is proposed for removing noise and preserving edges. Analysis shows that the proposed algorithm is capable of producing better reconstructed images than those reconstructed by conventional maximum-likelihood expectation maximization (MLEM), MAP, and MRP-based algorithms in PET image reconstruction.

14.
Arrays of regular macropores in electronic, magnetic, photonic, and sensing devices can be patterned by X-ray lithography. Such structures inevitably contain some irregularity and require time-consuming pattern inspections. In this work, a pattern inspection based on intensity-based digital image processing is proposed and tested on scanning electron microscopy images of porous SU-8 polymer resist. Otsu's thresholding was used to convert the grayscale images to binary images, and a closing morphology operation was applied to reduce noise in the images. The Canny edge detector was used to identify the contour of each pore by detecting abrupt intensity changes in the binary image. Pores were detected and their sizes were subsequently evaluated. The morphological distributions obtained by this procedure are comparable to those obtained by one-by-one human inspection.
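Otsu's thresholding, the first step of the pipeline above, can be computed from the image histogram alone. The sketch below (function name and toy image are illustrative) picks the gray level that maximises the between-class variance:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Histogram-based Otsu threshold: choose the gray level that
    maximises the between-class variance of background vs. foreground."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # cumulative background weight
    w1 = 1.0 - w0                  # foreground weight
    m0 = np.cumsum(p * centers)    # unnormalised background mean
    mt = m0[-1]                    # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[int(np.argmax(between))]

# toy SEM-like frame: dark resist (~20) with a bright square pore field (~200)
rng = np.random.default_rng(4)
img = rng.normal(20.0, 5.0, (64, 64))
img[20:40, 20:40] = rng.normal(200.0, 5.0, (20, 20))
t = otsu_threshold(img)
binary = img > t
```

The resulting binary image would then be cleaned with morphological closing before edge detection, as described above.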

15.
Piecewise linear (PWL) models are very attractive for image processing due to their simplicity and effectiveness. A new filtering architecture adopting multiparameter PWL functions is proposed for accurate restoration of images corrupted by Gaussian noise. The filtering performance is analyzed by taking into account its behavior from the points of view of noise removal and detail preservation, and the sensitivity to changes in the parameter settings is also investigated. In the new approach, the parameter values are automatically selected by a procedure that estimates the standard deviation of the Gaussian noise. Results for different test images and noise variances show that the method yields a very accurate restoration of the image data.
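One common way to obtain the noise estimate that drives such automatic parameter selection (the abstract does not specify the paper's exact estimator) is a robust median-absolute-deviation estimate on pixel differences:

```python
import numpy as np

def estimate_sigma(img):
    """Robust noise-std estimate: median absolute deviation of
    horizontal pixel differences. The difference of two iid N(0, s^2)
    pixels has std s*sqrt(2), and MAD/0.6745 converts MAD to std."""
    d = np.diff(img, axis=1).ravel()
    return float(np.median(np.abs(d)) / (0.6745 * np.sqrt(2.0)))

rng = np.random.default_rng(5)
flat = 100.0 + 7.0 * rng.standard_normal((128, 128))
sigma_hat = estimate_sigma(flat)
```

The median makes the estimate largely insensitive to the small fraction of differences that straddle true image edges.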

16.
Images degraded by bad weather conditions affect the quality of vision-based security systems, as the distortions obscure contrasts in the image frames. In this paper, we present a novel approach that uses adaptive total variation minimisation to retain the objects and reduce the noise in a single image, thereby enhancing fog-degraded images. Representative experimental results show that the proposed algorithm is effective for contrast and saturation enhancement of fog-degraded images.

17.
Improved adaptive nonlocal means (IANLM) is a variant of the classical nonlocal means (NLM) denoising method based on adaptation of its search window size. In this article, an extended nonlocal means (XNLM) algorithm is proposed by adapting IANLM to the Rician noise found in images obtained by the magnetic resonance (MR) imaging modality. Moreover, for improved denoising, a wavelet coefficient mixing procedure is used in XNLM to mix the wavelet sub-bands of two IANLM-filtered images obtained using different IANLM parameters. Finally, XNLM includes a novel parameter-free pixel preselection procedure that improves the computational efficiency of the algorithm. The proposed algorithm is validated on T1-weighted, T2-weighted, and proton density (PD) weighted simulated brain MR images (MRI) at several noise levels. Optimal values of the different parameters of XNLM are obtained for each type of MRI sequence, and different variants are investigated to reveal the benefits of the extensions presented in this work. Quantitative and visual results show that XNLM outperforms several contemporary denoising algorithms on all the tested MRI sequences, preserves important pathological information more effectively, and is computationally efficient.

18.
The emerging technology of positron emission image reconstruction is introduced in this paper as a multicriteria optimization problem. We show how selected families of objective functions may be used to reconstruct positron emission images, and we develop a novel neural network approach to positron emission imaging problems. We also study the most frequently used image reconstruction methods, namely maximum likelihood, within the framework of single-performance-criterion optimization. Finally, we present results obtained by various reconstruction algorithms using computer-generated noisy projection data from a chest phantom and real positron emission tomography (PET) scanner data. Comparison of the reconstructed images indicated that the multicriteria optimization method gave the best results in terms of error, smoothness (suppression of noise), gray-value resolution, and freedom from ghost artifacts. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol 11, 361-364, 2000

19.
It is a significant challenge to accurately reconstruct medical computed tomography (CT) images with their important details and features, because reconstructed images suffer from noise and artifact pollution when the acquired projection data are insufficient or undersampled. In practice, some "isolated noise points" (similar to impulse noise) also exist in low-dose CT projection measurements. Statistical iterative reconstruction (SIR) methods have shown greater potential than the conventional filtered back-projection (FBP) algorithm to significantly reduce quantum noise while maintaining reconstruction quality. Although typical total variation-based SIR algorithms can obtain reconstructed images of relatively good quality, noticeable patchy artifacts are still unavoidable. To address impulse-noise and patchy-artifact pollution, this work proposes, for the first time, a joint regularization constrained SIR algorithm for sparse-view CT image reconstruction, named "SIR-JR" for simplicity. The new joint regularization consists of two components: total generalized variation, which can process images with many directional features and yield high-order smoothness, and the neighborhood median prior, which is a powerful filtering tool for impulse noise. A new alternating iterative algorithm is then used to solve the objective function. Experiments on different head phantoms show that the reconstructed images are of superior quality and that the presented method is feasible and effective.

20.
The Imaging Science Journal, 2013, 61(7): 408-422
Abstract

Image fusion is a challenging area of research with a variety of applications. The process of image fusion collects information from different sources and combines it in a single composite image, which can describe the scene better than any of the source images. In this paper, we propose a method for noisy image fusion in the contourlet domain; the proposed method works equally well for the fusion of noise-free images. The contourlet transform is a multiscale, multidirectional transform with various aspect ratios, properties that make it more suitable for image fusion than other conventional transforms. In the proposed work, the fusion algorithm is combined with a denoising algorithm to reverse the effect of noise, using a level-dependent threshold based on the standard deviation of the contourlet coefficients and the mean and median of the absolute contourlet coefficients. Experimental results demonstrate that the proposed method performs well in the presence of different types of noise. Its performance is compared with principal component analysis and sharp-fusion-based methods, as well as with other fusion methods based on variants of the wavelet transform (dual-tree complex wavelet transform, discrete wavelet transform, lifting wavelet transform, multiwavelet transform, stationary wavelet transform, and pyramid transform), using six standard quantitative quality metrics: entropy, standard deviation, edge strength, fusion factor, sharpness, and peak signal-to-noise ratio. The combined qualitative and quantitative evaluation of the experimental results shows that the proposed method performs better than the other methods.
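The shrinkage mechanism behind such threshold-based denoising can be illustrated with generic soft thresholding (the paper's level-dependent threshold combines the standard deviation, mean, and median of the contourlet coefficients; the value t below is illustrative):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink transform coefficients toward zero by t: small (noise-dominated)
    coefficients are zeroed; large (signal) coefficients survive, reduced by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-5.0, -0.5, 0.2, 3.0])   # illustrative transform coefficients
shrunk = soft_threshold(c, 1.0)        # t = 1.0 stands in for the level-dependent threshold
```

In the proposed method, a separate threshold is computed for each decomposition level before the denoised coefficients are fused.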
