Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Deblurring computed tomography (CT) images has been an active research topic in recent years because of the wide variety of challenges it offers. Hence, a novel filter is proposed in this article, offering a simple, efficient, and fast deblurring process that involves few parameters and little computation, and that neither relies on undesirable iterative processing nor introduces the common deblurring artifacts. The newly proposed filter is validated on both real and synthetic blurred CT images to provide a sufficient understanding of its performance. Moreover, proper comparisons are made with high-profile deblurring methods, with the results evaluated using three reliable quality metrics: the feature similarity index (FSIM), structural similarity (SSIM), and visual information fidelity in the pixel domain (VIFP). The intensive experiments and performance evaluations demonstrate the efficiency of the proposed filter, which outperformed all the comparative methods. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 265–275, 2015

2.
Magnetic resonance imaging (MRI) images are frequently sensitive to certain types of noise and artifacts. Denoising MRI images is essential for improving visual quality and the reliability of quantitative analysis in diagnosis and treatment. In this article, a new block-difference-based filtering method is proposed to denoise MRI images. First, a normal MRI image is degraded by a certain percentage of noise. The block difference between the intensities of the normal and noisy MRI images is computed and then compared with the intensities of the blocks of the normal MRI image. Based on this comparison, the pixel weights of each block of the denoised MRI image are updated. Experimental results are reported on the BrainWeb and BraTS datasets and evaluated with performance metrics such as peak signal-to-noise ratio, structural similarity index measure, universal quality index, and root mean square error. The proposed method outperforms existing denoising filtering techniques.
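Two of the evaluation metrics named above are simple enough to state directly. A minimal sketch (not the paper's code), assuming 8-bit grayscale images held as NumPy arrays:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion.
    `peak` is the maximum possible pixel value (255 for 8-bit images)."""
    e = rmse(ref, img)
    if e == 0.0:
        return np.inf  # identical images
    return 20.0 * np.log10(peak / e)
```

SSIM and the universal quality index additionally compare local means, variances, and covariances, so they need a windowed implementation rather than a one-liner.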

3.
Several algorithms have been proposed in the literature for image denoising, but none performs optimally for all ranges and types of noise and all image acquisition modes. We describe a new general framework, built from a four-neighborhood clique system, for denoising medical images. The kernel quantifies the smoothness energy of spatially continuous anatomical structures. Scalar- and vector-valued quantifications of the smoothness energy configure images for Bayesian and variational denoising modes, respectively. Within the variational mode, the choice of norm adapts images for either the total variation or the Tikhonov technique. Our proposal makes three significant contributions. First, it demonstrates that the four-neighborhood clique kernel is a basic filter, in the same class as Gaussian and wavelet filters, from which state-of-the-art denoising algorithms are derived. Second, we formulate a theoretical analysis that connects and integrates Bayesian and variational techniques into a two-layer structured denoising system. Third, our proposal reveals that the first layer of the new denoising system is a hitherto unknown form of Markov random field model, referred to as the single-layer Markov random field (SLMRF). The new model denoises a specific type of medical image by minimizing energy subject to knowledge of a mathematical model that describes the relationship between image smoothness energy and noise level, but without reference to a classical prior model. SLMRF was applied to and evaluated on two real brain magnetic resonance imaging datasets acquired with different protocols. Comparative performance evaluation shows that our proposal is comparable to state-of-the-art algorithms. SLMRF is simple and computationally efficient because it does not incorporate a regularization parameter. Furthermore, it preserves edges, and its output is devoid of the blurring and ringing artifacts associated with Gaussian-based and wavelet-based algorithms.
The denoising system is potentially applicable to speckle reduction in ultrasound images and extendable to a three-layer structure that accounts for texture features in medical images. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 224–238, 2014
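A plausible form of the four-neighborhood smoothness energy the abstract alludes to (the exact functional is defined in the paper) sums a clique potential over the four-neighborhood clique system, with the choice of potential selecting between the modes mentioned above:

```latex
E_{\text{smooth}}(x) \;=\; \sum_{(s,t)\in\mathcal{C}_4} \psi\bigl(x_s - x_t\bigr),
\qquad
\psi(u) =
\begin{cases}
u^2, & \text{Tikhonov (quadratic) mode},\\
|u|, & \text{total-variation mode},
\end{cases}
```

where $\mathcal{C}_4$ is the set of horizontally and vertically adjacent pixel pairs.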

4.
Independent component analysis (ICA) is a technique for separating statistically independent sources. It can estimate unknown sources from a mixture without any prior knowledge about them, provided the sources are non-Gaussian and independent of each other. In this work, multiscale ICA is proposed for medical images (fundus images, MRI images). The data matrix is formed by considering the higher sub-bands of multiscale decompositions. The performance of multiscale ICA is evaluated and compared with standard ICA algorithms on simulated signals and different medical images using the Amari performance index (API) and Comon test values. Results show that the API and Comon test values are lower for multiscale ICA on simulated signals. In the case of pathological images, the features are separated correctly by multiscale ICA. Multiscale ICA performs better than simple ICA for the separation and detection of independent components in medical images (fundus images), such as blood vessels and artifacts. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 327–337, 2013

5.
Improved adaptive nonlocal means (IANLM) is a variant of the classical nonlocal means (NLM) denoising method based on adapting its search window size. In this article, an extended nonlocal means (XNLM) algorithm is proposed by adapting IANLM to the Rician noise in images obtained by the magnetic resonance (MR) imaging modality. Moreover, for improved denoising, a wavelet coefficient mixing procedure is used in XNLM to mix the wavelet sub-bands of two IANLM-filtered images obtained with different IANLM parameters. Finally, XNLM includes a novel parameter-free pixel preselection procedure that improves the computational efficiency of the algorithm. The proposed algorithm is validated on T1-weighted, T2-weighted, and proton density (PD) weighted simulated brain MR images at several noise levels. Optimal values of the XNLM parameters are obtained for each type of MRI sequence, and different variants are investigated to reveal the benefits of the individual extensions presented in this work. Quantitative and visual results show that XNLM outperforms several contemporary denoising algorithms on all the tested MRI sequences, preserves important pathological information more effectively, and is computationally efficient.
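For orientation, a naive sketch of the classical NLM baseline that IANLM and XNLM extend (the adaptive search window, Rician bias handling, wavelet mixing, and pixel preselection are not shown; parameter names are illustrative):

```python
import numpy as np

def nlm_denoise(img, patch=1, search=3, h=10.0):
    """Naive nonlocal means: each pixel becomes a weighted average of
    pixels whose surrounding (2*patch+1)^2 neighborhoods look similar,
    searched over a (2*search+1)^2 window."""
    img = np.asarray(img, dtype=np.float64)
    pad = patch + search
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    w = np.exp(-d2 / (h * h))        # similarity weight
                    wsum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / wsum
    return out
```

The filter parameter `h` controls how aggressively dissimilar patches are down-weighted; IANLM's contribution is, per the abstract, to adapt the `search` radius rather than keep it fixed.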

6.
We present an intelligent technique for denoising gray-level images degraded by Gaussian white noise in the spatial domain. The proposed technique uses fuzzy logic as a mapping function to decide whether a pixel needs to be kriged or not. Genetic programming is then used to evolve an optimal pixel intensity-estimation function for restoring degraded images. The proposed system shows considerable improvement when compared, both qualitatively and quantitatively, with the adaptive Wiener filter, methods based on fuzzy kriging, and a fuzzy-based averaging technique. Experimental results on an image database confirm that the proposed technique offers superior performance in terms of image quality measures, which also validates the use of hybrid techniques for image restoration. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 224–231, 2007

7.
In this article, a novel denoising technique based on custom thresholding in the wavelet transform domain is proposed. The denoising process is both spatially adaptive and sub-band adaptive. To make the algorithm spatially adaptive, a vector quantization (VQ)-based algorithm is used, with the VQ designed via the expectation maximization (EM) algorithm. The results are demonstrated on SAR images corrupted by speckle noise. Experimental results show that the custom thresholding function outperforms the traditional soft, hard, and Bayes thresholding functions, improving the denoised results significantly in terms of peak signal-to-noise ratio (PSNR). © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 175–178, 2009
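The traditional soft and hard thresholding rules that the custom operator is compared against can be written in a few lines; a generic sketch (not the article's custom function), applied to wavelet detail coefficients:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink every coefficient toward zero by t,
    zeroing those with magnitude below t."""
    coeffs = np.asarray(coeffs, dtype=np.float64)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def hard_threshold(coeffs, t):
    """Hard thresholding: keep coefficients with magnitude above t
    unchanged, zero the rest."""
    coeffs = np.asarray(coeffs, dtype=np.float64)
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)
```

Soft thresholding biases the surviving coefficients (they all shrink by `t`) but avoids the discontinuity of the hard rule; custom operators typically interpolate between the two.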

8.
Various diseases are diagnosed using medical imaging to analyse internal anatomical structures. However, medical images are susceptible to noise introduced during both acquisition and transmission. We propose an adaptive, data-driven image denoising algorithm based on an improvement of the intersection of confidence intervals (ICI) rule, called the relative ICI (RICI) algorithm. A 2D mask of adaptive size and shape is calculated for each image pixel independently and used in the design of 2D local polynomial approximation (LPA) filters. Denoising performance, in terms of PSNR, is compared with the original ICI-based method as well as with fixed-size filtering. The proposed adaptive RICI-based denoising outperformed the original ICI-based method by up to 1.32 dB, and fixed-size filtering by up to 6.48 dB. Furthermore, since each image pixel is denoised locally and independently, the method is easy to parallelize.

9.
In this paper, we propose a blind motion deblurring algorithm that estimates the motion blur parameters (length and angle) in a modified cepstrum domain, with the blind/referenceless image spatial quality evaluator (BRISQUE) used to tune the point spread function (PSF) parameters. Ringing artifacts are generated during the deblurring process; to reduce them, a modified Richardson–Lucy (R–L) algorithm with graph-cut-based weight calculation is presented, yielding good estimates of the unblurred image with reduced ringing. The method selects different weights for edges and smooth regions so that the ringing effect over the R–L iterations is reduced. The proposed method has been tested on various natural images with motion blur of different lengths and angles. A comparison with state-of-the-art methods shows that the proposed technique achieves better results in terms of quality measures such as SSIM, FSIM, and PSNR, and can be greatly beneficial for deblurring purposes.
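The baseline (unweighted) Richardson–Lucy update that the article modifies with graph-cut-derived weights can be sketched as follows, here in 1-D for brevity and assuming a known PSF:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Classical (unmodified) Richardson-Lucy deconvolution for a 1-D
    signal. The multiplicative update keeps the estimate nonnegative.
    The article's variant additionally weights the update per pixel to
    damp ringing; that weighting is not reproduced here."""
    blurred = np.asarray(blurred, dtype=np.float64)
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode='same')
        ratio = blurred / (conv + eps)           # data / re-blurred estimate
        estimate *= np.convolve(ratio, psf_mirror, mode='same')
    return estimate
```

Each iteration re-blurs the current estimate, compares it with the observed data, and back-projects the ratio through the mirrored PSF; ringing near edges is the known failure mode this paper targets.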

10.
Digital tomosynthesis (DTS) has been widely used in both industrial nondestructive testing and medical x-ray imaging as a popular multiplanar imaging modality. However, although it provides some of the tomographic benefits of computed tomography (CT) at reduced dose and imaging time, its image characteristics are relatively poor due to blur artifacts originating from incomplete data sampling over a limited angular range, as well as from aspects inherent to the imaging system, including the finite focal spot size of the x-ray source, detector resolution, etc. In this work, to overcome these difficulties, we propose an intuitive method in which a compressed-sensing (CS)-based deblurring scheme is applied to the projection images before common DTS reconstruction. We implemented the proposed deblurring algorithm and performed a systematic experiment to demonstrate its viability for improving image characteristics in DTS. According to our results, the proposed method is effective against the blurring problems in DTS and is promising for our ongoing application to x-ray nondestructive testing.

11.
This article introduces a low-cost algorithm for improving the demosaicking process in texture areas such as one-pixel patterns. The algorithm first detects difficult texture regions. Once detection is complete, it demosaicks the texture areas using special demosaicking operations, whereas non-texture regions are restored using existing demosaicking approaches. In this way, the quality of texture areas in demosaicked images can be improved by up to 70% while only slightly increasing the computational complexity of the original demosaicking solution. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 232–243, 2007

12.
The aim of this article is to design an expert system for medical image diagnosis. We propose a method based on association rule mining combined with a classification technique to enhance the diagnosis of medical images. The system classifies images into two categories, benign and malignant. In the proposed work, association rules are extracted for the selected features using an algorithm called AprioriTidImage, an improved version of the Apriori algorithm. Then, a new associative classifier, CLASS_Hiconst (CLassifier based on ASSociation rules with High Confidence and Support), is modeled and used to diagnose the medical images. The performance of our approach is compared with two other classifiers, fuzzy-SVM and a multilayer back-propagation neural network (MLPNN), in terms of classifier efficiency: sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. The experimental results show 96% accuracy, 97% sensitivity, and 96% specificity, and prove that an association-rule-based classifier is a powerful tool for assisting the diagnostic process. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 194–203, 2013

13.
The data acquired by a magnetic resonance (MR) imaging system are inherently degraded by noise originating in the thermal Brownian motion of electrons. Denoising can enhance the quality (by improving the SNR) of the acquired MR image, which is important both for visual analysis and for other post-processing operations. Recent work on maximum likelihood (ML) based denoising shows that ML methods are very effective for MR images and have an edge over other state-of-the-art MRI denoising methods. Among the ML-based approaches, the nonlocal maximum likelihood (NLML) method is commonly used. In conventional NLML, the samples for the ML estimation of the unknown true pixel are chosen in a nonlocal fashion based on the intensity similarity of the pixel neighborhoods, with Euclidean distance generally used to measure this similarity. It has recently been shown that computing the similarity measure is more robust in the discrete cosine transform (DCT) subspace than in the Euclidean image subspace. Motivated by this observation, we integrated the DCT into NLML to produce an improved MRI filtering process. Besides improving the SNR, the proposed approach also significantly reduces the time complexity of conventional NLML. On a synthetic MR brain image, an average improvement of 5% in PSNR and an 86% reduction in execution time are achieved with a search window of 91 × 91 after incorporating the improvements into the existing NLML method. On an experimental kiwi fruit image, an improvement of 10% in PSNR is achieved. We performed experiments on both simulated and real datasets to validate and demonstrate the effectiveness of the proposed method. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 256–264, 2015
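The DCT-subspace similarity idea can be illustrated with a small sketch: compare patches on their low-frequency DCT coefficients instead of raw pixels, so that high-frequency noise contributes less to the distance. This is an illustration of the general idea, not the paper's exact implementation; `keep` is an assumed truncation parameter:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def dct_patch_distance(p, q, keep=4):
    """Squared distance between two square patches using only the
    keep x keep low-frequency 2-D DCT coefficients, as a cheap,
    noise-robust patch similarity."""
    n = p.shape[0]
    C = dct_matrix(n)
    P = C @ p @ C.T   # 2-D DCT-II of each patch
    Q = C @ q @ C.T
    return float(np.sum((P[:keep, :keep] - Q[:keep, :keep]) ** 2))
```

Because the DCT basis is orthonormal, keeping all coefficients reproduces the ordinary Euclidean patch distance exactly (Parseval); truncating to the low-frequency block is what discards the noise-dominated terms.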

14.
In single photon emission computed tomography (SPECT), the nonstationary Poisson noise in the projection data (sinogram) is a major cause of degraded reconstructed image quality. To improve quality, the Poisson noise in the sinogram must be suppressed before or during image reconstruction. However, conventional space- or frequency-domain denoising methods are likely to remove information that is very important for accurate image reconstruction, especially for analytical SPECT reconstruction with compensation for nonuniform attenuation. As a time-frequency analysis tool, the wavelet transform has been widely used in signal and image processing and has demonstrated powerful denoising capabilities. In this article, we study the denoising abilities of wavelet-based methods and the impact of denoising on analytical SPECT reconstruction with nonuniform attenuation. Six popular wavelet-based denoising methods were tested. The reconstruction results show that the revised BivaShrink method with complex wavelets outperforms the others in analytical SPECT reconstruction with nonuniform attenuation compensation. Meanwhile, we found that the effect of the Anscombe transform on the wavelet-based denoising methods is not significant: they obtain good denoising results even without it. Wavelet-based denoising methods are therefore a good choice for analytical SPECT reconstruction with compensation for nonuniform attenuation. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 36–43, 2013
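The Anscombe transform mentioned above is a one-line variance-stabilizing map that makes Poisson counts approximately Gaussian with unit variance, so Gaussian-oriented wavelet shrinkage can be applied; a sketch with its simple algebraic inverse (the exact unbiased inverse differs slightly):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: maps Poisson-distributed
    counts to data with approximately unit-variance Gaussian noise."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=np.float64) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the forward transform (an unbiased
    inverse would include a small correction term)."""
    y = np.asarray(y, dtype=np.float64)
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

A typical pipeline is `inverse_anscombe(denoise(anscombe(sinogram)))`; the abstract's finding is that, for the wavelet methods tested, this stabilization step made little difference.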

15.
The magnetic resonance imaging (MRI) modality is an effective tool in the diagnosis of the brain. MR images are corrupted by noise during acquisition, which reduces image quality and limits diagnostic accuracy. Eliminating noise in medical images is therefore an important preprocessing task, and different methods exist for it. In this article, denoising algorithms such as nonlocal means, principal component analysis, bilateral, and spatially adaptive nonlocal means (SANLM) filters are studied for eliminating noise in MR images. A comparative analysis of these techniques has been carried out with the help of metrics such as signal-to-noise ratio, peak signal-to-noise ratio (PSNR), mean squared error, root mean squared error, and structural similarity (SSIM). The comparison shows that the SANLM filter gives the best performance in terms of PSNR, SSIM, and visual interpretation, and thus aids the clinical diagnosis of the brain.

16.
The research and development of biomedical imaging techniques requires large amounts of image data from medical image acquisition devices such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, and single photon emission computed tomography. Multimodal image fusion is the process of combining information from several images to capture, in a single image, the maximum amount of content acquired by one device at different angles, times, or stages. This article analyses and compares the performance of existing image fusion techniques on clinical images. The techniques compared are simple (pixel-based) fusion, pyramid-based fusion, and transform-based fusion. Four sets of CT and MRI images are used, and the fused results are measured with seven parameters. The experimental results show that, for the best-performing technique, four of the seven parameters (average difference, mean difference, root mean square error, and standard deviation) are minimized, while the remaining three (peak signal-to-noise ratio, entropy, and mutual information) are maximized. From the experimental results, it is clear that, of the 14 fusion techniques surveyed, image fusion using the dual-tree complex wavelet transform gives the best fusion results for clinical CT and MRI images. The advantages and limitations of all the techniques are discussed along with their experimental results and relevance. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 193–202, 2014.

17.
Advances in medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed radiography (CR) produce huge amounts of volumetric images of various anatomical structures of the human body, creating a need for lossless compression of these images for storage and communication. The major issue in medical image compression is that the sequence of operations performed for compression and decompression should not degrade the original quality of the image: it should be compressed losslessly. In this article, we propose a lossless method for volumetric medical image compression and decompression using an adaptive block-based encoding technique. The algorithm is tested on different sets of color CT images using Matlab. Digital Imaging and Communications in Medicine (DICOM) images are compressed with the proposed algorithm and stored as DICOM-formatted images, and the inverse of the adaptive block-based algorithm reconstructs the original image information losslessly from the compressed DICOM files. We present simulation results for a large set of human color CT images, providing a comparative analysis of the proposed methodology against block-based compression and JPEG2000 lossless image compression. The article shows that the proposed methodology gives a better compression ratio than block-based coding and is computationally better than JPEG 2000 coding. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 227–234, 2013

18.
In this article, registration and retrieval are first carried out separately for medical images, and then registration-based retrieval is performed. The aim is to provide a more thorough insight into the use of registration, retrieval, and registration-based retrieval algorithms for medical images, and to apply these techniques to anatomical imaging modalities for clinical diagnosis, treatment, intervention, and surgical planning in a more effective manner. Two steps are implemented. In the first step, affine transformation-based registration of the medical images is performed. In the second step, the medical images are retrieved using seven distance metrics (euclidean, manhattan, mahalanobis, canberra, bray-curtis, squared chord, and chi-squared) and features such as mean, standard deviation, skewness, energy, and entropy. The images registered by the affine transformation are then used for retrieval. In this work, the registration and retrieval techniques share some common image processing steps and are integrated into a larger system to complement each other. From the experimental results, it is evident that the euclidean and manhattan distances, producing 100% precision and 35% recall, have the highest retrieval performance. Of the four anatomical regions considered (brain, chest, liver, and limbs), brain images register best. It is also found that, although registration changes image orientation, this does not greatly affect retrieval performance in clinical evaluation. The ultimate aim of this work in the medical domain is to provide diagnostic support to physicians and radiologists by displaying relevant past cases, along with proven pathologies, as ground truth. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 360–371, 2013
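A few of the listed distance metrics, sketched over feature vectors (illustrative implementations; the article's feature extraction pipeline is not reproduced here):

```python
import numpy as np

def euclidean(a, b):
    """L2 distance between feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def manhattan(a, b):
    """L1 (city-block) distance."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum(np.abs(a - b)))

def canberra(a, b):
    """Canberra distance: per-component relative difference, skipping
    components where both entries are zero."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.abs(a) + np.abs(b)
    terms = np.where(d > 0, np.abs(a - b) / np.where(d > 0, d, 1.0), 0.0)
    return float(np.sum(terms))

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity for nonnegative feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum(np.abs(a - b)) / np.sum(np.abs(a + b)))
```

Mahalanobis distance additionally requires the inverse covariance matrix of the feature set, so it is omitted from this self-contained sketch.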

19.
Color-edge detection is an important research task in image processing. Efficient and accurate edge detection leads to higher performance in subsequent techniques, including image segmentation, object-based image coding, and image retrieval. To improve the performance of color-edge detection, and considering that the human eye is the ultimate receiver of color images, perceptually insignificant edges should not be over-detected. In this article, a color-edge detection scheme based on perceptual color contrast is proposed. The perceptual color contrast is defined as the visible color difference across an edge in the CIE-Lab color space. A perceptual metric for measuring the visible color difference of a target color pixel is defined using the associated perceptually indistinguishable region, which is estimated for each color pixel in the CIE-Lab color space through an experiment that accounts for local changes in luminance. Simulation results show that the perceptual color contrast is effectively defined: the color edges in color images are detected while most perceptually insignificant edges are successfully suppressed by the proposed scheme. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 332–339, 2009
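The simplest fixed-threshold baseline for "visible color difference" in CIE-Lab is the CIE76 delta-E. The article replaces the fixed threshold with a locally adapted, perceptually indistinguishable region, but the baseline is a useful reference point (the just-noticeable-difference value 2.3 is a common rule of thumb, not a figure taken from this paper):

```python
import numpy as np

def delta_e_lab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two colors
    expressed as (L*, a*, b*) triples in CIE-Lab space."""
    lab1 = np.asarray(lab1, dtype=np.float64)
    lab2 = np.asarray(lab2, dtype=np.float64)
    return float(np.sqrt(np.sum((lab1 - lab2) ** 2)))

def is_visible_edge(lab_left, lab_right, jnd=2.3):
    """Fixed-threshold classification of an edge as perceptually visible:
    the color difference across it exceeds a just-noticeable-difference
    threshold (assumed value, uniform over color space)."""
    return delta_e_lab(lab_left, lab_right) > jnd
```

The paper's point is precisely that a uniform `jnd` is too crude: the indistinguishable region varies with local luminance, so the threshold should be estimated per pixel.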

20.
A new hybrid variational model for recovering blurred images in the presence of multiplicative noise is proposed. Inspired by previous work on multiplicative noise removal, an I-divergence technique is used to build a strictly convex model under a condition that ensures the uniqueness of the solution and the stability of the algorithm. A split-Bregman algorithm is adopted to solve the constrained minimisation problem in the new hybrid model efficiently. Numerical tests for simultaneous deblurring and denoising of images subject to multiplicative noise are then reported. Comparison with other methods clearly demonstrates the good performance of our new approach.
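The I-divergence data term referred to above is standard for this noise model; a generic deblurring functional of this family (the paper's hybrid model adds further terms and a condition ensuring strict convexity) reads:

```latex
\min_{u > 0} \; \int_\Omega \bigl( Ku - f \log (Ku) \bigr)\,dx \;+\; \lambda \,\mathrm{TV}(u),
```

where $K$ is the blur operator, $f$ the observed image, $\mathrm{TV}$ the total-variation regularizer, and $\lambda > 0$ a regularization weight.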

