Similar Documents
A total of 20 similar documents were retrieved.
1.
In our previous work, the eliminating-highest-error (EHE) criterion was proposed for the modified Hopfield (1982) neural network (MHNN) for image restoration and reconstruction; as many simulations have shown, the EHE criterion considerably improves the performance of the MHNN. To reveal the insight behind the EHE criterion, in this paper we first present a generalized updating rule (GUR) of the MHNN for gray-image recovery. The stability properties of the GUR are given. It is shown that the neural threshold set up in this GUR is necessary and sufficient for energy decrease with probability one at each update. The new fastest-energy-descent (FED) criterion is then proposed in parallel with the EHE criterion. While the EHE criterion is shown to achieve the highest probability of correct transition, the FED criterion achieves the largest amount of energy descent. In image restoration, the EHE and FED criteria are equivalent. A group of new algorithms based on the EHE and FED criteria is set up. A new measure, the correct transition rate (CTR), is proposed to assess the performance of iterative algorithms. Simulation results for gray-image restoration show that the EHE (FED) based algorithms yielded the best visual quality and the highest SNR of the recovered images, required far fewer iterations, and had a higher CTR. The CTR is shown to be a rational performance measure for iterative algorithms and to predict the quality of recovered images.
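As a loose illustration of the fastest-energy-descent idea, the sketch below greedily picks, at each step, the single neuron flip that lowers a simple binary Hopfield energy the most. It is not the paper's generalized updating rule (the paper works with a gray-level network); the binary state, zero self-connections, and symmetric weights are assumptions made for the sketch.

```python
import numpy as np

def fed_update(W, b, v, max_iters=1000):
    # Generic greedy fastest-energy-descent sketch for a binary Hopfield-type
    # energy E(v) = -0.5 v^T W v - b^T v with symmetric W and zero diagonal.
    # NOT the paper's GUR; only illustrates "pick the flip with the largest
    # energy decrease" at each iteration.
    v = v.astype(float).copy()
    for _ in range(max_iters):
        h = W @ v + b                    # local field of each neuron
        d = 1.0 - 2.0 * v                # flip direction (0 -> 1 or 1 -> 0)
        dE = -d * h                      # energy change if neuron i alone is flipped
        i = np.argmin(dE)
        if dE[i] >= 0:                   # no single flip decreases the energy: stop
            break
        v[i] += d[i]
    return v
```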

2.
To deal with the problem of restoring degraded images corrupted by non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

3.
The main task of infrared image dehazing is to remove the low visibility and blur caused by Mie scattering in infrared images. Current infrared dehazing algorithms, however, estimate the transmission poorly in the dark regions of an image. To address this, an infrared image dehazing algorithm based on the haze-line dark channel prior is studied. First, the atmospheric light is estimated with a Hough transform. Then, to handle scenes in which the haze-line dehazing method fails, the haze-line dark channel prior is adopted: the transmission is estimated by assuming that the darker end of each haze line represents the true color, yielding a transmission map. Finally, to remove noise from the transmission map, it is further refined by total-variation regularization. With the public infrared database LTIR as the test set, experimental results show that the proposed algorithm enhances the clarity of infrared images without destroying the infrared radiation distribution and achieves good dehazing results on infrared images of various scenes; the transmission is estimated accurately, giving the algorithm good infrared dehazing capability.
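For orientation, here is a minimal generic dark-channel-prior sketch of the transmission-estimation and recovery steps for a single-channel image. The atmospheric light A is simply passed in (the paper estimates it with a Hough transform), and neither the haze-line variant nor the TV refinement of the transmission map is reproduced; omega, patch and t0 are illustrative values.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, A, omega=0.95, patch=15, t0=0.1):
    # img: float image in [0, 1], single channel (e.g. one infrared frame).
    # A:   scalar atmospheric light, assumed to be estimated elsewhere.
    # Generic dark-channel-prior sketch, not the paper's haze-line method.
    dark = minimum_filter(img / A, size=patch)   # dark channel of the normalized image
    t = 1.0 - omega * dark                       # coarse transmission estimate
    t = np.maximum(t, t0)                        # guard against division by tiny transmission
    J = (img - A) / t + A                        # scene radiance recovery
    return np.clip(J, 0.0, 1.0), t
```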

4.
In many applications, it is required to reconstruct a high-resolution image from multiple undersampled, shifted, noisy images. Using regularization techniques such as classical Tikhonov regularization and the maximum a posteriori (MAP) procedure, a high-resolution image reconstruction algorithm is developed. Because of the blurring process, the boundary values of the low-resolution image are not completely determined by the original image inside the scene. This paper addresses how to use (i) the Neumann boundary condition on the image, i.e., the assumption that the scene immediately outside is a reflection of the original scene at the boundary, and (ii) the preconditioned conjugate gradient method with cosine transform preconditioners to solve the linear systems arising from high-resolution image reconstruction with multisensors. The usefulness of the algorithm is demonstrated through simulated examples.
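A small sketch of why cosine transforms arise here: under the Neumann (reflective) boundary condition, a symmetric PSF yields a blur matrix that is diagonalized by the 2-D DCT, so a Tikhonov-regularized restoration can be applied directly in the DCT domain. The snippet below only illustrates that fact on a single frame; it is not the paper's multisensor reconstruction, which uses the cosine transform inside preconditioners for a conjugate-gradient solver, and alpha is an arbitrary placeholder.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import correlate

def tikhonov_dct_restore(y, psf, alpha=1e-2):
    # Direct Tikhonov restoration under Neumann (reflective) boundaries for a
    # symmetric PSF: the blur matrix is diagonalized by the 2-D DCT, so
    # (A^T A + alpha I)^(-1) A^T y can be applied in the transform domain.
    delta = np.zeros_like(y); delta[0, 0] = 1.0
    a_col = correlate(delta, psf, mode='reflect')                 # first column of the blur matrix
    lam = dctn(a_col, norm='ortho') / dctn(delta, norm='ortho')   # its DCT eigenvalues
    y_hat = dctn(y, norm='ortho')
    x_hat = lam * y_hat / (lam ** 2 + alpha)
    return idctn(x_hat, norm='ortho')
```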

5.
Dynamic electrical impedance tomography (EIT) images changes in the conductivity distribution of a medium from low-frequency electrical measurements made at electrodes on the medium's surface. Reconstruction of the conductivity distribution is an under-determined and ill-posed problem, typically requiring either simplifying assumptions or regularization based on a priori knowledge. This paper presents a maximum a posteriori (MAP) approach to linearized image reconstruction using knowledge of the noise variance of the measurements and the covariance of the conductivity distribution. This approach has the advantage of an intuitive interpretation of the algorithm parameters as well as fast (near real-time) image reconstruction. In order to compare this approach with existing algorithms, the authors develop figures of merit to measure the reconstructed image resolution, the noise amplification of the image reconstruction, and the fidelity of positioning in the image. Finally, the authors develop a communications-systems approach to calculate the probability of detecting a conductivity contrast in the reconstructed image as a function of the measurement noise and the reconstruction algorithm used.
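The linearized MAP estimate described here has a familiar closed form, sketched below with our own variable names (matrix sizes and scaling are assumptions; the paper's figures of merit and detection analysis are not reproduced).

```python
import numpy as np

def map_linear_reconstruction(J, dv, noise_var, C_sigma):
    # Linearized MAP estimate:
    #   delta_sigma = (J^T Rn^-1 J + C_sigma^-1)^-1 J^T Rn^-1 dv
    # J:        measurement Jacobian, shape (m, n)
    # dv:       change in boundary voltages, shape (m,)
    # noise_var: per-measurement noise variances, shape (m,)
    # C_sigma:  prior covariance of the conductivity change, shape (n, n)
    Rn_inv = np.diag(1.0 / noise_var)
    A = J.T @ Rn_inv @ J + np.linalg.inv(C_sigma)
    return np.linalg.solve(A, J.T @ Rn_inv @ dv)
```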

6.
No-reference quality assessment using natural scene statistics: JPEG2000.   (Total citations: 7; self-citations: 0; cited by others: 7)
Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a "reference" or "perfect" image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focusing mostly on measuring the blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us in making blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.

7.
Adaptive MLP post-processing for block-based coded images   (Total citations: 1; self-citations: 0; cited by others: 1)
Block-based image coding techniques are widely used for encoding images and videos. However, many annoying artefacts appear when an image is encoded at low bit rates, and among these the blocking effects are the most obvious to human vision. Thus, an efficient blocking-effect reduction scheme is essential for preserving the visual quality of decompressed images. A new adaptive post-processing algorithm is proposed to reduce the blocking artefacts of block-based coded images by using neural network techniques in the spatial domain. The algorithm combines a variance-based classifier and multilayer perceptrons to improve the performance of post-processing. In the proposed algorithm, the blocking and ringing effects in a reconstructed image are diminished without blurring the edges, and detailed regions of the image are also enhanced. Comparisons between the proposed algorithm and other algorithms are made on several Joint Photographic Experts Group and vector quantisation decompressed images. The simulations show reconstructed images with improvements in both visual quality and PSNR, indicating that the proposed algorithm is an effective post-processing method for block-based image coding at low bit rates.
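A minimal sketch of the variance-based classification stage is given below; the block size and threshold are assumptions, and the class-specific MLP post-filters that would then be applied are omitted.

```python
import numpy as np

def classify_blocks(img, block=8, thresh=20.0):
    # Label each block as smooth (0) or detailed/edge (1) by its variance,
    # as a stand-in for the variance-based classifier described above.
    H, W = img.shape
    labels = np.zeros((H // block, W // block), dtype=int)
    for i in range(0, H - H % block, block):
        for j in range(0, W - W % block, block):
            v = img[i:i + block, j:j + block].var()
            labels[i // block, j // block] = int(v > thresh)
    return labels
```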

8.
张地  彭宏 《电子学报》2008,36(1):180-183
Super-resolution image reconstruction is the process of reconstructing an image of higher resolution from multiple low-resolution frames of the same scene. Existing super-resolution reconstruction algorithms work well on artificially simulated low-resolution image sequences, but for real captured low-resolution sequences the reconstructed images are often blurry and sometimes still indistinguishable. This paper therefore proposes a super-resolution image reconstruction algorithm that combines joint motion estimation with pattern-based reconstruction. Experimental results show that the algorithm yields high-resolution images superior to those of conventional algorithms.

9.
高美玲  段锦  赵伟强  胡奇 《红外技术》2023,47(10):1096-1105
Current convolutional neural networks do not fully extract the shallow feature information of images, so near-infrared image colorization algorithms mis-color some local regions, and unstable network training leads to blurred results. To address these problems, a new generative adversarial network method is proposed for the colorization task. First, a self-designed dilated global attention module is introduced into the residual blocks of the generator so that every position of the near-infrared image is understood more fully, alleviating local mis-colorization. Second, in the discriminator network, the batch normalization layers are replaced with gradient normalization layers to improve discriminative performance and reduce the blur introduced during colorized-image generation. Finally, the proposed algorithm is compared qualitatively and quantitatively with other methods on the RGB_NIR dataset. Experiments show that, compared with other classical algorithms, the proposed algorithm fully extracts the shallow feature information of near-infrared images; in terms of metrics, structural similarity increases by 0.044, peak signal-to-noise ratio increases by 0.835, and the perceptual similarity score decreases by 0.021.

10.
Cross-entropy and fuzzy-divergence algorithms for image segmentation   (Total citations: 11; self-citations: 0; cited by others: 0)
薛景浩  章毓晋 《电子学报》1999,27(10):131-134
This paper applies cross-entropy and fuzzy divergence to image segmentation and proposes four algorithms for selecting the optimal gray-level threshold: the first is a minimum cross-entropy algorithm based on a uniform-distribution assumption; the second is a maximum between-class cross-entropy algorithm using posterior probabilities; the third is an improved algorithm based on maximum between-class fuzzy divergence; and the fourth is a minimum fuzzy divergence algorithm. To meet the requirements of image thresholding, a new fuzzy membership function is constructed for the latter two algorithms. Uniformity and shape measures are used to compare the performance of the algorithms, and segmentation results obtained on several types of test images demonstrate the effectiveness and generality of the proposed algorithms.
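As one concrete instance of these criteria, a minimum cross-entropy threshold (the first algorithm listed) can be computed from the gray-level histogram as sketched below. The fuzzy-divergence variants and the new membership function are not reproduced, and the +1 gray-level shift is only a guard against log(0).

```python
import numpy as np

def min_cross_entropy_threshold(img):
    # Minimum cross-entropy (Li-style) threshold selection from the histogram.
    # img: integer gray-level image with values in 0..255.
    # Generic sketch of the first criterion above, not the authors' exact method.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    g = np.arange(256, dtype=float) + 1.0      # shift gray levels so log() is defined
    best_t, best_val = 1, np.inf
    for t in range(1, 256):
        w1, w2 = hist[:t].sum(), hist[t:].sum()
        if w1 == 0 or w2 == 0:
            continue                           # one class empty: skip this threshold
        m1, m2 = (g[:t] * hist[:t]).sum(), (g[t:] * hist[t:]).sum()
        mu1, mu2 = m1 / w1, m2 / w2
        val = -(m1 * np.log(mu1) + m2 * np.log(mu2))   # cross-entropy up to a constant
        if val < best_val:
            best_val, best_t = val, t
    return best_t    # pixels with gray value < best_t form one class
```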

11.
王华君  孟德建  姚湘 《电视技术》2015,39(17):25-30
To preserve spectral consistency and edge sharpness in hyperspectral (HS) super-resolution reconstruction, a super-resolution reconstruction algorithm combining spatial-spectral information with non-local similarity is proposed. First, using an HS image generation model, sparse regularization is employed to invert the ill-posed reconstruction problem for panchromatic (PAN) and HS images. Then, the mapping of abundance coefficients from high-spatial-resolution to low-spatial-resolution data is analyzed. Finally, a joint spatial-spectral regularization term is designed by exploiting non-local similarity. Experimental results show that images reconstructed by the proposed algorithm are clearly better than those of other state-of-the-art algorithms in terms of PSNR, SSIM and FSIM, and clearly lower in SAM and ERGAS; spectral distortion is the lowest, only 2%-3%, roughly 30% lower than the other algorithms, and the reconstructed images are sharper and more natural.

12.
A real-time implementation of scene-based non-uniformity correction (SBNUC) using a digital technique is proposed for microscan-mode staring infrared cameras. Most SBNUC algorithms cannot be applied to stationary scenes without sensor motion because of image blurring and fading. Using the microscanning effect, coupled with a modified version of Scribner's algorithm, the proposed technique can correct the artefacts and non-uniformities in real time.
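For context, the sketch below shows the LMS-style scene-based NUC idea underlying Scribner's algorithm: per-pixel gain and offset are adapted so that the corrected frame tracks a local spatial average. It is a generic sketch, not the microscan-specific modification; the learning rate and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lms_nuc(frames, alpha=0.05, win=5):
    # LMS scene-based non-uniformity correction (Scribner-style sketch).
    # frames: iterable of 2-D float arrays from the infrared sensor.
    gain = np.ones_like(frames[0], dtype=float)
    offset = np.zeros_like(frames[0], dtype=float)
    corrected = []
    for x in frames:
        y = gain * x + offset                    # corrected frame
        desired = uniform_filter(y, size=win)    # local spatial mean as the "desired" image
        err = y - desired
        gain -= alpha * err * x                  # LMS updates of the per-pixel parameters
        offset -= alpha * err
        corrected.append(y)
    return corrected
```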

13.
To address edge blurring in image restoration, an image restoration algorithm based on an improved hidden Markov tree (IHMT) model in the wavelet domain is proposed. The IHMT model better describes the cross-correlation of wavelet coefficients across adjacent scales and can accurately characterize the statistical properties of the wavelet coefficients of natural images. Starting from the Bayesian framework for image restoration, a simplified IHMT model is used as the prior model in the wavelet domain of the image, and a regularization constraint is constructed for restoration. Using an approximate-equivalence method, the restoration equation containing a mixture density is simplified into a single-density problem to solve. Experimental results show that the algorithm effectively recovers edge information and improves the peak signal-to-noise ratio.

14.
The regularization of the least-squares criterion is an effective approach in image restoration to reduce noise amplification. To avoid the smoothing of edges, edge-preserving regularization using a Gaussian Markov random field (GMRF) model is often used to allow realistic edge modeling and provide stable maximum a posteriori (MAP) solutions. However, this approach is computationally demanding because the introduction of a non-Gaussian image prior makes the restoration problem shift-variant. In this case, a direct solution using fast Fourier transforms (FFTs) is not possible, even when the blurring is shift-invariant. We consider a class of edge-preserving GMRF functions that are convex and have nonquadratic regions that impose less smoothing on edges. We propose a decomposition-enabled edge-preserving image restoration algorithm for maximizing the likelihood function. By decomposing the problem into two subproblems, with one shift-invariant and the other shift-variant, our algorithm exploits the sparsity of edges to define an FFT-based iteration that requires few iterations and is guaranteed to converge to the MAP estimate.
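To make the decomposition idea concrete, here is a minimal alternating sketch that splits the problem into a shift-variant pointwise shrinkage of the gradient field and a shift-invariant FFT solve. It assumes periodic boundaries and swaps in a TV-style shrinkage for the paper's convex GMRF potential, so it illustrates the structure of the iteration rather than the authors' exact algorithm; alpha, beta and the iteration count are placeholders.

```python
import numpy as np

def psf2otf(psf, shape):
    # Zero-pad the PSF to the image size and circularly shift its centre to (0, 0).
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    otf = np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def split_restore(y, psf, alpha=0.01, beta=1.0, iters=30):
    # Alternating scheme in the spirit of the shift-invariant / shift-variant split:
    #   z-step: pointwise (shift-variant) shrinkage of the gradients (TV stand-in),
    #   x-step: shift-invariant quadratic solve done entirely with FFTs.
    H = psf2otf(psf, y.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal difference operator
    Dy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical difference operator
    denom = np.abs(H) ** 2 + beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    Y = np.fft.fft2(y)
    x = y.copy()
    for _ in range(iters):
        X = np.fft.fft2(x)
        gx = np.real(np.fft.ifft2(Dx * X))
        gy = np.real(np.fft.ifft2(Dy * X))
        mag = np.hypot(gx, gy)
        scale = np.maximum(mag - alpha / beta, 0.0) / np.maximum(mag, 1e-12)
        zx, zy = scale * gx, scale * gy               # large (edge) gradients survive
        rhs = (np.conj(H) * Y
               + beta * (np.conj(Dx) * np.fft.fft2(zx) + np.conj(Dy) * np.fft.fft2(zy)))
        x = np.real(np.fft.ifft2(rhs / denom))
    return x
```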

15.
An algorithm for compression of bilevel images   (Total citations: 2; self-citations: 0; cited by others: 2)
This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), is 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and is 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
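A rough sketch of the adaptive 12-bit context modelling that produces p(1) for the coder is shown below. The causal neighbour template and the Laplace-style count initialization are assumptions (BACIC's exact template is not reproduced here), and the BAC codeword computation itself is omitted.

```python
import numpy as np

def context_probabilities(img):
    # Adaptive 12-bit context modelling in the spirit of BACIC.
    # img: binary (0/1) image.  Returns per-pixel p(bit == 1) estimates built
    # from counts indexed by the context of previously coded pixels.
    H, W = img.shape
    # Hypothetical causal template: (row offset, col offset) of 12 earlier pixels.
    template = [(-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
                (-2, -1), (-2, 0), (-2, 1),
                (0, -1), (0, -2), (0, -3), (-2, 2)]
    ones = np.ones(1 << 12)          # Laplace-style initial counts for bit == 1
    total = 2.0 * np.ones(1 << 12)   # total observations per context
    p = np.empty_like(img, dtype=float)
    for r in range(H):
        for c in range(W):
            ctx = 0
            for k, (dr, dc) in enumerate(template):
                rr, cc = r + dr, c + dc
                bit = img[rr, cc] if 0 <= rr < H and 0 <= cc < W else 0
                ctx |= int(bit) << k
            p[r, c] = ones[ctx] / total[ctx]   # p(1) that would be fed to the coder
            ones[ctx] += img[r, c]             # adaptive update after coding the pixel
            total[ctx] += 1.0
    return p
```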

16.
何碧容  蔡倩  孔莹莹  周建江 《信号处理》2017,33(11):1457-1467
To address the over-smoothing and incomplete speckle removal that occur in SAR image despeckling, a SAR despeckling algorithm is proposed in which the sparse structure follows a Gaussian scale mixture (GSM) model. The mathematical model of the algorithm is derived from Bayesian principles and the statistical properties of speckle. In the block-matching step, weights are measured with probabilities rather than Euclidean distances; based on the structural similarity between image blocks, homogeneous and heterogeneous regions can be distinguished effectively and a better mean estimate of each block is obtained. A PCA dictionary-learning method is used to train a sub-dictionary for each image block, realizing simultaneous sparse coding (SSC), and the mathematical model is solved by iterative regularization. The algorithm is validated on both synthetic-scene and real-scene SAR images. Experiments show that, compared with the existing PPB, SAR-BM3D and FANS algorithms, the proposed algorithm effectively raises the equivalent number of looks and, while removing speckle, preserves the local structure and texture of the image well.

17.
Image information and visual quality.   (Total citations: 31; self-citations: 0; cited by others: 0)
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
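As a rough summary of how the two information quantities are combined in measures of this kind (the notation below is ours, not quoted from the paper):

$$\mathrm{VIF} \;=\; \frac{\sum_{k} I\!\left(C_{k};\, F_{k} \mid s_{k}\right)}{\sum_{k} I\!\left(C_{k};\, E_{k} \mid s_{k}\right)},$$

where, for each subband k, C_k denotes the natural-scene (Gaussian scale mixture) model coefficients of the reference image, E_k and F_k are the reference and distorted coefficients seen through the visual-noise channel, and s_k are the mixture multipliers; the numerator measures information that survives the distortion, the denominator the information present in the reference.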

18.
Restored images produced by traditional frequency-domain and wavelet-domain deblurring algorithms always show noticeable edge ringing and residual blur, while the more effective spatial-domain iterative optimization deblurring algorithms are usually slow. To solve these problems, an image deblurring algorithm combining two-step iterative shrinkage/thresholding (TwIST) with a total-variation (TV) constraint, TwIST-TV, is proposed. First, a TV regularization constraint on the image is added to the deblurring objective function; second, before each two-step iteration on the image's wavelet coefficients, a TV denoising constraint is applied to the image; finally, the deblurred image is obtained by iteration. Experimental results show that, compared with frequency-domain and wavelet-domain deblurring algorithms, TwIST-TV effectively suppresses edge blur and ringing: the SNR and PSNR of the restored images are 1-7 dB higher and the mean structural similarity index (MSSIM) is up to 0.05 higher, while the method is more than six times faster than spatial-domain deconvolution algorithms at comparable accuracy.
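A compact sketch of a TwIST-style iteration with a TV denoiser as the shrinkage step is given below. The blur operator, the step parameters alpha and beta, and the TV weight are illustrative assumptions, and the paper's wavelet-plus-TV formulation is not reproduced; the blur is assumed self-adjoint (symmetric PSF) so it stands in for both A and A^T.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def twist_tv_deblur(y, blur, alpha=1.9, beta=1.0, lam=0.02, iters=50):
    # Two-step iterative shrinkage/thresholding (TwIST-style) deblurring sketch
    # with TV denoising as the shrinkage operator.
    # y:    blurred, noisy observation (2-D float array)
    # blur: callable applying the known, symmetric blur operator
    def psi(x):                                   # proximal / denoising step
        return denoise_tv_chambolle(x, weight=lam)

    x_prev = y.copy()
    x = psi(y + blur(y - blur(y)))                # first (plain IST) step
    for _ in range(iters):
        grad_step = x + blur(y - blur(x))         # Landweber-type data-fidelity step
        x_new = (1 - alpha) * x_prev + (alpha - beta) * x + beta * psi(grad_step)
        x_prev, x = x, x_new                      # two-step (TwIST) recursion
    return x
```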

19.
Restoration of blurred star field images by maximally sparse optimization   (Total citations: 1; self-citations: 0; cited by others: 1)
The problem of removing blur from, or sharpening, astronomical star-field intensity images is discussed. An approach to image restoration that recovers image detail using a constrained optimization-theoretic approach is introduced. Ideal star images may be modeled as a few point sources in a uniform background. It is argued that a direct measure of image sparseness is the appropriate optimization criterion for deconvolving the image blurring function. A sparseness criterion based on the l_p norm is presented, and candidate algorithms for solving the ensuing nonlinear constrained optimization problem are presented and reviewed. Synthetic and actual star image reconstruction examples are presented to demonstrate the method's superior performance compared with several image deconvolution methods.

20.
A VQ-based blind image restoration algorithm   (Total citations: 5; self-citations: 0; cited by others: 5)
Learning-based algorithms for image restoration and blind image restoration are proposed. Such algorithms deviate from the traditional approaches in this area by utilizing priors that are learned from similar images. Original images and versions of them degraded by the known degradation operator (the restoration problem) are used to design the VQ codebooks. The codevectors are designed using the blurred images; for each such vector, the high-frequency information obtained from the original images is also available. During restoration, the high-frequency information of a given degraded image is estimated from its low-frequency information based on the codebooks. For the blind restoration problem, a number of codebooks are designed corresponding to various versions of the blurring function. Given a noisy and blurred image, one of the codebooks is chosen based on a similarity measure, thereby providing identification of the blur. To make the restoration process computationally efficient, the principal component analysis (PCA) and VQ-nearest-neighbor approaches are utilized. Simulation results are presented to demonstrate the effectiveness of the proposed algorithms.
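The core lookup step can be sketched as below: for each degraded patch, find the nearest low-frequency codevector and add back the high-frequency content learned for it during training. This is a generic illustration of the idea; codebook training, the PCA speed-up, and the blur identification over multiple codebooks are omitted, and the patch/codebook shapes are assumptions.

```python
import numpy as np

def vq_restore_patches(blurred_patches, codebook_lf, codebook_hf):
    # VQ-based restoration sketch.
    # blurred_patches: (N, d) vectorized degraded patches
    # codebook_lf:     (K, d) low-frequency (blurred) codevectors
    # codebook_hf:     (K, d) high-frequency residuals learned for each codevector
    d2 = ((blurred_patches[:, None, :] - codebook_lf[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                     # nearest-neighbour VQ encoding
    return blurred_patches + codebook_hf[nearest]   # observation + learned high frequencies
```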
