Similar Documents
20 similar documents found (search time: 15 ms)
1.
Chinese Journal of Electronics, 2016, (6): 1114-1120
Images captured in foggy or hazy weather often suffer from poor visibility. The dark channel prior has largely solved the single-image dehazing problem for natural scenes, but it fails when scene objects are inherently similar to the atmospheric light and no shadow is cast on them. We propose an efficient regularization method that adds a scene radiance constraint and combines it with the dark channel prior to remove haze from a single input image. Experiments show that the improved algorithm can handle various levels of fog and greatly enhances image visibility and detail. In addition, the recovered haze-free image exhibits little or no halo artifact.
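The dark channel prior this abstract builds on is straightforward to sketch. Below is a minimal NumPy illustration; the window size, `omega`, and the airlight handling are generic textbook choices, not this paper's exact settings (which add a scene radiance constraint on top):

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a local min filter.

    `patch` is an assumed window size (15x15 is a common default).
    """
    min_rgb = image.min(axis=2)                 # min over color channels
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(image, airlight, omega=0.95, patch=15):
    """Estimate transmission t = 1 - omega * dark_channel(I / A)."""
    normalized = image / airlight               # broadcast over channels
    return 1.0 - omega * dark_channel(normalized, patch)
```

On a haze-free dark region the dark channel is near zero, so the estimated transmission approaches 1; on a uniformly bright (airlight-colored) region it drops toward `1 - omega`, which is exactly the failure mode the abstract's radiance constraint targets.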

2.
Underwater photographs suffer from color distortion and blurred details and edges. To address this, we propose an underwater image enhancement algorithm based on the color attenuation prior. First, minimum filtering is applied for denoising when computing the dark channel function. Then a saliency map of the image is computed and depth is estimated using the color prior; this filtering not only reduces noise but also prevents color distortion. Finally, a restored image is obtained from a simplified model and corrected with a gamma transform, achieving soft dehazing. Experimental results show that, compared with several typical underwater image dehazing algorithms, the proposed method better improves image clarity and contrast while yielding good color.
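The gamma transform used for the final correction step can be illustrated in a few lines; the exponent below is an arbitrary illustrative default, not this paper's tuned value:

```python
import numpy as np

def gamma_correct(image, gamma=0.7):
    """Apply gamma correction to a float image with values in [0, 1].

    gamma < 1 brightens (lifts shadows); gamma > 1 darkens.
    The default 0.7 is illustrative only.
    """
    return np.clip(image, 0.0, 1.0) ** gamma
```

With `gamma < 1` the mapping is concave, so dark restored regions are lifted more than bright ones, which is why it reads as a "soft" correction after dehazing.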

3.
To effectively solve the ill-posed image compressive sensing (CS) reconstruction problem, it is essential to properly exploit image prior knowledge. In this paper, we propose an efficient hybrid regularization approach for image CS reconstruction that can simultaneously exploit both internal and external image priors in a unified framework. Specifically, a novel centralized group sparse representation (CGSR) model is designed to more effectively exploit the internal image sparsity prior by suppressing the group sparse coding noise (GSCN), i.e., the difference between the group sparse coding coefficients of the observed image and those of the original image. Meanwhile, by taking advantage of the plug-and-play (PnP) image restoration framework, a state-of-the-art deep image denoiser is plugged into the optimization model of image CS reconstruction to implicitly exploit an external deep denoiser prior. To make our hybrid internal- and external-prior-regularized image CS method (termed CGSR-D-CS) tractable and robust, an efficient algorithm based on the split Bregman iteration is developed to solve the optimization problem of CGSR-D-CS. Experimental results demonstrate that our CGSR-D-CS method outperforms several state-of-the-art image CS reconstruction methods, whether model-based or deep learning-based, in terms of both objective quality and visual perception.
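Split Bregman iterations for sparsity-regularized problems center on a shrinkage (soft-thresholding) step applied to the auxiliary sparse-coding variable at each iteration. A minimal sketch of that generic operator (not this paper's full CGSR-D-CS solver):

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding (shrinkage) operator used inside each split
    Bregman iteration: shrink(x, lam) = sign(x) * max(|x| - lam, 0).
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Coefficients smaller in magnitude than `lam` are zeroed, which is how the sparsity prior suppresses small coding noise while the data-fidelity subproblem keeps large coefficients consistent with the measurements.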

4.
To save bandwidth and storage space as well as speed up data transmission, people usually perform lossy compression on images. Although the JPEG standard is a simple and effective compression method, it usually introduces various visually unpleasant artifacts, especially the notorious blocking artifacts. In recent years, deep convolutional neural networks (CNNs) have seen remarkable development in compression artifact reduction. Despite their excellent performance, most deep CNNs suffer from heavy computation due to very deep and wide architectures. In this paper, we propose an enhanced wide-activated residual network (EWARN) for efficient and accurate image deblocking. Specifically, we propose an enhanced wide-activated residual block (EWARB) as the basic building module. Our EWARB achieves a larger activation width, better use of interdependencies among channels, and more informative and discriminative non-linear activation features, without using more parameters than the residual block (RB) or the wide-activated residual block (WARB). Furthermore, we introduce an overlapping patch extraction and combination (OPEC) strategy into our network in a fully convolutional way, leading to a large receptive field, enforced compatibility among adjacent blocks, and efficient deblocking. Extensive experiments demonstrate that our EWARN outperforms several state-of-the-art methods quantitatively and qualitatively with relatively small model size and shorter running time, achieving a good trade-off between performance and complexity.

5.
Existing enhancement methods tend to overlook the difference between low-frequency and high-frequency image components. However, the low-frequency portion of an image contains the smooth areas that occupy most of it, while the high-frequency components are sparser. Moreover, the differing importance of the low- and high-frequency components cannot be exploited precisely and effectively for enhancement if they are treated together. It is therefore reasonable to handle these components separately when designing enhancement algorithms over image subspaces. In this paper, we propose a novel divide-and-conquer strategy that decomposes the observed image into four subspaces and enhances the image corresponding to each subspace individually. We employ the existing technique of gradient distribution specification for these enhancements, which has shown promising results for image naturalization. We then reconstruct the full image by weighted fusion of the four subspace images. Experimental results demonstrate the effectiveness of the proposed strategy in both image naturalization and detail promotion.
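A minimal way to separate the low-frequency and high-frequency components the abstract distinguishes is a blur-and-subtract decomposition. The sketch below uses a simple box blur as a stand-in for whatever smoother the paper actually employs; its subspace construction is more elaborate:

```python
import numpy as np

def frequency_split(image, k=5):
    """Split a grayscale image into a smooth low-frequency base and a
    sparse high-frequency residual; base + detail reconstructs the input.

    The k x k box blur is an illustrative stand-in for a proper
    low-pass filter.
    """
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    base = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = image - base
    return base, detail
```

Because the split is exact (`base + detail == image`), each component can be enhanced independently and the results recombined, which is the core of the divide-and-conquer idea.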

6.
An infrared image detail enhancement algorithm based on spatial- and frequency-domain processing
High-precision, high-dynamic-range infrared images have low contrast and a narrow effective gray range. To make detail texture more salient while avoiding the impact of noise and blind pixels on weak-contrast detail in spatial-domain processing, and to exploit the global nature of frequency-domain processing, this paper proposes an infrared image detail enhancement algorithm that combines spatial- and frequency-domain processing. First, the infrared image is separated in the spatial domain into high-frequency detail and low-frequency background. It is then transformed to the frequency domain, where a gamma transform enhances the high-frequency component of the detail while suppressing the high-frequency component of the background. The processed detail and background are recombined in the spatial domain with given weights. Finally, histogram statistical stretching expands the effective gray range and enhances detail contrast. Experimental results show that, in both subjective (visual) and objective evaluation, the algorithm provides strong detail enhancement and good visual quality, with the prospect of real-time processing.
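The final histogram statistical stretching step can be approximated by a percentile-based contrast stretch; a generic sketch (the percentile limits are illustrative assumptions, not the paper's statistics):

```python
import numpy as np

def percentile_stretch(image, low=1.0, high=99.0):
    """Stretch gray levels so the [low, high] percentile range maps to
    [0, 1]. Using percentiles rather than min/max keeps the stretch
    robust to outlier pixels (e.g. hot or dead detector elements).
    """
    lo, hi = np.percentile(image, [low, high])
    return np.clip((image - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```

For a narrow-range infrared frame this expands the occupied gray levels to the full display range, which is where the contrast gain in the last step comes from.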

7.
An image enhancement algorithm fusing multi-scale Retinex and bilateral filtering
To obtain a better enhancement result and address the "halo artifact" phenomenon of the Retinex algorithm, this paper proposes an image enhancement algorithm that fuses multi-scale Retinex with bilateral filtering. First, the logarithm function of the multi-scale Retinex algorithm is modified to widen the image's gray range. Then a bilateral filter processes the reflectance component to remove the adverse effects of illumination changes and improve image contrast and clarity. Finally, a gamma function corrects image brightness while preserving detail information. Experimental results show that, in both subjective visual effect and objective quality, the proposed algorithm outperforms the multi-scale Retinex algorithm and can effectively eliminate the halo artifact phenomenon.
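The multi-scale Retinex core that the paper modifies takes the log-ratio of the image to Gaussian-blurred versions of itself at several scales. A plain NumPy sketch (the scales and epsilon below are common defaults, not this paper's modified form):

```python
import numpy as np

def gaussian_blur(image, sigma):
    """Separable Gaussian blur with edge padding (grayscale image)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(image, radius, mode="edge")
    # separable: convolve rows, then columns
    tmp = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def multiscale_retinex(image, sigmas=(15, 80, 250)):
    """MSR: average of log(I) - log(Gaussian(I, sigma)) over the scales."""
    log_i = np.log(image + 1e-6)
    return np.mean(
        [log_i - np.log(gaussian_blur(image, s) + 1e-6) for s in sigmas],
        axis=0)
```

The log-ratio estimates reflectance by dividing out the (blur-estimated) illumination; the halo artifact the paper attacks arises exactly where the Gaussian estimate smears illumination across strong edges, which is why the paper substitutes a bilateral filter there.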

8.
Low-light image enhancement is a challenging task because enhancing image brightness and reducing image degradation must be considered simultaneously. Although existing deep learning-based methods improve the visibility of low-light images, many of them tend to lose details or sacrifice naturalness. To address these issues, we present a multi-stage network for low-light image enhancement consisting of three sub-networks. More specifically, inspired by Retinex theory and the bilateral grid technique, we first design a reflectance and illumination decomposition network that efficiently decomposes an image into reflectance and illumination maps. To increase brightness while preserving edge information, we then devise an attention-guided illumination adjustment network. The reflectance and adjusted illumination maps are fused and refined by adversarial learning to reduce image degradation and improve naturalness. Experiments are conducted on our rebuilt SICE low-light image dataset, which consists of 1380 real paired images, and on the public LOL dataset, which has 500 real paired images and 1000 synthetic paired images. Experimental results show that the proposed method outperforms state-of-the-art methods quantitatively and qualitatively.

9.
This paper presents a fuzzy energy-based active contour model with a shape prior for image segmentation. It proposes a fuzzy energy functional comprising a data term and a shape prior term. The data term, inspired by the region-based active contour approach of Chan and Vese, evolves the contour based on image information. The shape prior term, inspired by Chan and Zhu's work and defined as the distance between the evolving shape and a reference shape, constrains the evolving contour with respect to the reference. To align the shapes, we exploit a shape normalization procedure that accounts for affine transformation. In addition, to minimize the energy functional, we use a direct method to calculate the energy alterations. The proposed model can therefore handle images with background clutter and object occlusion, improves computational speed, and avoids the difficulties of time-step selection in gradient descent-based approaches.

10.
Algorithms based on the traditional dark channel theory produce color distortion and brightness loss when processing hazy images. To address this, the paper proposes a single hazy-image restoration algorithm based on weighted multi-exposure fusion. First, the region of the global atmospheric light is obtained by analyzing the hazy image's histogram. Second, a new high-order difference filter built on the Kirsch operator is constructed to refine the transmission map. Finally, a saliency-weighted multi-exposure image fusion method is designed to improve the visual quality of the processed image. Experiments on natural and synthetic images, with subjective and objective comparisons against several algorithms, show that the proposed algorithm achieves better restoration than existing methods.

11.
Most existing image enhancement algorithms cannot handle multiple types of degraded images. We therefore propose a fast image enhancement algorithm based on prior knowledge and the atmospheric scattering model. First, based on extensive statistical experiments, we propose a new image prior, the bright channel prior: in the neighborhood of every pixel of a high-quality clear image, a white point very likely exists. Next, we remedy defects in the scattering model and, combining the bright channel prior with the dark channel prior, derive a recovery formula for scene reflectance. Finally, for cases where the dark channel prior fails, we propose a fault-tolerance mechanism based on reliability prediction to extend its applicability. Experimental results show that the algorithm not only highlights texture detail effectively but also restores color tone to some extent, and can process many different types of degraded images.

12.
With the rapid growth of image data, mainstream image retrieval methods rely on fixed visual-feature encoding steps that lack learning ability, so their representational power is weak; moreover, the high dimensionality of the visual features severely limits retrieval performance. To address these problems, this paper proposes a method that learns binary hash codes with a deep convolutional neural network for large-scale image retrieval. The basic idea is to add a hash layer to the deep learning framework and learn image features and hash functions simultaneously, with the hash functions constrained to be independent and to minimize quantization error. First, the strong learning capacity of the convolutional network mines the latent relations among training images and extracts deep features, strengthening their discriminability and expressive power. The features are then fed into the hash layer, and the hash functions are learned so that the binary codes output by the hash layer minimize classification error and quantization error while satisfying the independence constraint. Finally, a given input image is passed through the framework's hash layer to obtain its hash code, enabling efficient retrieval over large-scale image data in a low-dimensional Hamming space. Experimental results on three common datasets show that the hash codes obtained by the proposed method yield better retrieval performance than current mainstream methods.
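Retrieval in the low-dimensional Hamming space described here reduces to binarizing the hash-layer output and ranking database items by Hamming distance. A minimal sketch (the 0.5 threshold is a common convention for sigmoid outputs, not necessarily this paper's choice):

```python
import numpy as np

def binarize(activations):
    """Quantize real-valued hash-layer outputs to 0/1 codes
    (0.5 threshold assumes sigmoid-like activations)."""
    return (np.asarray(activations) >= 0.5).astype(np.uint8)

def hamming_rank(query_code, database_codes):
    """Rank database items by Hamming distance to the query's binary
    hash code; returns indices, nearest first (stable order on ties)."""
    dists = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")
```

Because Hamming distance is a bit-count over short codes, this ranking is far cheaper than nearest-neighbor search in the original high-dimensional feature space, which is the efficiency argument the abstract makes.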

13.
To address the shortcomings of traditional image enhancement methods, an image enhancement algorithm based on a simulated annealing genetic algorithm is proposed. The enhancement process parameters are first encoded as individuals in a genetic algorithm, and the parameters are then optimized by simulating natural evolution. Simulated annealing is introduced to overcome population degeneration, and dynamic adaptive crossover and mutation are used to avoid convergence to local optima. Finally, image enhancement is performed with the best parameters found, and comparative simulation experiments test the algorithm's effectiveness and superiority. Simulation results show that, compared with other current image enhancement algorithms, the simulated annealing genetic algorithm recovers image detail, improves contrast, and enhances image quality.

14.
This paper introduces a technique for extracting weak signals from all pixels based on photo-charge accumulation, analyzes the principle by which this method raises image signal amplitude and reduces random noise, and presents experimental results of enhanced image extraction.
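The charge-accumulation principle is essentially frame accumulation: over N registered exposures the signal adds coherently while zero-mean random noise averages out, improving SNR by roughly sqrt(N). A minimal numerical sketch of that principle (not the paper's sensor-level implementation):

```python
import numpy as np

def accumulate_frames(frames):
    """Average N registered frames of the same scene.

    The deterministic signal is preserved while independent zero-mean
    noise shrinks by ~1/sqrt(N), so SNR grows by ~sqrt(N).
    """
    return np.mean(np.asarray(frames, dtype=float), axis=0)
```

With 100 frames of i.i.d. noise, the residual noise standard deviation drops to about one tenth of a single frame's.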

15.
Two-dimensional fractional-order Kalman filtering and its application in image processing
This paper studies two-dimensional fractional-order Kalman filtering and its application to image enhancement and filtering. Based on the definition of fractional calculus, a fractional-order difference state-space model of a two-dimensional linear discrete system is first established. A two-dimensional fractional-order Kalman filtering algorithm applicable to image processing is then proposed, and its effectiveness is verified experimentally. Simulation results show that the algorithm enhances detail features in the image while attenuating background noise.

16.
Visible-light images captured in low-illumination environments have poor visibility, and fusing them directly with infrared images yields fusion results of unsatisfactory clarity. To address this problem, the paper proposes an image fusion method based on contrast enhancement and multi-scale edge-preserving decomposition. First, before fusion, an adaptive enhancement algorithm based on guided filtering improves the visibility of dark-region content in the visible image. Second, a scale-aware edge-preserving filter decomposes the input images at multiple scales. Third, saliency maps are constructed with frequency-tuned filtering. Finally, the fused image is reconstructed using weight maps generated by guided filtering. Experimental results show that the proposed method not only makes detail information more prominent but also effectively suppresses artifacts.

17.
Xu Lei, Zheng Xiaoxiang, Chen Xingcan. Acta Electronica Sinica, 1999, 27(9): 121-123
Classical methods over-amplify noise and produce artifacts when enhancing low-SNR medical images. To address this, at fine scales this paper denoises according to the difference in inter-scale correlation of the wavelet transform (WT) phase between signal and noise, while at coarse scales it applies fast semisoft-threshold shrinkage to the DWT coefficients. The gain applied to the WT coefficients is controlled nonlinearly and adaptively according to the characteristics of human vision. Compared with classical methods, the proposed method yields better visual enhancement without producing artifacts, and performs well in noise suppression, edge preservation, and enhancement of various details.
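The semisoft (firm) threshold applied to the DWT coefficients at coarse scales interpolates between hard and soft thresholding. A generic sketch of the operator (not this paper's exact parameterization):

```python
import numpy as np

def semisoft_threshold(x, t1, t2):
    """Semisoft (firm) shrinkage with thresholds t1 < t2:
    |x| <= t1 -> 0 (kill small, noise-like coefficients),
    |x| >= t2 -> x (keep large coefficients untouched),
    in between -> linear interpolation between the two regimes.
    """
    ax = np.abs(x)
    shrunk = np.sign(x) * t2 * (ax - t1) / (t2 - t1)
    return np.where(ax <= t1, 0.0, np.where(ax >= t2, x, shrunk))
```

Unlike soft thresholding, large coefficients pass through unbiased (edges are preserved); unlike hard thresholding, there is no discontinuity at the threshold, which reduces shrinkage artifacts.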

18.
Low-illumination images suffer from blurred detail and low contrast. To address these problems, this paper proposes an enhancement algorithm for low-illumination color images. First, a model with gradient sparsity and least-squares constraints is established to decompose the image into a structure layer and a detail layer. Then the proposed multi-scale edge-preserving detail enhancement algorithm strengthens the image's detail information and filters it. Finally, the detail-enhanced image is mapped by an improved Retinex algorithm, yielding a restored image with enhanced detail, suitable brightness, and strong contrast. Experimental results show that, subjectively, detail is enhanced and brightness is suitable; objectively, the one-dimensional pixel profile of the structure-layer image shows good smoothing behavior, the detail-enhanced image scores well on NIQE (5.5202), BRISQUE (31.1893), and PSNR (25.3625), and the restored image has good entropy (7.4421), edge strength (128.3231), and mean brightness (121.1827). The proposed algorithm achieves effective decomposition and detail enhancement of low-illumination images and improves overall image quality.

19.
A key step in designing a robust automatic image annotation system is extracting visual features that effectively describe image semantics. Because heterogeneous visual features such as color, texture, and shape contribute with different degrees of importance when representing a particular image semantic, and features of the same type are correlated with one another, this paper proposes a Graph Regularized Non-negative Group Sparsity (GRNGS) model for image annotation, computing its model parameters by a non-negative matrix factorization method. The model combines graph regularization with an l2,1-norm constraint so that the group features selected during annotation exhibit both visual similarity and semantic relevance. Experimental results on image datasets such as Corel5K and ESP Game show that the GRNGS model is more robust and produces more accurate annotations than several recent image annotation models.

20.
We introduce both a shape prior and edge information into a Markov random field (MRF) to segment targets of interest in images. Kernel principal component analysis (PCA) is performed on a set of training shapes to obtain a statistical shape representation. Edges are extracted directly from the images. Both are added to the MRF energy function, and the integrated energy function is minimized by graph cuts. An alignment procedure is presented to deal with variations between the target object and the shape templates. The edge information mitigates the influence of inaccurate shape alignment and makes the result smoother. The experiments indicate that shape and edge play important roles in complete and robust foreground segmentation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号