Images captured under low-light conditions often suffer from severe loss of structural detail and color; image-enhancement algorithms are therefore widely used for low-light image restoration. Enhancement algorithms based on the traditional Retinex model consider only changes in image brightness while ignoring the noise and color deviation introduced during restoration. To address these problems, this paper proposes an image enhancement network based on multi-stream information supplement, which contains a mainstream structure and two branch structures at different scales. To obtain richer feature information, an information complementary module is designed to supplement information across the three structures. The feature information from the three structures is then concatenated to perform the final image recovery operation. To restore more abundant structures and realistic colors, we define a joint loss function combining the L1 loss, structural similarity loss, and color-difference loss to guide network training. Experimental results show that the proposed network achieves satisfactory performance in both subjective and objective evaluations.
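The joint loss this abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the weights `w_l1`, `w_ssim`, and `w_color` and the simplified windowless (global) SSIM are assumptions made for clarity.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between enhanced and reference images."""
    return np.mean(np.abs(pred - target))

def ssim_global(pred, target, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM (no sliding window) on images in [0, 1]."""
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = np.mean((pred - mu_p) * (target - mu_t))
    return ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_t**2 + c1) * (var_p + var_t + c2))

def color_diff_loss(pred, target):
    """Mean per-pixel Euclidean distance between RGB vectors."""
    return np.mean(np.sqrt(np.sum((pred - target) ** 2, axis=-1)))

def joint_loss(pred, target, w_l1=1.0, w_ssim=0.5, w_color=0.1):
    """Weighted sum of L1, (1 - SSIM), and color-difference terms."""
    return (w_l1 * l1_loss(pred, target)
            + w_ssim * (1.0 - ssim_global(pred, target))
            + w_color * color_diff_loss(pred, target))
```

In a training framework these terms would be computed with differentiable tensor operations, but the structure of the combined objective is the same.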
To enhance the contrast of low-light images and reduce their noise, we propose an image enhancement method based on Retinex theory and the dual-tree complex wavelet transform (DT-CWT). The method first converts an image from the RGB color space to the HSV color space and decomposes the V channel with the dual-tree complex wavelet transform. Next, an improved locally adaptive tone-mapping method is applied to the low-frequency components of the image, and a soft-threshold denoising algorithm is used to denoise the high-frequency components. Then, the V channel is rebuilt and the contrast is adjusted using a white-balance method. Finally, the processed image is converted back into the RGB color space as the enhanced result. Experimental results show that the proposed method effectively improves contrast enhancement, noise reduction, and color reproduction.
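The decompose-and-denoise step can be illustrated with a simplified sketch. A one-level Haar split stands in here for the DT-CWT (which would normally require a dedicated wavelet library); the function names and the threshold value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def haar_decompose(v):
    """One-level 2-D Haar split (a stand-in for DT-CWT): returns a
    low-frequency average and one high-frequency detail band of a
    V channel with even dimensions."""
    a = v[0::2, 0::2]; b = v[0::2, 1::2]
    c = v[1::2, 0::2]; d = v[1::2, 1::2]
    low = (a + b + c + d) / 4.0    # low-frequency component
    high = (a - b - c + d) / 4.0   # diagonal detail coefficients
    return low, high

def soft_threshold(coeffs, t):
    """Shrink detail coefficients toward zero to suppress noise:
    sign(x) * max(|x| - t, 0)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

After thresholding, the detail band would be recombined with the (tone-mapped) low-frequency band via the inverse transform to rebuild the V channel.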
To improve the localization accuracy of feature-point-based binocular visual positioning algorithms in low-light environments, a low-light image enhancement algorithm based on online estimation is proposed for visual simultaneous localization and mapping (SLAM). By estimating the image brightness online and updating the parameters of the enhancement algorithm in real time, the method overcomes the limitations of fixed-parameter image-enhancement algorithms in scenes that are relatively bright, relatively dark, or ...
With the advancement of camera technology in mobile devices, a vast number of photos are taken and shared in daily life. However, many users still have unsatisfactory experiences with low-visibility photos, which are frequently acquired in complicated real-world environments. In this paper, a novel yet simple method for low-light image enhancement is proposed without any learning procedure. The key idea is to estimate properties of the scene illumination in both a global and a local manner by exploiting a diffusion pyramid with residuals. Specifically, the residual at each scale level of the diffusion pyramid is combined with the corresponding input. The restored result efficiently highlights local details across different scale spaces and is thus helpful for preserving illumination boundaries. By max-pooling the restored results from different levels of the diffusion pyramid, resized to the original resolution, the illumination component is accurately inferred from a given image. Compared with recent learning-based approaches, one important advantage of the proposed method is that it avoids overfitting to a specific training dataset. Experimental results on various benchmark datasets demonstrate the efficiency and robustness of the proposed method for low-light image enhancement in real-world scenarios.
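A rough sketch of multi-scale illumination estimation in this spirit: a box filter at several scales stands in for the diffusion pyramid with residuals, the initial estimate is the per-pixel maximum over RGB channels, and the final illumination is the pixel-wise maximum across scales. The scales and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def box_blur(img, k):
    """Naive box filter of odd width k with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def estimate_illumination(rgb, scales=(1, 5, 15)):
    """Estimate illumination from an HxWx3 image in [0, 1]:
    start from the max-RGB channel, smooth at several scales,
    then max-pool across scales (mimicking the pyramid fusion)."""
    init = rgb.max(axis=-1)
    levels = [box_blur(init, k) for k in scales]
    return np.maximum.reduce(levels)
```

In practice the blur would be replaced by the paper's edge-aware diffusion so that illumination boundaries are preserved rather than smeared.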
Low-light image enhancement is a challenging task because enhancing image brightness and reducing image degradation must be considered simultaneously. Although existing deep learning-based methods improve the visibility of low-light images, many of them tend to lose details or sacrifice naturalness. To address these issues, we present a multi-stage network for low-light image enhancement that consists of three sub-networks. More specifically, inspired by Retinex theory and the bilateral-grid technique, we first design a reflectance and illumination decomposition network to decompose an image into reflectance and illumination maps efficiently. To increase brightness while preserving edge information, we then devise an attention-guided illumination adjustment network. The reflectance and adjusted illumination maps are fused and refined by adversarial learning to reduce image degradation and improve image naturalness. Experiments are conducted on our rebuilt SICE low-light image dataset, which consists of 1380 real paired images, and on the public LOL dataset, which has 500 real paired images and 1000 synthetic paired images. Experimental results show that the proposed method outperforms state-of-the-art methods quantitatively and qualitatively.
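The Retinex-style decompose-adjust-fuse pipeline this abstract outlines can be sketched minimally as follows, with a max-RGB split and a fixed gamma curve standing in for the learned decomposition and attention-guided adjustment networks; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def decompose(rgb, eps=1e-6):
    """Crude Retinex split: illumination = per-pixel max over channels,
    reflectance = image / illumination."""
    illum = rgb.max(axis=-1, keepdims=True)
    refl = rgb / (illum + eps)
    return refl, illum

def adjust_illumination(illum, gamma=0.45):
    """Brighten dark regions with a gamma curve (a stand-in for the
    attention-guided illumination adjustment network)."""
    return np.clip(illum, 0.0, 1.0) ** gamma

def enhance(rgb):
    """Fuse reflectance with the adjusted illumination map."""
    refl, illum = decompose(rgb)
    return np.clip(refl * adjust_illumination(illum), 0.0, 1.0)
```

The adversarial refinement stage, which the paper uses to reduce degradation and improve naturalness, has no simple closed-form equivalent and is omitted here.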
To address the poor performance of low-illumination enhancement algorithms on unevenly illuminated images, a low-light image enhancement (LIME) algorithm based on a residual network is proposed. The algorithm constructs a deep network that uses residual modules to extract image feature information and semantic modules to extract image semantic information at different levels. Moreover, a composite loss function is designed for the low-illumination image enhancement process, which ...
Images captured in weak illumination conditions suffer from seriously degraded quality. Resolving the various degradations of low-light images can effectively improve both the visual quality of the images and the performance of high-level visual tasks. In this study, a novel Retinex-based Real-low to Real-normal Network (R2RNet) is proposed for low-light image enhancement, comprising three subnets: a Decom-Net, a Denoise-Net, and a Relight-Net, which perform decomposition, denoising, and contrast enhancement with detail preservation, respectively. R2RNet uses not only the spatial information of the image to improve contrast but also the frequency information to preserve details, and therefore achieves more robust results on all degraded images. Unlike most previous methods, which were trained on synthetic images, we collected the first Large-Scale Real-World paired low/normal-light image dataset (LSRW dataset) to satisfy the training requirements and give our model better generalization in real-world scenes. Extensive experiments on publicly available datasets demonstrate that our method outperforms existing state-of-the-art methods both quantitatively and visually. In addition, our results show that the performance of a high-level visual task (i.e., face detection) in low-light conditions can be effectively improved by using the enhanced results obtained with our method. Our code and the LSRW dataset are available at: https://github.com/JianghaiSCU/R2RNet.