Existing deraining methods based on convolutional neural networks (CNNs) have achieved great success, but residual rain streaks can still degrade images drastically. In this work, we propose an end-to-end multi-scale context information and attention network, called MSCIANet. The proposed network consists of a multi-scale feature extraction (MSFE) subnetwork and a multi-receptive-field feature extraction (MRFFE) subnetwork. Firstly, the MSFE picks up features of rain streaks at different scales and propagates the deep features of the two layers across stages through skip connections. Secondly, the MRFFE refines the details of the background through an attention mechanism and depthwise separable convolutions with different receptive fields at different scales. Finally, the fusion of the outputs of the two subnetworks reconstructs the clean background image. Extensive experimental results show that the proposed network performs well on the deraining task on both synthetic and real-world datasets. A demo is available at https://github.com/CoderLi365/MSCIANet.
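The abstract does not include the MRFFE implementation, so the following is a minimal, hypothetical PyTorch sketch of the core idea it describes: parallel depthwise separable convolutions with different receptive fields (here varied via dilation), fused and reweighted by a simple channel-attention gate. The class names, dilation rates and attention design are illustrative assumptions, not the authors' actual MSCIANet code.

```python
# Hypothetical sketch of a multi-receptive-field block with attention.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (per-channel) followed by a 1x1 pointwise conv."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MultiReceptiveFieldBlock(nn.Module):
    """Fuses features from several receptive fields, gated by channel attention."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            DepthwiseSeparableConv(channels, d) for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        # Squeeze-and-excitation-style channel attention (an assumption).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        fused = self.fuse(feats)
        return x + fused * self.attn(fused)  # residual, attention-weighted

x = torch.randn(1, 32, 64, 64)
print(MultiReceptiveFieldBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```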
In this paper, an end-to-end convolutional neural network, named the Attention-Based Multi-Stream Feature Fusion Network (AMSFF-Net), is proposed to recover haze-free images. The network is built on an encoder-decoder structure. The encoder generates features at three resolution levels; multi-stream features are extracted by residual dense blocks and combined by feature fusion blocks. A pixel attention mechanism allows AMSFF-Net to pay more attention to informative features at each resolution level, and a sharp image can be recovered through accurate kernel estimation. Further, a mixed-convolution attention mechanism at the decoder enables AMSFF-Net to capture semantic and sharp textural details from the extracted features and to recover a high-quality image from coarse to fine. Skip connections reduce the loss of image details caused by the larger receptive fields, and a deep semantic loss function emphasizes the semantic information in deep features. Experimental findings show that the proposed method outperforms existing approaches on both synthetic and real-world images.
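As a rough illustration of the pixel attention mechanism mentioned above, the following PyTorch sketch computes a per-pixel, per-channel gate from the features themselves so that informative locations receive larger weights. The reduction ratio and layer layout are assumptions; the authors' AMSFF-Net implementation may differ.

```python
# Hypothetical sketch of a pixel-attention gate.
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Multiplies features by a learned gate in (0, 1) at every pixel."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())  # one weight per pixel per channel

    def forward(self, x):
        return x * self.gate(x)

x = torch.randn(1, 64, 32, 32)
print(PixelAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```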
Most existing deraining methods cannot preserve image details while removing rain streaks. To solve this problem, we propose a single-image deraining method with a dual U-Net generative adversarial network (DU-GAN). By using two U-Nets with strong learning ability as its generator, DU-GAN can not only remove rain streaks accurately but also preserve image details; the network makes full use of the image information and extracts complete image features. An adversarial loss computed with the proposed dual U-Net generator drives the de-rained images toward the ground truth. Furthermore, to improve the visual quality of the generated images, L1 and structural-similarity loss functions, which are consistent with human visual perception, are applied to produce the final output. Synthetic and real-world rainy image datasets are used to evaluate the effectiveness of the proposed network. The quantitative and visual results show that the proposed method achieves state-of-the-art performance compared with other single-image deraining methods. The source code can be found at https://github.com/LuBei-design/DU-GAN.
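A minimal sketch of the combined L1 and structural-similarity objective described above is given below, assuming inputs scaled to [0, 1]. SSIM is computed here with a simple uniform window via average pooling, and the weighting constant ssim_weight is illustrative; the paper's exact window, constants and weighting are not stated in the abstract.

```python
# Hypothetical sketch of an L1 + SSIM reconstruction loss.
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM over a batch, assuming inputs are scaled to [0, 1]."""
    def mean(t):  # local mean with a uniform window
        return F.avg_pool2d(t, window, stride=1, padding=window // 2,
                            count_include_pad=False)
    mu_x, mu_y = mean(x), mean(y)
    var_x = mean(x * x) - mu_x ** 2
    var_y = mean(y * y) - mu_y ** 2
    cov_xy = mean(x * y) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def derain_loss(pred, target, ssim_weight=0.2):
    """L1 fidelity term plus an SSIM term rewarding structural agreement."""
    return F.l1_loss(pred, target) + ssim_weight * (1.0 - ssim(pred, target))

pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(derain_loss(pred, target).item())
```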
Although deep learning-based methods have demonstrated strong capabilities in image fusion, they usually improve fusion performance by increasing the width and depth of the network, which raises the computational cost and makes them unsuitable for industrial applications. In this paper, an end-to-end network based on a fixed convolution module derived from discrete Chebyshev moments is proposed, which needs no pre- or post-processing. The proposed network is composed of three parts: a feature extraction module, a fusion module and a feature reconstruction module. In the feature extraction module, a novel fixed convolution module based on discrete Chebyshev moments is proposed to obtain different frequency components quickly. To improve image sharpness and fuse more details, a spatial attention mechanism based on the average gradient is proposed in the fusion module. Extensive results demonstrate that the proposed network achieves remarkable fusion performance, high time efficiency and strong generalization ability.
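To make the average-gradient idea concrete, here is a hypothetical PyTorch sketch: the locally averaged gradient magnitude of each source image serves as a sharpness map, and the two sources are fused with pixel-wise weights proportional to those maps. The window size, grayscale conversion and normalization are assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of average-gradient-weighted image fusion.
import torch
import torch.nn.functional as F

def average_gradient(img, window=7):
    """Locally averaged gradient magnitude as a sharpness map (N, 1, H, W)."""
    gray = img.mean(dim=1, keepdim=True)
    gx = gray[:, :, :, 1:] - gray[:, :, :, :-1]      # horizontal differences
    gy = gray[:, :, 1:, :] - gray[:, :, :-1, :]      # vertical differences
    grad = torch.sqrt(F.pad(gx, (0, 1)) ** 2 + F.pad(gy, (0, 0, 0, 1)) ** 2)
    return F.avg_pool2d(grad, window, stride=1, padding=window // 2)

def fuse_by_gradient(img_a, img_b, eps=1e-6):
    """Pixel-wise weighted fusion: the locally sharper source dominates."""
    wa, wb = average_gradient(img_a), average_gradient(img_b)
    total = wa + wb + eps
    return img_a * (wa / total) + img_b * ((wb + eps) / total)

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(fuse_by_gradient(a, b).shape)  # torch.Size([1, 3, 64, 64])
```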