Similar Literature
20 similar records found (search time: 15 ms)
1.
Underwater images typically exhibit color distortion and low contrast as a result of the exponential decay that light suffers as it travels through water. Moreover, colors associated with different wavelengths attenuate at different rates, the red wavelength being the one that attenuates fastest. To restore underwater images, we propose a Red Channel method in which the colors associated with short wavelengths are recovered, as expected for underwater scenes, leading to a recovery of the lost contrast. The Red Channel method can be interpreted as a variant of the Dark Channel method used for images degraded by atmospheric haze. Experimental results show that our technique gracefully handles artificially illuminated areas and achieves natural color correction and visibility improvement superior or equivalent to other state-of-the-art methods.
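For reference, a minimal sketch of a red-channel-style prior in the spirit of the method described above, assuming an RGB image with values in [0, 1]; the patch size, the normalization by the background light, and the transmission formula follow the usual dark-channel recipe and are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def red_channel_prior(img, patch=15):
    """Per-pixel minimum of (1 - R, G, B) over a local patch."""
    inverted_red = 1.0 - img[..., 0]                 # red attenuates fastest under water
    stacked = np.stack([inverted_red, img[..., 1], img[..., 2]], axis=-1)
    channel_min = stacked.min(axis=-1)               # pixel-wise minimum across channels
    return minimum_filter(channel_min, size=patch)   # spatial minimum over the patch

def estimate_transmission(img, background_light, patch=15, omega=0.95):
    """Dark-channel-style transmission estimate built on the red channel prior."""
    norm = np.empty_like(img)
    norm[..., 0] = (1.0 - img[..., 0]) / max(1.0 - background_light[0], 1e-6)
    norm[..., 1] = img[..., 1] / max(background_light[1], 1e-6)
    norm[..., 2] = img[..., 2] / max(background_light[2], 1e-6)
    prior = minimum_filter(norm.min(axis=-1), size=patch)
    return 1.0 - omega * prior
```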

2.
Underwater images often suffer from color cast and low visibility because light is scattered and absorbed as it travels through water. In this paper, we propose a novel color correction and Bi-interval contrast enhancement method to improve the quality of underwater images. Firstly, a simple and effective color correction method based on sub-interval linear transformation is employed to address color distortion. Then, a Gaussian low-pass filter is applied to the L channel to decompose it into low- and high-frequency components. Finally, the low- and high-frequency components are enhanced by a Bi-interval histogram based on an optimal equalization threshold strategy and an S-shaped function to enhance image contrast and highlight image details. Inspired by multi-scale fusion, we employ a simple linear fusion to integrate the enhanced high- and low-frequency components. Comparisons with state-of-the-art methods show that the proposed method outputs high-quality underwater images under both qualitative and quantitative evaluation.
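A minimal sketch of the frequency split and linear recombination on the L channel described above, assuming a Lab-space luminance plane; the Gaussian sigma and the fusion weights are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_L(L, sigma=5.0):
    low = gaussian_filter(L, sigma=sigma)   # low-frequency base layer
    high = L - low                          # high-frequency detail layer
    return low, high

def fuse(low_enhanced, high_enhanced, w_low=1.0, w_high=1.0):
    # Simple linear fusion of the separately enhanced components.
    return w_low * low_enhanced + w_high * high_enhanced
```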

3.
To address the blurred texture and severe color cast of underwater images, an underwater image enhancement method that combines deep learning with multi-scale guided-filter Retinex is proposed. First, in-air images are degraded by texture and histogram matching to build a dataset of distorted underwater-style images, and an end-to-end convolutional neural network (CNN) model is trained on it; this model is used to color-correct the original underwater image, yielding a color-restored underwater image. Next, the multi-scale Retinex (MSR) method is applied to the luminance channel of the color-restored image to obtain a texture-enhanced image. Finally, the color components of the color-restored image are fused with the texture-enhanced image to produce the final enhanced underwater image. The proposed method is evaluated on a synthetic underwater image dataset and on real underwater images. Experimental results show that the root mean square error, peak signal-to-noise ratio, CIEDE2000 and underwater image quality measure of the proposed method are 0.3020, 17.2392 dB, 16.8784 and 4.9600, respectively, outperforming the five comparison methods, and the enhanced underwater images look more realistic and natural. The method effectively improves texture sharpness and contrast while correcting the color distortion of underwater images.
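A minimal multi-scale Retinex (MSR) sketch for the luminance channel, as used in the texture-enhancement step above; the three Gaussian scales and equal weights are common defaults and are assumptions here, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(L, sigmas=(15, 80, 250), eps=1e-6):
    L = L.astype(np.float64) + eps
    msr = np.zeros_like(L)
    for sigma in sigmas:
        surround = gaussian_filter(L, sigma=sigma)              # illumination estimate
        msr += (np.log(L) - np.log(surround + eps)) / len(sigmas)
    # Rescale to [0, 1] for display.
    return (msr - msr.min()) / (msr.max() - msr.min() + eps)
```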

4.
Underwater images are usually degraded by light scattering and absorption. To recover the scene radiance of degraded underwater images, a new haze removal method is presented that incorporates a learning-based blurriness estimation into the image formation model. Firstly, the image blurriness is estimated with a linear model trained on a set of selected grayscale images, the average Gaussian images and blurriness images. With the estimated image blurriness, three intermediate background lights (BLs) are computed to obtain the synthesized BL. Then the scene depth is calculated from the estimated image blurriness and BL to construct a transmission map and restore the scene radiance. Compared with other haze removal methods, our proposed method removes haze in degraded underwater images more accurately. Moreover, visual inspection, quantitative evaluation and application tests demonstrate that our method is superior to the compared methods and beneficial to high-level vision tasks.
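A minimal sketch of recovering scene radiance from the underwater image formation model I = J·t + B·(1 - t) referenced above, given a depth estimate and background light; the attenuation coefficient beta and the transmission floor t0 are illustrative assumptions.

```python
import numpy as np

def restore_radiance(img, background_light, depth, beta=1.0, t0=0.1):
    t = np.exp(-beta * depth)                   # transmission from scene depth
    t = np.clip(t, t0, 1.0)[..., None]          # avoid amplifying noise where t is tiny
    B = np.asarray(background_light).reshape(1, 1, 3)
    J = (img - B) / t + B                       # invert the image formation model
    return np.clip(J, 0.0, 1.0)
```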

5.
The powerful representation capacity of deep learning has made it inevitable for the underwater image enhancement community to employ its potential. The exploration of deep underwater image enhancement networks has increased over time; hence, a comprehensive survey is the need of the hour. In this paper, our aim is two-fold: (1) to provide a comprehensive and in-depth survey of deep learning-based underwater image enhancement, covering perspectives ranging from algorithms to open issues, and (2) to conduct a qualitative and quantitative comparison of deep algorithms on diverse datasets to serve as a benchmark, which has barely been explored before. We first introduce the underwater image formation models, which are the basis of training data synthesis and of the design of deep networks, and are also helpful for understanding the process of underwater image degradation. Then, we review deep underwater image enhancement algorithms and present a glimpse of aspects of current networks, including architecture, parameters, training data, loss functions, and training configurations. We also summarize the evaluation metrics and underwater image datasets. Following that, a systematic experimental comparison is carried out to analyze the robustness and effectiveness of deep algorithms. Meanwhile, we point out the shortcomings of current benchmark datasets and evaluation metrics. Finally, we discuss several unsolved open issues and suggest possible research directions. We hope that the efforts made in this paper will serve as a comprehensive reference for future research and call for the development of deep learning-based underwater image enhancement.

6.
7.
With the rapid development of the mobile Internet and digital technology, people are increasingly keen to share pictures on social networks, and the number of online pictures has exploded. How to retrieve similar images from large-scale collections has always been a hot issue in the field of image retrieval, and the choice of image features largely determines retrieval performance. Convolutional Neural Networks (CNNs), which contain more hidden layers, have a more complex network structure and a stronger ability to learn and express features than traditional feature extraction methods. Observing that global CNN features cannot effectively describe local details in image retrieval tasks, a strategy of aggregating low-level CNN feature maps to generate local features is proposed. The high-level features of a CNN model focus more on semantic information, while the low-level features focus more on local details, and the representations become increasingly abstract from low to high layers. This paper presents a probabilistic semantic hash retrieval method based on CNNs and designs a new end-to-end supervised learning framework that learns semantic features and hash features simultaneously to achieve fast image retrieval. Using the convolutional network, the error rate on the test set is reduced to 14.41%. On three public image libraries, namely Oxford, Holidays and ImageNet, the performance of traditional SIFT-based retrieval algorithms and other CNN-based image retrieval algorithms is compared and analyzed. The experimental results show that the proposed algorithm is superior to the other algorithms in terms of overall retrieval quality and retrieval time.
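A minimal sketch of the retrieval step once binary hash codes exist: rank the database by Hamming distance to the query code. The hash codes here are random placeholders and the 48-bit length is an assumption; in the described framework they would be produced by the learned network.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """query_code: (bits,) in {0,1}; db_codes: (N, bits). Returns ranked indices and distances."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists), dists

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 48))       # 1000 images, 48-bit codes (assumed length)
q = rng.integers(0, 2, size=48)
order, dists = hamming_rank(q, db)
print(order[:5], dists[order[:5]])             # five nearest neighbours in Hamming space
```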

8.
The application of convolutional neural networks (CNNs) to image additive white Gaussian noise (AWGN) removal has attracted considerable attention with the rapid development of deep learning in recent years. However, little work has addressed the removal of multiplicative speckle noise. Moreover, most existing speckle removal algorithms are traditional methods built on human prior knowledge, which means that their parameters must be set manually. Deep learning methods, by contrast, show clear advantages in image feature extraction, and multiplicative speckle noise is very common in real-life images, especially medical images. In this paper, a novel neural network structure is proposed to recover images corrupted by speckle noise. Our method consists of three subnetworks: a rough clean image estimation subnetwork, a noise estimation subnetwork, and an information fusion network based on U-Net and several convolutional layers. Unlike existing speckle denoising models based on image statistics, the proposed model handles speckle denoising at different noise levels with a single end-to-end trainable model. Extensive experimental results on several test datasets clearly demonstrate the superior performance of our network over the state of the art in terms of quantitative metrics and visual quality.
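A minimal sketch of the multiplicative degradation model targeted above (observed = clean × noise), useful for synthesizing training pairs; the gamma-distributed multiplier with unit mean (L-look speckle) is a common convention and an assumption here, not taken from the paper.

```python
import numpy as np

def add_speckle(clean, looks=4, seed=0):
    """Multiply a clean image in [0, 1] by gamma noise with mean 1 (L looks)."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return np.clip(clean * noise, 0.0, 1.0), noise
```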

9.
A screen content image (SCI) is a composite image containing textual and pictorial regions, which raises many difficulties for image quality assessment (IQA). Large SCIs are divided into image patches to increase the number of training samples for CNN-based IQA models, and this brings two problems: (1) the local quality of each image patch is not equal to the subjective differential mean opinion score (DMOS) of the entire image; (2) different image patches are not equally important for quality assessment. In this paper, we propose a novel no-reference (NR) IQA model based on a convolutional neural network (CNN) for assessing the perceptual quality of SCIs. Our model addresses these two problems with two strategies. First, to imitate the behavior of full-reference (FR) CNN-based models, a CNN-based model is designed for both FR and NR IQA, and the performance of the NR-IQA part improves when the image patch scores predicted by the FR-IQA part are adopted as the ground truth to train the NR-IQA part. Second, the patch qualities of an entire SCI are fused into the SCI quality with an adaptive weighting method that takes the content of the different image patches into account. Experimental results verify that our model outperforms all tested NR IQA methods and most FR IQA methods on the screen content image quality assessment database (SIQAD). In the cross-database evaluation, the proposed method outperforms the existing NR IQA method by at least 2.4 percent in PLCC and 2.8 percent in SRCC, which shows the high generalization ability and effectiveness of our model.
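A minimal sketch of fusing patch-level quality predictions into an image-level score with content-adaptive weights, as described in the second strategy above; using normalized local variance as the weight is an illustrative assumption, not the paper's exact weighting scheme.

```python
import numpy as np

def fuse_patch_scores(patch_scores, patch_weights, eps=1e-8):
    """Weighted average of per-patch quality predictions."""
    patch_scores = np.asarray(patch_scores, dtype=np.float64)
    patch_weights = np.asarray(patch_weights, dtype=np.float64)
    return float((patch_scores * patch_weights).sum() / (patch_weights.sum() + eps))

# Example: weight detailed/textual patches (higher local variance) more heavily.
scores  = [62.0, 55.0, 70.0]   # predicted DMOS-like scores per patch
weights = [0.9, 0.2, 0.6]      # e.g. normalized local variance per patch (assumed)
print(fuse_patch_scores(scores, weights))
```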

10.
With the rapid development of Internet technology, the copyright protection of color images has become more and more important. To this end, this paper designs a blind color digital image watermarking method based on image correction and eigenvalue decomposition (EVD). Firstly, all the eigenvalues of each pixel block in the color host image are obtained by EVD. Then, the sum of the absolute values of the eigenvalues is quantized with variable quantization steps to embed the color watermark image, which is encrypted by an affine transform and encoded with a Hamming code. If the watermarked image undergoes a geometric attack, the attacked image can be corrected by using its geometric attributes. Finally, the inverse embedding process is performed to extract the color watermark. The advantages of the proposed method are as follows: (1) all Peak Signal-to-Noise Ratio (PSNR) values are greater than 42 dB; (2) the average Structural Similarity Index Metric (SSIM) values are greater than 0.97; (3) the maximum embedding capacity is 0.25 bpp; (4) the whole running time is less than 20 s; (5) the key space is more than 2450; (6) most Normalized Cross-correlation (NC) values are more than 0.9. Compared with related methods, the experimental results show that the proposed method not only has better watermark invisibility and larger watermark capacity, but also higher security and stronger robustness against geometric attacks.
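A minimal sketch of embedding one watermark bit by quantizing the sum of the absolute eigenvalues of a square pixel block, in the spirit of the method above; the quantization step, the odd/even parity mapping, and the uniform block rescaling are illustrative assumptions, not the authors' exact embedding rule.

```python
import numpy as np

def embed_bit(block, bit, step=24.0):
    eigvals = np.linalg.eigvals(block.astype(np.float64))   # EVD of the square pixel block
    s = np.sum(np.abs(eigvals))
    q = np.floor(s / step)
    if int(q) % 2 != bit:          # force cell parity to encode the bit (0 -> even, 1 -> odd)
        q += 1
    s_marked = (q + 0.5) * step    # move to the centre of the chosen quantization cell
    # Scaling the block scales every eigenvalue, so the new sum matches s_marked.
    return block * (s_marked / max(s, 1e-8))

def extract_bit(block, step=24.0):
    s = np.sum(np.abs(np.linalg.eigvals(block.astype(np.float64))))
    return int(np.floor(s / step)) % 2
```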

11.
Convolutional neural network (CNN) based methods have recently achieved extraordinary performance in single image super-resolution (SISR) tasks. However, most existing CNN-based approaches increase model depth by stacking massive kernel convolutions, which brings expensive computational costs and limits their application on mobile devices with limited resources. Furthermore, large kernel convolutions are rarely used in lightweight super-resolution designs. To alleviate these problems, we propose a multi-scale convolutional attention network (MCAN), a lightweight and efficient network for SISR. Specifically, a multi-scale convolutional attention (MCA) module is designed to aggregate the spatial information of different large receptive fields. Since the contextual information of an image has strong local correlation, we design a local feature enhancement unit (LFEU) to further strengthen local feature extraction. Extensive experimental results show that our proposed MCAN achieves better performance with lower model complexity compared with other state-of-the-art lightweight methods.
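A minimal PyTorch sketch of a multi-scale convolutional attention block in the spirit of the MCA module described above: lightweight depthwise convolutions with several kernel sizes are aggregated and used as an attention map. The kernel sizes and the overall wiring are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class MultiScaleConvAttention(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 7, 11)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes                      # depthwise convs keep the block lightweight
        ])
        self.mix = nn.Conv2d(channels, channels, 1)    # pointwise fusion of the branches

    def forward(self, x):
        attn = sum(branch(x) for branch in self.branches)
        attn = self.mix(attn)
        return x * torch.sigmoid(attn)                 # reweight the input features

# Usage: y = MultiScaleConvAttention(64)(torch.randn(1, 64, 48, 48))
```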

12.
Compared with traditional image denoising methods, convolutional neural networks (CNNs) achieve better denoising performance, but an important issue has not been well resolved: the residual image obtained by learning the difference between noisy and clean image pairs contains abundant image detail, which leads to a serious loss of detail in the denoised image. In this paper, in order to relearn the lost image detail, a mathematical model is derived from a minimization problem and an end-to-end detail-retaining CNN (DRCNN) is proposed. Unlike most CNN-based denoising methods, DRCNN focuses not only on image denoising but also on the integrity of high-frequency image content. DRCNN needs fewer parameters and less storage space, and therefore has better generalization ability. Moreover, DRCNN can adapt to different image restoration tasks such as blind image denoising, single image super-resolution (SISR), blind deblurring and image inpainting. Extensive experiments show that DRCNN performs better than several classic and recent methods.

13.
Recently, Convolutional Neural Networks (CNNs) have achieved great success in Single Image Super-Resolution (SISR). In particular, recursive networks are now widely used. However, existing recursion-based SISR networks can only make use of multi-scale features in a layer-wise manner. In this paper, a Deep Recursive Multi-Scale Feature Fusion Network (DRMSFFN) is proposed to address this issue. Specifically, we propose a Recursive Multi-Scale Feature Fusion Block (RMSFFB) to make full use of multi-scale features. Besides, a Progressive Feature Fusion (PFF) technique is proposed to take advantage of the hierarchical features from the RMSFFB in a global manner. At the reconstruction stage, we use a deconvolutional layer to upscale the feature maps to the desired size. Extensive experimental results on benchmark datasets demonstrate the superiority of the proposed DRMSFFN over state-of-the-art methods in both quantitative and qualitative evaluations.

14.
To address the weak detail representation and overly complex network structures of hyperspectral image classification algorithms based on traditional convolutional neural network models, a hyperspectral image classification method based on a multi-scale proximal feature concatenation network is designed. By introducing multi-scale filters and dilated convolutions, richer spatial-spectral discriminative features can be obtained while keeping the model lightweight, and the mutual relationships among proximal features of the convolutional neural network are exploited to further enhance detail representation. On three benchmark hyperspectral image...
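A minimal PyTorch sketch of a multi-scale block with dilated convolutions and feature concatenation, in the spirit of the method above; the number of branches, the dilation rates, and the 1x1 reduction convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)   # same spatial size in every branch
            for d in dilations
        ])
        self.reduce = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)  # concatenate multi-scale features
        return torch.relu(self.reduce(feats))

# Usage on a hyperspectral patch with e.g. 103 bands (assumed):
# y = MultiScaleDilatedBlock(103, 64)(torch.randn(1, 103, 9, 9))
```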

15.
Automatic image annotation is one of the most important challenges in computer vision and is critical to many real-world research problems and applications. In this paper, we focus on large-scale image annotation with deep learning. Firstly, for existing image data, especially web images, many of the labels are inaccurate or imprecise. We propose a Multitask Voting (MV) method, which improves the accuracy of the original annotations to a certain extent and thereby enhances the training of the model. Secondly, the MV method can also select labels adaptively, whereas most existing methods pre-specify the number of tags to be selected. Additionally, a large-scale image annotation model, MVAIACNN, is constructed based on a convolutional neural network. Finally, we evaluate the performance with experiments on the MIRFlickr25K and NUS-WIDE datasets and compare with other methods, demonstrating the effectiveness of MVAIACNN.

16.
Recently, single image super-resolution (SISR) has been widely applied in the field of underwater robot vision and has achieved remarkable performance. However, most current methods suffer from a heavy computational burden and large model sizes, which limits their use in real-world underwater robotic applications. In this paper, we introduce and tackle the super-resolution (SR) problem for underwater robot vision and provide an efficient solution for near real-time applications. We present a novel lightweight multi-stage information distillation network, named MSIDN, which better balances performance against applicability by aggregating the local distilled features from different stages into a more powerful feature representation. Moreover, a novel recursive residual feature distillation (RRFD) module is constructed to progressively extract useful features with a modest number of parameters in each stage. We also propose a channel interaction & distillation (CI&D) module that splits the preceding features into two parts along the channel dimension and uses the inter-channel interaction between them to generate the distilled features, effectively extracting the useful information of the current stage without extra parameters. Besides, we present the USR-2K dataset, a collection of over 1.6K samples for large-scale underwater image SR training, together with a test set of an additional 400 samples for benchmark evaluation. Extensive experiments on several standard benchmark datasets show that the proposed MSIDN provides state-of-the-art or even better performance in both quantitative and qualitative measurements.

17.
Underwater image processing has always been challenging due to the complex underwater environment. Images captured under water are affected not only by the water itself but also by diverse suspended particles, which increase absorption and scattering. Moreover, these particles are often imaged on the picture themselves, so spot noise interferes with the target objects. To address this issue, we propose a novel deep neural network for removing spot noise from underwater images. Its main idea is to train a generative adversarial network (GAN) to transform a noisy image into a clean one. On top of a deep encoder-decoder framework, skip connections are introduced to combine low-level and high-level features and help recover the original image. Meanwhile, a self-attention mechanism is employed in the generative network to capture global dependencies in the feature maps, so that images with fine details can be generated at every location. Furthermore, we apply spectral normalization to both the generative and discriminative networks to stabilize the training process. Experiments on synthetic and real-world images show that the proposed method outperforms many recent state-of-the-art methods in both quantitative and visual quality. The results also demonstrate that the proposed method removes spot noise from underwater images while preserving sharp edges and fine details.
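A minimal PyTorch sketch showing two of the ingredients mentioned above: spectral normalization on the convolutions and an encoder-decoder skip connection. The layer widths and the tiny two-layer structure are illustrative assumptions, not the paper's generator architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class TinyGenerator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc = spectral_norm(nn.Conv2d(3, ch, 3, stride=2, padding=1))
        self.dec = spectral_norm(nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1))
        self.out = spectral_norm(nn.Conv2d(ch + 3, 3, 3, padding=1))

    def forward(self, x):
        e = torch.relu(self.enc(x))        # downsample
        d = torch.relu(self.dec(e))        # upsample back to the input size
        d = torch.cat([d, x], dim=1)       # skip connection with the low-level input
        return torch.tanh(self.out(d))

# Usage: y = TinyGenerator()(torch.randn(1, 3, 64, 64))
```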

18.
Underwater fish images suffer from low quality because of light scattering and absorption and suspended impurities in the water. This paper enhances underwater fish images with an improved automatic color equalization (ACE) algorithm, which effectively improves image quality and lays a good foundation for subsequent underwater image segmentation. To address the poor segmentation quality and low real-time performance of underwater fish image segmentation, this paper proposes the ARD-PSPNet model, which uses ResNet101 as the feature extraction backbone and the well-performing PSPNet (pyramid scene parsing network) as the base segmentation model. Depthwise separable convolutions are introduced to reduce computation, the R-MCN structure makes full use of the rich positional information and completeness of shallow feature layers, and the loss function is improved to make segmentation locations more accurate. Experiments on the Fish4knowledge dataset show that, compared with the original model, the new model improves the mean intersection over union (MIOU) by 2.8 percentage points and the mean pixel accuracy (MPA) by about 2 percentage points.
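A minimal PyTorch sketch of the depthwise separable convolution used above to reduce computation: a per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise convolution; the channel sizes in the example are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)  # spatial filtering per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)                          # cross-channel mixing

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A standard 3x3 conv from 256 to 256 channels costs 256*256*9 multiplies per position;
# the separable version costs 256*9 + 256*256, roughly an order of magnitude fewer.
```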

19.
Image source identification is important for verifying the origin and authenticity of digital images. However, when images are altered by post-processing, the performance of existing source verification methods may degrade. In this paper, we propose a convolutional neural network (CNN) to solve this problem. Specifically, we present a theoretical framework for different tampering operations to determine whether a single operation affects the photo response non-uniformity (PRNU) contained in images. We then divide these operations into two categories: non-influential and influential operations. Moreover, images altered by a combination of non-influential and influential operations are equivalent to images that have undergone only a single influential operation. To make the introduced CNN robust to both non-influential and influential operations, we define a multi-kernel noise extractor that consists of a high-pass filter and three parallel convolution filters of different sizes. The features generated by the parallel convolution layers are then fed to subsequent convolutional layers for further feature extraction. The experimental results demonstrate the effectiveness of our method.
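A minimal PyTorch sketch of a multi-kernel noise extractor in the spirit of the description above: a fixed high-pass filter followed by three parallel convolutions of different sizes whose outputs are concatenated. The specific Laplacian-style high-pass kernel and the channel widths are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelNoiseExtractor(nn.Module):
    def __init__(self, out_ch=16):
        super().__init__()
        # Simple Laplacian-style high-pass kernel applied to each input channel.
        hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("hp_kernel", hp.view(1, 1, 3, 3).repeat(3, 1, 1, 1))
        self.branches = nn.ModuleList([
            nn.Conv2d(3, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])

    def forward(self, x):
        residual = F.conv2d(x, self.hp_kernel, padding=1, groups=3)  # suppress image content, keep noise residue
        return torch.cat([b(residual) for b in self.branches], dim=1)

# Usage: feats = MultiKernelNoiseExtractor()(torch.randn(1, 3, 64, 64))
```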

20.
To avoid the under-deblurring or over-deblurring of restored images caused by inaccurate blur-kernel estimation, a deblurring model based on a multi-scale encoder-decoder network is proposed. First, skip connections and multi-scale recurrent connections are added to the conventional encoder-decoder network, combining the image feature information of each layer while making training more stable. Second, the proposed encoder-decoder network is combined with an improved nested residual network, and a coarse-to-fine strategy is adopted to further extract different ...

