Similar Documents
20 similar documents found (search time: 15 ms)
1.
Objective: Most existing adversarial-image-based steganography algorithms can only design adversarial images against a single steganalyzer, and cannot withstand detection by the latest CNN-based steganalyzers such as SRNet (steganalysis residual network) and Zhu-Net. To address this, a high-security image steganography method combining multiple adversarial training with channel attention is proposed. Method: A generative adversarial network with a U-Net-based generator produces adversarial sample images; the self-learning property of the adversarial network iteratively optimizes the parameters of the multi-adversarial steganography network, and adversarial training against multiple steganalysis algorithms yields cover images better suited to content embedding. Meanwhile, several lightweight channel attention modules added to the generator adaptively adjust the distribution of adversarial noise in the original image, improving the generated adversarial images' resistance to steganalysis. In addition, a dynamic weighting scheme combining multiple discriminative losses with a mean squared error loss further improves adversarial image quality and ensures fast, stable convergence. Results: Experiments on the BOSS Base 1.01 dataset compared the method with four current mainstream methods. After training on the original stego images, the method lowered the average detection accuracy of five high-performing steganalyzers by 1.6% relative to the other four methods (including a U-Net-based generative multi-adversarial steganography algorithm); after retraining with adversarial images and enhanced stego images, it still lowered the five steganalyzers' average detection accuracy by 6.8% relative to the other four methods. Adversarial image quality was also analyzed: the average peak signal-to-noise ratio (PSNR) of 2,000 adversarial images generated from the test set reached 39.9251 dB. The results show the proposed steganography network greatly improves the security of the steganography algorithm. Conclusion: The method achieves excellent performance in steganographic security, and the generated adversarial images have high visual quality.
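The abstracts in this list repeatedly report image quality as peak signal-to-noise ratio (PSNR). As a reference for how that figure is computed, here is a minimal sketch; the 4×4 toy image and perturbation are illustrative only:

```python
import numpy as np

def psnr(original, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two images (higher = closer)."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

cover = np.full((4, 4), 128, dtype=np.uint8)
adv = cover.copy()
adv[0, 0] += 3  # a small adversarial perturbation on one pixel
print(round(psnr(cover, adv), 2))  # 50.63
```

A perturbation this sparse gives a very high PSNR; a full-image perturbation at the strengths used in these papers lands around the 39 to 48 dB values the abstracts report.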

2.
马宾  韩作伟  徐健  王春鹏  李健  王玉立 《软件学报》2023,34(7):3385-3407
The development of artificial intelligence poses growing challenges to information hiding, making it urgent to improve the security of existing steganography methods. To improve the information hiding capability of images, a generative multi-adversarial steganography algorithm based on a U-Net architecture is proposed. Through multiple adversarial training among the generative adversarial network, a steganalyzer optimization network, and a steganalysis adversarial network, the algorithm builds a generative multi-adversarial steganography model that produces cover images suited to information embedding and improves the stego images' resistance to steganalysis. To address the problem that existing GANs can only generate random images of limited quality, a U-Net-based generator transfers detail information from a reference image into the generated cover image, controllably producing high-quality target cover images and strengthening the hiding capability. A dynamically weighted combination of image discriminator loss, mean squared error (MSE) loss, and steganalysis loss serves as the total optimization loss, ensuring fast and stable convergence of the network. Experiments show that cover images generated by the algorithm reach a PSNR of up to 48.60 dB, and the steganalyzer's detection accuracy on the generated cover images and their stego counterparts is 50.02%. The algorithm generates high-quality cover images suited to information embedding, ensures fast and stable convergence, improves steganographic security, and effectively resists detection by current state-of-the-art steganalysis algorithms.
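The dynamically weighted total loss described above (image discriminator loss, MSE loss, and steganalysis loss) can be sketched as follows. The linear schedule and the weight values are assumptions for illustration, not the paper's actual settings:

```python
def total_loss(d_loss, mse_loss, steg_loss, weights):
    """Weighted sum of the three objectives in the generator's total loss."""
    w_d, w_m, w_s = weights
    return w_d * d_loss + w_m * mse_loss + w_s * steg_loss

def schedule(epoch, total_epochs):
    """Illustrative dynamic weighting: start with all emphasis on pixel
    fidelity (MSE), then shift weight toward the adversarial terms so the
    images also fool the discriminator and the steganalyzer."""
    t = epoch / total_epochs
    return (t, 2.0 - t, t)

print(total_loss(1.0, 0.5, 1.0, schedule(0, 10)))   # 1.0 -- MSE only at start
print(total_loss(1.0, 0.5, 1.0, schedule(10, 10)))  # 2.5 -- all terms active
```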

3.
袁超  王宏霞  何沛松 《软件学报》2024,35(3):1502-1514
With advances in deep learning and steganography, deep neural networks are increasingly applied to image steganography, especially the emerging direction of hiding images within images. Mainstream deep-network-based image-in-image steganography feeds the cover image and the secret image into the steganography model together to produce the stego image. Recent work shows, however, that the model needs only the secret image as input: adding the model's output perturbation to the cover image completes the embedding of the secret image. This cover-independent embedding greatly broadens steganography's application scenarios and makes it universal. So far, though, this approach has only been validated for the feasibility of embedding and recovering the secret image; the more important evaluation criterion for steganography, undetectability, has not been considered or verified. This work proposes USGAN, a high-capacity universal image steganography model based on an attention mechanism. With attention modules, USGAN's encoder adjusts, along the channel dimension, the perturbation-intensity distribution over pixel positions of the secret image, reducing the stego perturbation's impact on the cover image. In addition, a CNN-based steganalysis model serves as USGAN's target model: adversarial training against it drives the encoder to produce stego adversarial perturbations, so the stego image simultaneously becomes an adversarial example that attacks steganalysis models. Experiments show the proposed model not only achieves cover-independent universal embedding but also further improves undetectability.
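The cover-independent embedding the abstract describes (add a stego perturbation produced from the secret image alone onto any cover) can be sketched like this; the perturbation budget `eps` and the clipping behavior are illustrative assumptions, not USGAN's actual parameters:

```python
import numpy as np

def embed_universal(cover, perturbation, eps=8):
    """Add a cover-independent stego perturbation (bounded to [-eps, eps])
    to an arbitrary cover image, then clip to the valid pixel range."""
    p = np.clip(perturbation, -eps, eps)
    return np.clip(cover.astype(np.int16) + p, 0, 255).astype(np.uint8)

cover = np.full((2, 2), 250, dtype=np.uint8)
pert = np.array([[10, -3], [0, 5]], dtype=np.int16)  # from the secret image
stego = embed_universal(cover, pert)
print(stego)  # perturbation bounded by eps, pixels bounded by 255
```

Because the perturbation does not depend on the cover, the same perturbation can be added to any image, which is what makes the scheme "universal".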

4.
Sharma  Akanksha  Jindal  Neeru  Rana  P. S. 《Multimedia Tools and Applications》2020,79(37-38):27407-27437

Generative adversarial networks (GANs) have gained eminence in a very short period because they can learn deep data distributions through a competitive process between two networks. GANs can synthesize images and videos from latent noise by minimizing an adversarial cost function. The cost function is a deciding factor in GAN training, and it is therefore often modified to yield better performance. To date, numerous new GAN models have been proposed owing to changes in the cost function according to the application. The main objective of this paper is to present a gist of major GAN publications and developments in the image and video field. Several publications were selected after a thorough literature survey. Trends in GAN research publications, basics, the literature survey, databases, and performance-evaluation parameters are presented under one umbrella.


5.
Audio steganography hides secret information (text, images, audio, video, etc.) in a cover audio signal. It protects both the secret information itself and the security of its transmission, and has become a research hotspot in information hiding. In recent years, deep-learning-based audio steganalysis has achieved efficient detection by fully mining deep steganographic features, lowering the security of steganography and posing new challenges to it. However, generative adversarial...

6.
Image steganography is the technique of hiding secret information within images and is an important research direction in the security field. Benefitting from the rapid development of deep neural networks, many steganographic algorithms based on deep learning have been proposed. However, two problems remain to be solved: most existing methods are limited by small image sizes and low information capacity. In this paper, to address these problems, we propose a high-capacity image steganographic model named HidingGAN. The proposed model utilizes a new secret-information preprocessing method and an Inception-ResNet block to promote better integration of secret information and image features. Meanwhile, we introduce generative adversarial networks and a perceptual loss to keep the statistical characteristics of cover images and stego images the same in the high-dimensional feature space, thereby improving undetectability. In this way, our model reaches higher imperceptibility, security, and capacity. Experimental results show that HidingGAN achieves a capacity of 4 bits per pixel (bpp) at 256 × 256 pixels, improving over the previous best result of 0.4 bpp at 32 × 32 pixels.
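For context on the capacity figures above, total payload in bits scales with image area times the bits-per-pixel rate:

```python
def capacity_bits(width, height, bpp):
    """Total payload in bits at a given embedding rate (bits per pixel)."""
    return int(width * height * bpp)

print(capacity_bits(256, 256, 4))   # HidingGAN's reported rate: 262144 bits
print(capacity_bits(32, 32, 0.4))   # the previous best the abstract cites
```

So the jump from 0.4 bpp at 32 × 32 to 4 bpp at 256 × 256 is roughly a 640-fold increase in absolute payload.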

7.
In recent years, more and more generative adversarial networks have appeared across the fields of deep learning. The conditional generative adversarial network (cGAN) pioneered the introduction of supervised learning into the unsupervised GAN framework, enabling GANs to generate labeled data. Traditional GANs model correlations between different regions through repeated convolution operations, and thereby generate ...

8.

The main role of cancellable biometric schemes is to protect the privacy of the enrolled users. The protected biometric data are generated by applying a parametrized transformation function to the original biometric data. Although cancellable biometric schemes achieve high security levels, they may degrade recognition accuracy. One of the most widely used approaches to enhance recognition accuracy in biometric systems is to combine several instances of the same biometric modality. In this paper, two multi-instance cancellable biometric schemes based on iris traits are presented. The iris trait is used in both schemes because of its reliability and stability compared to other biometric traits. A generative adversarial network (GAN) is used as the transformation function for the biometric features. The first scheme is based on pre-transformation feature-level fusion, where the binary features of multiple instances are concatenated and input to the transformation phase. The second scheme is based on post-transformation feature-level fusion, where each instance is input to the transformation phase separately. Experiments conducted on the CASIA Iris-V3-Internal database confirm the high recognition accuracy of the two proposed schemes. Moreover, the security of the proposed schemes is analyzed, and their robustness against two well-known types of attacks is proven.


9.
王耀杰  钮可  杨晓元 《计算机应用》2018,38(10):2923-2928
To address the problem that stego covers in information hiding retain modification traces and thus fundamentally struggle to resist detection by statistics-based steganalysis algorithms, an information hiding scheme based on generative adversarial networks (GAN) is proposed. First, the generative model G, driven by noise, produces the original cover information; next, a ±1 embedding algorithm embeds the secret message into the generated cover to produce the stego information; finally, the stego information and real image samples are fed as input to the discriminative model D for iterative optimization, while a discriminative model S detects whether an image has undergone a steganographic operation and feeds back characteristics of the generated image quality. G, D, and S compete with one another during iteration, and their performance improves continuously. The strategy differs from the SGAN (Steganographic GAN) and SSGAN (Secure Steganography based on GAN) schemes chiefly in feeding the stego information together with real image samples into the discriminative model and restructuring the discriminator network D, so that the network evaluates the generated images better. Compared with SGAN and SSGAN, the scheme reduces the attacker's steganalysis accuracy by 13.1% and 6.4%, respectively. Experimental results show that the new information hiding scheme guarantees hiding security by generating more suitable cover information, can effectively resist detection by steganalysis algorithms, and clearly outperforms the compared schemes on anti-steganalysis and security metrics.
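The ±1 embedding step in this scheme (also known as LSB matching) can be sketched as follows; the boundary handling and the random choice of step direction are an illustrative implementation, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def pm1_embed(pixels, bits):
    """±1 (LSB-matching) embedding: if a pixel's LSB already equals the
    message bit it is left alone; otherwise the pixel is randomly changed
    by +1 or -1, which flips the LSB without the telltale asymmetry of
    plain LSB replacement."""
    out = pixels.astype(np.int16)  # astype copies; widen to allow -1/+256
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            step = rng.choice((-1, 1))
            if out[i] == 0:        # boundary pixels can only move one way
                step = 1
            elif out[i] == 255:
                step = -1
            out[i] += step
    return out.astype(np.uint8)

pixels = np.array([100, 101, 0, 255], dtype=np.uint8)
bits = [1, 1, 1, 0]
stego = pm1_embed(pixels, bits)
print([int(p) % 2 for p in stego])  # [1, 1, 1, 0] -- LSBs carry the message
```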

10.
Yang  Lu  Song  Qing  Wu  Yingqi 《Multimedia Tools and Applications》2021,80(1):855-875

With the broad use of face recognition, its weakness has gradually emerged: it can be attacked. It is therefore very important to study how face recognition networks are subject to attack. Generating adversarial examples is an effective attack method that misleads the face recognition system through an obfuscation attack (rejecting a genuine subject) or an impersonation attack (matching to an impostor). In this paper, we introduce a novel GAN, the Attentional Adversarial Attack Generative Network (A3GN), to generate adversarial examples that mislead the network into identifying someone as a specific target person, not merely misclassifying inconspicuously. To capture the geometric and context information of the target person, this work adds a conditional variational autoencoder and attention modules to learn instance-level correspondences between faces. Unlike a traditional two-player GAN, this work introduces a face recognition network as a third player in the competition between generator and discriminator, which allows the attacker to impersonate the target person better. The generated faces, which hardly arouse the notice of onlookers, can evade recognition by state-of-the-art networks, and most of them are recognized as the target person.


11.

Generative adversarial networks (GANs) are among the most popular generative frameworks and have achieved compelling performance. They follow an adversarial approach in which two deep models, a generator and a discriminator, compete with each other. They have been used for many applications, especially image synthesis, because of their capability to generate high-quality images. In the past few years, different variants of GAN have been proposed, producing high-quality results for image generation. This paper analyzes the working and architecture of GAN and its popular variants for image generation in detail. In addition, we summarize and compare these models according to different parameters such as architecture, training method, learning type, benefits, and performance metrics. Finally, we apply all these methods to the benchmark MNIST dataset of handwritten digits and compare qualitative and quantitative results. The evaluation is based on the quality of generated images, classification accuracy, discriminator loss, generator loss, and the computational time of these models. The aim of this study is to provide comprehensive information about GAN and its various models in the field of image synthesis. Our main contribution is a critical comparison of popular GAN variants for image generation on the MNIST dataset. Moreover, this paper gives insights into the existing limitations and challenges faced by GANs and discusses associated future research work.


12.
To address problems in traditional retinal vessel segmentation such as rough vessel contours and the loss of vessel terminals and branch details, a retinal vessel segmentation method combining linear spectral clustering superpixels with generative adversarial networks (GAN) is proposed. The method first improves the GAN by adding multi-scale feature extraction with an atrous spatial pyramid pooling module to raise segmentation accuracy. After the retinal vessel segmentation image is obtained, the GAN output is mapped onto a linear-spectral-clustering superpixel map, exploiting the high edge adherence and clear contours of superpixel segmentation, and the pixel blocks are then classified to complete the segmentation. Simulation results show that, compared with traditional retinal vessel segmentation methods, this method improves sensitivity and accuracy to some extent and handles contour-edge details better.

13.
Objective: Image information hiding comprises two branches, image steganography and image watermarking. Steganography hides secret information in a cover to achieve covert communication; its main evaluation criterion is resistance to steganalysis. Watermarking is similar in principle, but embeds watermark information into the cover to protect intellectual property; it pursues watermark robustness, preventing the watermark from being destroyed. Researchers have tried to use generative adversarial networks (GANs) to automatically design steganography and robust watermarking algorithms, but the resulting algorithms fall short in extraction accuracy, embedding capacity, and steganographic security, or in watermark robustness and watermarked-image quality. Method: This paper proposes IIH-GAN (image information hiding-GAN), a new end-to-end steganography model, and IRBW-GAN (image robust blind watermark-GAN), a robust blind-watermarking model, for image steganography and robust blind image watermarking respectively. Both use the more effective SE-ResNet (squeeze-and-excitation ResNet) encoder and decoder structure, a module that adaptively recalibrates channel-wise feature responses according to the interdependencies between channels. Results: Experiments show that IIH-GAN improves considerably over other methods: when the internal parameters of a trained steganalysis model are known, adding adversarial examples to IIH-GAN's training reduces the steganalyzer's detection accuracy from 97.43% to 49.29%. The steganography model also achieves a relative embedding capacity of up to 1 bit per pixel on 256 × 256 images. IRBW-GAN raises watermark embedding capacity while significantly improving watermarked-image quality and watermark extraction accuracy; under JPEG compression attacks its extraction accuracy is about 20% higher than the compared methods. Conclusion: The proposed IIH-GAN and IRBW-GAN models outperform the compared models in image steganography and image watermarking, respectively.
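The squeeze-and-excitation recalibration used in SE-ResNet (globally pool each channel, pass the result through a small bottleneck, and rescale the channels by the resulting sigmoid weights) can be sketched with NumPy; random weights stand in for the learned ones:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map: w1 (C -> C/r)
    and w2 (C/r -> C) form the excitation bottleneck."""
    z = feat.mean(axis=(1, 2))              # squeeze: global avg pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)             # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # FC + sigmoid -> channel weights
    return feat * s[:, None, None]          # rescale each channel

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))       # 8 channels, reduction ratio 4
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
out = squeeze_excite(feat, w1, w2)
print(out.shape)  # (8, 4, 4) -- same shape, channel-wise rescaled
```

Each output channel is the input channel multiplied by a single scalar in (0, 1), which is how the module emphasizes informative channels and suppresses the rest.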

14.
Image steganography is a research hotspot in information security. Early steganography methods obtained stego images by modifying the cover image, which changed its statistical properties and therefore made it hard to resist detection based on high-dimensional statistical feature analysis. With the development of deep learning, researchers have proposed many deep-learning-based image steganography methods, making pixel modifications more covert and the hiding process more intelligent. To better study image steganography, this paper surveys deep-learning-based image steganography methods. First, according to...

15.
To improve the acquisition quality of visual signals in real scenes, images are often captured through several fusion modalities, such as multi-focus, multi-exposure, multi-spectral, and multi-modal. Given these characteristics of visual-signal acquisition, image fusion aims to exploit the strengths of different visual signals of the same scene to produce a single-image description and improve the performance of low-, mid-, and high-level vision tasks. Relying on the powerful feature extraction, representation, and reconstruction abilities of end-to-end learning, deep learning has become the mainstream technique in image fusion research; compared with traditional image fusion techniques, deep-learning-based fusion models perform significantly better. As deep-learning research deepens, novel theories and methods such as generative adversarial networks, attention mechanisms, vision Transformers, and perceptual loss functions have also advanced image fusion. To clarify the progress of deep-learning-based image fusion, this paper first introduces the problem formulation, transitioning gradually from the traditional perspective to the deep-learning perspective. Specifically, it summarizes the current state of deep-learning-based image fusion in terms of dataset generation, network construction, loss-function design, model optimization, and performance evaluation. It also discusses derived formulations such as selective image fusion (e.g., depth-map enhancement based on fusing high-resolution texture maps) and reviews representative work that uses image fusion to accomplish other vision tasks. Finally, based on the shortcomings of existing techniques, it identifies the current challenges of image fusion and gives an outlook on future trends.

16.
In China, ancient Yi-script documents are increasingly being lost and are badly damaged, and because researchers versed in ancient Yi script are scarce, restoration of these documents progresses very slowly. The application of artificial intelligence to images and text makes automatic restoration of ancient documents possible. This paper designs a generative adversarial network with dual discriminators (D2GAN) to restore the missing parts of ancient Yi characters. D2GAN adds an ancient-Yi screening discriminator on top of a deep convolutional generative adversarial network. Through three stages of training, the ancient-Yi character generation network is iteratively optimized to obtain a character generator. The D2GAN model is optimized according to the loss of the screening discriminator, and the generated characters are used to restore lost strokes in ancient Yi texts. Experimental results show that when less than one third of a character is missing, the proposed method restores 77.3% of the character strokes, effectively accelerating the restoration of ancient Yi characters.

17.
An image recognition method based on conditional deep convolutional generative adversarial networks   (total citations: 7; self-citations: 0; citations by others: 7)
Generative adversarial networks (GAN) are currently popular generative models. The deep convolutional GAN (DCGAN) builds on the traditional GAN by introducing convolutional neural networks (CNN) for unsupervised training; the conditional GAN (CGAN) extends the GAN into a conditional model by adding conditions. Combining the advantages of both, a conditional deep convolutional GAN model (C-DCGAN) is built that exploits the CNN's strong feature-extraction capability and adds conditional guidance to sample generation. This structure is further optimized and applied to image recognition; experimental results show that the method effectively improves image recognition accuracy.
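A common way a CGAN injects the condition is to concatenate a one-hot class label onto the generator's latent noise (and, analogously, onto the discriminator's input). A minimal sketch of the generator side, with an assumed 100-dimensional latent and 10 classes:

```python
import numpy as np

def condition_input(noise, label, num_classes):
    """CGAN-style conditioning: append a one-hot class label to the
    latent noise vector fed to the generator."""
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([noise, one_hot])

z = np.random.default_rng(2).standard_normal(100)  # latent noise
g_in = condition_input(z, label=7, num_classes=10)
print(g_in.shape)  # (110,) -- 100 noise dims + 10 label dims
```

The generator then learns to produce a sample of the requested class, which is what lets C-DCGAN generate labeled data for recognition tasks.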

18.
Chinese landscape painting, which takes mountains, rivers, and natural scenery as its main subject, is an important genre of Chinese painting. Deep learning models have achieved great success in image classification, object recognition, image style transfer, and image generation. This paper proposes an automatic generation model for Chinese landscape paintings based on a deep generative adversarial network. Using publicly available Chinese landscape painting images from the Internet as the training set, a network of suitable depth and appropriate loss functions are designed, and adversarial training between the generator and the discriminator yields an image generator. Compared with real landscape paintings, the model can generate images close to the style of Chinese landscape painting.

19.
The crucial challenge that decides the success of any steganographic algorithm lies in simultaneously achieving three contradicting objectives: higher payload capacity, commendable perceptual quality, and high statistical undetectability. This work is motivated by the interest in developing such a steganographic scheme, aimed at establishing a secure image covert channel in the spatial domain using an Octonary PVD scheme. The goals of this paper are realized through: (1) pairing a pixel with all of its neighbors in all eight directions, to offer larger embedding capacity; (2) deciding the number of bits to embed in each pixel based on the nature of its region, rather than uniformly for all pixels, to enhance the perceptual quality of the images; and (3) a re-adjustment phase, which keeps any modified pixel in the stego image within the same level to which the difference between the pixel and its neighbor belonged in the cover image, imparting statistical undetectability. An extensive experimental evaluation comparing the performance of the proposed system against other existing systems was conducted on a database of 3,338 natural images, using two specific and four universal steganalyzers. The observations report that the proposed scheme is a state-of-the-art model, offering high embedding capacity while concurrently sustaining picture quality and defeating statistical detection by steganalyzers.
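The region-dependent bit allocation in PVD schemes maps the absolute difference between a pixel and its neighbor to a bit count through a range table: small differences (smooth regions) hide few bits, large differences (edges) hide more. A minimal sketch with a commonly used range table, not necessarily the exact table of this octonary scheme:

```python
def pvd_bits(diff, ranges=((0, 7, 3), (8, 15, 3), (16, 31, 4),
                           (32, 63, 5), (64, 127, 6), (128, 255, 7))):
    """Return how many bits a pixel pair can hide, given the difference
    between the two pixels (PVD-style range table)."""
    d = abs(diff)
    for lo, hi, bits in ranges:
        if lo <= d <= hi:
            return bits
    raise ValueError("difference out of range for 8-bit pixels")

print(pvd_bits(3), pvd_bits(40), pvd_bits(200))  # smooth -> edge: 3 5 7
```

The re-adjustment phase the abstract mentions then ensures the modified difference stays inside the same range row, so the table lookup gives the same answer for the stego image as for the cover.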

20.
Objective: To address the insufficient deep-feature extraction of current multimodal medical image fusion methods and their neglect of some modality features, a dual-discriminator generative adversarial network fusion algorithm based on U-Net3+ and cross-modal attention blocks (UC-DDGAN) is proposed. Method: Combining U-Net3+'s ability to extract deep features with very few parameters and the cross-modal attention block's ability to extract features of both modalities, the UC-DDGAN framework is built with one generator and two discriminators. The generator performs feature extraction and feature fusion. In feature extraction, cross-modal attention blocks are embedded along U-Net3+'s downsampling path for extracting deep image features, alternating cross-modal feature extraction with deep feature extraction to obtain composite feature maps at each level; these are channel-concatenated, reduced in dimension, and upsampled to output a feature map containing the full-scale deep features of both modalities. Feature fusion concatenates the feature maps along the channel dimension to obtain the fused image. The two discriminators perform targeted discrimination of the source images with different distributions. The loss function introduces a gradient loss, weighted together with a pixel loss to optimize the generator. Results: UC-DDGAN was compared experimentally with five classic image fusion methods on the public brain-disease image dataset of Harvard Medical School. Its fused images improve on spatial frequency (SF), structural similarity (SSIM), edge information transfer (QAB/F), correlation coefficient (CC), and the sum of the correlations of differences (SCD): SF is 5.87% higher than DDcGAN (dual-discriminator conditional generative adversarial network), SSIM is 8% higher than FusionGAN (fusion generative adversarial network), QAB/F is 12.66% higher than FusionGAN, CC is 14.47% higher than DDcGAN, and SCD is 14.48% higher than DDcGAN. Conclusion: The fused images generated by UC-DDGAN contain rich deep features and the key features of both modalities; their subjective visual quality and objective evaluation metrics both surpass the compared methods, providing help for clinical diagnosis.
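Among the metrics cited, spatial frequency (SF) has a simple closed form: the root of the summed squares of the row-wise and column-wise RMS first differences, so more edge detail means a higher score. A minimal sketch:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a grayscale image: higher SF means more
    detail/edges, a common objective metric for fused images."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

flat = np.full((4, 4), 100.0)               # no detail at all
stripes = np.tile([0.0, 255.0], (4, 2))     # alternating columns
print(spatial_frequency(flat))              # 0.0
print(spatial_frequency(stripes) > spatial_frequency(flat))  # True
```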
