Image-to-image Translation Based on an Improved Cycle-consistent Generative Adversarial Network
Citation: ZHANG Jinglei, HOU Yawei. Image-to-image Translation Based on an Improved Cycle-consistent Generative Adversarial Network[J]. Journal of Electronics & Information Technology, 2020, 42(5): 1216-1222.
Authors: ZHANG Jinglei, HOU Yawei
Affiliation: 1. School of Electrical and Electronic Engineering, Tianjin University of Technology, Tianjin 300384, China; 2. Tianjin Key Laboratory of Complex System Control Theory and Application, Tianjin 300384, China
Abstract: Image style transfer converts images between different domains. With the rapid development of Generative Adversarial Networks (GANs) in deep learning, their application to image style transfer has attracted increasing attention. However, classical algorithms suffer from two drawbacks: paired training data are difficult to obtain, and the quality of the generated images is poor. This paper proposes an improved cycle-consistent generative adversarial network (CycleGAN++). It removes the cyclic network and, in the image-generation stage, concatenates prior information from the target and source domains with the corresponding images along the channel (depth) dimension. The loss function is also optimized: a classification loss replaces the cycle-consistency loss, enabling image style transfer that does not depend on a mapping between paired training data. Experiments on the CelebA and Cityscapes datasets show that, on two standard metrics, Amazon Mechanical Turk (AMT) perceptual studies and Fully Convolutional Network (FCN) score, the proposed algorithm achieves higher accuracy than classical algorithms such as CycleGAN, IcGAN, CoGAN, and DIAT.
Keywords: Image style transfer; Deep learning; Generative adversarial network; Loss function
Received: 2019-06-05
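To make the depth-wise (channel) concatenation of domain priors concrete, here is a minimal NumPy sketch. It assumes the prior is a one-hot domain label broadcast to the image's spatial size; all shapes and names are illustrative, not the authors' actual implementation:

```python
import numpy as np

def concat_prior(image, domain_label, num_domains):
    """Concatenate a one-hot domain-label map with an image along the
    channel axis, so the generator sees both the image and the prior.

    image: float array of shape (C, H, W)
    domain_label: integer index of the target domain
    Returns an array of shape (C + num_domains, H, W).
    """
    _, h, w = image.shape
    # One constant plane per domain; the target domain's plane is all ones.
    label_map = np.zeros((num_domains, h, w), dtype=image.dtype)
    label_map[domain_label] = 1.0
    # Depth-wise cascade: stack the prior onto the image channels.
    return np.concatenate([image, label_map], axis=0)

# Example: a 3-channel 4x4 image with 2 possible target domains.
x = np.random.rand(3, 4, 4).astype(np.float32)
g_input = concat_prior(x, domain_label=1, num_domains=2)
print(g_input.shape)  # (5, 4, 4)
```

In a real network the concatenated tensor would be the generator's input, letting a single generator handle multiple source/target domain pairs.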

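The abstract's replacement of the cycle-consistency loss with a classification loss can be sketched as a standard softmax cross-entropy over domain labels. This is a generic formulation assumed for illustration; the paper's exact loss terms may differ:

```python
import numpy as np

def domain_classification_loss(logits, target_domain):
    """Softmax cross-entropy on domain logits: a classifier (e.g. an
    auxiliary head on the discriminator) scores which domain an image
    belongs to, and the generator is trained so its output is classified
    as the target domain, instead of being cycled back to the source."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -log_probs[target_domain]          # negative log-likelihood

# A generated image whose logits favour domain 1 incurs a low loss when
# the target domain is 1 and a higher loss when the target is domain 0.
logits = np.array([0.5, 3.0])
print(domain_classification_loss(logits, 1) < domain_classification_loss(logits, 0))  # True
```

Minimizing this loss pushes generated images toward the target domain directly, which is what removes the need for the paired forward/backward mappings of the original cycle-consistency objective.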