GANs Based on Sample Feature Decoding Constraint
Cite this article: Chen Hong-You, Chen Fan, He Hong-Jie, Zhu Yi-Ming. A GANs model based on sample feature decoding constraint. Acta Automatica Sinica, 2022, 48(9): 2288−2300. doi: 10.16383/j.aas.c190496
Authors: Chen Hong-You  Chen Fan  He Hong-Jie  Zhu Yi-Ming
Affiliation: 1. Key Laboratory of Signal & Information Processing of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, China
Funding: Supported by the National Natural Science Foundation of China (61872303, U1936113) and the Science and Technology Innovation Talent Program of the Science and Technology Department of Sichuan Province (2018RZ0143)
Abstract: Generative adversarial networks (GANs) are an effective generative approach for simulating the distribution of training data. A common problem in training them is the vanishing-gradient problem that can arise when optimizing the Jensen-Shannon (JS) divergence. To address this problem, a GANs model under a decoding constraint is proposed, which keeps the JS divergence from degenerating to a near-constant and thereby causing vanishing gradients, thus improving the quality of the generated images. First, an auto-encoder (AE) with a U-Net structure is used to learn intermediate-layer network features of the training samples with the same dimensionality as the random noise that drives the generator. Then, before each adversarial training step, the decoder is trained with the designed decoding constraint; the decoder has the same structure as the generator and shares its weights. To establish the feasibility of the model, it is proved that introducing the decoding constraint helps keep the JS divergence from being a constant, and a criterion for selecting the type of decoding loss function is derived. To evaluate performance, the generation results of six other models on the Celeba and Cifar10 datasets are compared and analyzed. Experimental comparisons of the Inception score (IS), Fréchet inception distance, sharpness, and other indices show that the GANs with the sample-feature decoding constraint effectively improve the quality of generated images, with overall performance close to that of self-attention generative adversarial networks.

Keywords: generative adversarial networks  vanishing gradient  feature learning  auto-encoder  deep learning
Received: 2019-06-29
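
As a rough illustration of the feature-extraction step described in the abstract above, the sketch below builds a U-Net-style auto-encoder whose bottleneck code has the same dimensionality as the generator's input noise. The 64×64 RGB input, the 128-dimensional code, the layer sizes, and the name FeatureAE are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class FeatureAE(nn.Module):
    """U-Net-style auto-encoder; the bottleneck code has the same
    dimensionality as the generator's input noise (128 here)."""
    def __init__(self, noise_dim=128):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU())     # 64x64 -> 32x32
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())   # 32x32 -> 16x16
        self.enc3 = nn.Sequential(nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU())  # 16x16 -> 8x8
        self.to_code = nn.Linear(256 * 8 * 8, noise_dim)      # bottleneck = noise dimension
        self.from_code = nn.Linear(noise_dim, 256 * 8 * 8)
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128 + 128, 64, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(64 + 64, 3, 4, 2, 1)

    def encode(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        code = self.to_code(e3.flatten(1))         # the sample feature later reused as "noise"
        return code, (e1, e2)

    def forward(self, x):
        code, (e1, e2) = self.encode(x)
        d3 = self.dec3(self.from_code(code).view(-1, 256, 8, 8))
        d2 = self.dec2(torch.cat([d3, e2], dim=1))             # U-Net skip connection
        recon = torch.tanh(self.dec1(torch.cat([d2, e1], dim=1)))
        return recon, code

# Pre-training the AE with a plain reconstruction loss (illustrative only):
ae = FeatureAE()
x = torch.randn(8, 3, 64, 64)                      # dummy batch standing in for real images
recon, feat = ae(x)
loss = nn.functional.mse_loss(recon, x)
```

The bottleneck code returned by encode() plays the role of the intermediate-layer sample feature that the decoding constraint later feeds to the decoder in place of random noise.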

A GANs Model Based on Sample Feature Decoding Constraint
Chen Hong-You, Chen Fan, He Hong-Jie, Zhu Yi-Ming. A GANs model based on sample feature decoding constraint. Acta Automatica Sinica, 2022, 48(9): 2288−2300 doi: 10.16383/j.aas.c190496
Authors: CHEN Hong-You  CHEN Fan  HE Hong-Jie  ZHU Yi-Ming
Affiliation: 1. Key Laboratory of Signal & Information Processing of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, China
Abstract: The generative adversarial networks (GANs) model is a generative approach for effectively simulating the distribution of training data. One of the common problems in training GANs is the vanishing-gradient problem that can arise when optimizing the Jensen-Shannon (JS) divergence. To address this problem, a GANs model under a decoding constraint is proposed to keep the JS divergence from degenerating to a near-constant, thus improving the quality of the generated images. Firstly, an auto-encoder (AE) with a U-Net structure is used to learn intermediate-layer network features of the training samples with the same dimensionality as the random noise that drives the generator. Then, before each adversarial training step, the decoder, which has the same structure as the generator and shares its weights, is trained with the designed decoding constraint. To prove the feasibility of the model, it is shown that introducing the decoding constraint helps keep the JS divergence from being a constant, and a criterion for selecting the type of decoding loss function is given. To verify the performance of the model, the Celeba and Cifar10 datasets are used to compare and analyze the generation results of six other models. Experimental comparisons of the Inception score (IS), Fréchet inception distance, sharpness, and other indices show that the proposed GANs effectively improve the quality of generated images, with overall performance close to that of self-attention generative adversarial networks.
Keywords: generative adversarial networks  vanishing gradient  feature learning  auto-encoder  deep learning
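
To make the training procedure summarized in the abstract concrete, the sketch below shows, under stated assumptions, one training iteration with the decoding constraint. Because the decoder shares structure and weights with the generator, the generator itself is fed the AE features of the real samples and trained to reconstruct them before the ordinary adversarial updates. The function name train_step, the L1 decoding loss, and the binary cross-entropy GAN losses are placeholders, not the paper's exact settings; in particular, the paper derives its own criterion for choosing the decoding loss type.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, feature_ae, real_batch, opt_g, opt_d, noise_dim=128):
    """One illustrative iteration: decoding-constraint update, then ordinary GAN updates."""
    # 1) Decoding constraint: the decoder shares structure and weights with G,
    #    so G itself is fed the AE features of the real samples and trained to
    #    reconstruct them before the adversarial step.
    with torch.no_grad():
        feat, _ = feature_ae.encode(real_batch)    # same dimensionality as the noise
    opt_g.zero_grad()
    decode_loss = F.l1_loss(G(feat), real_batch)   # placeholder decoding loss
    decode_loss.backward()
    opt_g.step()

    # 2) Discriminator update (standard binary cross-entropy loss as a stand-in).
    z = torch.randn(real_batch.size(0), noise_dim, device=real_batch.device)
    fake = G(z).detach()
    opt_d.zero_grad()
    logit_real, logit_fake = D(real_batch), D(fake)
    d_loss = (F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real)) +
              F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake)))
    d_loss.backward()
    opt_d.step()

    # 3) Generator update.
    opt_g.zero_grad()
    logit_gen = D(G(z))
    g_loss = F.binary_cross_entropy_with_logits(logit_gen, torch.ones_like(logit_gen))
    g_loss.backward()
    opt_g.step()
    return decode_loss.item(), d_loss.item(), g_loss.item()
```

Feeding real-sample features through the weight-shared generator anchors its outputs to the data before each adversarial update, which is one way to read the paper's claim that the decoding constraint helps keep the JS divergence away from a constant.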