Research on Generative Adversarial Networks Using Twins Attention Mechanism

Cite this article: WU Suishuo, YANG Jinfu, SHAN Yi, XU Bingbing. Research on Generative Adversarial Networks Using Twins Attention Mechanism[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(5): 833-840.
Authors: WU Suishuo  YANG Jinfu  SHAN Yi  XU Bingbing
Affiliation: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China
Funding: Supported by the Natural Science Foundation of Beijing under Grant No. 4182009 and the National Natural Science Foundation of China under Grant No. 61533002
Abstract: Generative adversarial networks (GANs) can generate realistic images and have become a research hotspot in generative modeling. To address the problem that a GAN cannot effectively capture the dependencies between local and global features of an image, or the dependencies between different classes, this paper proposes a twins attention mechanism based generative adversarial network (TAGAN). Driven by the twins attention mechanism, TAGAN models real natural images by simulating the dependencies between local and global features as well as between categories, and uses them to create realistic fake images. The twins attention mechanism consists of a feature attention model and a channel attention model: the feature attention model learns the correlations between similar features by selectively aggregating features, and the channel attention model learns the internal dependencies among channels by integrating the relevant features along each channel dimension. Experiments on the MNIST, CIFAR10 and CelebA64 datasets demonstrate the effectiveness of the proposed model.

Keywords: deep learning  generative adversarial network (GAN)  generative model  adversarial learning  attention mechanism

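The abstract only describes the twins attention mechanism at a high level. The sketch below (PyTorch) illustrates what a feature attention branch (aggregating spatially similar features) and a channel attention branch (modeling inter-channel dependencies) could look like; the module names, the reduction factor, and the residual fusion via a learned gamma are illustrative assumptions, not the published TAGAN implementation.

# Minimal sketch of the two attention branches described in the abstract.
# Assumes a self-attention-style formulation; layer sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAttention(nn.Module):
    """Selectively aggregates spatial positions so that similar features reinforce each other."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)          # (B, HW, C/r)
        k = self.key(x).flatten(2)                             # (B, C/r, HW)
        attn = F.softmax(q @ k, dim=-1)                        # (B, HW, HW): pairwise feature similarity
        v = self.value(x).flatten(2)                           # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)      # aggregate similar positions
        return self.gamma * out + x                            # residual connection

class ChannelAttention(nn.Module):
    """Models dependencies among channels by attending over the channel dimension."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.flatten(2)                                    # (B, C, HW)
        attn = F.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, C, C): channel affinities
        out = (attn @ flat).view(b, c, h, w)
        return self.gamma * out + x

In a generator, the two branches would typically be applied to the same intermediate feature map and their outputs fused (for example, summed) before the next layer; how TAGAN actually combines them, and where the modules sit in the generator and discriminator, is not specified in the abstract.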