Facial Expression Editing Technology with Fused Feature Coding

Cite this article: LIU Yunting, JIN Jiahui, CHEN Liang, ZHANG Jingyi. Facial expression editing technology with fused feature coding[J]. Journal of University of Electronic Science and Technology of China (Natural Science Edition), 2021, 50(5): 741-748. DOI: 10.12178/1001-0548.2020373

Authors: LIU Yunting, JIN Jiahui, CHEN Liang, ZHANG Jingyi

Affiliation: School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159

Funding: National Key R&D Program of China (2017YFC0821001, 2017YFC0821004)

Keywords: continuous facial expression generation; deconvolution; improved GANimation; multi-scale feature fusion
Received: 2020-10-09
Abstract: To address two weaknesses of current continuous facial expression generation models, namely a tendency to produce artifacts in expression-dense regions and weak expression control, this paper improves the GANimation model to raise the accuracy of control over the facial action units (AUs) that encode expression muscle movements. A multi-scale feature fusion (MFF) module is introduced between the encoding and decoding feature layers of the generator, and the fused features are passed to the image decoder through long skip connections. A deconvolution (transposed convolution) layer is added to the decoding part of the generator so that the MFF module can be integrated more efficiently and reasonably. In comparison experiments against the original network on a self-made data set, the expression-synthesis accuracy and generated-image quality of the improved model increased by 1.28 and 2.52 respectively, verifying that the improved algorithm performs better in facial expression editing, without image blurring or artifacts.

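The two architectural changes described in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's exact design: the layer sizes, channel counts, and module names (`MFF`, `DecoderStage`) are assumptions made for the example. It shows a multi-scale feature fusion block that combines encoder features of different resolutions, delivered to a decoder stage via a long skip connection after a transposed-convolution upsampling step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFF(nn.Module):
    """Multi-scale feature fusion (illustrative): project each encoder
    feature map to a common channel count, upsample all maps to a shared
    resolution, then mix them with a convolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # One 1x1 projection per encoder scale.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        self.mix = nn.Conv2d(len(in_channels) * out_channels, out_channels,
                             kernel_size=3, padding=1)

    def forward(self, feats):
        # Fuse at the finest (first) resolution in the list.
        target = feats[0].shape[-2:]
        ups = [F.interpolate(p(f), size=target, mode="bilinear",
                             align_corners=False)
               for p, f in zip(self.proj, feats)]
        return self.mix(torch.cat(ups, dim=1))

class DecoderStage(nn.Module):
    """One decoder step: a deconvolution (transposed convolution) doubles the
    spatial size, then the fused features arriving over a long skip
    connection are concatenated and refined."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4,
                                         stride=2, padding=1)
        self.refine = nn.Conv2d(out_ch + skip_ch, out_ch,
                                kernel_size=3, padding=1)

    def forward(self, x, fused):
        x = torch.relu(self.deconv(x))            # upsample 2x
        return torch.relu(self.refine(torch.cat([x, fused], dim=1)))

# Example: fuse three encoder scales, then decode with the long skip input.
feats = [torch.randn(1, 64, 32, 32),
         torch.randn(1, 128, 16, 16),
         torch.randn(1, 256, 8, 8)]
fused = MFF([64, 128, 256], 64)(feats)                      # (1, 64, 32, 32)
out = DecoderStage(64, 64, 64)(torch.randn(1, 64, 16, 16), fused)
print(tuple(fused.shape), tuple(out.shape))
```

Fusing at the finest resolution and concatenating along the channel dimension is one common way to realize such a module; the paper may weight or combine the scales differently.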
This article is indexed by Wanfang Data and other databases.