Facial parts swapping with generative adversarial networks
Affiliation:1. School of Information Science and Engineering, Huaqiao University, Xiamen, China;2. School of Engineering, Huaqiao University, Quanzhou, China;3. Xiamen Key Laboratory of Mobile Multimedia Communications, Xiamen, China
Abstract:In this paper, we present a novel deep generative method for facial parts swapping: the parts-swapping generative adversarial network (PSGAN). PSGAN handles facial parts such as the eyes (left and right), nose, mouth, and jaw independently; it swaps parts by replacing the target facial parts with the corresponding source parts and then reconstructing the entire face image from them. By modeling each facial part separately as a region-inpainting problem, the proposed method produces highly photorealistic face swapping results and lets users manipulate individual facial parts freely. In addition, the method can perform jaw editing guided by sketch information. Experimental results on the CelebA dataset suggest that our method achieves superior performance on facial parts swapping and offers greater user control.
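To make the swap-then-inpaint idea in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' PSGAN implementation: the generator network, its 4-channel input layout (pasted image plus mask), and the mask convention are all assumptions introduced here for illustration.

# Minimal sketch of parts swapping via region inpainting (assumptions noted above).
import torch
import torch.nn as nn

class PartsSwapSketch(nn.Module):
    """Hypothetical wrapper: cut a facial part from the target face, paste the
    corresponding source part, then let an inpainting generator reconstruct
    the whole face so the pasted region blends in."""

    def __init__(self, generator: nn.Module):
        super().__init__()
        # Assumed: a pre-trained encoder-decoder inpainting GAN generator that
        # accepts a 4-channel input (RGB image + part mask) and outputs RGB.
        self.generator = generator

    def forward(self, target_face, source_face, part_mask):
        # target_face, source_face: N x 3 x H x W in [0, 1]
        # part_mask: N x 1 x H x W, 1 inside the part region (e.g. mouth), 0 elsewhere
        masked_target = target_face * (1.0 - part_mask)        # remove the target part
        pasted = masked_target + source_face * part_mask        # paste the source part
        return self.generator(torch.cat([pasted, part_mask], dim=1))  # reconstruct full face

# Usage (with some generator G following the assumed interface):
#   swapped = PartsSwapSketch(G)(target_batch, source_batch, mouth_mask)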
Keywords:Facial parts swapping  Generative adversarial network  Deep learning
This article is indexed in ScienceDirect and other databases.