Shape transformer nets: Generating viewpoint-invariant 3D shapes from a single image
Affiliations: 1. Engineering Research Center of Wideband Wireless Communication Technology, Ministry of Education, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; 2. Hangzhou Joyware Limited Corporation, Hangzhou 310051, China; 1. Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, PR China; 2. National Key Laboratory of Science and Technology on Space Microwave, China Academy of Space Technology, Xi'an 710100, PR China
Abstract: Single-view 3D shape generation has achieved great success in recent years. However, existing methods typically entangle the learning of shape and viewpoint: the generated shape fits only the observed viewpoint and may be suboptimal when seen from unknown viewpoints. In this paper, we propose a novel encoder–decoder network containing a disentangled transformer that generates viewpoint-invariant 3D shapes. Differentiable, parametric Non-Uniform Rational B-Spline (NURBS) surface generation and a 3D-to-3D viewpoint transformation are incorporated to learn the viewpoint-invariant shape and the camera viewpoint, respectively. The framework learns the latent geometric parameters of shape and viewpoint without ground-truth viewpoint supervision, and can simultaneously recover the camera viewpoint and the viewpoint-invariant 3D shape of an object. We analyze the effect of disentanglement and report both quantitative and qualitative results for shapes generated at various unknown viewpoints.
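The core disentanglement can be illustrated with a minimal sketch: a parametric spline surface sampled in a canonical (viewpoint-invariant) frame, then mapped into the observed camera frame by a separate rigid transform. This is a hypothetical illustration only, using a plain uniform cubic B-spline patch (the paper's NURBS surfaces additionally carry rational weights) and a single yaw angle in place of a learned camera pose.

```python
import numpy as np

# Uniform cubic B-spline basis in matrix form. The paper's NURBS
# surfaces add rational control-point weights, omitted here for brevity;
# this is a sketch of the idea, not the authors' implementation.
M = (1.0 / 6.0) * np.array([[-1.0,  3.0, -3.0, 1.0],
                            [ 3.0, -6.0,  3.0, 0.0],
                            [-3.0,  0.0,  3.0, 0.0],
                            [ 1.0,  4.0,  1.0, 0.0]])

def spline_basis(t):
    """Four cubic B-spline basis values at parameter t in [0, 1]."""
    return np.array([t**3, t**2, t, 1.0]) @ M

def surface_point(ctrl, u, v):
    """Tensor-product spline surface point from a 4x4x3 control grid."""
    bu, bv = spline_basis(u), spline_basis(v)
    return np.einsum('i,j,ijk->k', bu, bv, ctrl)

def yaw_rotation(theta):
    """3D-to-3D viewpoint transform: a rotation about the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Viewpoint-invariant shape: sample the surface in the canonical frame...
ctrl = np.random.default_rng(0).normal(size=(4, 4, 3))
uv = np.linspace(0.0, 1.0, 16)
canonical = np.array([[surface_point(ctrl, u, v) for v in uv] for u in uv])

# ...then map it into the observed camera frame with a (here, fixed)
# viewpoint; in the paper this angle would be predicted by the network.
observed = canonical @ yaw_rotation(0.4).T
```

Because shape and viewpoint are factored this way, the same canonical surface can be re-rendered under any rotation, which is what makes the learned shape viewpoint-invariant.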
Keywords:3D shape generation  Invariant viewpoint  Disentanglement  B-spline surfaces
This article is indexed by ScienceDirect and other databases.