Fast event-inpainting based on lightweight generative adversarial nets
Authors: LIU Sheng, CHENG Haohao, HUANG Shengyue, JIN Kun, and YE Huanran
Affiliation: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
Abstract: Event-based cameras generate sparse event streams and capture high-speed motion information; however, as their temporal resolution increases, their spatial resolution drops sharply. Although generative adversarial networks have achieved remarkable results in traditional image inpainting, applying them directly to event inpainting obscures the fast-response characteristic of the event camera and leaves the sparsity of the event stream underexploited. To tackle these challenges, an event-inpainting network is proposed. The number of layers and the structure of the network are redesigned to suit the sparsity of events, and the dimensionality of the convolutions is increased to retain more spatiotemporal information. To ensure temporal consistency of the inpainted results, an event-sequence discriminator is added. Tests were performed on the DHP19 and MVSEC datasets. Compared with the state-of-the-art traditional image inpainting method, the proposed method reduces the number of parameters by 93.5% and increases inference speed by a factor of 6 without substantially degrading the quality of the restored images. In addition, a human pose estimation experiment shows that the model can fill in human motion information in high-frame-rate scenes.
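The abstract mentions raising the convolution dimensionality to keep spatiotemporal information and adding an event-sequence discriminator for temporal consistency. The sketch below is a minimal illustration of that idea in PyTorch; it is not the authors' network, and the voxel layout, class names (EventGenerator3D, EventSequenceDiscriminator), channel counts, and layer choices are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a lightweight 3D-conv generator plus an
# event-sequence discriminator, assuming events are voxelized into tensors of
# shape (batch, polarity_channels, time_bins, height, width). Sizes are illustrative.
import torch
import torch.nn as nn


class EventGenerator3D(nn.Module):
    """Encoder-decoder over spatiotemporal event voxels (hypothetical layout)."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, kernel_size=(3, 4, 4),
                               stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose3d(base, in_ch, kernel_size=(3, 4, 4),
                               stride=(1, 2, 2), padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        # 3D convolutions keep the time axis, so temporal structure is preserved.
        return self.decode(self.encode(x))


class EventSequenceDiscriminator(nn.Module):
    """Scores a whole event sequence so inpainted frames stay temporally consistent."""
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(base * 2, 1),  # real/fake score for the full sequence
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    # Dummy voxel grid: batch 1, 2 polarity channels, 8 time bins, 128x128 pixels.
    voxels = torch.randn(1, 2, 8, 128, 128)
    gen, disc = EventGenerator3D(), EventSequenceDiscriminator()
    restored = gen(voxels)
    score = disc(restored)
    print(restored.shape, score.shape)  # (1, 2, 8, 128, 128), (1, 1)
```

In this sketch the discriminator sees the entire time axis rather than single frames, which is one plausible way to realize the event-sequence discriminator described in the abstract; the actual architecture and training losses in the paper may differ.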