Citation: CHEN Tong, ZHOU Dengwen. Multistage-transformer Large-factor Network: Reference-based Super-resolution[J]. Computer and Modernization (计算机与现代化), 2022, 0(8): 121-126.
Authors: CHEN Tong, ZHOU Dengwen
Received: 2022-08-22

Multistage-transformer Large-factor Network: Reference-based Super-resolution
Abstract: Image super-resolution (SR) refers to reconstructing the corresponding high-resolution copy of a low-resolution image. To address the inaccurate reconstruction of SR at very large magnifications (8×, 16×), a multistage-transformer large-factor reconstruction network (MTLF) is proposed. MTLF stacks multiple transformers in multiple stages to process features at different magnifications, and refines the attention weights produced by the transformers with a modified attention module so as to synthesize finer textures. Finally, the features of all magnifications are fused into a super-large-scale SR image. Experimental results show that MTLF is superior to state-of-the-art methods (including single-image super-resolution and reference-based super-resolution methods) in terms of peak signal-to-noise ratio and visual quality. In particular, MTLF also achieves fairly good results at the extreme magnification factor of 32×.
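The pipeline the abstract describes, per-level attention-based texture transfer from a reference image followed by fusion across magnification levels, can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's actual network: `attention_transfer`, `mtlf_sketch`, and the element-wise averaging fusion are hypothetical simplifications, and the modified attention module, learned upsampling, and convolutional backbones are omitted.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_transfer(lr_feat, ref_feat):
    """For each low-resolution feature vector, compute dot-product
    attention over the reference feature vectors and return the
    weighted sum of reference features -- a stand-in for one
    transformer's texture-transfer step."""
    out = []
    for q in lr_feat:
        sims = [sum(a * b for a, b in zip(q, k)) for k in ref_feat]
        w = softmax(sims)
        fused = [sum(wi * k[d] for wi, k in zip(w, ref_feat))
                 for d in range(len(q))]
        out.append(fused)
    return out

def mtlf_sketch(lr_feat, ref_feats_per_level):
    """Stack one attention transfer per magnification level
    (e.g. 2x, 4x, 8x), feeding each level's output into the next,
    then fuse all level outputs by element-wise averaging
    (a stand-in for the paper's learned fusion)."""
    level_outputs = []
    feat = lr_feat
    for ref_feat in ref_feats_per_level:
        feat = attention_transfer(feat, ref_feat)
        level_outputs.append(feat)
    n = len(level_outputs)
    dim = len(lr_feat[0])
    return [[sum(lvl[i][d] for lvl in level_outputs) / n
             for d in range(dim)]
            for i in range(len(lr_feat))]
```

In the real network the features at each level would live at different spatial resolutions and be upsampled between levels; the sketch keeps a fixed feature shape purely to show the stack-then-fuse control flow.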
Keywords: reference-based; super-resolution; transformer; large-factor; attention