
Multi-modal MR image super-resolution with residual dense attention network
Citation: Liu Yu, Zhu Wenyu, Cheng Juan, Chen Xun. Multi-modal MR image super-resolution with residual dense attention network[J]. Journal of Image and Graphics, 2023, 28(1): 248-259.
Authors: Liu Yu  Zhu Wenyu  Cheng Juan  Chen Xun
Affiliation: Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China; Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
Funding: National Natural Science Foundation of China (62176081, 61922075, 62171176)
Abstract: Objective Image super-resolution (SR) aims to reconstruct high-resolution (HR) images from a single low-resolution (LR) image or a set of LR images. It is a cost-effective technique that improves the spatial resolution of medical images by means of image processing algorithms alone. However, most existing medical image SR methods are designed for a single modality, whereas in many clinical applications of magnetic resonance imaging (MRI), images of multiple modalities are acquired under different parameter settings. In this case, a single-modality SR method cannot take advantage of the correlation information among the modalities, which largely limits the reconstruction performance. In addition, most existing deep-learning-based SR models contain a large number of trainable parameters, which leads to high computational cost and memory consumption in practice. To exploit the correlation among modalities for reconstruction, this work develops a lightweight deep model, a residual dense attention network, that performs multi-modal MR image super-resolution within a single unified network. Method The proposed network is composed of three parts: 1) shallow feature extraction, 2) feature refinement, and 3) image reconstruction. The MR images of the two modalities are stacked and fed into the network. First, a 3 × 3 convolutional layer in the shallow feature extraction part extracts the initial feature maps in the low-resolution space. Next, the feature refinement part consists of several residual dense attention blocks, each of which combines a residual dense block with an efficient channel attention module. Within the residual dense block, dense connections and local residual learning improve the representation capability of the network, while the efficient channel attention module enables the network to adaptively emphasize the feature maps that are more important for reconstruction. The outputs of all residual dense attention blocks are concatenated and fed into two convolutional layers, which reduce the number of channels and fuse the features. A global residual learning strategy further improves the information flow: the initial feature maps are added to the output of the last layer through a skip connection. Finally, in the image reconstruction part, the low-resolution feature maps are up-scaled to the high-resolution space by a sub-pixel convolutional layer, and two symmetric branches, each consisting of two 3 × 3 convolutional layers, reconstruct the residual maps of the two modalities. The residual maps are added to the interpolated low-resolution images to obtain the final super-resolution results. The network parameters are optimized with the widely used L1 loss. A minimal sketch of this architecture is given below.
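The following PyTorch sketch is an illustrative reconstruction of the pipeline described in the Method section, not the authors' released code: the channel width, growth rate, block count, and ECA kernel size are assumed values, and the module names (ECA, RDAB, RDAN) are hypothetical.

```python
# Hypothetical sketch of the residual dense attention network described above.
# All hyperparameters (channels, growth rate, block/layer counts) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECA(nn.Module):
    """Efficient channel attention: a 1-D conv over pooled channel descriptors."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        # x: (N, C, H, W) -> per-channel descriptor -> per-channel weight
        y = F.adaptive_avg_pool2d(x, 1).squeeze(-1).transpose(1, 2)   # (N, 1, C)
        y = torch.sigmoid(self.conv(y)).transpose(1, 2).unsqueeze(-1) # (N, C, 1, 1)
        return x * y  # re-weight feature maps by channel importance

class RDAB(nn.Module):
    """Residual dense attention block: dense convs + 1x1 fusion + ECA + residual."""
    def __init__(self, ch=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1) for i in range(layers))
        self.fuse = nn.Conv2d(ch + layers * growth, ch, 1)  # local feature fusion
        self.eca = ECA()

    def forward(self, x):
        feats = [x]
        for conv in self.convs:                     # dense connections
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        out = self.eca(self.fuse(torch.cat(feats, dim=1)))
        return x + out                              # local residual learning

class RDAN(nn.Module):
    def __init__(self, ch=64, blocks=6, scale=2):
        super().__init__()
        self.head = nn.Conv2d(2, ch, 3, padding=1)  # stacked T1 + T2 input
        self.blocks = nn.ModuleList(RDAB(ch) for _ in range(blocks))
        self.fuse = nn.Sequential(nn.Conv2d(ch * blocks, ch, 1),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.upscale = nn.Sequential(               # sub-pixel convolution
            nn.Conv2d(ch, ch * scale ** 2, 3, padding=1), nn.PixelShuffle(scale))
        # two symmetric branches, each two 3x3 convs, predicting residual maps
        self.branch_t1 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                       nn.Conv2d(ch, 1, 3, padding=1))
        self.branch_t2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                       nn.Conv2d(ch, 1, 3, padding=1))
        self.scale = scale

    def forward(self, t1_lr, t2_lr):
        x = self.head(torch.cat([t1_lr, t2_lr], dim=1))  # shared shallow features
        outs, f = [], x
        for block in self.blocks:
            f = block(f)
            outs.append(f)
        f = self.fuse(torch.cat(outs, dim=1)) + x        # global residual learning
        f = self.upscale(f)                              # to high-resolution space
        up = lambda t: F.interpolate(t, scale_factor=self.scale,
                                     mode='bicubic', align_corners=False)
        # predicted residual maps are added to the interpolated LR images
        return self.branch_t1(f) + up(t1_lr), self.branch_t2(f) + up(t2_lr)
```

Training would then minimize the L1 loss on both outputs against their HR ground truths, e.g. F.l1_loss(sr_t1, hr_t1) + F.l1_loss(sr_t2, hr_t2), consistent with the loss stated above.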
Result In the experiments, the MR images of two modalities (T1-weighted and T2-weighted) from the MICCAI (medical image computing and computer assisted intervention) BraTS (brain tumor segmentation) 2019 dataset are adopted to verify the effectiveness of the proposed method; the original MRI scans are split into a training set, a validation set, and a testing set. Two sets of ablation experiments are designed to verify the effect of the multi-modal super-resolution scheme and of the efficient channel attention module, and the results show that both components further improve the super-resolution performance. Furthermore, eight representative image super-resolution methods are compared in the experiments. Experimental results demonstrate that the proposed method outperforms these reference methods in terms of both objective evaluation and visual quality. Specifically, 1) when the up-scaling factor is 2, the peak signal-to-noise ratio (PSNR) of the T1-weighted and T2-weighted modalities improves by 0.109 8 dB and 0.415 5 dB, respectively; 2) when the up-scaling factor is 3, the PSNR of the T2-weighted modality improves by 0.295 9 dB while that of the T1-weighted modality decreases by 0.064 6 dB; 3) when the up-scaling factor is 4, the PSNR of the T1-weighted and T2-weighted modalities improves by 0.269 3 dB and 0.042 9 dB, respectively. It is also worth noting that the proposed network reduces the number of parameters by more than a factor of 10 compared with the popular reference method. Conclusion The correlation information between MR images of different modalities is beneficial to image super-resolution. The proposed method can reconstruct high-quality super-resolution results for two modalities simultaneously within a single unified network, and it achieves more competitive performance than state-of-the-art super-resolution methods with a relatively lightweight model.
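For reference, the PSNR values quoted above follow the standard definition PSNR = 10 · log10(MAX² / MSE). A minimal helper, assuming image intensities normalized to [0, 1] (the function name and signature are illustrative):

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reconstruction and its ground truth."""
    mse = torch.mean((sr - hr) ** 2)
    return float(10 * torch.log10(max_val ** 2 / mse))
```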
Keywords: image super-resolution  multi-modal MR images  convolutional neural networks (CNN)  residual learning  dense connection  attention mechanism  multi-modal information fusion
Received: 2022-03-23; Revised: 2022-04-22