Brain Tumor Segmentation Algorithm Based on Cross-modal Fusion Dual Attention
Cite this article: ZHANG Peng-Yue, MA Qiao-Mei. Brain tumor segmentation algorithm based on cross-modal fusion dual attention[J]. Computer Systems & Applications, 2024, 33(1): 119-126.
Authors: ZHANG Peng-Yue, MA Qiao-Mei
Affiliation: School of Software, North University of China, Taiyuan 030051, China; Shanxi Medical Imaging Artificial Intelligence Engineering Technology Research Center (North University of China), Taiyuan 030051, China
Funding: Natural Science Foundation of Shanxi Province (20210302123019)
Abstract: To address the insufficient fusion of multi-modal brain tumor information and the loss of detail in tumor regions, this study proposes a cross-modal fusion dual-attention network (CFDA-Net) for brain tumor image segmentation. Built on an encoder-decoder architecture, the encoder branch first adopts a new convolutional block that runs dense blocks and large kernel attention in parallel, which fuses global and local information effectively and prevents gradient vanishing during backpropagation. Second, a multi-modal deep fusion module is added on the left side of the second, third, and fourth encoder layers to exploit the complementary information among different modalities. Then, in the decoder branch, Shuffle Attention processes the feature maps in groups and re-aggregates them, with each group's sub-features split in two to capture the important spatial and channel attention features. Finally, binary cross entropy (BCE), Dice Loss, and L2 Loss are combined into a new hybrid loss function, which alleviates the class imbalance of brain tumor data and further improves segmentation performance. Experimental results on the BraTS2019 brain tumor dataset show that the model achieves average Dice coefficients of 0.887, 0.892, and 0.815 on the whole tumor, tumor core, and enhancing tumor regions, respectively. Compared with other advanced segmentation methods such as ADHDC-Net and SDS-MSA-Net, the model segments the tumor core and enhancing regions better.
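The hybrid loss is the most directly reproducible part of the description. Below is a minimal PyTorch sketch of a BCE + Dice + L2 objective as the abstract describes it; the weights `w_bce`, `w_dice`, `w_l2` and the smoothing term are assumptions, since the paper's exact formulation is not given here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridSegLoss(nn.Module):
    """BCE + Dice + L2 hybrid loss; equal weights are an assumption."""

    def __init__(self, w_bce=1.0, w_dice=1.0, w_l2=1.0, smooth=1e-5):
        super().__init__()
        self.w_bce, self.w_dice, self.w_l2 = w_bce, w_dice, w_l2
        self.smooth = smooth  # keeps the Dice term finite for empty masks

    def forward(self, logits, target):
        # logits, target: (N, C, ...) with one binary mask channel per tumor region
        prob = torch.sigmoid(logits)

        # Binary cross entropy over every voxel
        bce = F.binary_cross_entropy_with_logits(logits, target)

        # Soft Dice loss, computed per channel and averaged
        dims = (0,) + tuple(range(2, logits.ndim))  # sum over batch and spatial dims
        inter = (prob * target).sum(dims)
        union = prob.sum(dims) + target.sum(dims)
        dice = 1.0 - ((2.0 * inter + self.smooth) / (union + self.smooth)).mean()

        # L2 term: mean squared error between probabilities and masks
        l2 = F.mse_loss(prob, target)

        return self.w_bce * bce + self.w_dice * dice + self.w_l2 * l2
```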

Keywords: brain tumor; multimodal; deep fusion; attention mechanism; image segmentation
Received: 2023-07-13
Revised: 2023-08-11
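For context, the decoder-side Shuffle Attention step mentioned in the abstract (group the feature maps, split each group's sub-features into a channel-attention half and a spatial-attention half, then re-aggregate with a channel shuffle) can be sketched roughly as follows. The group count and parameter initialisation are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class ShuffleAttention(nn.Module):
    def __init__(self, channels, groups=8):
        super().__init__()
        assert channels % (2 * groups) == 0
        self.groups = groups
        half = channels // (2 * groups)
        # per-channel scale/shift for the channel-attention half
        self.cw = nn.Parameter(torch.zeros(1, half, 1, 1))
        self.cb = nn.Parameter(torch.ones(1, half, 1, 1))
        # per-channel scale/shift for the spatial-attention half
        self.sw = nn.Parameter(torch.zeros(1, half, 1, 1))
        self.sb = nn.Parameter(torch.ones(1, half, 1, 1))
        self.gn = nn.GroupNorm(half, half)

    def forward(self, x):
        n, c, h, w = x.shape
        x = x.reshape(n * self.groups, c // self.groups, h, w)
        xc, xs = x.chunk(2, dim=1)  # channel-attention half, spatial-attention half

        # channel attention: global average pooling -> scale/shift -> sigmoid gate
        attn_c = torch.sigmoid(self.cw * xc.mean((2, 3), keepdim=True) + self.cb)
        xc = xc * attn_c

        # spatial attention: group norm -> scale/shift -> sigmoid gate
        attn_s = torch.sigmoid(self.sw * self.gn(xs) + self.sb)
        xs = xs * attn_s

        out = torch.cat([xc, xs], dim=1).reshape(n, c, h, w)
        # channel shuffle so information mixes across the groups
        out = out.reshape(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
        return out
```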
