Unsupervised Registration for Liver CT-MR Images Based on Deep Learning

Citation: WANG Shuaikun, ZHOU Zhiyong, HU Jisu, QIAN Xusheng, GENG Chen, CHEN Guangqiang, JI Jiansong, DAI Yakang. Unsupervised Registration for Liver CT-MR Images Based on Deep Learning[J]. Computer Engineering, 2023, 49(1): 223-233.

Authors: WANG Shuaikun, ZHOU Zhiyong, HU Jisu, QIAN Xusheng, GENG Chen, CHEN Guangqiang, JI Jiansong, DAI Yakang

Affiliations: 1. School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu 215163, China; 2. Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; 3. The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215000, China; 4. Lishui Central Hospital, Lishui, Zhejiang 323000, China; 5. Jinan Guoke Medical Engineering Technology Development Co., Ltd., Jinan 250000, China

Funding: National Natural Science Foundation of China (81971685); National Key Research and Development Program of China (2018YFA0703101); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2021324); Key Research and Development Program of Jiangsu Province (BE2021053); Suzhou Science and Technology Program (SS202054)

Abstract: Multimodal registration is a key step in medical image analysis and plays an important role in the computer-aided diagnosis of liver cancer and in image-guided surgical treatment. To address the heavy computation, long runtime, and low accuracy of traditional iterative multimodal liver registration, this paper proposes an unsupervised deep-learning registration algorithm based on multi-scale deformation fusion and dual-input spatial attention. A multi-scale deformation fusion framework extracts image features at different resolutions and registers the liver stage by stage in a coarse-to-fine manner, improving registration accuracy while preventing the network from falling into local optima. A dual-input spatial attention module fuses spatial and contextual information at different levels of the encoder-decoder to extract the discrepant features between images and enhance feature representation. In addition, a structural information loss based on neighborhood descriptors drives the iterative optimization of the network, enabling accurate unsupervised registration without any prior information. Experimental results on a clinical liver CT-MR dataset show that, compared with Affine, Elastix, VoxelMorph, and other algorithms, the proposed algorithm achieves the best Dice Similarity Coefficient (DSC) and Target Registration Error (TRE), at 0.9261±0.0186 and 6.39±3.03 mm, respectively. Its average registration time is 0.35±0.018 s, nearly 380 times faster than Elastix. The algorithm accurately extracts features and estimates a regular deformation field, delivering both high registration accuracy and fast registration speed.

Keywords: deep learning; unsupervised registration; multimodal registration; deformation fusion; structural information loss; spatial attention

Received: 2022-02-22
Revised: 2022-03-23
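The coarse-to-fine scheme in the abstract fuses deformation fields estimated at successively finer resolutions: a coarse field is upsampled and composed with the next finer one. The paper's exact formulation is not given on this page; the following is a minimal NumPy sketch under that assumption (function names are hypothetical, and nearest-neighbour interpolation is used for brevity where a real implementation would interpolate linearly):

```python
import numpy as np

def upsample_field(disp, factor):
    """Nearest-neighbour upsampling of a 2-D displacement field.

    disp: array of shape (2, H, W) holding (dy, dx) displacements in voxels.
    Displacement magnitudes are multiplied by `factor` so they remain
    expressed in voxels of the finer grid.
    """
    up = disp.repeat(factor, axis=1).repeat(factor, axis=2)
    return up * factor

def compose_fields(fine, coarse_up):
    """Compose two displacement fields on the same grid.

    Returns phi with phi(x) = fine(x) + coarse_up(x + fine(x)),
    i.e. the fine deformation applied first, then the (upsampled)
    coarse one, sampled with nearest-neighbour lookup.
    """
    _, H, W = fine.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # grid positions after the fine field, rounded and clipped to the image
    yq = np.clip(np.rint(ys + fine[0]).astype(int), 0, H - 1)
    xq = np.clip(np.rint(xs + fine[1]).astype(int), 0, W - 1)
    return fine + coarse_up[:, yq, xq]
```

Stacking several such compositions, one per resolution level, yields the final deformation field that warps the moving image, which is why errors made at the coarse level can be corrected rather than locking the optimization into a local optimum.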
