Domain-adaptive-learning based diabetic retinopathy grading diagnosis
Cite this article: Song Ruoxian, Cao Peng, Zhao Dazhe. Domain-adaptive-learning based diabetic retinopathy grading diagnosis[J]. Journal of Image and Graphics, 2022, 27(11): 3356-3370
Authors: Song Ruoxian, Cao Peng, Zhao Dazhe
Affiliation: School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China; Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang 110819, China
Funding: National Natural Science Foundation of China (62076059); Fundamental Research Funds for the Central Universities (N2016001)
Abstract: Objective: Traditional diabetic retinopathy (DR) diagnosis relies on the accurate detection of early pathological features, but the lack of annotated lesion regions in the dataset prevents an effective supervised classification model from being built, and introducing other auxiliary datasets raises the problem of cross-domain data heterogeneity. In addition, most existing DR diagnosis methods cannot intuitively and semantically explain the predictions of the medical model. This paper therefore proposes an end-to-end automatic multi-class DR grading method combined with domain adaptation learning, jointly optimized with an attention mechanism and weakly supervised learning. Method: First, a lesion detection model is trained on auxiliary data with annotated lesion regions, and DR diagnosis on the target-domain dataset is cast as a weakly supervised learning problem. The multi-class predictions guide a deep cross-domain generative adversarial network to improve the quality of cross-domain sample images, which are used to fine-tune the lesion detection model, filter out irrelevant lesion samples in the target domain, and thereby improve multi-class grading performance. Finally, an attention mechanism is integrated into the overall model to provide interpretability that supports its classification decisions from the perspective of medical pathological diagnosis. Result: In multi-class DR grading experiments on the public Messidor dataset, the proposed method achieves an average accuracy of 71.2% and an AUC (area under curve) of 80.8%, a clear advantage over a variety of other methods, and it can assist physicians in clinical fundus screening. Conclusion: The DR grading method combined with domain adaptation learning can effectively achieve automatic grading diagnosis of fundus images without pixel-level lesion annotations.
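As a rough illustration of the weakly supervised grading step described above, the following PyTorch-style sketch shows an attention-pooling head that aggregates per-patch features into a single image-level DR grade and exposes the patch weights for interpretability. The class name, feature dimensions, and four-class output are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of attention-based weakly supervised grading over patch features.
# Names and dimensions are hypothetical, not the paper's actual code.
import torch
import torch.nn as nn


class AttentionGradingHead(nn.Module):
    """Aggregates per-patch features into one image-level DR grade (4 classes assumed)."""

    def __init__(self, feat_dim: int = 512, attn_dim: int = 128, num_classes: int = 4):
        super().__init__()
        # Attention scorer: one scalar weight per patch.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (num_patches, feat_dim) extracted from one fundus image
        scores = self.attention(patch_feats)              # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)            # normalize over patches
        image_feat = (weights * patch_feats).sum(dim=0)   # weighted pooling -> (feat_dim,)
        logits = self.classifier(image_feat)              # (num_classes,)
        return logits, weights


if __name__ == "__main__":
    feats = torch.randn(36, 512)                          # e.g., 36 patches from one image
    head = AttentionGradingHead()
    logits, attn = head(feats)
    print(logits.shape, attn.shape)                       # torch.Size([4]) torch.Size([36, 1])
```

The returned attention weights are the quantity one would overlay on the fundus image to indicate suspected lesion regions.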

Keywords: diabetic retinopathy (DR); fundus image; attention mechanism; deep learning; weakly-supervised learning; domain adaptation
Received: 2021-06-11
Revised: 2021-12-01

Domain-adaptive-learning based diabetic retinopathy grading diagnosis
Song Ruoxian, Cao Peng, Zhao Dazhe. Domain-adaptive-learning based diabetic retinopathy grading diagnosis[J]. Journal of Image and Graphics, 2022, 27(11): 3356-3370
Authors: Song Ruoxian, Cao Peng, Zhao Dazhe
Affiliation: School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China; Key Laboratory of Medical Image Computing, Ministry of Education, Northeastern University, Shenyang 110819, China
Abstract: Objective: Diabetic retinopathy (DR) is a common, high-incidence complication of diabetes. Automated DR screening on fundus images has been developed to relieve the pressure caused by the large patient population and the uneven distribution of screening resources. Traditional DR diagnosis relies on accurately detecting early pathological features such as microaneurysms (MA) and hemorrhages (H). However, a supervised classification model cannot be trained effectively when lesion annotations are unavailable, and such pixel-level medical annotation is time-consuming and labor-intensive. Even when an auxiliary dataset with annotated lesion regions exists, the domain gap makes it difficult to exploit it for improving the classification model. In addition, most existing DR diagnosis methods cannot explain the predictions of the medical model. We therefore propose an end-to-end automatic DR grading algorithm based on domain adaptation learning that integrates weakly supervised learning and an attention mechanism.

Method: First, an auxiliary dataset with labeled lesion regions is used to train a supervised lesion detection model, bridging the gap between image-level DR diagnosis labels and pixel-level lesion location information. DR grading on the target domain, where only image-level labels are available, is then formulated as a weakly supervised learning problem. To bridge the domain gap, a deep cross-domain generative adversarial network (GAN) is employed to generate higher-quality cross-domain patches; the lesion detection model is fine-tuned on these patches and used to filter out irrelevant lesion samples in the target domain, which improves image-level multi-class diagnosis. Finally, an attention mechanism is integrated into the whole model to strengthen the interpretability of the grading from the perspective of pathological diagnosis. The model treats the local patches within a global image as independently and identically distributed instances, establishes the local-global relationship between small lesions and the complete image, and its ability to trace indistinct lesion regions benefits the classification of retinal images into DR grades (healthy, mild, moderate, and severe).

Result: The public Messidor dataset consists of 1 200 fundus images with image-level DR severity labels, and the IDRiD dataset serves as the source domain for a binary normal/abnormal lesion task covering MA and H. The experimental results show that our end-to-end framework 1) improves disease grading on the target domain without lesion labels, 2) achieves domain adaptation across datasets, and 3) highlights indistinct regions through the attention mechanism. On the challenging Messidor benchmark, our method achieves an average accuracy of 71.2% and an AUC (area under curve) of 80.8%. We also evaluate the contributions of individual modules, namely sample filtering, domain adaptation, and attention-based weakly supervised DR grading; the ablation results show that these modules improve the AUC by 11.8%, 20.2%, and 15.8%, respectively. Filtering irrelevant samples reduces their negative impact on the final results, and GAN-generated cross-domain samples mitigate data heterogeneity; together they promote pathology-related interpretability and enhance the generalization ability of the model. Moreover, the ablation experiments analyze the influence of the hyperparameters in detail, and the interpretability of the DR grading is visualized.

Conclusion: Our domain-adaptation-based classification method can effectively grade fundus images by combining an initial lesion detection stage, a transfer learning strategy, and the local-global mapping between lesions and the entire retinal image. It has the potential to distinguish the severity of lesion types, capture subtle pathological features, and handle the imbalance between lesion and background patches. Using only image-level supervision, it achieves automatic grading diagnosis of fundus images without pixel-level lesion annotations and thus avoids the limitation of manually segmenting and labeling lesions in medical images. Furthermore, its interpretability supports the detection of potentially high-risk regions, and the approach can be extended to broader weakly supervised medical image classification tasks.
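The sample-filtering step mentioned in the Method can be pictured as in the minimal sketch below: a lesion detector, assumed to be fine-tuned on GAN-translated patches, scores target-domain patches and discards those unlikely to contain MA/H lesions before they reach the grading model. The function name, detector interface, and the 0.5 threshold are hypothetical, chosen only for illustration.

```python
# Illustrative sketch of filtering irrelevant target-domain patches (assumed interface).
import torch


@torch.no_grad()
def filter_relevant_patches(patches: torch.Tensor,
                            lesion_detector: torch.nn.Module,
                            threshold: float = 0.5) -> torch.Tensor:
    """Keep only patches whose predicted lesion probability exceeds `threshold`.

    patches: (N, 3, H, W) target-domain patches cropped from one fundus image.
    lesion_detector: any binary MA/H-vs-background classifier returning one logit per patch.
    Returns the subset of patches passed on to the weakly supervised grading model.
    """
    lesion_detector.eval()
    logits = lesion_detector(patches)            # (N, 1) or (N,) lesion-vs-background logits
    probs = torch.sigmoid(logits).flatten()      # (N,)
    keep = probs > threshold
    # Fall back to all patches if nothing passes, so the grader always has input.
    return patches[keep] if keep.any() else patches
```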
Keywords: diabetic retinopathy (DR); fundus image; attention mechanism; deep learning; weakly-supervised learning; domain adaptation