Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning
Citation: DING Weichang, SHI Jun, WANG Jun. Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning[J]. CAAI Transactions on Intelligent Systems, 2023, 18(1): 66-74.
Authors: DING Weichang  SHI Jun  WANG Jun
Affiliation: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
Abstract: Automatic diagnosis of breast cancer from ultrasound images has important clinical value. However, building high-accuracy automatic diagnosis methods is highly challenging due to the lack of large-scale manually labeled data. In recent years, self-supervised contrastive learning has shown great potential in producing discriminative and highly generalizable features from unlabeled natural images. However, the positive- and negative-sample construction strategies designed for natural images are not applicable to breast ultrasound. To this end, this paper introduces elastography ultrasound (EUS) and, exploiting the multi-modality nature of ultrasound imaging, proposes a self-supervised contrastive learning method that fuses multimodal information. The method constructs positive samples from the multi-modality ultrasound images of the same patient and negative samples from those of different patients, and builds the contrastive learning objective on modal consistency, rotation invariance, and sample separation. By learning a unified feature representation of the two modalities in the embedding space, EUS information is fused into the model, improving its performance on the downstream B-mode ultrasound classification task. Experimental results show that the proposed method can fully mine high-level semantic features from unlabeled multi-modality breast ultrasound images and effectively improve the accuracy of breast cancer diagnosis.

Keywords: self-supervised learning  contrastive learning  ultrasound image  elastography ultrasound  B-mode ultrasound  multi-modality  breast cancer  computer-aided diagnosis  deep learning

Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning
DING Weichang, SHI Jun, WANG Jun. Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning[J]. CAAI Transactions on Intelligent Systems, 2023, 18(1): 66-74.
Authors: DING Weichang  SHI Jun  WANG Jun
Affiliation: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
Abstract: Automatic ultrasound-based diagnosis of breast cancer has important clinical value. However, high-accuracy automatic diagnosis methods are difficult to construct due to the lack of large-scale labeled data. Recently, self-supervised contrastive learning has shown great potential in generating discriminative and highly generalizable features from unlabeled natural images. However, the positive- and negative-sample construction strategies designed for natural images are not applicable to breast ultrasound. To this end, this work introduces elastography ultrasound (EUS) images and, exploiting the multi-modality nature of ultrasound imaging, proposes a self-supervised contrastive learning method that integrates multimodal information. Specifically, positive and negative samples are constructed from multi-modality ultrasound images collected from the same and different patients, respectively, and the contrastive learning objective is built on modal consistency, rotation invariance, and sample separation. EUS information is integrated into the model by learning a unified feature representation for both modalities in the embedding space, which improves model performance in the downstream B-mode ultrasound classification task. Experimental results show that the proposed method can fully mine high-level semantic features from unlabeled multimodal breast ultrasound images, thereby effectively improving the diagnostic accuracy for breast cancer.
Keywords: self-supervised learning  contrastive learning  ultrasound image  elastography ultrasound  B-mode ultrasound  multi-modality  breast cancer  computer-aided diagnosis  deep learning
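The sample-construction strategy described in the abstract (same-patient B-mode/EUS image pairs as positives, different-patient pairs as negatives) can be sketched as an InfoNCE-style contrastive loss. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function name and temperature value are hypothetical, and the rotation-invariance term of the full objective is omitted.

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_bmode, z_eus, temperature=0.1):
    """InfoNCE-style sketch of the modal-consistency / sample-separation
    objective: for patient i, the B-mode embedding z_bmode[i] and the EUS
    embedding z_eus[i] form a positive pair; embeddings from all other
    patients in the batch serve as negatives.

    z_bmode, z_eus: (N, D) embeddings, row i from the same patient.
    """
    z_bmode = F.normalize(z_bmode, dim=1)       # project onto the unit sphere
    z_eus = F.normalize(z_eus, dim=1)
    logits = z_bmode @ z_eus.t() / temperature  # (N, N) scaled cosine similarities
    labels = torch.arange(z_bmode.size(0), device=z_bmode.device)
    # symmetric cross-entropy: align matching patient pairs in both
    # modality directions while pushing apart different patients
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```

In the paper's setting, the two embeddings would come from encoding the B-mode and EUS images of the same batch of patients; rotation invariance would additionally treat rotated views of each image as extra positives.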