CT image classification of liver tumors based on multi-scale and deep feature extraction
Cite this article: Mao Jingyi, Song Yuqing, Liu Zhe. CT image classification of liver tumors based on multi-scale and deep feature extraction[J]. Journal of Image and Graphics, 2021, 26(7): 1704-1715.
Authors: Mao Jingyi  Song Yuqing  Liu Zhe
Affiliation: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
Funding: National Natural Science Foundation of China (61976106, 61772242, 61572239); China Postdoctoral Science Foundation (2017M611737); Jiangsu Province "Six Talent Peaks" High-Level Talent Project (DZXX-122); Zhenjiang Key Science and Technology Project of Health and Family Planning (SHW2017019)
Abstract: Objective Liver tumors are among the most aggressive malignancies in the human body. Traditional tumor diagnosis relies on inspecting the patient's computed tomography (CT) images; a heavy workload easily causes fatigue, and misdiagnosis is hard to avoid, so computer-aided methods are used for diagnosis. However, existing deep learning methods suffer from low tumor classification accuracy and weak feature expression and feature extraction capability. To address this, we design a classification network model with multi-scale and deep feature extraction. Method First, a region of interest is selected from the original CT image; then the pixel values are converted according to the header file of the CT image, and data augmentation is applied to enlarge the dataset; finally, the processed data are fed into the proposed classification network, which outputs the classification result. The network extracts multi-scale image features and enlarges the receptive field through a multi-scale feature extraction module, uses a deep feature extraction module to suppress background noise and focus on the effective features of the lesion region, diversifies scales by integrating parallel atrous (dilated) convolutions, and replaces ordinary convolutions with octave convolutions to reduce the number of parameters and improve classification performance, ultimately achieving accurate classification of liver tumors. Result The proposed model reaches a best accuracy of 87.74%, 9.92% higher than the original model; compared with mainstream classification networks, it is superior on multiple evaluation metrics, achieving 86.04% recall, 87% precision, and an 86.42% F1-score. Ablation experiments further verify the effectiveness of the proposed method. Conclusion The proposed method can classify liver tumors fairly accurately; integrated into professional medical software, it can provide a reliable basis for physicians' early diagnosis and treatment.

Keywords: deep learning  liver tumor classification  multi-scale features  feature extraction  atrous (dilated) convolution
Received: 2020-08-22
Revised: 2021-01-13

CT image classification of liver tumors based on multi-scale and deep feature extraction
Mao Jingyi, Song Yuqing, Liu Zhe. CT image classification of liver tumors based on multi-scale and deep feature extraction[J]. Journal of Image and Graphics, 2021, 26(7): 1704-1715.
Authors:Mao Jingyi  Song Yuqing  Liu Zhe
Affiliation:School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
Abstract: Objective Liver tumors are among the most aggressive malignancies in the human body. Determining the lesion type and stage from computed tomography (CT) images drives the diagnosis and the treatment strategy, and classifying them requires professional knowledge and rich expert experience. Fatigue sets in easily when the workload is heavy, and even experienced senior experts have difficulty avoiding misdiagnosis. Deep learning avoids a drawback of traditional machine learning, which takes considerable time to manually extract image features and perform dimensionality reduction, and is capable of extracting high-dimensional features of an image, so using deep learning to assist doctors in diagnosis is important. In existing medical image classification tasks, challenges remain: low tumor classification accuracy, weak feature extraction capability, and coarse datasets. To address these challenges, this study presents a classification network with multi-scale and deep feature extraction. Method First, we extract the region of interest (ROI) according to the contours of the liver tumors labeled by experienced radiologists, along with ROIs of healthy livers. The ROI is extracted to capture the features of the lesion area and the surrounding tissue, so its size varies with the size of the lesion. Then, the pixel values are converted and data augmentation is performed. CT intensities are expressed in Hounsfield units (HU): CT values lie in the range (-1024, 3071), whereas the stored pixel values of digital imaging and communications in medicine (DICOM) images lie in (0, 4096), so the DICOM pixel values have to be converted to CT values. We first read rescale_intercept and rescale_slope from the DICOM header file and then apply the linear rescale formula to convert.
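The conversion above is the standard DICOM linear rescale, HU = pixel × rescale_slope + rescale_intercept. A minimal pure-Python sketch, with typical (illustrative, not paper-specific) CT header values:

```python
def pixel_to_hu(pixel_value, rescale_slope, rescale_intercept):
    """Convert a stored DICOM pixel value to a CT value in Hounsfield units
    via the standard linear rescale: HU = pixel * slope + intercept."""
    return pixel_value * rescale_slope + rescale_intercept

# Typical CT headers carry slope 1 and intercept -1024, so stored pixel 0
# maps to -1024 HU (air) and stored pixel 1024 maps to 0 HU (water).
print(pixel_to_hu(0, 1, -1024))     # -1024
print(pixel_to_hu(1024, 1, -1024))  # 0
```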
Thereafter, we limit the CT values of the liver datasets to the [-100, 400] HU window to avoid the influence of background noise from unrelated organs or tissues. We perform several data augmentation methods, such as flipping, rotation, and other transforms, to expand the diversity of the datasets. These images are then fed into MD_SENet for classification. MD_SENet is an SE_ResNet-like convolutional neural network that achieves end-to-end classification: SE_ResNet automatically learns the importance of each channel to strengthen useful features and suppress useless ones, and MD_SENet is much deeper than SE_ResNet. Our contributions are the following: 1) Hierarchical residual-like connections improve multi-scale expression and increase the receptive field of each network layer. The image features after a 1×1 convolution layer are divided into four groups, and each group passes through 3×3 residual-like convolution groups, which improves the multi-scale feature extraction of the network and enhances the acquisition of lesion-area features. 2) Channel attention and spatial attention further focus on the effective information in medical images. The feature maps first pass through the channel attention module, whose input and output are multiplied before entering the spatial attention module; the output of the spatial attention module is then multiplied with its input. This pays more attention to the features of the lesion area and reduces the influence of background noise. 3) Atrous convolutions are connected in parallel, following the atrous spatial pyramid pooling design, and 1×1 convolution layers then strengthen the features. Finally, we concatenate the outputs and use softmax for classification. In this way, we enlarge the receptive field without reducing feature resolution, which improves the feature expression ability and effectively prevents the loss of information.
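The [-100, 400] HU liver window described above can be sketched in plain Python as follows; the rescaling of the clipped values to [0, 1] is a common preprocessing assumption, not something the abstract states:

```python
HU_MIN, HU_MAX = -100, 400  # liver window from the text

def window_liver(hu_values, hu_min=HU_MIN, hu_max=HU_MAX):
    """Clip CT values to the liver window and scale them to [0, 1].
    Values below the window map to 0.0, values above map to 1.0."""
    windowed = []
    for hu in hu_values:
        hu = max(hu_min, min(hu, hu_max))                   # clip to [-100, 400] HU
        windowed.append((hu - hu_min) / (hu_max - hu_min))  # scale to [0, 1]
    return windowed

print(window_liver([-1024, -100, 150, 400, 3071]))  # [0.0, 0.0, 0.5, 1.0, 1.0]
```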
4) Ordinary convolutions are replaced by octave convolutions to reduce the number of parameters and improve classification performance. In this study, we compared the results of DenseNet, ResNet, MnasNet, MobileNet, ShuffleNet, SK_ResNet, and SE_ResNet with those of our MD_SENet, all trained on the liver dataset. Owing to the limitation of graphics processing unit (GPU) memory, we set a batch size of 16 with Adam optimization and a learning rate of 0.002 for 150 epochs. Experiments were implemented in the PyTorch framework on Ubuntu 16.04, and all of them used an NVIDIA GeForce GTX 1060 Ti GPU to verify the effectiveness of the proposed method. Result For the liver dataset, the training set consists of 4096 images and the test set of 1021 images. The classification accuracy of the proposed method is 87.74%, which is 9.92% higher than the baseline (SE_ResNet101). Our model achieves the best results compared with state-of-the-art networks, reaching 86.04% recall, 87% precision, and an 86.42% F1-score across the evaluation metrics. Ablation experiments further verify the effectiveness of the method. Conclusion This study proposes a method that classifies liver tumors accurately. Integrated into professional medical software, it can provide a reliable basis for physicians' early diagnosis and treatment.
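As a sanity check on how the reported recall, precision, and F1-score relate, a small pure-Python sketch with illustrative counts (not data from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from true-positive,
    false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Illustrative counts only: 80 correct positives, 20 false alarms, 20 misses.
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
print(p, r, f1)  # precision 0.8, recall 0.8, F1 ≈ 0.8
```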
Keywords:deep learning  liver lesion classification  multi-scale features  feature extraction  dilated convolution
