Rethinking pre-training on medical imaging
Affiliation: 1. School of Information Science and Engineering, Huaqiao University, Xiamen, China; 2. School of Engineering, Huaqiao University, Quanzhou, China; 3. Xiamen Key Laboratory of Mobile Multimedia Communications, Xiamen, China; 1. Department of Computer Science, Stanford University, Stanford, CA, USA; 2. Department of Electrical Engineering, Stanford University, Stanford, CA, USA; 3. Department of Neurology, Stanford University, Stanford, CA, USA; 4. Department of Radiology, Stanford University, Stanford, CA, USA; 5. Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
Abstract: Transfer learning from natural image datasets such as ImageNet is common when applying deep learning to medical imaging. However, the modalities of natural and medical images differ considerably, and it is questionable whether recent medical research should keep preferring ImageNet over medical data. In this study, we investigated the properties of medical pre-training and its transfer effectiveness on various medical tasks. Through an intuitive convolution-based analysis, we characterized the modality properties of the images. Surprisingly, medical pre-training showed exceptional performance on a classification task but not on a segmentation task, because medical data are visually homogeneous and lack morphological information. Using data with diverse modalities helped overcome these drawbacks, allowing medical pre-training to match ImageNet pre-training on both tasks while using considerably fewer samples than ImageNet. Finally, a study of learned representations and realistic scenarios indicated that while ImageNet remains the best choice for medical imaging, medical pre-training has significant potential.
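As a concrete illustration of the transfer-learning setup the abstract describes, the sketch below loads an ImageNet-pretrained CNN and swaps its classification head for a medical classification task. This is a minimal sketch, not the authors' code: the choice of ResNet-50, the two-class setup, the layer-freezing policy, and the hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# fine-tuning an ImageNet-pretrained ResNet-50 on a medical classification task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical, e.g. benign vs. malignant

# Start from ImageNet weights, then replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optionally freeze early layers so only high-level features are re-learned.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of medical images and labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same pattern applies when the pre-training source is a medical dataset instead of ImageNet: only the starting weights change, while the fine-tuning loop stays the same.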
Keywords: Transfer learning; Medical image analysis; Convolutional neural network; Survival prediction
This article is indexed in ScienceDirect and other databases.