Similar Literature
20 similar documents found; search time 15 ms
1.
This paper proposes a new method for image feature extraction and suspicious-region marking in breast cancer pathology image analysis. Deep neural networks such as VGG, GoogleNet and ResNet require large numbers of annotated samples for training, yet annotating medical images is expensive, so sufficient training data for such complex networks are rarely available. Drawing on the idea of the generative adversarial network (GAN), this paper proposes a weakly supervised network for marking suspicious regions in pathology images: a small amount of labelled pathology data is first used to train a classification model that judges whether an image shows breast cancer, and the discriminative features extracted by this network are then fused to mark suspicious regions. On an existing overseas breast cancer pathology image dataset, the proposed network achieves an average accuracy of 83.8%, about 3 percentage points higher than a classification method based on a convolutional neural network (CNN), indicating that the extracted features are more discriminative and not only improve classification accuracy but also better support marking suspicious regions in pathology images.
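A minimal sketch (not the authors' code) of the region-marking idea: a small CNN classifier is trained on labelled pathology images, and its discriminative conv features are reweighted by the classifier weights into a class-activation-style heat map over suspicious regions. The class and function names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathologyNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64, num_classes)          # applied after global pooling

    def forward(self, x):
        fmap = self.features(x)                       # (N, 64, H/4, W/4)
        logits = self.fc(fmap.mean(dim=(2, 3)))       # global average pooling
        return logits, fmap

def suspicious_region_map(model, image, target_class=1):
    """Weight the conv feature maps by the class weights to highlight suspicious regions."""
    _, fmap = model(image)
    weights = model.fc.weight[target_class]           # (64,)
    cam = torch.einsum('c,nchw->nhw', weights, fmap)  # weighted sum over channels
    cam = F.relu(cam)
    return F.interpolate(cam.unsqueeze(1), size=image.shape[-2:], mode='bilinear')

model = PathologyNet()
heatmap = suspicious_region_map(model, torch.randn(1, 3, 224, 224))
```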

2.
To improve the efficiency and accuracy of breast cancer diagnosis, this paper proposes a breast ultrasound tumour recognition algorithm based on an improved YOLOv3 to assist physicians in diagnosing breast cancer. First, an SE module is fused into the Res2Net network to build an SE-Res2Net network that replaces the feature extraction network of the original YOLOv3, strengthening the model's feature extraction. A new downsample block is then built to address the information loss that the downsampling operation in the original model is prone to. Finally, to further improve feature extraction, a Res-DenseNet network that combines the advantages of residual connections and dense connections replaces the original residual connection scheme. Experimental results show that the improved YOLOv3 raises mAP by 4.56% over the original YOLOv3 and achieves good detection results.
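A minimal PyTorch sketch of a squeeze-and-excitation (SE) block of the kind the paper inserts into Res2Net; the surrounding Res2Net/YOLOv3 wiring and the new downsample block are omitted.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling
        return x * w.view(n, c, 1, 1)          # excitation: channel-wise reweighting

x = torch.randn(2, 256, 52, 52)                # a YOLOv3-sized feature map
print(SEBlock(256)(x).shape)                   # torch.Size([2, 256, 52, 52])
```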

3.
The deteriorating environment has driven cancer incidence steadily upward, and in 2018 breast cancer had the highest incidence of all cancers worldwide. Mammography is affordable and easy to operate, and is currently regarded as the best screening method and the most effective way to detect breast cancer early. Because mammograms are hard to discern and their features are not obvious, an attention memory network based on an RNN+CNN architecture is proposed for their classification. The network contains an attention memory module and a convolutional residual module. In the attention memory module, the attention sub-module extracts mammographic features and the memory sub-module adds attention weights to the RNN to mimic how a person emphasises the extracted key information; the convolutional residual module then classifies the images with a CNN. The novelty lies in proposing an attention memory network for mammogram classification and in introducing attention weights into the RNN+CNN structure to extract key information and strengthen the feature description. Experiments on the INbreast mammography dataset show that the attention memory network runs in less than half the time of pretrained Inception v2, ResNet50 and VGG16 networks while achieving higher classification accuracy.
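An illustrative sketch (assumed, not taken from the paper) of the memory idea: a GRU runs over a sequence of image features and learned attention weights emphasise the key steps before classification.

```python
import torch
import torch.nn as nn

class AttentionMemory(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)

    def forward(self, seq):                      # seq: (N, T, feat_dim)
        h, _ = self.gru(seq)                     # (N, T, hidden)
        a = torch.softmax(self.attn(h), dim=1)   # attention weight per time step
        return (a * h).sum(dim=1)                # attention-weighted summary vector

summary = AttentionMemory()(torch.randn(4, 32, 128))   # (4, 64)
```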

4.
One of the fast-growing diseases seriously affecting women’s health is breast cancer, so identifying and detecting it at an early stage is essential. This paper applies deep learning, a more advanced methodology than conventional machine learning, to classify breast cancer accurately. Deep learning algorithms learn, extract and classify features fully automatically and are well suited to any image, from natural to medical. Existing methods relied on conventional and machine learning techniques for processing natural and medical images, which is inadequate where coarse structure matters most: most input images are downscaled, making it impossible to recover all the hidden details needed for accurate classification. Deep learning algorithms, by contrast, are highly efficient and fully automatic, have greater learning capacity through additional hidden layers, extract as much hidden information as possible from the input images, and provide accurate predictions. Hence this paper uses AlexNet, a deep convolutional neural network, to classify breast cancer in mammogram images, and evaluates the proposed network structure by comparing it with existing algorithms.
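A minimal sketch of the kind of AlexNet-based classifier described above, using torchvision's stock AlexNet with its final layer replaced for a two-class mammogram problem; the class count and input size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=None)            # or ImageNet-pretrained weights
model.classifier[6] = nn.Linear(4096, 2)        # benign vs. malignant

logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)                             # torch.Size([1, 2])
```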

5.
This paper presents the methodology used to establish a performance goal and identify the diagnostic features in a program to develop an automated system for breast cancer detection based on thermographic principles. The receiver operating characteristic (ROC) curve approach is used to evaluate both observer classification and classification rules based on an observer's evaluation of diagnostic features. The multivariate logistic function is applied to two observer-evaluated feature sets from 623 normal and 122 breast-cancer-diagnosed subjects. It is shown that the observer outperforms the multivariate logistic classifier built on the diagnostic features.
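A sketch of the evaluation idea with synthetic stand-in data: fit a multivariate logistic classifier on diagnostic features and summarise classifier performance with the ROC curve and its AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic features standing in for the 623 normal and 122 cancer subjects.
X = np.vstack([rng.normal(0.0, 1.0, (623, 5)), rng.normal(0.8, 1.0, (122, 5))])
y = np.r_[np.zeros(623), np.ones(122)]

clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]
fpr, tpr, _ = roc_curve(y, scores)          # points of the ROC curve
print("AUC:", roc_auc_score(y, scores))
```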

6.
Objective: Breast cancer recognition with deep learning is a challenging task. Most studies feed the model histopathology images at a single magnification, ignoring the inherently multi-magnification nature of breast histopathology, while the few studies that take images at different magnifications as input suffer from low feature utilisation and a lack of information exchange between magnifications. Method: To address these problems, a convolutional-network improvement strategy based on multi-scale features and a grouped attention mechanism is proposed. It consists of an information interaction module and a feature fusion module. The former uses spatial attention to strengthen the correlation between images at different magnifications and feeds the weighted, accumulated result back to the original branch for dynamic selection, allowing features to flow between branches; the latter uses grouped attention to raise feature utilisation and a feature pyramid to remove the receptive-field differences between images. Results: The strategy was applied to several convolutional networks and compared with recent methods in five-fold cross-validation on the public Camelyon16 dataset, reporting the mean and standard deviation of every metric. Compared with convolutional networks that take single-scale images as input, the improved method raises accuracy by 0.9%–1.1% and F1 score by 1.1%–1.2%; compared with TransPath, the strongest baseline, the improved DenseNet201 (dense convolutional network) gains 0.6% in accuracy, 0.8% in precision and 0.6% in F1 score, with lower standard deviations than TransPath on every metric, indicating better stability. Conclusion: The proposed strategy compensates for the shortcomings of ordinary multi-scale networks, is reasonably general, and yields better breast cancer classification performance.
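A hypothetical sketch of one ingredient only, a grouped channel-attention step: channels are split into groups and each group is reweighted from its own pooled statistics. The cross-magnification interaction module and the feature pyramid are not reproduced.

```python
import torch
import torch.nn as nn

class GroupAttention(nn.Module):
    def __init__(self, channels, groups=8):
        super().__init__()
        self.groups = groups
        self.fc = nn.Linear(channels // groups, channels // groups)

    def forward(self, x):
        n, c, h, w = x.shape
        g = x.view(n, self.groups, c // self.groups, h, w)
        w_attn = torch.sigmoid(self.fc(g.mean(dim=(3, 4))))   # per-group channel weights
        return (g * w_attn.unsqueeze(-1).unsqueeze(-1)).view(n, c, h, w)

out = GroupAttention(64)(torch.randn(2, 64, 28, 28))           # same shape as the input
```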

7.
This paper presents an automatic diagnosis system for detecting breast cancer based on association rules (AR) and a neural network (NN). AR is used to reduce the dimensionality of the breast cancer database and the NN performs the intelligent classification. The performance of the proposed AR + NN system is compared with an NN-only model. AR reduces the input feature space from nine to four dimensions. In the test stage, 3-fold cross-validation was applied to the Wisconsin breast cancer database to evaluate the proposed system; its correct classification rate is 95.6%. This research demonstrates that AR can reduce the dimensionality of the feature space and that the proposed AR + NN model can yield fast automatic diagnostic systems for other diseases.
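A sketch of the classification stage only, with the association-rule reduction from nine to four features replaced by a plain column selection for illustration; note that scikit-learn ships the 30-feature Diagnostic set rather than the original 9-feature Wisconsin data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_reduced = X[:, :4]                                   # stand-in for the AR-selected features

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000))
print(cross_val_score(clf, X_reduced, y, cv=3).mean())  # 3-fold cross-validation accuracy
```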

8.
Mammography is currently an important means for the early detection and diagnosis of breast cancer, but masses in mammograms have blurred edges and are relatively hard to classify, so improving the diagnostic accuracy for breast masses and thereby enabling earlier prevention and treatment remains a major challenge in medicine. Targeting the characteristics of breast masses, a new network (DSAMNet) combining a dense convolutional network (DenseNet) with squeeze-and-excitation (SE) modules is proposed; it fuses the advantages of both, strengthening feature reuse while recalibrating features during extraction. Depending on where the SE modules are embedded in DenseNet, the models SE-DenseNet-A, SE-DenseNet-B and SE-DenseNet-C are proposed, and improving the pooling function of SE-DenseNet yields DSAMNet-A, DSAMNet-B and DSAMNet-C. Networks of different structures and depths were trained and tested on the public CBIS-DDSM dataset. Experiments show that DSAMNet-B performs best, with an accuracy 10.8% higher than the DenseNet model and an AUC of 0.929.
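An illustrative sketch of combining DenseNet feature reuse with SE-style channel recalibration; the A/B/C insertion positions and the modified pooling function are not reproduced, and the attachment point shown here is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEDenseNet(nn.Module):
    def __init__(self, num_classes=2, reduction=16):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features                    # dense blocks
        c = backbone.classifier.in_features                  # 1024 channels
        self.se = nn.Sequential(nn.Linear(c, c // reduction), nn.ReLU(),
                                nn.Linear(c // reduction, c), nn.Sigmoid())
        self.classifier = nn.Linear(c, num_classes)

    def forward(self, x):
        f = self.features(x)                                 # (N, 1024, 7, 7) for 224 input
        pooled = f.mean(dim=(2, 3))
        f = f * self.se(pooled).unsqueeze(-1).unsqueeze(-1)  # recalibrate channels
        return self.classifier(f.mean(dim=(2, 3)))

print(SEDenseNet()(torch.randn(1, 3, 224, 224)).shape)       # torch.Size([1, 2])
```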

9.
To accurately identify microcalcification clusters in mammograms for computer-aided diagnosis and early prevention of breast cancer, a microcalcification-cluster detection method combining a fine-grained cascade enhancement network (FCE-Net) and multi-scale feature fusion (MFF) is proposed. FCE-Net is first built to accumulate layer-wise weights in its convolution modules and strengthen the multi-branch structure, yielding fine-grained convolutional feature maps. An MFF candidate-detection network is then built, fusing multi-scale features through two-fold upsampling to obtain object confidences and region coordinates. Finally, targets are classified and bounding boxes refined in a region-of-interest pooling layer. Experiments on the MIAS dataset show that combining FCE-Net with MFF improves deep feature extraction for small targets and increases the accuracy of both classification and localisation.
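A sketch of the multi-scale feature fusion (MFF) step described above: a deeper, coarser feature map is upsampled by a factor of two and concatenated with the finer map before candidate detection. Channel counts and sizes are placeholders.

```python
import torch
import torch.nn.functional as F

fine = torch.randn(1, 128, 64, 64)      # shallow, high-resolution features
coarse = torch.randn(1, 256, 32, 32)    # deep, low-resolution features

up = F.interpolate(coarse, scale_factor=2, mode='nearest')   # two-fold upsampling
fused = torch.cat([fine, up], dim=1)                          # (1, 384, 64, 64)
```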

10.
Objective: Breast cancer is a severe, high-incidence disease among women, and early detection of breast cancer is an important problem worldwide. Current diagnostic routes include clinical examination, imaging and histopathology; common imaging modalities are X-ray, computed tomography (CT) and magnetic resonance, and mammograms are already used to detect early cancers, yet manually segmenting masses in local mammograms is very time-consuming and error-prone. An integrated computer-aided diagnosis (CAD) system is therefore needed to help radiologists identify breast masses automatically and precisely. Method: Building on deep learning segmentation frameworks, different segmentation models are compared, and a Swin architecture replaces the downsampling and upsampling in the UNet structure so that local and global features interact. A Transformer captures more global information, and features from different levels replace the short connections to achieve multi-scale feature fusion and precise segmentation. After segmentation, a Multi-Attention ResNet classification network grades the cancerous region to further support diagnosis. Results: The model segments masses accurately on the INbreast mammography dataset, reaching an IoU (intersection over union) of 95.58% and a Dice coefficient of 93.45%, 4%–6% higher than other segmentation models; four-class classification of the resulting binary segmentation maps reaches an accuracy of 95.24%. Conclusion: The proposed TransAS-UNet segmentation method performs well, has clinical significance, and outperforms the compared two-dimensional medical image segmentation methods.
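A short sketch of the two reported segmentation metrics, IoU and the Dice coefficient, computed on binary masks.

```python
import torch

def dice_iou(pred, target, eps=1e-6):
    pred, target = pred.float(), target.float()
    inter = (pred * target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (pred.sum() + target.sum() - inter + eps)
    return dice.item(), iou.item()

pred = (torch.rand(1, 1, 256, 256) > 0.5).int()
print(dice_iou(pred, pred))   # (1.0, 1.0) on identical masks
```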

11.
Breast cancer is the most common cancer among women apart from skin cancer, but early detection improves the chances of survival, and data mining is widely used for this purpose. As technology develops, large numbers of breast tumour features are being collected; using all of them for cancer recognition is expensive and time-consuming, so feature extraction is necessary to increase classification accuracy. The goal of this work is to recognise breast cancer using extracted features. To reach this goal, a combination of clustering and classification is used: particle swarm optimisation recognises tumour patterns, the membership degree of each tumour to the patterns is calculated and treated as a new feature, and a support vector machine then classifies the tumours. Finally, the method is analysed in terms of accuracy, specificity, sensitivity and CPU time on the Wisconsin Diagnostic Breast Cancer data set.
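A sketch of the second half of the pipeline: membership degrees of each tumour to cluster patterns are appended as extra features before an SVM. KMeans stands in here for the particle swarm optimisation the paper uses to find the patterns, and inverse-distance weights stand in for its membership calculation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).cluster_centers_
dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
inv = 1.0 / (dists + 1e-9)
membership = inv / inv.sum(axis=1, keepdims=True)      # degree of belonging to each pattern
X_aug = np.hstack([X, membership])                     # original features + memberships

print(cross_val_score(SVC(), X_aug, y, cv=5).mean())
```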

12.
Breast cancer is one of the most common malignant tumours in women and a serious threat to patients' health, so multi-class classification of mammograms is of great value for clinical diagnosis. Traditional convolutional neural networks classify mammograms directly from high-level features, which gives limited accuracy. To improve accuracy, a "human-shaped" network model is constructed: stacked convolution and max-pooling layers extract low-level image features; stacked convolution and unpooling layers map the features back to image-like feature maps; stacked convolution and max-pooling layers then extract higher-level features, which are concatenated with the earlier low-level features; and the concatenated features pass through a global max-pooling layer to produce the final classification. Experiments on 1,824 mammograms from the Sun Yat-sen University Cancer Center show an accuracy of 74.54%, better than existing related network models.
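A heavily reduced sketch of the "human-shaped" idea: features are downsampled, mapped back up to image-like maps, re-encoded, concatenated with the earlier low-level features and globally max-pooled for classification. Layer sizes and depths are illustrative only.

```python
import torch
import torch.nn as nn

class HShapedNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        low = self.enc1(x)                      # low-level features at 1/2 resolution
        high = self.enc2(self.dec(low))         # re-encoded higher-level features
        cat = torch.cat([low, high], dim=1)     # cascade low- and high-level features
        return self.fc(cat.amax(dim=(2, 3)))    # global max pooling, then classification

print(HShapedNet()(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 4])
```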

13.
14.
Objective: To improve the performance of computer-aided diagnosis (CAD) models for breast cancer based on single-modal B-mode ultrasound, a breast ultrasound CAD algorithm based on two-stage deep transfer learning (TSDTL) is proposed, which transfers useful information from ultrasound elastography images into the B-mode-based CAD model. Method: In the first transfer stage, dual-modal ultrasound image reconstruction is treated as a self-supervised task to train an associated multi-modal deep convolutional neural network, enabling information to be exchanged and transferred between B-mode and elastography images. In the second stage, following an implicit learning-using-privileged-information (LUPI) paradigm, breast tumour classification on dual-modal ultrasound images further strengthens feature fusion and information interaction between the two modalities under label guidance. The classification branch corresponding to B-mode images is then fine-tuned with single-modal B-mode data to obtain the final B-mode breast cancer classification model. Results: The algorithm was validated on a dual-modal breast tumour ultrasound dataset. By transferring information from elastography, TSDTL achieves an average classification accuracy of 87.84±2.08%, sensitivity of 88.89±3.70%, specificity of 86.71±2.21% and Youden index of 75.60±4.07% in B-mode-based diagnosis, outperforming a classifier trained directly on single-modal B-mode images as well as several typical transfer learning algorithms. Conclusion: Through two stages of deep transfer learning, TSDTL effectively transfers information from ultrasound elastography into the B-mode-based breast cancer CAD model, improving diagnostic performance and showing potential practical feasibility.
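A reduced sketch of the final step only: after the two cross-modal stages, the branch that takes B-mode images is fine-tuned on single-modal data. The backbone, the choice of frozen layers and the learning rate are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

bmode_branch = models.resnet18(weights=None)        # stands in for the B-mode channel
bmode_branch.fc = nn.Linear(bmode_branch.fc.in_features, 2)

# Freeze early layers assumed to carry the cross-modal knowledge; fine-tune the rest.
for name, p in bmode_branch.named_parameters():
    p.requires_grad = name.startswith(('layer4', 'fc'))

optimizer = torch.optim.Adam(
    (p for p in bmode_branch.parameters() if p.requires_grad), lr=1e-4)
```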

15.

One of the most important processes in the diagnosis of breast cancer, which has a leading mortality rate among women, is detecting the mitosis stage at the cellular level. In the literature, many computer-aided diagnosis (CAD) systems have been proposed for detecting mitotic cells in breast cancer histopathological images. This study focuses on a comparative evaluation of conventional and deep-learning-based feature extraction methods for automatic mitosis detection in histopathological images. In the conventional approach, various handcrafted features are extracted with textural/spatial, statistical and shape-based methods, while the convolutional neural network proposed in the deep learning approach aims at an architecture that extracts the features of small cellular structures such as mitotic cells. Mitosis detection/counting is an important process that helps assess how aggressive or malignant the cancer's spread is. In the proposed study, approximately 180,000 non-mitotic and 748 mitotic cells are extracted for the evaluations; because the numbers of mitotic and non-mitotic cells are so imbalanced, the classification stage cannot be performed properly on the raw data, so the random under-sampling boosting (RUSBoost) method is exploited to overcome this problem. The proposed framework is tested on the mitosis detection dataset of the International Conference on Pattern Recognition (ICPR) 2014 contest. The deep learning approach achieves 79.42% recall, 96.78% precision and 86.97% F-measure, outperforming the handcrafted methods. A client/server-based framework has also been developed as a secondary decision support system for pathologists in hospitals, so that they can detect mitotic cells in various histopathological images more easily through the necessary interfaces.
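A sketch of handling the mitotic/non-mitotic imbalance with random under-sampling boosting, using imbalanced-learn's RUSBoostClassifier on synthetic stand-in features (with a smaller majority class than the real ~180,000 cells, for speed).

```python
import numpy as np
from imblearn.ensemble import RUSBoostClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (18000, 20)), rng.normal(0.7, 1.0, (748, 20))])
y = np.r_[np.zeros(18000), np.ones(748)]                   # heavily imbalanced labels

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = RUSBoostClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
p, r, f, _ = precision_recall_fscore_support(yte, clf.predict(Xte), average='binary')
print(f"precision={p:.3f} recall={r:.3f} F-measure={f:.3f}")
```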


16.
Oesophageal cancer is mainly diagnosed by physicians reading chest computed tomography (CT) images, but subjective judgement is easily disturbed by external factors, so the diagnosis can deviate from the true condition. Deep-learning image segmentation networks are therefore valuable for assisting the diagnosis of oesophageal tumours. Because the oesophagus occupies a small, low-contrast region of a chest CT image, traditional segmentation networks struggle to delineate oesophageal tumours accurately. To segment them precisely, the segmentation network Concat-UNet is proposed. Based on U-Net, it keeps the U-shaped symmetric encoder-decoder architecture while improving the convolution modules, introducing skip connections and batch normalisation layers and fusing each module's original input with its extracted-feature output to strengthen feature extraction. On this basis, the network is trained jointly with a combination of the BCEWithLogits and Dice loss functions. Experiments show that, compared with SegNet, ERFNet, U-Net and other networks, Concat-UNet reaches a precision of 91.87% on the oesophageal cancer dataset, 11.64 percentage points above the baseline U-Net, and delivers better segmentation.
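A sketch of the joint training objective described above: BCEWithLogits combined with a soft Dice loss on the predicted tumour mask; the 0.5/0.5 weighting is an assumption.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def joint_loss(logits, target, w=0.5):
    return w * bce(logits, target) + (1 - w) * dice_loss(logits, target)

logits = torch.randn(2, 1, 128, 128)                    # network output before sigmoid
target = (torch.rand(2, 1, 128, 128) > 0.5).float()     # ground-truth binary mask
print(joint_loss(logits, target).item())
```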

17.
Breast cancer occurs when cells in the breast begin to grow out of control and invade nearby tissues or spread throughout the body; it is one of the leading causes of death in women. Cancer development appears to raise the temperature on the breast surface. The limitations of mammography as a screening modality, especially in young women with dense breasts, necessitate novel and more effective screening strategies with high sensitivity and specificity. The aim of this study was to evaluate the feasibility of discrete thermal data (DTD) as a potential tool for the early detection of breast cancer. Our protocol uses 1,170 sixteen-sensor data records collected from 54 individuals covering three different breast conditions: normal, benign and cancerous. We compared two kinds of neural network classifiers, the feedforward neural network and the radial basis function classifier, fed with the readings from the 16 temperature sensors on the surface of the two breasts (eight sensors on each side). We demonstrated sensitivities of 84% and 91% for the feedforward and radial basis function classifiers, respectively, with a specificity of 100%. Our classifying systems are ready to run on large data sets.
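A sketch of the feedforward-network variant only: 16 surface-temperature readings per record as inputs and three breast conditions as outputs, with random placeholder data; the radial basis function classifier is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(36.5, 0.6, size=(1170, 16))   # 16-sensor temperature vectors (placeholder)
y = rng.integers(0, 3, size=1170)            # normal / benign / cancer labels (placeholder)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
print(clf.score(X, y))
```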

18.
This paper describes an algorithm for constructing a single-hidden-layer feedforward neural network. A distinguishing feature of the algorithm is that it uses the quasi-Newton method to minimise the sequence of error functions associated with the growing network. Experimental results indicate that the algorithm is very efficient and robust. It was tested on two problems: the n-bit parity problem and the breast cancer diagnosis problem from the University of Wisconsin Hospitals. For the n-bit parity problem, the algorithm constructed neural networks with fewer than n hidden units that solved the problem for n = 4, ..., 7. For the cancer diagnosis problem, the constructed networks had a small number of hidden units and high accuracy rates on both the training and the testing data.
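A sketch of the parity benchmark with an off-the-shelf single-hidden-layer network trained by a quasi-Newton solver (L-BFGS); the paper's constructive unit-adding loop is not reproduced.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array(list(itertools.product([0, 1], repeat=4)))   # all 4-bit patterns
y = X.sum(axis=1) % 2                                      # parity of each pattern

clf = MLPClassifier(hidden_layer_sizes=(4,), solver='lbfgs',
                    max_iter=5000, random_state=0)
print(clf.fit(X, y).score(X, y))                           # often reaches 1.0
```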

19.
Histopathological images are the gold standard for identifying breast cancer, so automatic, accurate classification of breast histopathology images has important clinical value. To raise the classification accuracy of breast histopathology images to a level suitable for clinical use, a high-accuracy breast cancer classification method fusing spatial and channel features is proposed. The method applies colour normalisation to the pathology images and enlarges the dataset with data augmentation, then fuses the spatial and channel feature information of the images based on the convolutional neural network (CNN) model DenseNet and squeeze-and-excitation networks (SENet). According to the position and number of the inserted squeeze-and-excitation (SE) modules, three models are designed: BCSCNet I, BCSCNet II and BCSCNet III. Experiments on the breast cancer histopathology image dataset BreaKHis first verify that colour normalisation and data augmentation improve classification accuracy, and then show that BCSCNet III is the most accurate of the three. Its binary classification accuracy ranges from 99.05% to 99.89%, 0.42 percentage points above the breast cancer histopathology image classification network BHCNet, and its multi-class accuracy ranges from 93.06% to 95.72%, 2.41 percentage points above BHCNet. The results show that BCSCNet classifies breast histopathology images accurately and provides reliable support for computer-aided breast cancer diagnosis.
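A sketch of the preprocessing half only: data augmentation with torchvision transforms, with colour normalisation approximated here by per-channel standardisation; the actual stain-normalisation method and parameter values are assumptions.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    # Stand-in for colour normalisation of the stained pathology images.
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```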

20.
Breast cancer is the leading type of cancer diagnosed in women. For years, human limitations in interpreting thermograms posed a considerable challenge, but the introduction of computer-assisted detection/diagnosis (CAD) has addressed this problem. This review compares approaches based on neural networks and fuzzy systems that have been implemented in different CAD designs. The greatest improvement in CAD systems was achieved with a combination of fuzzy logic and artificial neural networks in the form of the FALCON-AART complementary learning fuzzy neural network (CLFNN); a CAD design based on FALCON-AART reached an overall accuracy of nearly 90%. This confirms that CAD systems are indeed a valuable addition to the efforts to diagnose breast cancer. The lower cost and high performance of new infrared systems, combined with accurate CAD designs, can promote the use of thermography in many breast cancer centres worldwide.

