Similar Documents

20 similar documents retrieved.
1.
Atrial fibrillation (AF) is a cardiac disorder that originates in the atria. An estimated 30 million people worldwide are affected. Although treatment can reduce the associated risks, AF is often silent, making timely diagnosis and intervention difficult. The main diagnostic approaches are cardiac palpation, photoplethysmography, oscillometric blood pressure monitoring, electrocardiography, and image-based methods. Because AF is mostly paroxysmal, the first four approaches may fail to capture an episode, and they suffer from long diagnostic cycles, high cost, low accuracy, and dependence on the examining physician. The anatomy of the left atrium provides important information on AF pathology and research progress, so image-based AF analysis requires accurate segmentation of the left atrium: from the segmentation result, clinical indices such as ejection fraction, left atrial volume, and left atrial strain and strain rate are computed to quantitatively assess left atrial function. Image-based diagnosis is less susceptible to human interference and can process data from large patient cohorts, helping physicians detect AF early, intervene promptly, and improve the understanding of AF symptoms and clinical diagnosis; it is therefore of great significance in clinical practice. This paper groups existing segmentation methods into traditional methods, deep learning-based methods, and combinations of the two. The results of these methods provide the basis for subsequent AF analysis, but many current segmentation methods are semi-automatic, their results are insufficiently accurate, and they rely on small, manually annotated training datasets. We summarize the advantages and disadvantages of each class of methods, review the available public datasets and the clinical applications of AF analysis, and discuss future development trends.
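The clinical indices derived from a left-atrial segmentation are simple volume arithmetic; a minimal sketch of the ejection (emptying) fraction computation, with illustrative numbers rather than values from any surveyed method:

```python
def ejection_fraction(max_volume_ml: float, min_volume_ml: float) -> float:
    """Emptying (ejection) fraction, in percent, from the maximal and
    minimal chamber volumes (mL) derived from a segmentation."""
    if max_volume_ml <= 0:
        raise ValueError("maximal volume must be positive")
    return 100.0 * (max_volume_ml - min_volume_ml) / max_volume_ml

# In practice the volumes come from counting segmented voxels
# and multiplying by the voxel size.
print(ejection_fraction(58.0, 29.0))  # 50.0
```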

2.
Lung nodule detection from chest CT images is an important means of early lung cancer screening, and false-positive screening of candidate nodules is a key part of nodule detection. Traditional detection methods depend heavily on prior knowledge, involve cumbersome pipelines, and perform poorly. In deep learning, convolutional neural networks can extract image features through a generic learning process. Building on densely connected networks, this paper designs a 3D false-positive screening model for nodules, the three-dimensional convolutional neural network model (TDN-CNN)...

3.
Liu  Liying  Si  Yain-Whar 《The Journal of supercomputing》2022,78(12):14191-14214

This paper proposes a novel deep learning-based approach to financial chart pattern classification. Convolutional neural networks (CNNs) have made notable achievements in image recognition and computer vision applications; these networks are usually two-dimensional (2D CNNs). In this paper, we describe the design and implementation of one-dimensional convolutional neural networks (1D CNNs) for classifying chart patterns in financial time series. The proposed 1D CNN model is compared against support vector machines, extreme learning machines, long short-term memory networks, rule-based methods, and dynamic time warping. Experimental results on synthetic datasets show that the 1D CNN achieves the highest accuracy among all methods evaluated. Results on real datasets further show that the chart patterns identified by the 1D CNN are the most recognizable instances when compared with those classified by the other methods.
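The core operation a 1D CNN applies to a time series is a sliding dot product; a minimal NumPy sketch, in which the hand-set difference kernel is illustrative (a trained network learns its kernels):

```python
import numpy as np

def conv1d_valid(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """'Valid' 1-D cross-correlation: slide kernel w over series x."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

# A first-difference kernel responds positively to upward price moves.
series = np.array([1.0, 2.0, 4.0, 3.0, 5.0])
kernel = np.array([-1.0, 1.0])
print(conv1d_valid(series, kernel))  # [ 1.  2. -1.  2.]
```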


4.
Accurate segmentation of spinal magnetic resonance (MR) images is a prerequisite for spine registration, 3D reconstruction, and related techniques. Traditional segmentation methods for spinal MR images are cumbersome and imprecise. To overcome these drawbacks, this paper proposes an automatic deep learning-based segmentation method for spinal MR images. The method builds a symmetric-channel convolutional neural network to extract multi-scale image features, uses residual connections to counter network degradation during training, and uses skip connections to intermediate-layer features to reduce information loss. A convolutional block attention module is added to the network to attend to informative features in both the spatial and channel dimensions. Experiments show that the model achieves a mean Dice similarity coefficient (DSC) of 0.8619 on the test set, improvements of 15.34%, 7.08%, 5.79%, and 3.1% over the FCN, U-Net, DeeplabV3+, and UNet++ models, respectively. The model can be applied in clinical practice to improve the segmentation accuracy of spinal MR images.
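The Dice similarity coefficient used to score these models compares two binary masks; a minimal NumPy sketch, not the paper's code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) of two binary masks:
    2|A n B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 4))  # 0.6667
```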

5.
Objective: Accurate delineation of organs at risk (OARs) is a key step in radiotherapy. Manual delineation is time- and labor-intensive, and its precision is affected by image quality and the physician's subjective experience. This paper proposes a 2D cascaded convolutional neural network (CNN) model for automatic segmentation of OARs in radiotherapy. Methods: The model consists of a classifier and a segmentation network. The classifier uses VGG (Visual Geometry Group) 16 as its backbone and greatly reduces parameter count and computational complexity by removing convolutional layers and adding global pooling; the segmentation network is based on U-Net, replaces deconvolution with bilinear interpolation for upsampling feature maps, and introduces Dropout layers to mitigate overfitting. At prediction time, the classifier first selects the input slices containing the target organ, the segmentation network then segments the selected slices, and finally the results are refined, for example by removing small connected components. Results: The dataset comprises abdominopelvic CT (computed tomography) images of 89 cervical cancer patients, with manual delineations from several radiologists at the First Affiliated Hospital of the University of Science and Technology of China as the gold standard. The proposed classifier achieved an average classification accuracy, precision, recall, and F1-score of 98.36%, 96.64%, 94.1%, and 95.34%, respectively, over six OARs (left and right femurs, left and right femoral heads, bladder, and rectum). Building on this classification performance, the segmentation method achieved an average Dice coefficient of 92.94% on the test set. Conclusion: Compared with existing CNN segmentation models, the proposed method achieves the best segmentation performance; the classify-then-segment strategy effectively avoids the sparse-annotation problem and reduces false-positive segmentations. In addition, the method agrees well with professional radiologists' delineations, supporting more accurate and faster OAR segmentation in the clinic.
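The classifier metrics reported above (accuracy, precision, recall, F1) all follow from a binary confusion matrix; a minimal sketch with made-up counts, not the paper's data:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, p, r, f1 = classification_metrics(tp=90, fp=10, fn=6, tn=94)
print(round(acc, 3), round(p, 3), round(r, 3), round(f1, 3))  # 0.92 0.9 0.938 0.918
```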

6.

Precise crop classification from multi-temporal remote sensing images has important applications such as yield estimation and food transportation planning. However, mainstream convolutional neural networks based on 2D convolution collapse the time-series information. In this study, a 3D fully convolutional network (FCN) embedded with a global pooling module and channel attention modules is proposed to extract discriminative spatiotemporal representations of different crop types from multi-temporal high-resolution satellite images. Firstly, a novel 3D FCN structure is introduced to replace 2D FCNs and to improve on current 3D convolutional neural networks (CNNs) by providing a means to learn distinctive spatiotemporal representations of each crop type from the reshaped multi-temporal images. Secondly, to strengthen the learning of these spatiotemporal representations, our approach includes 3D channel attention modules, which regulate the between-channel consistency of the features from the encoder and the decoder, and a 3D global pooling module, which selects the most distinctive features at the top of the encoder. Experiments were conducted on two datasets with different crop types and time spans. Our results show that our method outperformed, in both accuracy and efficiency, several mainstream 2D FCNs as well as a recent 3D CNN designed for crop classification. The experimental data and source code are openly available at http://study.rsgis.whu.edu.cn/pages/download/.

7.
Objective: Saliency detection is a fundamental problem in image and vision research. Traditional models preserve the boundaries of salient objects well but assign them low confidence and suffer low recall, whereas deep learning models assign salient objects high confidence but produce coarse boundaries and lower precision. To exploit the strengths of both kinds of model while suppressing their weaknesses, this paper proposes a combined saliency model. Method: We first improve a recent densely connected convolutional network and train a fully convolutional network (FCN) saliency model based on it, and we select an existing superpixel-based saliency regression model. After obtaining the saliency maps of both models, a fusion algorithm combines the two results into the final output via the Hadamard product of the saliency maps and a one-to-one nonlinear mapping of per-pixel saliency values. Results: The method was compared with 10 recent methods on four datasets. On HKU-IS, the F-measure improved by 2.6% over the second-best model; on MSRA, the F-measure improved by 2.2% and MAE decreased by 5.6%; on DUT-OMRON, the F-measure improved by 5.6% and MAE decreased by 17.4%. An ablation experiment on MSRA verified the effectiveness of the fusion algorithm, showing that it improves saliency detection. Conclusion: The proposed saliency model combines the advantages of traditional and deep learning models and yields more accurate saliency detection results.
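The fusion step described above multiplies the two saliency maps element-wise and remaps the result; a minimal NumPy sketch in which the square-root remapping is an illustrative stand-in for the paper's one-to-one nonlinear mapping:

```python
import numpy as np

def fuse_saliency(deep: np.ndarray, traditional: np.ndarray) -> np.ndarray:
    """Hadamard (element-wise) product of two saliency maps in [0, 1],
    followed by a monotone nonlinear remapping back toward [0, 1]."""
    return np.sqrt(deep * traditional)

fcn_map = np.array([[0.9, 0.1], [0.8, 0.0]])  # deep-model confidence
sp_map = np.array([[0.4, 0.1], [0.9, 0.2]])   # superpixel-model confidence
print(fuse_saliency(fcn_map, sp_map))
```

The product suppresses pixels where either model disagrees, while the monotone remapping restores the dynamic range compressed by multiplication.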

8.
Objective: Accurate PET (positron emission tomography) tumor segmentation is clinically crucial for radiotherapy planning and for evaluating treatment outcomes. Because PET images have a low signal-to-noise ratio and limited spatial resolution, this paper proposes an automatic tumor segmentation method based on a deep convolutional U-Net with a pretrained encoder. Methods: The encoder of the model is replaced with a VGG19 encoder pretrained on ImageNet; a loss function based on the Jaccard distance is introduced to meet the need for sample reweighting; and DropBlock replaces conventional regularization to effectively avoid overfitting. Results: The PET database contains 1,309 images; professional radiologists provided tumor masks, tumor contours, and Gaussian-smoothed contours as the gold standard. Experiments show that the method performs well on tumor segmentation in PET images: the Dice coefficient, Hausdorff distance, Jaccard index, sensitivity, and positive predictive value are 0.862, 1.735, 0.769, 0.894, and 0.899, respectively. Finally, a 3D visualization of the segmentation results is presented; compared with the 3D visualization of the gold standard, the segmentation reaches 88.5% of the gold standard, making accurate automatic identification and continuous measurement of tumor volume in PET images possible. Conclusion: The proposed tumor segmentation method supports more accurate, stable, and fast tumor segmentation.
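A Jaccard-distance loss of the kind mentioned above penalizes one minus the soft intersection-over-union; a minimal NumPy sketch of a common formulation (the paper's exact weighting may differ):

```python
import numpy as np

def jaccard_distance_loss(pred: np.ndarray, target: np.ndarray,
                          eps: float = 1e-7) -> float:
    """1 - soft IoU between predicted probabilities and binary targets;
    eps guards against division by zero on empty masks."""
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return 1.0 - (intersection + eps) / (union + eps)

pred = np.array([1.0, 1.0, 0.0, 0.0])
target = np.array([1.0, 0.0, 0.0, 0.0])
print(round(jaccard_distance_loss(pred, target), 4))  # 0.5
```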

9.
A Survey of Image Semantic Segmentation Methods Based on Fully Convolutional Networks
Since the fully convolutional network (FCN) was proposed, the application of deep learning to image semantic segmentation has drawn the attention of many computer vision and machine learning researchers, and the area has become a research hotspot in artificial intelligence. The core idea of the FCN is to build a fully convolutional network that accepts an input image of arbitrary size and, through effective learning and inference, produces an output of the same size. The FCN opened a new direction for image semantic segmentation, but it also has many shortcomings, such as low feature resolution and difficulty handling objects at multiple scales. With continued research, convolutional neural networks have been progressively optimized and extended for image segmentation, and mainstream FCN-based segmentation frameworks continue to emerge. Image semantic segmentation is increasingly important for scene understanding and is widely applied in autonomous driving, unmanned aerial vehicles, and medical image detection and analysis. Research in this area therefore merits further study so that it can perform even better in practical applications.

10.
Residual neural networks (ResNets) and their optimization are among the hot topics in deep learning research. They are widely used in medical imaging and have achieved good results in the clinical diagnosis, staging, metastasis assessment, treatment decision-making, and target delineation of major diseases such as tumors, cardio- and cerebrovascular diseases, and neurological disorders. This paper summarizes the optimization of residual network learning across six aspects: activation functions, loss functions, parameter optimization algorithms, learning rate decay, normalization, and regularization. Nine activation function variants are covered: Sigmoid, tanh, ReLU, PReLU (parametric ReLU), randomized leaky ReLU (RReLU), ELU (exponential linear units), Softplus, NoisySoftplus, and Maxout. Twelve loss functions are covered: cross-entropy loss, mean squared loss, Euclidean distance loss, contrastive loss, hinge loss, Softmax-Loss, L-Softmax Loss, A-Softmax Loss, L2 Softmax Loss, Cosine Loss, Center Loss, and focal loss. Eight learning rate decay schedules are summarized: piecewise constant, polynomial, exponential, inverse time, natural exponential, cosine, linear cosine, and noisy linear cosine decay. Normalization covers batch normalization and the proposed batch renormalization; regularization covers seven methods: adding input data, data augmentation, early stopping, L1 regularization, L2 regularization, Dropout, and DropConnect. The paper then reviews applications of residual network models to disease diagnosis from medical images, organizing the literature around six diseases: lung tumors, skin diseases, breast cancer, brain diseases, diabetes, and blood diseases. It concludes with a summary and outlook on the future of deep learning in medical imaging.

11.
In recent years, object-based segmentation methods and shallow-model classification algorithms have been widely integrated for supervised classification of remote sensing images. However, as image resolution increases, remote sensing images contain increasingly complex characteristics, leading to higher intraclass heterogeneity and interclass homogeneity and thus posing substantial challenges for segmentation methods and shallow-model classification algorithms. As important methods of deep learning technology, convolutional neural networks (CNNs) can hierarchically extract higher-level spatial features from images, giving CNNs a more powerful recognition ability for target detection and scene classification in high-resolution remote sensing images. However, the input of the traditional CNN is an image patch, whose shape is rarely consistent with a given segment. This inconsistency can cause errors when CNNs are used directly in object-based remote sensing classification: jagged errors may appear along land cover boundaries, and some land cover areas may overexpand or shrink, producing many obvious classification errors in the resulting image. To address this problem, this paper proposes an object-based and heterogeneous segment filter convolutional neural network (OHSF-CNN) for high-resolution remote sensing image classification. Before the CNN processes an image patch, the OHSF-CNN applies a heterogeneous segment filter (HSF) to the input image. For segments in the image patch that differ markedly from the segment to be classified, the HSF can differentiate them and reduce their negative influence on the CNN training and decision-making processes.
Experimental results show that the OHSF-CNN not only takes full advantage of the recognition capabilities of deep learning methods but also effectively avoids the jagged errors along land cover boundaries and the expansion/shrinkage of land cover areas that originate from traditional CNN structures. Moreover, compared with traditional methods, the proposed OHSF-CNN achieves higher classification accuracy. Furthermore, the OHSF-CNN algorithm can serve as a bridge between deep learning technology and object-based segmentation algorithms, thereby enabling the application of object-based segmentation methods to more complex high-resolution remote sensing images.

12.
Objective: Change detection in images is an important problem in vision. Traditional change detection is overly sensitive to illumination changes and differences in camera pose, which degrades results in real scenes. Since convolutional neural networks (CNNs) can extract deep semantic features from images, we propose a change detection model based on multi-scale deep feature fusion that overcomes detection noise by extracting and fusing high-level semantic features. Method: Using VGG (Visual Geometry Group) 16 as the backbone in a Siamese architecture, deep features are extracted from different network layers of the reference and query images. The features of corresponding layers of the two images are concatenated and fed into an encoding layer that progressively fuses high- and low-level features at multiple scales, fully combining high-level semantics with low-level texture to detect accurate change regions. A convolutional layer produces a prediction at each encoding scale, and the multi-scale predictions are fused to refine the final detection. Results: The method was compared with four detection methods: SC_SOBS (SC-self-organizing background subtraction), SuBSENSE (self-balanced sensitivity segmenter), FGCD (fine-grained change detection), and a fully convolutional network (FCN). Compared with the second-best model, FCN, our method improves the F1 score and precision by 12.2% and 24.4% on the VL_CMU_CD (visual localization of Carnegie Mellon University for change detection) dataset, by 2.1% and 17.7% on the PCD (panoramic change detection) dataset, and by 8.5% and 5.8% on the CDnet (change detection net) dataset. Conclusion: By exploiting features from different layers of a convolutional neural network, the proposed multi-scale deep feature fusion method effectively overcomes differences in illumination and camera pose and yields robust change detection results across datasets.

13.
The principal restorative step in the treatment of ischemic stroke depends on how fast the lesion is delineated from Magnetic Resonance Imaging (MRI) images, which serves as a vital aid in estimating the extent of damage to the brain cells. However, manual delineation of the lesion is time-consuming and subject to intra-observer and inter-observer variability. Most existing methods for ischemic lesion segmentation rely on extracting handcrafted features followed by applying a machine learning algorithm; identifying such features demands multi-domain expertise in neuroradiology as well as image processing. This can instead be accomplished by learning the features automatically with a Convolutional Neural Network (CNN). To perform segmentation, the spatial arrangement of pixels must be preserved in addition to learning local image features. Hence, a deeply supervised Fully Convolutional Network (FCN) is presented in this work to segment the ischemic lesion. The highlight of this research is the use of Leaky Rectified Linear Unit activation in the last two layers of the network for a precise reconstruction of the ischemic lesion; by doing so, the network learns additional features not considered in the existing U-Net architecture. An extensive analysis was also conducted to select optimal hyperparameters for training the FCN. A mean segmentation accuracy of 0.70 was achieved in experiments on the ISLES 2015 dataset. Experimental observations show that our proposed FCN method is 10% better than existing works in terms of Dice coefficient.
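The Leaky ReLU activation highlighted above keeps a small gradient for negative inputs instead of zeroing them; a one-function NumPy sketch (the 0.01 slope is the common default, not necessarily the paper's setting):

```python
import numpy as np

def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Leaky ReLU: identity for positives, small slope alpha for negatives,
    so gradients never vanish entirely on the negative side."""
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.02  0.    3.  ]
```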

14.
Jiang  Feng  Grigorev  Aleksei  Rho  Seungmin  Tian  Zhihong  Fu  YunSheng  Jifara  Worku  Adil  Khan  Liu  Shaohui 《Neural computing & applications》2018,29(5):1257-1265

Image semantic segmentation has been studied extensively. Modern methods rely on deep convolutional neural networks, which can be trained to address this problem. A few years ago such networks required huge datasets for training; however, recent advances in deep learning allow training networks on small datasets, which is critical for medical imaging, since hospitals and research organizations usually cannot provide huge amounts of data. In this paper, we address the medical image semantic segmentation problem by applying a modern CNN model. Moreover, recent achievements in deep learning allow processing a whole image at a time by applying the concepts of fully convolutional neural networks. Our qualitative and quantitative experimental results demonstrate that a modern CNN can successfully tackle the medical image semantic segmentation problem.


15.
Wang  Sheng  Lv  Lin-Tao  Yang  Hong-Cai  Lu  Di 《Multimedia Tools and Applications》2021,80(21-23):32409-32421

For register detection in the printing field, a new approach based on Zernike-CNNs is proposed. The edge features of the image are extracted by Zernike moments (ZMs), and a recursive algorithm for ZMs, the Kintner method, is derived. Improved convolutional neural networks (CNNs) are investigated to raise classification accuracy. Building on the classic convolutional neural network (CNN), the improved CNNs adopt parallel CNNs to enhance local features and an auxiliary classification branch to modify the classification-layer weights. A printed image is trained with 7 × 400 samples and tested with 7 × 100 samples, and the method is then compared with others. In image processing, Zernike is compared with the Sobel, Laplacian of Gaussian (LoG), Smallest Univalue Segment Assimilating Nucleus (SUSAN), Finite Impulse Response (FIR), and Multi-scale Morphological Gradient (MMG) methods. In image classification, the improved CNNs are compared with the classical CNN. The experimental results show that Zernike-CNNs perform best: the mean squared error (MSE) on the training samples reaches 0.0143, and the detection accuracy on training and test samples reaches 91.43% and 94.85%, respectively. The experiments show that Zernike-CNNs are a feasible approach for register detection.


16.
Objective: Breast lesion segmentation in ultrasound images is a basic preprocessing step for computer-aided diagnosis and quantitative analysis of breast cancer. Lesion boundaries in breast ultrasound images are usually blurred, and large annotated segmentation datasets are lacking, which makes deep learning-based segmentation difficult. This paper proposes a hybrid supervised dual-channel feedback U-Net (HSDF-U-Net) to improve the accuracy of breast ultrasound image segmentation. Method: HSDF-U-Net implements hybrid supervised learning by combining self-supervised learning with supervised segmentation, and further improves accuracy with a dual-channel feedback U-Net. To alleviate the shortage of labeled data, an auxiliary edge-restoration task is designed on top of the self-supervised framework using the label information in the annotated segmentation images, yielding a pretrained model with a stronger representation of lesion edges that is then transferred to the downstream segmentation task. To improve performance on both the auxiliary edge-restoration task and the downstream segmentation task, a recurrent mechanism is introduced into the classic U-Net: the fed-back output is sent into a second channel to form a dual-channel encoder, and the decoder then outputs a more precise segmentation. Results: HSDF-U-Net was evaluated on two public breast ultrasound segmentation datasets. On Dataset B it achieved a sensitivity of 0.8480, a Dice of 0.8261, and an average symmetric surface distance of 5.81; on the BUSI (breast ultrasound images) dataset it achieved a sensitivity of 0.8039, a Dice of 0.8031, and an average symmetric surface distance of 6.44. These results improve on several typical U-Net segmentation algorithms. Conclusion: The proposed HSDF-U-Net algorithm improves the accuracy of lesion segmentation in breast ultrasound images and has potential application value.

17.
Word spotting has become a field of strong research interest in document image analysis over the last years. Recently, Attribute SVMs were proposed, which predict a binary attribute representation (Almazán et al. in IEEE Trans Pattern Anal Mach Intell 36(12):2552-2566, 2014). At the time, this influential method defined the state of the art in segmentation-based word spotting. In this work, we present an approach for learning attribute representations with convolutional neural networks (CNNs). By taking a probabilistic perspective on training CNNs, we derive two different loss functions for binary and real-valued word string embeddings. In addition, we propose two different CNN architectures, specifically designed for word spotting, which can be trained end to end. In a number of experiments, we investigate the influence of different word string embeddings and optimization strategies. We show that our attribute CNNs achieve state-of-the-art results for segmentation-based word spotting on a large variety of datasets.

18.
Accurate segmentation of lung nodules is clinically important. Computed tomography (CT), with its fast imaging speed and high image resolution, is widely used for lung nodule segmentation and functional evaluation. To further explore methods for segmenting lung nodules in chest CT images, this paper surveys CT-based lung nodule segmentation research. 1) Traditional segmentation methods and their strengths and weaknesses are summarized and compared; 2) segmentation methods based on deep learning, and on combinations of deep learning with traditional methods, are reviewed in detail; 3) the common evaluation metrics for nodule segmentation are briefly introduced, and future trends in the field are discussed in light of the reported performance of selected methods. Traditional methods each have their own advantages, disadvantages, and suitable nodule types, while deep learning methods have become the research focus owing to their generality. Researchers are working to improve the accuracy of segmentation results, the robustness of models, and the generality of methods, and to that end this paper summarizes the pros and cons of each class of methods. Research on CT-based lung nodule segmentation has made considerable progress, but nodules vary in shape and density, and some adhere to vessels, the pleura, and other anatomical structures, which complicates segmentation; there remains much room for improvement. Fast, accurate deep learning segmentation methods will continue to draw close attention, but they must still address the need for large amounts of data and the determination of network hyperparameters.

19.
Oral medical images are important tools for clinical detection, screening, diagnosis, and treatment evaluation of oral diseases, and accurate analysis of these images is essential for subsequent treatment planning. Conventional analysis of oral medical images depends on the physician's skill and experience, and suffers from low reading efficiency, low reproducibility, and a lack of quantitative analysis. Deep learning can automatically learn strong feature representations from large data samples, improving the efficiency and performance of all kinds of machine learning tasks, and it is now widely used throughout medical image analysis. Deep learning-based processing of oral medical images is a current research hotspot, but the inherent peculiarities and complexity of dentistry, together with the typically small sample sizes of oral imaging datasets, pose new challenges for applying deep learning to these tasks and scenarios. Starting from the three image types commonly used in oral medicine, 2D X-ray images, 3D point cloud/mesh images, and cone-beam computed tomography images, this paper reviews the ideas behind and the state of deep learning applications in oral medical image processing and analysis, analyzes the strengths and weaknesses of the algorithms and the problems and challenges facing the field, and looks ahead to future research directions and possible clinical applications in support of smart dentistry.

20.
Objective: Convolutional neural networks (CNNs) are widely used in computer-aided diagnosis (CAD) of lung diseases, chiefly for lung parenchyma segmentation, lung nodule detection, and lesion analysis; accurate segmentation of the lung parenchyma is the key to nodule detection and lung disease diagnosis. To better meet the requirements of CAD systems, this paper proposes an encoder-decoder convolutional neural network that fuses an attention mechanism with dense dilated convolutions for lung segmentation. Method: An attention mechanism is introduced into the decoder, increasing the weight of key information to highlight target regions and suppress background-pixel interference. To capture broader and deeper semantic information, a dense dilated convolution module is deployed in the middle of the network; it combines the advantages of Inception, residual structures, and multi-scale dilated convolution, obtaining deeper features without causing exploding or vanishing gradients. To address the feature loss common in segmentation networks, the up-/downsampling modules are improved by cascading convolution kernels of several scales to widen the network, effectively avoiding feature loss. Results: Comparison and ablation experiments against five mainstream segmentation networks on the LUNA (lung nodule analysis) dataset show that the model's predictions are closer to the label images. The Dice similarity coefficient, intersection over union (IoU), accuracy (ACC), and sensitivity (SE) all exceed those of the compared methods, improving on the second-best model by 0.443%, 0.272%, 0.512%, and 0.374%, respectively. Conclusion: The proposed lung segmentation network, fusing an attention mechanism with dense dilated convolutions, achieves better segmentation than the other networks.
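The dilated (atrous) convolutions in the module above enlarge the receptive field by spacing the kernel taps apart; a minimal 1-D NumPy sketch, illustrative rather than the paper's 2-D implementation:

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, w: np.ndarray, dilation: int = 1) -> np.ndarray:
    """1-D dilated convolution: taps of w are spaced `dilation` apart,
    widening the receptive field with no extra weights."""
    span = (len(w) - 1) * dilation + 1          # receptive field of one output
    n = len(x) - span + 1
    return np.array([sum(w[k] * x[i + k * dilation] for k in range(len(w)))
                     for i in range(n)])

x = np.arange(8, dtype=float)
w = np.array([1.0, 1.0, 1.0])
# With dilation=2, each output sums x[i], x[i+2], x[i+4].
print(dilated_conv1d(x, w, dilation=2))  # [ 6.  9. 12. 15.]
```

With dilation=1 this reduces to an ordinary convolution; stacking increasing dilation rates is what lets such modules see "broader and deeper" context cheaply.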
