Similar Documents
1.
In medical image segmentation, domain shift degrades the performance of a trained segmentation model on unseen domains, so improving generalization is essential before intelligent medical-image models can be deployed in practice. Representation learning is one of the mainstream approaches to domain generalization; most such methods supervise image generation with image-level and consistency losses, but these losses are not sensitive enough to deviations in the fine morphological features of medical images, so the generated images have blurred edges, which hampers the model's subsequent learning. To improve generalization, FLLN-DG, a semi-supervised domain-generalization segmentation model for medical images based on a feature-level loss and learnable noise, is proposed: a feature-level loss is introduced to sharpen the blurred boundaries of generated images, and a learnable-noise component further increases data diversity and improves generalization. Compared with the baseline model, FLLN-DG improves performance on unseen domains by 2%-4%, demonstrating the effectiveness of the feature-level loss and the learnable-noise component, and it also outperforms typical domain-generalization models such as nnUNet, SDNet+AUG, LDDG, SAML, and Meta.
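A feature-level loss of the kind described above is usually implemented as a perceptual-style loss that compares intermediate encoder features of the generated image and the reference image rather than only raw pixels. The PyTorch sketch below illustrates the idea; the VGG16 feature extractor, the layer cut-off, and the 0.1 weight are illustrative assumptions, not details of FLLN-DG.

```python
import torch
import torch.nn.functional as F
import torchvision

# Illustrative feature extractor: early VGG16 blocks (an assumption, not the paper's encoder).
feat_net = torchvision.models.vgg16(weights=None).features[:16].eval()
for p in feat_net.parameters():
    p.requires_grad_(False)

def feature_level_loss(generated, reference):
    """L1 distance between deep features of the generated and reference images.

    Penalizing feature-space differences makes the generator more sensitive to
    fine morphological structure such as organ boundaries than a pixel loss alone.
    """
    return F.l1_loss(feat_net(generated), feat_net(reference))

# Usage: image-level loss plus a weighted feature-level term (weight chosen arbitrarily here).
x_gen, x_ref = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
total = F.l1_loss(x_gen, x_ref) + 0.1 * feature_level_loss(x_gen, x_ref)
```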

2.
梁敏, 汪西莉. 计算机学报, 2022(12): 2619-2636
Convolutional-neural-network-based semantic segmentation relies on supervised learning with ground-truth labels and does not generalize well to unlabeled datasets from other sources. Unsupervised domain adaptation addresses the mismatch in feature distribution between unlabeled target-domain data and labeled source-domain data, but in remote sensing it performs poorly when the source and target domains have different spatial resolutions. This paper combines image super-resolution into a new end-to-end semantic segmentation network, SSM-SRDA (Semantic Segmentation Model Combining Super Resolution and Domain Adaption), which narrows both the spatial-resolution gap and the feature-distribution gap between low-resolution source-domain and high-resolution target-domain remote-sensing images, and performs unsupervised super-resolution and domain-adaptive semantic segmentation. SSM-SRDA has three parts: a segmentation network with a super-resolution module that turns low-resolution source images into high-resolution images with target-domain style, narrowing the resolution gap while helping the feature extractor learn domain-invariant features, together with a feature-affinity module (FA-Loss) that strengthens the fine structural detail of the high-resolution features in the deep segmentation network; a pixel-level domain discriminator that reduces pixel-level feature differences between the super-resolved source images and the target images; and an output...

3.
Accurate segmentation of cardiac magnetic resonance imaging (MRI) is essential for cardiac-function analysis. Data-driven neural-network models have greatly advanced cardiac MRI segmentation, but their reliance on annotated data severely limits their use in this field. To reduce that reliance, a semi-supervised cardiac MRI segmentation method based on an unsupervised spatial-consistency constraint is proposed: on top of supervised learning on a small amount of labeled data, it exploits the property that applying the same spatial transform at the model's input and at its output should give consistent results, and turns this into a spatial-consistency constraint on the unlabeled data. The method is evaluated on the ACDC 2017 multi-structure cardiac segmentation dataset; the results show that the spatial-consistency constraint on unlabeled data significantly improves generalization over purely supervised learning, and that the method also generalizes better than other state-of-the-art semi-supervised methods.
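The spatial-consistency constraint described above, that applying the same spatial transform before the model input and after the model output should give matching predictions, can be written as one extra loss term on unlabeled images. A minimal PyTorch sketch follows, using a horizontal flip as the transform; the choice of transform and the MSE penalty are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def spatial_consistency_loss(model, unlabeled_images):
    """Encourage model(T(x)) == T(model(x)) for a spatial transform T (here a horizontal flip)."""
    flip = lambda t: torch.flip(t, dims=[-1])        # T: flip along the width axis
    pred_of_flipped = model(flip(unlabeled_images))  # model(T(x))
    flipped_pred = flip(model(unlabeled_images))     # T(model(x))
    return F.mse_loss(pred_of_flipped, flipped_pred)

# Usage: add to the supervised loss computed on the small labeled subset, e.g.
# total_loss = supervised_loss + lambda_u * spatial_consistency_loss(model, x_unlabeled)
```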

4.
Objective: Accurately locating thyroid nodules in ultrasound is important for the early diagnosis of thyroid cancer, but the variability of nodule size, shape, and position strongly affects segmentation accuracy and model generalization. To improve the segmentation accuracy of thyroid nodules in ultrasound, strengthen generalization, and reduce the parameter count, thereby helping physicians diagnose disease and reducing misdiagnosis, a multi-scale feature-fusion "h"-shaped network for thyroid-nodule ultrasound segmentation is proposed. Method: The network is shaped like the letter h, with one encoder and two decoders, and uses depthwise separable convolutions to shrink the network. The encoder extracts image features, and an enhanced downsampling module reduces the information lost during downsampling and strengthens the decoders' feature extraction. The first decoder produces a preliminary segmentation; the second decoder fuses the information already learned by the first decoder to enhance the nodule feature representation and improve accuracy, and a fused convolution-pooling pyramid performs multi-scale feature fusion to strengthen generalization. Results: On an in-house dataset the network achieves a Dice similarity coefficient (DSC), Hausdorff distance (HD), sensitivity (SEN), and specificity (SPE) of 0.8721, 0.9356, 0.8797, and 0.9973; on the public DDTI (digital database thyroid image) dataset, DSC and SPE are 0.7580 and 0.9773; on the TN3K (thyroid nodule 3 thousand) dataset, DSC and HD are 0.7815 and 4.4726, all better than the other models compared. Conclusion: The network improves thyroid-nodule segmentation in ultrasound images with a small parameter count and stronger generalization.

5.
To address the poor generalization and inter-class overlap of existing cross-scene face anti-spoofing models, a face anti-spoofing method based on conditional adversarial domain generalization is proposed. First, a U-Net with an embedded attention mechanism and a ResNet-18 encoder extract features from multiple source domains; the features are fed to an auxiliary classifier, and the encoder output is fused with the classifier's predictions through a multilinear map before being passed to a domain discriminator for adversarial training, aligning the source domains at both the feature level and the class level. Second, to reduce the influence of hard-to-transfer samples with unreliable predictions, an entropy function controls sample priority and improves domain generalization. In addition, face depth maps are used to capture further cues that separate live faces from spoofs, and an asymmetric triplet loss serves as auxiliary supervision to improve intra-class compactness and inter-class separability. Comparative experiments on public face anti-spoofing datasets verify the effectiveness of the method.
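Fusing encoder features with classifier predictions through a multilinear (outer-product) map and down-weighting high-entropy samples is the standard conditional-adversarial recipe; the sketch below shows that recipe in PyTorch as an assumption about how such a module can look, not as a transcription of the paper's implementation.

```python
import torch
import torch.nn.functional as F

def multilinear_map(features, class_probs):
    """Condition the discriminator input on classifier predictions via an outer product."""
    joint = torch.bmm(class_probs.unsqueeze(2), features.unsqueeze(1))  # (B, C, D)
    return joint.flatten(start_dim=1)                                   # (B, C*D)

def entropy_weight(class_probs, eps=1e-8):
    """Give confidently classified (low-entropy) samples higher priority than hard ones."""
    entropy = -(class_probs * torch.log(class_probs + eps)).sum(dim=1)
    return 1.0 + torch.exp(-entropy)

def conditional_adversarial_loss(discriminator, features, logits, domain_labels):
    probs = F.softmax(logits, dim=1).detach()
    d_out = discriminator(multilinear_map(features, probs)).squeeze(1)
    weights = entropy_weight(probs)
    per_sample = F.binary_cross_entropy_with_logits(
        d_out, domain_labels.float(), reduction="none")
    return (weights * per_sample).sum() / weights.sum()
```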

6.
Objective: To address deep learning's heavy dependence on large training sets, a dual-stream deep transfer learning method with multi-source domain confusion is proposed, improving the applicability of the features transferred by conventional deep transfer learning. Method: A multi-source transfer strategy increases how much of the target domain is covered by source-domain transferred features. A two-stage adaptation procedure yields domain-invariant deep feature representations and similar classifier outputs across domains; 2D features from natural-light images are fused with 3D features from depth images, raising the feature dimensionality of small-sample data while suppressing interference from complex backgrounds. In addition, to improve classifier performance in small-sample learning, a center loss is added to the conventional softmax loss, strengthening the supervisory penalty of the classification loss. Results: Comparative experiments on a public small-sample gesture dataset show higher recognition accuracy than conventional recognition and transfer models, reaching 97.17% with DenseNet-169 as the pre-trained network. Conclusion: Multi-source datasets, two-stage adaptation, dual-stream convolutional fusion, and a composite loss function are combined into a dual-stream deep transfer learning model with multi-source domain confusion. The model increases the distribution match between source and target domains, enriches the feature dimensionality of target samples, improves the supervision provided by the loss function, and makes transferred features more applicable to arbitrary small-sample scenarios.
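The composite loss mentioned in the Method section, softmax cross-entropy plus a center loss that pulls each feature toward its class center, can be sketched as follows; the learnable-center parameterization and the 0.01 weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Penalize the squared distance between each sample's feature and its class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean() / 2

def composite_loss(logits, features, labels, center_loss, lam=0.01):
    """Softmax cross-entropy plus a weighted center-loss term."""
    return F.cross_entropy(logits, labels) + lam * center_loss(features, labels)
```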

7.
Objective: In high-resolution remote-sensing scene recognition, classic supervised machine-learning algorithms mostly need abundant labeled samples for training, while annotating remote-sensing images is time-consuming and laborious. To address the lack of labeled samples and the fact that different datasets cannot share labels, a transfer-learning network combining adversarial learning with variational auto-encoders is proposed. Method: A variational auto-encoder (VAE) is trained on the source dataset to obtain the encoder and classifier parameters, and the source encoder parameters initialize the target-domain encoder. Following the idea of adversarial learning, a discriminator network is introduced, and the target encoder and the discriminator are trained and updated alternately so that the features extracted in the target domain become as similar as possible to those of the source domain, realizing feature transfer from the source to the target remote-sensing domain. Results: Experiments on two remote-sensing scene datasets verify the effectiveness of the feature-transfer algorithm; transfer between the SUN397 natural-scene dataset and remote-sensing scenes is also tested, with correlation alignment and balanced distribution adaptation as comparison methods. Between the two remote-sensing datasets, the network after transfer learning improves scene-recognition accuracy by about 10% over a network trained only on source samples, and the gain is larger when a few labeled target samples are used; compared with the baselines, the proposed method improves accuracy by more than 3% when a few labeled target samples are available, and by 10%-40% when only source labels are used; with the natural-scene dataset the method still improves accuracy to some extent. Conclusion: The proposed adversarial transfer-learning network can make full use of samples from other datasets when target-domain samples are scarce, achieving feature transfer and scene recognition across scene-image datasets and effectively improving remote-sensing scene-recognition accuracy.

8.
马梓博, 米悦, 张波, 张征, 吴静云, 黄海文, 王文东. 软件学报, 2023, 34(10): 4870-4915
In recent years deep learning has made remarkable progress on many computer vision tasks, and more and more researchers are applying it to medical image processing, for example anatomical-structure segmentation of high-throughput medical images (CT, MRI), aiming to assist physicians and improve their reading efficiency. Training deep models for medical image processing likewise requires large amounts of annotated data; data from a single institution are often insufficient, and because of differences in equipment and acquisition protocols, data from different institutions are highly heterogeneous, so a model trained on one institution's data rarely gives reliable results on another's. Moreover, different medical datasets are often unevenly distributed across patients' disease stages, which also lowers model reliability. To reduce the impact of data heterogeneity and improve generalization, techniques such as domain adaptation and multi-site learning have emerged. Domain adaptation, a research focus within transfer learning, aims to transfer knowledge learned on a source domain to unlabeled target-domain data; multi-site learning and federated learning with non-IID data aim to learn a shared representation across multiple datasets to improve robustness. Starting from domain adaptation, multi-site learning, and federated learning with non-IID data, this survey reviews, categorizes, and summarizes recent methods and related datasets, providing a reference for further research.

9.
To address the inability of existing deep-learning segmentation algorithms to segment the dense small targets in pointer-meter images effectively, a meter-image segmentation method based on a multi-receptive-field UNet is proposed. The auto-encoder structure is combined with dilated convolutions so that multi-scale shallow features are fused with deep semantic information; the model is trained on pointer-meter data collected under several illumination intensities to fully improve the generalization of the network; and the dilated-convolution parameters are tuned in parallel so that the network learns the best model. Experimental results ...

10.
In the deep learning era, object detection has advanced steadily and performs well under good visual conditions, but under adverse weather the performance of conventional detectors drops sharply or fails entirely, while driving safety in such weather remains a widely shared concern. To address this, an object detector is designed: an improved YOLO model with cyclic disentanglement and self-distillation. The cyclic-disentanglement module repeatedly extracts domain-invariant features from the input image; the cyclic operation improves the separation of domain-specific and domain-invariant features without relying on domain annotations. The self-distillation module then uses the extracted domain-invariant features as the teacher to further improve generalization. Trained on only a single source domain, the detector still performs well on many unseen target domains, improving robustness to unknown domains. Experiments on urban-scene detection under various weather conditions show that the method outperforms the baseline by roughly 8 percentage points.
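Self-distillation with the domain-invariant branch as the teacher is typically trained with a temperature-scaled KL divergence between the teacher's and the student's predictions; the short sketch below shows that common form (the temperature, the stop-gradient on the teacher, and the KL direction are general practice assumed here, not taken from the paper).

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence on temperature-softened predictions; the teacher branch is not back-propagated."""
    teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2
```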

11.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

12.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.
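For reference, the class of pointwise maxima in the last result can be written out explicitly; this is only a notational restatement using the class symbol F from the abstract above.

```latex
% All functions expressible as the pointwise maximum of k members of F
\[
  \mathrm{MAX}_k(F) \;=\;
  \bigl\{\, x \mapsto \max\!\bigl(f_1(x), \dots, f_k(x)\bigr) \;:\; f_1, \dots, f_k \in F \,\bigr\}.
\]
```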

13.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
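The assumption that rewards are linear in a fixed set of reward features is what lets value functions be stored and reused across tasks: for a fixed policy, the value function is then itself linear in the reward weights. The statement below is a generic restatement of that fact in MDP notation, not the paper's exact SMDP formulation.

```latex
% Reward linear in features phi; for a fixed policy pi the value function is linear in w,
% so the feature-expectation term can be stored once and re-weighted for a new reward vector.
\[
  R_w(s,a) = w^{\top}\phi(s,a)
  \quad\Longrightarrow\quad
  Q^{\pi}_{w}(s,a) = w^{\top}\,\Psi^{\pi}(s,a),
  \qquad
  \Psi^{\pi}(s,a) = \mathbb{E}_{\pi}\!\Bigl[\sum_{t\ge 0}\gamma^{t}\,\phi(s_t,a_t)\Bigr].
\]
```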

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

16.
Applying different degrees of supervision to automatic text classification
Automatic text classification involves information retrieval, pattern recognition, machine learning, and related fields. Taking the degree of supervision as the organizing thread, this paper reviews several methods belonging to fully supervised, unsupervised, and semi-supervised learning strategies, namely NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification; gSOM is a semi-supervised variant we developed from SOM. Using the Reuters-21578 corpus, the effect of the degree of supervision on classification performance is studied, and recommendations for practical text classification are given.

17.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

18.
刘晓, 毛宁. 数据采集与处理, 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns, through repeated interaction with a random environment, to select the optimal action from an allowed action set. In most traditional LA models the action set is finite, so continuous-parameter learning problems require discretizing the action space, and the learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects its output actions uniformly over that interval. The learning algorithm uses a binary feedback signal from the environment to adaptively update the interval endpoints. Simulations on a multimodal learning problem demonstrate the advantage of the new algorithm over three existing CALA algorithms.
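A toy version of an interval-based continuous-action learning automaton, choosing actions uniformly from the current interval and moving the endpoints in response to a binary environment signal, is sketched below. The specific update rule (contract toward a rewarded action, re-expand otherwise) and all constants are illustrative assumptions, not the algorithm proposed in the paper.

```python
import random

class IntervalCALA:
    """Toy continuous-action learning automaton with a variable action interval.

    Actions are drawn uniformly from [low, high]; binary feedback
    (1 = favorable, 0 = unfavorable) contracts the interval around good
    actions and slightly re-expands it after bad ones."""
    def __init__(self, low=0.0, high=1.0, shrink=0.1, expand=0.02):
        self.low, self.high = low, high
        self.shrink, self.expand = shrink, expand
        self.last_action = (low + high) / 2

    def choose_action(self):
        self.last_action = random.uniform(self.low, self.high)
        return self.last_action

    def update(self, feedback):
        if feedback == 1:   # favorable: contract the interval toward the last action
            self.low += self.shrink * (self.last_action - self.low)
            self.high -= self.shrink * (self.high - self.last_action)
        else:               # unfavorable: widen the interval slightly
            width = self.high - self.low
            self.low -= self.expand * width
            self.high += self.expand * width

# Usage: a toy environment whose success probability peaks at action 0.7.
la = IntervalCALA()
for _ in range(2000):
    a = la.choose_action()
    la.update(1 if random.random() < max(0.0, 1.0 - 2.0 * abs(a - 0.7)) else 0)
print(round((la.low + la.high) / 2, 3))  # interval midpoint should drift toward ~0.7
```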

19.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

20.
Ram, Ashwin. Machine Learning, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.

