Similar Documents
20 similar documents found (search time: 62 ms)
1.
Unstructured tabular documents are weakly structured, follow diverse layouts, and contain cluttered, redundant data, yet they conceal a large amount of valuable information, and extracting that data with high precision adds value to its analysis. To this end, a deep-learning-based data extraction method for unstructured tabular documents is proposed. Before extraction, a text classification method based on recurrent and convolutional neural networks classifies the unstructured tabular documents and retrieves the required ones, narrowing the scope of subsequent extraction and improving extraction efficiency and precision...
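As a rough sketch of the classification stage, the model below fuses a recurrent branch and a convolutional branch over shared token embeddings; the layer sizes and the fusion scheme are our assumptions, not the paper's published architecture.

```python
import torch
import torch.nn as nn

class RCNNTextClassifier(nn.Module):
    """Hypothetical recurrent + convolutional text classifier.

    Layer sizes and branch fusion are assumptions; the abstract does not
    specify the exact architecture.
    """
    def __init__(self, vocab_size, embed_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(2 * hidden + hidden, num_classes)

    def forward(self, tokens):                               # (batch, seq_len)
        e = self.embed(tokens)                               # (batch, seq, embed)
        rnn_out, _ = self.rnn(e)                             # (batch, seq, 2*hidden)
        conv_out = torch.relu(self.conv(e.transpose(1, 2)))  # (batch, hidden, seq)
        # Max-pool both branches over the sequence dimension, then concatenate.
        r = rnn_out.max(dim=1).values
        c = conv_out.max(dim=2).values
        return self.fc(torch.cat([r, c], dim=1))

logits = RCNNTextClassifier(vocab_size=30000)(torch.randint(0, 30000, (4, 50)))
```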

2.
To reduce the dependence of current deep-learning point cloud segmentation on point-level annotated training data, a weakly supervised point cloud segmentation framework based on pseudo-label generation and noisy-label learning is proposed. First, exploiting the non-local self-similarity of point clouds, the point cloud data is modeled with a local/non-local graph; second, a relational graph convolutional network generates high-quality pseudo-labels for the sparsely annotated training set in a semi-supervised manner; finally, to cope with noisy labels among the generated pseudo-labels, a progressive noise-robust loss function is designed so that existing point cloud segmentation networks can be trained accurately on noisy-labeled data. Experiments on public point cloud segmentation datasets such as ShapeNet Part and S3DIS show that, without adding any inference-time computation, the framework comes within 0.1% of fully supervised segmentation accuracy on ShapeNet Part using 10% of the labels, and within 5.2% on S3DIS using 1% of the labels.
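The paper's progressive noise-robust loss is not given in the abstract; a common noise-robust surrogate is the generalized cross-entropy (GCE) loss, whose exponent q can be annealed during training, which is one possible reading of "progressive". The sketch below makes that assumption explicit.

```python
import torch

def generalized_cross_entropy(logits, labels, q=0.7):
    """Generalized cross-entropy L_q = (1 - p_y^q) / q (Zhang & Sabuncu, 2018).

    q -> 0 recovers ordinary cross-entropy; q = 1 gives the noise-robust MAE.
    Scheduling q upward over training is one way to realize a 'progressive'
    noise-robust loss; this is an assumption, not the paper's formula.
    """
    probs = torch.softmax(logits, dim=1)
    p_y = probs.gather(1, labels.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
    return ((1.0 - p_y.pow(q)) / q).mean()

# Example: anneal q from 0.1 (CE-like) toward 1.0 (MAE-like) across steps.
logits = torch.randn(8, 4, requires_grad=True)
labels = torch.randint(0, 4, (8,))
for q in torch.linspace(0.1, 1.0, 5):
    loss = generalized_cross_entropy(logits, labels, q=float(q))
    loss.backward()
```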

3.
莫建文  贾鹏 《自动化学报》2022,48(8):2088-2096
To improve the classification performance of semi-supervised deep generative models, a semi-supervised classification model based on the ladder network and an improved tri-training method is proposed. The model adds three classifiers to the top layer of the noisy encoder in the ladder network framework and combines them with improved tri-training to raise image classification performance. First, the labeled data is split into three parts by class-based sampling, the model's parameters are tuned with a combination of the label error on labeled data and the reconstruction error on unlabeled data, and three Large-margin Softmax classifiers are trained. Next, the improved tri-training assigns pseudo-labels to unlabeled data and gives different weights to the newly labeled data, enlarging the training set. Finally, the enlarged training set is used to update the model. After training, the classifiers vote with weights to produce the classification result. The ladder-network features learned by the model give a better low-dimensional manifold representation, effectively avoiding classification errors caused by unevenly distributed sample data and strengthening generalization. Experiments on the MNIST, SVHN and CIFAR10 datasets, with comparisons against other semi-supervised deep generative models, show that the proposed model achieves higher classification accuracy.
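A sketch of the pseudo-labeling and weighting step: accept an unlabeled sample when at least two of the three classifiers agree with high confidence, and weight it by that confidence. The agreement rule and the weights are illustrative assumptions, not the paper's exact improved tri-training.

```python
import numpy as np

def tri_train_pseudo_labels(probs_a, probs_b, probs_c, threshold=0.9):
    """Assign a pseudo-label when >= 2 of 3 classifiers agree confidently.

    probs_*: (n, num_classes) predicted probabilities from three classifiers.
    Returns accepted indices, pseudo-labels, and per-sample weights taken as
    the mean confidence of the agreeing classifiers.
    """
    preds = np.stack([p.argmax(1) for p in (probs_a, probs_b, probs_c)])
    confs = np.stack([p.max(1) for p in (probs_a, probs_b, probs_c)])
    idx, labels, weights = [], [], []
    for i in range(preds.shape[1]):
        vals, counts = np.unique(preds[:, i], return_counts=True)
        major = vals[counts.argmax()]
        agree = preds[:, i] == major
        if agree.sum() >= 2 and confs[agree, i].mean() >= threshold:
            idx.append(i)
            labels.append(int(major))
            weights.append(float(confs[agree, i].mean()))
    return np.array(idx), np.array(labels), np.array(weights)
```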

4.
To address the high manual-labeling cost and weak generalization that most traditional machine learning methods exhibit on classification tasks, a label-combination semi-supervised learning algorithm is proposed. Following the idea of ensemble learning, multiple weak models are trained on the labeled data and combined, strengthening the model's generalization. The combined model then predicts the unlabeled data, generating noisy labels that are combined into the modeling. Within a risk-minimization framework, the model converges to the optimum. Experimental results show that, compared with existing algorithms such as support vector machines, classification and regression trees, and neural networks in two supervised scenarios, the algorithm generalizes better.
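A minimal sketch of the ensemble step under these assumptions: weak learners trained on bootstrap resamples of the labeled data, whose averaged votes on the unlabeled data act as the combined noisy labels (the risk-minimization training of the final model is omitted).

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for a real labeled/unlabeled split.
X_lab = np.random.randn(40, 5)
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = np.random.randn(200, 5)

# Bagging = weak models on bootstrap resamples, combined by averaging votes.
ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=5).fit(X_lab, y_lab)
noisy_soft_labels = ensemble.predict_proba(X_unlab)   # averaged member votes
pseudo_y = noisy_soft_labels.argmax(axis=1)           # hard noisy labels
```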

5.
Deep neural networks perform well on document summarization, but their advantage only shows on large datasets. To overcome the shortage of annotated corpora when applying deep learning to extractive single-document summarization of Khmer, a method combining active learning and deep learning is proposed. An active-learning sampling strategy selects a fixed quantity of documents, experts annotate them, and an encoder-decoder model is trained on the result to extract summaries. Experimental results show that when annotated training data is markedly insufficient, the method effectively improves the quality of Khmer single-document summaries.
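One standard sampling strategy compatible with this setup is least-confidence sampling; the sketch below shows it together with the annotate-retrain loop in outline (names such as expert_annotate and train_encoder_decoder are hypothetical placeholders, and the paper's actual sampling criterion is not specified in the abstract).

```python
import numpy as np

def uncertainty_sample(model_probs, budget):
    """Pick the `budget` documents the current model is least sure about.

    model_probs: (n_docs,) predicted probability that each document/sentence
    is summary-worthy. Uncertainty peaks at p = 0.5 and vanishes at 0 or 1.
    """
    uncertainty = 1.0 - np.abs(model_probs - 0.5) * 2
    return np.argsort(-uncertainty)[:budget]

# Outline of the annotate-retrain loop (placeholder functions):
# for round in range(n_rounds):
#     probs = model.predict(unlabeled_pool)           # current model's scores
#     picked = uncertainty_sample(probs, budget=20)   # query the experts
#     labeled += expert_annotate(unlabeled_pool[picked])
#     model = train_encoder_decoder(labeled)          # retrain the summarizer
```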

6.
叶育鑫  薛环  王璐  欧阳丹彤 《软件学报》2020,31(4):1025-1038
The great advantage of distantly supervised relation extraction is that labeled data is generated automatically by aligning a knowledge base with natural-language text. While this simple automatic alignment frees people from heavy annotation work, it inevitably produces mislabeled data, which hampers building high-quality relation extraction models. Targeting the label-noise problem in distantly supervised relation extraction, this work proposes the hypothesis that "the label finally assigned by sentence alignment is a noisy observation generated under certain unknown factors." On this assumption, a new relation extraction model is built from an encoding layer, a noise-distribution-based attention layer, a true-label output layer, and a noisy observation layer. The model uses the automatically labeled data to learn the transition probabilities from true labels to noisy labels, and at test time obtains the final relation classification from the true-label output layer. The combination of the noisy observation model with deep neural networks is then studied, focusing on the noise-distribution attention mechanism over deep-network encodings and on denoising imbalanced samples within the deep-network framework, further improving the precision and robustness of the noisy-observation distantly supervised relation extraction model. Finally, validation experiments are run on public benchmark datasets under identical parameter settings: the distribution of sample noise is analyzed, the noisy observation model is evaluated under various noise distributions, and it is compared with mainstream baseline methods. The results show that the proposed noisy observation model achieves higher precision and recall.
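A minimal sketch of the noisy-observation idea in the spirit of generic noise-adaptation layers (cf. Sukhbaatar et al.): a clean classifier composed with a learned row-stochastic transition matrix from true to noisy labels. Train against the noisy (distant) labels; predict from the clean head. Layer sizes and initialization are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NoisyObservationHead(nn.Module):
    """p(noisy = j | x) = sum_i p(true = i | x) * T[i, j], T row-stochastic."""

    def __init__(self, feat_dim, num_relations):
        super().__init__()
        self.clean = nn.Linear(feat_dim, num_relations)
        # Initialize near the identity: mostly trust observed labels at first.
        self.transition_logits = nn.Parameter(5.0 * torch.eye(num_relations))

    def forward(self, features):
        clean_probs = torch.softmax(self.clean(features), dim=1)
        T = torch.softmax(self.transition_logits, dim=1)  # rows sum to 1
        noisy_probs = clean_probs @ T
        return clean_probs, noisy_probs

head = NoisyObservationHead(feat_dim=256, num_relations=10)
clean_p, noisy_p = head(torch.randn(4, 256))
# Train on the distant (noisy) labels via the noisy branch:
loss = nn.NLLLoss()(torch.log(noisy_p.clamp_min(1e-7)), torch.randint(0, 10, (4,)))
# At test time, classify with clean_p instead.
```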

7.
Good feature representations are the key to model performance, yet in multi-label learning features are still largely hand-engineered, yielding features of limited abstraction that carry insufficient discriminative information. To address this, the convolutional multi-label classification model ML_DCCNN is proposed, which exploits the strong feature-extraction ability of convolutional neural networks to automatically learn features that capture the essence of the data. To counter the high training cost that accompanies the high prediction accuracy of deep CNNs, ML_DCCNN shortens training time with transfer learning and improves the fully connected layer with a proposed dual-channel neuron that reduces the layer's parameter count. Experiments show that, compared with traditional multi-label classification algorithms and existing deep-learning multi-label models, ML_DCCNN retains high classification accuracy while effectively improving classification efficiency, which is of both theoretical and practical value.
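The dual-channel neuron is specific to the paper and is not reproduced here; as a generic stand-in, the sketch below shows multi-label transfer learning with a frozen pretrained backbone and an ordinary sigmoid head (the label count and backbone choice are assumptions).

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze a pretrained backbone and train only a small multi-label head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                      # reuse pretrained features

num_labels = 20                                  # assumption: 20 labels
backbone.fc = nn.Linear(backbone.fc.in_features, num_labels)  # trainable head

x = torch.randn(2, 3, 224, 224)
logits = backbone(x)
# Multi-label training uses an independent sigmoid per label:
targets = torch.randint(0, 2, (2, num_labels)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)
```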

8.
To address the inability of traditional attention-based neural networks to jointly attend to local features and rotation-invariant features, a weakly supervised fine-grained image classification method based on a multi-branch neural network is proposed. First, a lightweight class activation map (CAM) network localizes local regions carrying latent semantic information, and a ResNet-50 with deformable convolutions and a rotation-invariant oriented response network (ORN) are designed; second, pretrained models initialize the feature networks, and the model is fine-tuned on both the original images and the local regions above; finally, the whole network is optimized by combining three intra-branch losses with an inter-branch loss, and predictions are made on the test set. The method reaches 87.7% and 90.8% classification accuracy on the CUB-200-2011 and FGVC_Aircraft datasets, 1.2 and 0.9 percentage points above the multi-attention convolutional neural network (MA-CNN), and 91.8% on the Aircraft_2 dataset, 4.1 percentage points above ResNet-50. The results show that the method effectively improves the accuracy of weakly supervised fine-grained image classification.
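How the three intra-branch losses and the inter-branch loss are combined is not detailed in the abstract; the sketch below pairs per-branch cross-entropy with a pairwise-KL consistency term as one plausible reading (the form of the inter-branch loss and its weighting are assumptions).

```python
import torch
import torch.nn.functional as F

def multi_branch_loss(branch_logits, labels, inter_weight=0.5):
    """Per-branch cross-entropy plus mean pairwise KL between branch outputs."""
    intra = sum(F.cross_entropy(l, labels) for l in branch_logits) / len(branch_logits)
    probs = [torch.softmax(l, dim=1) for l in branch_logits]
    inter, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(len(probs)):
            if i != j:
                # kl_div expects log-probs as input and probs as target.
                inter = inter + F.kl_div(probs[i].log(), probs[j],
                                         reduction="batchmean")
                pairs += 1
    return intra + inter_weight * inter / pairs

logits = [torch.randn(4, 200) for _ in range(3)]        # three branches
loss = multi_branch_loss(logits, torch.randint(0, 200, (4,)))
```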

9.
罗萍  丁玲  杨雪  向阳 《计算机应用》2022,42(10):2990-2995
Current event detection models depend heavily on manually annotated data. With limited annotation, fully supervised deep models for event detection often overfit, while weakly supervised methods that replace time-consuming manual annotation with automatically labeled data usually depend on complex predefined rules. To solve these problems, a BERT-based mixed-text adversarial training (BMAD) method is proposed for Chinese event detection. The method sets up a weakly supervised learning scenario based on data augmentation and adversarial learning and uses a span extraction model for the event detection task. First, to relieve the data shortage, data augmentation methods such as back-translation and MixText are applied to augment the data and create the weakly supervised scenario; then an adversarial training mechanism performs noise learning, striving to generate samples as close to real samples as possible and ultimately improving the robustness of the whole model. Experiments on the widely used real-world dataset Automatic Content Extraction (ACE) 2005 show that the method improves the F1 score by at least 0.84 percentage points over algorithms such as NPN, TLNN and HCBNN.
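BMAD's exact adversarial mechanism is not spelled out in the abstract; a common embedding-level adversarial-training recipe for BERT-style models is the Fast Gradient Method (FGM), sketched below (the class, parameter names, and usage pattern are ours, not the paper's).

```python
import torch

class FGM:
    """FGM adversarial perturbation on word embeddings for BERT-style models."""

    def __init__(self, model, epsilon=1.0, emb_name="embeddings.word_embeddings"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm != 0:
                    p.data.add_(self.epsilon * p.grad / norm)  # step uphill

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# Usage inside one training step (model/optimizer are placeholders):
# loss = model(batch).loss; loss.backward()        # normal gradients
# fgm.attack(); model(batch).loss.backward()       # add adversarial gradients
# fgm.restore(); optimizer.step(); optimizer.zero_grad()
```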

10.
To address existing deep knowledge tracing models' weak ability to capture the complex relations among input exercises and their inability to handle long input sequences effectively, a deep knowledge tracing model based on self-attention and a bidirectional GRU network (KTSA-BiGRU) is proposed. First, a learner's historical interaction sequence is mapped to a sequence of real-valued vectors; second, the bidirectional GRU is trained on this vector sequence to model the learner's learning process; finally, self-attention captures the relations among exercises, and the probability that the learner answers the next question correctly is computed from the BiGRU's hidden vectors and the attention weights. Experiments on three public datasets show performance above existing knowledge tracing models, improving the prediction accuracy of deep knowledge tracing.
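A compact sketch of the BiGRU-plus-self-attention pattern for knowledge tracing; the dimensions, the standard DKT-style interaction encoding, and the attention placement are assumptions rather than the published KTSA-BiGRU.

```python
import torch
import torch.nn as nn

class KTBiGRUSelfAttention(nn.Module):
    """BiGRU over interaction embeddings, then self-attention, then a
    per-skill probability of answering the next question correctly."""

    def __init__(self, num_skills, embed_dim=64, hidden=64):
        super().__init__()
        # Interaction id = skill * 2 + correctness, a standard DKT encoding.
        self.embed = nn.Embedding(num_skills * 2, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_skills)

    def forward(self, interactions):               # (batch, seq_len)
        h, _ = self.gru(self.embed(interactions))  # (batch, seq, 2*hidden)
        a, _ = self.attn(h, h, h)                  # self-attention over steps
        return torch.sigmoid(self.out(a))          # p(correct) per skill

model = KTBiGRUSelfAttention(num_skills=50)
p_next = model(torch.randint(0, 100, (8, 30)))      # (8, 30, 50)
```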

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. 《Machine Learning》 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class ℱ of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from ℱ. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.
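For concreteness, the pointwise-maximum class mentioned at the end of the abstract can be written as follows (notation ours, inferred from the abstract):

```latex
\mathcal{F}^{\max}_{k} \;=\; \left\{\, f : \mathbb{R} \to \mathbb{R} \;\middle|\;
  f(x) = \max_{1 \le i \le k} f_i(x), \quad f_1, \dots, f_k \in \mathcal{F} \,\right\}
```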

12.
Kearns, Michael; Seung, H. Sebastian. 《Machine Learning》 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

13.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
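Because rewards are linear in the reward features, each stored policy's value function decomposes over those features, which suggests the initialization sketch below for a new SMDP (function and variable names are ours; this illustrates the decomposition, not the published VRRL algorithm).

```python
import numpy as np

def initialize_value_function(reward_weights, stored_feature_values):
    """With rewards linear in features, each stored policy i has
    V_i(s) = w . Phi_i(s). A new task with weight vector w can be
    initialized from the best stored policy per state: max_i w . Phi_i(s).

    stored_feature_values: list of (n_states, n_features) arrays Phi_i.
    """
    candidates = np.stack([phi @ reward_weights for phi in stored_feature_values])
    return candidates.max(axis=0)                  # best prior value per state

w_new = np.array([0.2, 0.8])                       # new task's reward weights
stored = [np.random.rand(10, 2) for _ in range(3)] # three source SMDPs (toy)
V0 = initialize_value_function(w_new, stored)      # (10,) initial values
```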

16.
刘晓  毛宁 《数据采集与处理》2015,30(6):1310-1317
A learning automaton (LA) is an adaptive decision maker that learns, through continual interaction with a random environment, to select the optimal action from an allowed action set. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and learning accuracy then depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and whose output actions are drawn from it uniformly. The learning algorithm adaptively updates the interval's endpoints using binary feedback signals from the environment. A simulation on a multimodal learning problem demonstrates the new algorithm's advantage over three existing CALA algorithms.
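As a toy illustration of an interval-valued action set with uniform sampling and binary feedback, the sketch below contracts the interval toward rewarded actions and relaxes it on penalties; this specific update rule is our assumption, not the paper's algorithm.

```python
import random

def interval_cala(reward_fn, lo=0.0, hi=1.0, shrink=0.99, grow=1.01, steps=2000):
    """Interval-based continuous-action learning automaton sketch.

    Draw an action uniformly from [lo, hi]; on reward, pull both endpoints
    toward the action; on penalty, widen the interval around its midpoint.
    """
    for _ in range(steps):
        a = random.uniform(lo, hi)
        if reward_fn(a):                           # binary environment feedback
            lo += (a - lo) * (1 - shrink)          # contract toward good action
            hi -= (hi - a) * (1 - shrink)
        else:
            mid = (lo + hi) / 2
            lo, hi = mid - (mid - lo) * grow, mid + (hi - mid) * grow
    return lo, hi

# Toy environment: reward is likelier when the action is near 0.7.
final = interval_cala(lambda a: random.random() < max(0.0, 1 - abs(a - 0.7) * 4))
```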

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

19.
20.
Applying varying degrees of supervision to automatic text classification
Automatic text classification draws on information retrieval, pattern recognition, and machine learning. Taking the degree of supervision as its organizing thread, this paper surveys several methods belonging to fully supervised, unsupervised, and semi-supervised learning strategies, namely NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among these, gSOM is a semi-supervised variant we developed from SOM. Using the Reuters-21578 corpus, we study how the degree of supervision affects classification performance and offer recommendations for practical text classification work.
