Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Few-shot intent detection is a practical and challenging task, because new intents emerge frequently and collecting large-scale data for them is costly. Meta-learning, a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks, has been a popular way to tackle this problem. However, existing meta-learning models have been shown to overfit when the meta-training tasks are insufficient. To overcome this challenge, we present STAM, a novel framework for self-supervised task augmentation with meta-learning. First, we introduce task augmentation, which explores two different strategies and combines them to extend the meta-training tasks. Second, we devise two auxiliary losses that integrate self-supervised learning into meta-learning, so as to learn more generalizable and transferable features. Experimental results show that STAM achieves consistent and considerable performance improvements over existing state-of-the-art methods on four datasets.

2.
A Survey of Meta-Learning   Cited by: 4 (self-citations: 0, by others: 4)
Deep learning has achieved excellent results in many fields, but it still suffers from poor robustness and generalization, difficulty in learning and adapting to unobserved tasks, and a heavy dependence on large-scale data. Recent progress in meta-learning for deep learning offers a new perspective on these problems. Meta-learning is a learning paradigm that mimics how living beings exploit prior knowledge to quickly learn new, unseen things; its goal is to use already-learned information to adapt rapidly to new, unlearned tasks. This aligns with the goal of artificial general intelligence, and research on meta-learning is also key to improving the robustness and generalization of models. With the development of deep learning in recent years, meta-learning has again become a hot topic, with many competing schools of thought. Starting from the origins of meta-learning, this survey systematically reviews its history, including its source and original definition, then gives the current general definition, and summarizes research results in several directions: metric-based meta-learning, meta-learning of strongly generalizing initialization parameters, meta-learning of gradient-based optimizers, meta-learning with external memory units, and meta-learning based on data augmentation. It summarizes their shared ideas and open problems, classifies the research approaches, and describes the different methods and their corresponding algorithms. Finally, it discusses the datasets and evaluation criteria commonly used in meta-learning research, and looks ahead to future trends in terms of the adaptivity, evolvability, interpretability, continuity, and scalability of meta-learning.
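The "strongly generalizing initialization" family of methods surveyed above can be illustrated with a minimal first-order MAML-style sketch on one-parameter linear regression tasks. All names, learning rates, and the task family here are illustrative assumptions, not taken from the survey.

```python
import random

def loss_grad(w, task):
    # Mean squared error of the model y = w * x on one task, and its gradient.
    xs, ys = task
    n = len(xs)
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / n
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
    return loss, grad

def maml_first_order(tasks, w=0.0, inner_lr=0.05, outer_lr=0.1, steps=200):
    """First-order MAML: take one inner gradient step per task, then update
    the shared initialization with the post-adaptation gradients."""
    for _ in range(steps):
        meta_grad = 0.0
        for task in tasks:
            _, g = loss_grad(w, task)
            w_adapted = w - inner_lr * g            # inner (task-specific) step
            _, g_post = loss_grad(w_adapted, task)  # gradient after adaptation
            meta_grad += g_post                     # first-order approximation
        w -= outer_lr * meta_grad / len(tasks)      # outer (meta) step
    return w

# Task family: noiseless linear regressions with slopes clustered around 2.0.
random.seed(0)
tasks = []
for _ in range(8):
    slope = 2.0 + random.uniform(-0.5, 0.5)
    xs = [random.uniform(-1, 1) for _ in range(10)]
    ys = [slope * x for x in xs]
    tasks.append((xs, ys))

# The learned initialization lands near the task family's typical slope,
# so a single inner step adapts it quickly to any new task in the family.
w_meta = maml_first_order(tasks)
```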

3.
Few-shot learning is an active area of visual recognition that aims to learn new visual concepts from a small amount of data. To address the few-shot problem, some meta-learning methods learn transferable knowledge from a large number of auxiliary tasks and apply it to the target task. To transfer knowledge more effectively, a memory-based transfer learning method is proposed. A weight-decomposition strategy splits part of the weights into frozen weights and learnable weights; during transfer, the frozen weights are fixed and only the learnable weights are updated, reducing the number of parameters the model has to learn. An additional memory module stores the experience of previous tasks; when learning a new task, this experience is used to initialize the model's parameter state for better transfer. Experimental results on the miniImageNet, tieredImageNet, and CUB datasets show that the method achieves competitive or better performance on few-shot classification tasks compared with other state-of-the-art methods.
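A minimal sketch of the weight-decomposition idea described above: each layer's weight is a frozen pre-trained component plus a small learnable component, so transfer only updates the latter. The class name, the scale-plus-offset parameterization, and all values are illustrative assumptions, not the paper's implementation.

```python
class DecomposedLinear:
    """Linear layer whose effective weight is w_frozen * scale + delta.
    During transfer, only `scale` and `delta` are trained; w_frozen stays fixed."""
    def __init__(self, w_frozen):
        self.w_frozen = list(w_frozen)      # frozen, from pre-training
        self.scale = 1.0                    # learnable scalar
        self.delta = [0.0] * len(w_frozen)  # learnable per-weight offset

    def weights(self):
        return [w * self.scale + d for w, d in zip(self.w_frozen, self.delta)]

    def forward(self, x):
        return sum(w * xi for w, xi in zip(self.weights(), x))

    def trainable_parameter_count(self):
        return 1 + len(self.delta)          # scale + delta; w_frozen excluded

layer = DecomposedLinear([0.5, -1.0, 2.0])
out_before = layer.forward([1.0, 1.0, 1.0])
layer.scale = 1.2   # a transfer-learning step would update only scale/delta
out_after = layer.forward([1.0, 1.0, 1.0])
```

The design choice is the same one the abstract motivates: the frozen part preserves knowledge from the auxiliary tasks, while the few learnable parameters make adaptation cheap with scarce target-task data.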

4.
Learning ability is a hallmark of the intelligence of higher animals. To explore how quadrupeds learn motor skills, this paper studies the gait-learning task for quadruped robots and reproduces the rhythmic gait-learning process of quadrupeds. In recent years, Proximal Policy Optimization (PPO), a representative deep reinforcement learning algorithm, has been widely used for quadruped robot gait learning: it performs well in experiments and requires few hyperparameters. However, in scenarios with high-dimensional inputs and outputs it tends to converge to local optima, which shows up as disordered learned gait rhythm signals and severe oscillation of the robot's center of mass. To solve this problem, and inspired by meta-learning's strength in capturing high-dimensional abstract representations of the learning process, this paper proposes a Meta Proximal Policy Optimization (MPPO) algorithm that combines the ideas of meta-learning and PPO, enabling a quadruped robot to evolve better gaits. Simulation results on the PyBullet platform show that the proposed algorithm enables a quadruped robot to learn the skill of walking; comparisons with Soft Actor-Critic (SAC) and PPO show that MPPO yields more regular gait rhythm signals and faster walking.

5.

The algorithm selection problem is defined as identifying the best-performing machine learning (ML) algorithm for a given combination of dataset, task, and evaluation measure. The human expertise required to evaluate the growing number of available ML algorithms has created a need to automate the algorithm selection task. Various approaches have emerged to handle this challenge, including meta-learning. Meta-learning is a popular approach that leverages accumulated experience for future learning and typically involves dataset characterization. Existing meta-learning methods often represent a dataset using predefined features and thus cannot be generalized across different ML tasks, or alternatively learn a dataset's representation in a supervised manner and therefore cannot deal with unsupervised tasks. In this study, we propose a novel learning-based, task-agnostic method for producing dataset representations. We then introduce TRIO, a meta-learning approach that utilizes the proposed dataset representations to accurately recommend top-performing algorithms for previously unseen datasets. TRIO first learns graphical representations for the datasets, using four tools to learn the latent interactions among dataset instances, and then applies a graph convolutional neural network to extract embedding representations from the graphs obtained. We extensively evaluate the effectiveness of our approach on 337 datasets and 195 ML algorithms, demonstrating that TRIO significantly outperforms state-of-the-art methods for algorithm selection on both supervised (classification and regression) and unsupervised (clustering) tasks.

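As a rough illustration of TRIO's first step described above (turning a dataset into a graph over its instances), here is a minimal k-nearest-neighbour graph construction. The distance metric, k, and function names are assumptions for illustration, not TRIO's actual tooling.

```python
def knn_graph(points, k=2):
    """Adjacency lists linking each instance to its k nearest neighbours
    by Euclidean distance — a simple instance-interaction graph."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    graph = {}
    for i, p in enumerate(points):
        others = [(dist(p, q), j) for j, q in enumerate(points) if j != i]
        others.sort()
        graph[i] = [j for _, j in others[:k]]
    return graph

# Four instances in 2-D: two tight pairs far apart.
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
g = knn_graph(data, k=1)
# Each point's single nearest neighbour is its pair partner; a graph neural
# network could then embed this structure into a dataset representation.
```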

6.
吕天根, 洪日昌, 何军, 胡社教. 《软件学报》 2023, 34(5): 2068-2082
Deep learning models have achieved remarkable results, but their training relies on large numbers of labeled samples, and they perform poorly when labeled samples are scarce. To address this, few-shot learning, which studies how to learn quickly from few samples, has been proposed in recent years; these methods mainly train models via meta-learning and have achieved good results. However, existing methods (1) usually recognize new classes based only on visual features, a single source of information; and (2) the use of meta-learning makes the model learn general, transferable knowledge from many similar few-shot tasks, which inevitably pushes the feature space toward over-generality, so that sample features are expressed insufficiently and inaccurately. To solve these problems, pre-training and multimodal learning are introduced into the few-shot learning process, and a multimodality-guided local-feature-selection few-shot learning method is proposed. The method first pre-trains the model on known classes with abundant samples, to improve its feature representation ability. In the meta-learning stage, the model is further optimized to improve its transferability and adaptability to the few-shot setting, and local features are selected based on both the visual and textual features of samples, to strengthen feature expressiveness and avoid a sharp drop in the model's representational ability during meta-learning. Finally, the selected sample features are used for few-shot learning. On the three benchmark da… [abstract truncated; benchmarks: MiniImageNet, CIFAR-FS, and FC-100]

7.
Meta-learning is one of the latest research directions in machine learning and is considered one of the most promising paths toward strong artificial intelligence. Meta-learning seeks ways for machines to learn as human beings do: recognizing things from only a few samples and quickly adapting to new tasks. The challenge lies in training an efficient model with limited labeled data, since such a model is easily over-fitted. In this paper, we address this obvious but important problem and propose a metric-based meta-learning model that combines attention mechanisms with an ensemble learning method. In our model, we first design a dual-path attention module that considers both channel attention and spatial attention, and stack these attention modules to construct a meta-learner for few-shot meta-learning. We then apply an ensemble method called snapshot ensemble to the attention-based meta-learner in order to generate more models within a single episode. Features extracted by these models are fed into the metric-based architecture to compute a prototype for each class. Our proposed method strengthens the feature-extraction ability of the meta-learner's backbone network and reduces over-fitting through ensemble learning and metric learning. Experimental results on several meta-learning datasets show that our approach is effective.
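The metric-based step at the end of the abstract — computing one prototype per class from extracted features and classifying queries by distance — can be sketched as a generic prototypical-network step. This is the standard technique, not the paper's exact architecture; the toy features and class names are illustrative.

```python
def class_prototypes(features, labels):
    """Mean feature vector per class: the 'prototype' used in
    metric-based few-shot classification."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(query, prototypes):
    """Assign the query to the class with the nearest prototype."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda y: sq_dist(query, prototypes[y]))

# Toy 2-D features for a 2-way, 2-shot support set.
support = [[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]]
labels = ["cat", "cat", "dog", "dog"]
protos = class_prototypes(support, labels)
pred = classify([0.9, 0.9], protos)
```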

8.
Many few-shot learning approaches have been designed under the meta-learning framework, which learns from a variety of learning tasks and generalizes to new ones. These meta-learning approaches achieve the expected performance when all samples are drawn from the same distribution (i.i.d. observations). In real-world applications, however, the few-shot learning paradigm often suffers from data shift: samples in different tasks, or even within the same task, may be drawn from different data distributions. Most existing few-shot learning approaches are not designed with data shift in mind and thus show degraded performance when the data distribution shifts. Addressing data shift in few-shot learning is non-trivial, owing to the limited number of labeled samples in each task. To address this problem, we propose a novel metric-based meta-learning framework that extracts task-specific representations and task-shared representations with the help of a knowledge graph. The data shift within and between tasks can then be combated by combining the task-shared and task-specific representations. The proposed model is evaluated on popular benchmarks and on two newly constructed challenging datasets, and the results demonstrate its remarkable performance.

9.
刘鑫, 景丽萍, 于剑. 《软件学报》 2024, 35(4): 1587-1600
With continual advances in big data, computing, and Internet technologies, AI technologies represented by machine learning and deep learning have achieved great success; in particular, the recent wave of large models has greatly accelerated the application of AI across fields. These successes, however, depend on massive training data and abundant computing resources, which sharply limits their use in domains where data or compute is scarce. How to learn from a small number of samples — few-shot learning — has therefore become an important research problem in the new round of AI-driven industrial transformation. The most common few-shot methods are based on meta-learning: they learn, over a series of similar training tasks, the meta-knowledge for solving this family of tasks, and use that meta-knowledge to learn new test tasks quickly. Although such methods perform well on few-shot classification, they implicitly assume that training and test tasks come from the same distribution, which means enough training tasks are needed for the learned meta-knowledge to generalize to ever-changing test tasks. In truly data-scarce application scenarios, however, even the number of training tasks is hard to guarantee. To this end, a robust few-shot classification method based on diverse and authentic task generation (DATG) is proposed. By applying Mixup to a small number of existing tasks, the method generates more training tasks to help the model learn. By constraining the diversity and authenticity of the generated tasks, the method can effectively improve few-shot classif… [abstract truncated]
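The task-generation step above relies on Mixup. Here is a generic sample-level Mixup sketch applied to whole tasks; the Beta-distributed interpolation coefficient is the standard Mixup convention, and all names and settings are illustrative rather than DATG's exact procedure.

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.4):
    """Interpolate two labelled samples; one-hot labels mix with the same
    coefficient, which is how Mixup creates new training points."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

def mixup_task(task_a, task_b, alpha=0.4):
    """Blend two few-shot tasks sample-by-sample into a new synthetic task."""
    return [mixup_pair(xa, ya, xb, yb, alpha)
            for (xa, ya), (xb, yb) in zip(task_a, task_b)]

random.seed(1)
# Two tiny tasks, each a list of (feature, one-hot label) pairs.
task_a = [([0.0, 0.0], [1.0, 0.0]), ([1.0, 1.0], [0.0, 1.0])]
task_b = [([2.0, 2.0], [1.0, 0.0]), ([3.0, 3.0], [0.0, 1.0])]
new_task = mixup_task(task_a, task_b)
# Each synthetic sample lies between its two parents, so the generated task
# stays close to the real task distribution while adding diversity.
```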

10.
Although over a thousand scientific papers address load forecasting every year, only a few are dedicated to finding a general framework for load forecasting that improves performance without depending on the unique characteristics of a particular task, such as geographical location. Meta-learning, a powerful approach for algorithm selection, has so far been demonstrated only on univariate time-series forecasting, even though multivariate time-series forecasting is known to perform better for load forecasting. In this paper we propose a meta-learning system for multivariate time-series forecasting as a general framework for load forecasting model selection. We show that a meta-learning system built on 65 load forecasting tasks returns lower forecasting error than 10 well-known forecasting algorithms on 4 load forecasting tasks in a recurrent real-life simulation. We introduce the new meta-features of fickleness, traversity, granularity, and highest ACF. The meta-learning framework is parallelized, component-based, and easily extendable.

11.
韩亚茹, 闫连山, 姚涛. 《计算机应用》 2022, 42(7): 2015-2021
With the development of mobile Internet technology, image data keep growing in scale, and large-scale image retrieval has become a pressing problem. Hashing algorithms have attracted wide attention from researchers thanks to their fast retrieval speed and low storage cost. To reach good retrieval performance, deep-learning-based hashing needs a certain amount of high-quality training data, yet existing hashing methods usually ignore class imbalance in the dataset, which can degrade retrieval performance. To address this, a deep hashing retrieval algorithm based on a meta-learning network is proposed. The algorithm learns a weighting function directly and automatically from the data: a multilayer perceptron (MLP) with a single hidden layer, whose parameters are optimized jointly with the model's parameters during training, under the guidance of a small amount of unbiased meta-data. The update equation of the meta-learning network can be interpreted as raising the weights of samples that agree with the meta-learning data and lowering the weights of those that do not. The algorithm effectively reduces the impact of imbalanced data on image retrieval and improves the robustness of the model. Extensive experiments on widely used benchmarks such as CIFAR-10 show that the proposed algorithm achieves the best mean average precision (mAP) under large imbalance ratios; at an imbalance ratio of 200, its mAP exceeds that of the central similarity quantization algorithm, asymmetric deep supervised hashing (ADSH), and fast scalable supervised hashing (FSSH) by 0.54, 30.93, and 48.43 percentage points, respectively.
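A toy sketch of the weighting function described above: a one-hidden-layer MLP that maps a sample's loss value to a weight in (0, 1). In the paper's scheme these parameters would be meta-optimized on the small unbiased meta set; here they are fixed illustrative values chosen so that higher-loss samples get lower weight.

```python
import math

class WeightNet:
    """One-hidden-layer MLP mapping a sample's loss value to a weight in
    (0, 1). In the meta-learning scheme, its parameters would be updated
    against a small unbiased meta set; they are fixed here for clarity."""
    def __init__(self, hidden=4):
        self.w1 = [0.5] * hidden   # input -> hidden (illustrative values)
        self.b1 = [0.0] * hidden
        self.w2 = [-0.5] * hidden  # hidden -> output
        self.b2 = 1.0

    def __call__(self, loss_value):
        h = [math.tanh(w * loss_value + b) for w, b in zip(self.w1, self.b1)]
        z = sum(w * hi for w, hi in zip(self.w2, h)) + self.b2
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> weight in (0, 1)

weight_net = WeightNet()
losses = [0.1, 2.0, 8.0]  # e.g. hard or imbalance-affected samples lose more
weights = [weight_net(l) for l in losses]
weighted_loss = sum(w * l for w, l in zip(weights, losses)) / len(losses)
```

With these illustrative parameters the weight decreases as the loss grows, which is one plausible shape such a learned function can take; the point of the meta-learning formulation is that the shape is learned from data rather than hand-designed.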

12.
Objective: Existing mainstream few-shot learning methods based on meta-learning assume that training and test tasks follow the same or similar distributions; on cross-domain tasks with large distribution gaps, they face weak generalization and poor classification accuracy. Meanwhile, transfer-learning-based few-shot methods do not account for the mismatch between the class sets of the training and test stages and fail to reserve enough feature-embedding space during training. To improve cross-domain image classification under the constraint of limited labeled samples, a compressed meta transfer learning (CMTL) method is proposed. Method: On the meta-learning side, a data augmentation strategy is applied to the support set in the target domain to build new auxiliary tasks that fine-tune the meta-trained parameters, making the classification model better suited to target tasks with large domain gaps. On the transfer-learning side, the classification model is trained with a self-compression loss that squeezes the feature-embedding space occupied by the base-class data of the source domain, so that during fine-tuning, novel classes whose distribution differs strongly from the source domain find more suitable feature representations. Finally, the classification predictions of the two strategies are fused as the final result. Results: Trained on mini-ImageNet as the source domain and evaluated on EuroSAT (European Satellite), ISIC (International Skin Imaging Collaboration), CropDiseas (Cr… [abstract truncated]

13.
Multi-task learning (MTL) improves model performance by transferring and exploiting common knowledge among tasks. Existing MTL works mainly focus on the scenario where the label sets of the multiple tasks are the same, so that they can be used for learning across the tasks. However, the real world presents more general scenarios in which each task has only a small number of training samples and the label sets only partially overlap, or do not overlap at all. Learning such tasks is more challenging because less correlation information is available among them. To this end, we propose a framework that learns these tasks by jointly leveraging both the abundant information from a learned auxiliary big task, whose sufficiently many classes cover those of all the tasks, and the information shared among the partially-overlapping tasks. In our implementation, which uses the same neural network architecture as the learned auxiliary task to learn the individual tasks, the key idea is to use the available label information to adaptively prune the hidden-layer neurons of the auxiliary network, constructing a corresponding network for each task while performing joint learning across the individual tasks. Extensive experimental results demonstrate that our proposed method is significantly competitive with state-of-the-art methods.

14.
In multi-agent reinforcement learning (MARL), the behavior of each agent can influence the learning of the others, and the agents must search an exponentially enlarged joint-action space. Exploration is therefore challenging for multi-agent teams: agents may settle on suboptimal policies and fail to solve complex tasks. To improve exploration efficiency as well as performance on MARL tasks, in this paper we propose a new approach that transfers knowledge across tasks. Unlike traditional MARL algorithms, we first assume that the reward functions can be computed as linear combinations of a shared feature function and a set of task-specific weights. We then define a set of basic MARL tasks in the source domain and pre-train them as basic knowledge for further use. Finally, once the weights for the target tasks are available, it becomes easier to obtain a well-performing policy for exploring the target domain. The agents' learning process on target tasks is thus sped up by making full use of the previously learned basic knowledge. We evaluate the proposed algorithm on two challenging MARL tasks, cooperative box-pushing and non-monotonic predator-prey, and the experimental results demonstrate improved performance compared with state-of-the-art MARL algorithms.
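A minimal sketch of the reward assumption above — reward as a dot product of a shared feature function and task-specific weights, in the spirit of successor-features transfer. The feature map, the two weight vectors, and all names are illustrative assumptions, not the paper's definitions.

```python
def reward(phi, w):
    """Task reward as a linear combination of shared features."""
    return sum(f * wi for f, wi in zip(phi, w))

def features(state, action):
    # Shared feature function of a (state, action) pair, e.g.
    # (progress made, "stay-put" indicator, constant energy cost) -- illustrative.
    return [float(state + action), float(action == 0), 1.0]

# Two tasks differ only in their weights over the same shared features,
# so knowledge tied to the features transfers between them.
w_push = [1.0, 0.0, -0.1]   # rewards progress, mild energy penalty
w_safe = [0.2, 2.0, -0.1]   # strongly rewards the cautious no-op action

r_push = reward(features(1, 1), w_push)
r_safe = reward(features(1, 0), w_safe)
```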

15.
State-of-the-art statistical NLP systems for a variety of tasks learn from labeled training data that is often domain specific. However, there may be multiple domains or sources of interest on which the system must perform. For example, a spam filtering system must give high quality predictions for many users, each of whom receives emails from different sources and may make slightly different decisions about what is or is not spam. Rather than learning separate models for each domain, we explore systems that learn across multiple domains. We develop a new multi-domain online learning framework based on parameter combination from multiple classifiers. Our algorithms draw from multi-task learning and domain adaptation to adapt multiple source domain classifiers to a new target domain, learn across multiple similar domains, and learn across a large number of disparate domains. We evaluate our algorithms on two popular NLP domain adaptation tasks: sentiment classification and spam filtering.

16.
Recently, addressing the few-shot learning problem with the meta-learning framework has achieved great success. Regularization is a powerful and widely used technique for improving machine learning algorithms, yet little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as a regularization to fill this gap. The motivation stems from the observation that the limited data in few-shot learning is only a small sample from the whole data distribution, and can therefore yield biased representations of that distribution depending on which part is sampled. Thus, the models trained on a few training data (the support set) and test data (the query set) may be misaligned in model space, so that a model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method improves the performance of different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
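A toy sketch of the alignment idea above: treat the support-set model and the query-set model (abstracted here as plain parameter vectors) as a positive pair, and pull them together while pushing away models from other tasks, InfoNCE-style. The vector abstraction, the temperature, and all values are illustrative, not the paper's exact loss.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def contrastive_loss(support_vec, query_vec, negatives, tau=0.5):
    """InfoNCE-style loss: high when the support/query pair is no more
    similar than the negatives, low when the pair is well aligned."""
    pos = math.exp(cosine(support_vec, query_vec) / tau)
    neg = sum(math.exp(cosine(support_vec, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# An aligned support/query pair incurs a lower loss than a misaligned one.
aligned = contrastive_loss([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
misaligned = contrastive_loss([1.0, 0.0], [0.1, 0.9], [[0.0, 1.0]])
```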

17.
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multi-agent reinforcement learning (RL) framework, and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry them out, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. Those levels of the hierarchy which include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions. We study the empirical performance of the Cooperative HRL algorithm using two testbeds: a simulated two-robot trash collection task, and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as several well-known industrial AGV heuristics. We also address the issue of rational communication behavior among autonomous agents in this paper. The goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level. 
Before an agent makes a decision at a cooperative subtask, it decides if it is worthwhile to perform a communication action. A communication action has a certain cost and provides the agent with the actions selected by the other agents at a cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm as well as the relation between the communication cost and the learned communication policy using a multi-agent taxi problem.

18.
Learning and recognizing new classes from a few training samples is a challenging problem for deep neural networks. To address it, this paper comprehensively surveys existing few-shot learning methods based on deep neural networks, covering the models used, the datasets, and the evaluation results. Specifically, the methods are divided into four categories: data augmentation methods, transfer learning methods, metric learning methods, and meta-learning methods. Each category is further divided into subcategories, and a series of comparisons is made within and across categories to show the strengths, weaknesses, and characteristics of each approach. Finally, the limitations of existing methods are highlighted and future research directions for the few-shot learning field are pointed out.

19.
Soares, Carlos; Brazdil, Pavel B.; Kuba, Petr. 《Machine Learning》 2004, 54(3): 195-209
The Support Vector Machine algorithm is sensitive to the choice of parameter settings; if these are not set correctly, the algorithm may perform poorly. Suggesting a good setting is thus an important problem. We propose a meta-learning methodology for this purpose, exploiting information about the past performance of different settings. The methodology is applied to set the width of the Gaussian kernel. We carry out an extensive empirical evaluation, including comparisons with other methods: a fixed default ranking, selection based on cross-validation, and a heuristic method commonly used to set the width of the SVM kernel. We show that our methodology can select settings with low error while providing significant savings in time. Further work should be carried out to see how the methodology could be adapted to other parameter-setting tasks. Supplementary material to this paper is available in electronic form at http://dx.doi.org/10.1023/B:MACH.0000015879.28004.9b

20.
In this paper, we present a novel meta-feature generation method in the context of meta-learning, based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests), which performs very competitively compared with several state-of-the-art meta-learners. Our experimental results, based on a large collection of datasets, show that the proposed techniques can significantly improve the overall performance of meta-learning for algorithm ranking. A key point in our approach is that each performance figure of any base learner on any specific dataset is generated by optimizing the base learner's parameters separately for each dataset.
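The one-against-one meta-features described above can be illustrated as follows: for each pair of base learners, record which one performed better on a given dataset, yielding binary pairwise features for the meta-learner. The learner names and error scores are illustrative, and the binary encoding is a simplification of the paper's rule-based comparisons.

```python
from itertools import combinations

def pairwise_meta_features(errors):
    """For each unordered pair of base learners, emit 1 if the first
    (alphabetically) beats the second on this dataset, else 0."""
    feats = {}
    for a, b in combinations(sorted(errors), 2):
        feats[f"{a}<{b}"] = 1 if errors[a] < errors[b] else 0
    return feats

# Per-dataset error rates of three base learners (illustrative numbers).
errors = {"knn": 0.18, "svm": 0.12, "tree": 0.25}
meta = pairwise_meta_features(errors)
# A meta-learner such as ART Forests would consume these pairwise features,
# one row per dataset, to predict algorithm rankings on unseen datasets.
```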


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号