Similar Documents
20 similar documents found.
1.
By reviewing the development history and the current main lines of work on apprenticeship learning via reward-function learning, this paper surveys apprenticeship learning methods based on learned reward functions. The discussion covers both linear and nonlinear reward functions; in the linear case, two families of methods are compared: apprenticeship learning based on inverse reinforcement learning (IRL) and on maximum margin planning (MMP). The former admits fast approximate algorithms but makes strong assumptions about the optimality of the demonstrations; the latter is formally easier to extend but computationally expensive. Finally, open problems and future research directions are identified, such as applying apprenticeship learning in POMDP settings, using approximate algorithms such as PBVI, or extracting learning features via dimensionality-reduction methods such as PCA, thereby reducing the heavy computational cost caused by high dimensionality.
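As a concrete illustration of the linear-reward case, here is a minimal Python sketch of one projection-style feature-matching step of the kind used in IRL-based apprenticeship learning: given the expert's feature expectations and those of the policies found so far, it recovers reward weights w and a margin t. All names are illustrative, not taken from the surveyed papers.

```python
import numpy as np

def projection_irl_step(mu_expert, mu_new, mu_bar_prev=None):
    """One projection-method update for apprenticeship learning with a
    linear reward R(s) = w . phi(s); mu_* are feature-expectation vectors."""
    if mu_bar_prev is None:
        mu_bar = mu_new                      # first iteration
    else:
        # project mu_expert onto the segment [mu_bar_prev, mu_new]
        d = mu_new - mu_bar_prev
        lam = np.clip(d @ (mu_expert - mu_bar_prev) / (d @ d), 0.0, 1.0)
        mu_bar = mu_bar_prev + lam * d
    w = mu_expert - mu_bar                   # new reward weights
    t = np.linalg.norm(w)                    # margin; stop when small
    return w, t, mu_bar
```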

2.
In track-detection applications, the target geographic positions returned by a detector within a single frame typically form an indistinguishable multi-target scene, so the individual tracks must be reconstructed, and the targets distinguished, from position information alone. To address this problem, a deep-reinforcement-learning method for restoring target tracks is proposed. Based on the physical characteristics of target tracks, a mathematical model is extracted, and a trajectory curvature circle (TOC) reward function combining track direction and curvature is proposed, enabling deep reinforcement learning to effectively restore multi-target tracks and distinguish the individual targets. The paper first describes the multi-target track restoration problem and casts it as a model that deep reinforcement learning can handle; it then evaluates the TOC reward function experimentally, and finally gives a mathematical derivation and physical interpretation of the reward. Experimental results show that a deep reinforcement network driven by the TOC reward function can effectively restore target tracks, matching real target tracks in both heading and speed.
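The abstract does not give the TOC reward in closed form; the following is a hedged sketch of one plausible reading, scoring three consecutive 2-D track points by how closely their curvature circle (the circumscribed circle) matches an expected turn radius. `toc_reward` and `r_expected` are hypothetical names, not the paper's.

```python
import numpy as np

def toc_reward(p1, p2, p3, r_expected):
    """Hypothetical curvature-circle reward: fit the circle through three
    consecutive track points and reward radii close to the expected turn
    radius; collinear points count as a straight leg."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    # twice the triangle area via the 2-D cross product
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    if area2 < 1e-9:
        return 1.0                      # straight segment: full reward
    radius = a * b * c / (2.0 * area2)  # circumradius R = abc / (4 * area)
    return float(np.exp(-abs(radius - r_expected) / r_expected))
```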

3.
As a genetics-based machine learning technique, the zeroth-level classifier system (ZCS) has shown practical value in multi-step learning problems. The standard ZCS, however, uses discounted-reward reinforcement learning, which limits its applicability to wider problem domains. Building on the existing ZCS framework, this paper proposes a classifier system that uses average-reward reinforcement learning (the R-learning algorithm), replacing ZCS's discounted-reward method with R-learning. This allows ZCS, on the one hand, to be applied to domains where the average reward must be optimized and, on the other, to solve larger multi-step learning problems that require long action chains. Experiments show that on multi-step learning problems the system finds satisfactory solutions and exhibits better properties in maintaining long action chains and in overcoming over-generalization.
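For reference, the R-learning update that the proposed system substitutes for ZCS's discounted backup looks roughly as follows. This is a minimal tabular sketch of Schwartz-style average-reward R-learning, not the classifier-system implementation itself; Q is assumed to be a dict of dicts, Q[state][action].

```python
def r_learning_update(Q, rho, s, a, r, s_next, alpha=0.1, beta=0.01):
    """One tabular R-learning step: optimize average reward rho
    instead of a discounted return."""
    best_next = max(Q[s_next].values())
    greedy = Q[s][a] == max(Q[s].values())   # was the chosen action greedy?
    Q[s][a] += alpha * (r - rho + best_next - Q[s][a])
    if greedy:
        # update the average-reward estimate only on greedy steps
        rho += beta * (r + best_next - max(Q[s].values()) - rho)
    return rho
```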

4.
Model-based average reward reinforcement learning
Artificial Intelligence, 1998, 100(1-2): 177-224
Reinforcement Learning (RL) is the study of programs that improve their performance by receiving rewards and punishments from the environment. Most RL methods optimize the discounted total reward received by an agent, while, in many domains, the natural criterion is to optimize the average reward per time step. In this paper, we introduce a model-based average-reward reinforcement learning method called H-learning and show that it converges more quickly and robustly than its discounted counterpart in the domain of scheduling a simulated Automatic Guided Vehicle (AGV). We also introduce a version of H-learning that automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this “Auto-exploratory H-learning” performs better than the previously studied exploration strategies. To scale H-learning to larger state spaces, we extend it to learn action models and reward functions in the form of dynamic Bayesian networks, and approximate its value function using local linear regression. We show that both of these extensions are effective in significantly reducing the space requirement of H-learning and making it converge faster in some AGV scheduling tasks.
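A minimal sketch of the H-learning idea described above, assuming tabular states, count-based transition estimates, and a gain term rho updated along greedy transitions; exploration and the auto-exploratory variant are omitted, and the nested dictionaries are assumed pre-initialized (e.g., as defaultdicts).

```python
def h_learning_update(h, counts, rewards, rho, s, a, s_next, r, alpha=0.05):
    """Model-based average-reward backup (after Tadepalli & Ok):
    h(s) = max_a [ r(s,a) - rho + E[h(s')] ].
    h: defaultdict(float); counts[s][a]: dict s' -> visit count;
    rewards[s][a]: running mean immediate reward."""
    counts[s][a][s_next] = counts[s][a].get(s_next, 0) + 1
    n = sum(counts[s][a].values())
    rewards[s][a] += (r - rewards[s][a]) / n      # running mean reward

    def q(b):                                     # learned-model Q-value
        tot = sum(counts[s][b].values())
        return rewards[s][b] - rho + sum(
            c / tot * h[t] for t, c in counts[s][b].items())

    h[s] = max(q(b) for b in counts[s] if counts[s][b])
    rho += alpha * (r + h[s_next] - h[s] - rho)   # gain update (greedy steps)
    return rho
```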

5.
In classification problems, active learning is often adopted to alleviate laborious human labeling efforts by finding the most informative samples to query for labels. One of the most popular query strategies is to select the samples about which the current classifier is most uncertain. The performance of such an active learning process relies heavily on the classifier learned before each query, so stepwise classifier model/parameter selection is critical; it is, however, rarely studied in the literature. In this paper, we propose a novel active learning support vector machine algorithm with adaptive model selection. In this algorithm, before each new query, we trace the full solution path of the base classifier and then perform efficient model selection using the unlabeled samples. This strategy significantly improves active learning efficiency at comparatively inexpensive computational cost. Empirical results on both artificial and real-world benchmark data sets show the encouraging gains brought by the proposed algorithm in terms of both classification accuracy and computational cost.
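A hedged sketch of the query loop, assuming scikit-learn's SVC: a finite C grid stands in for the paper's full solution path, and mean unlabeled-margin confidence stands in for its model-selection rule. All names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def query_next(X_lab, y_lab, X_unlab, C_grid=(0.01, 0.1, 1, 10, 100)):
    """Uncertainty sampling with per-round model selection: pick the C
    whose classifier is most confident on the unlabeled pool on average,
    then query the single least confident sample."""
    best_clf, best_score = None, -np.inf
    for C in C_grid:
        clf = SVC(kernel="linear", C=C).fit(X_lab, y_lab)
        score = np.abs(clf.decision_function(X_unlab)).mean()
        if score > best_score:
            best_clf, best_score = clf, score
    margins = np.abs(best_clf.decision_function(X_unlab))
    return int(np.argmin(margins)), best_clf    # index of sample to label
```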

6.
Adaptive fuzzy command acquisition with reinforcement learning
Proposes a four-layered adaptive fuzzy command acquisition network (AFCAN) for adaptively acquiring fuzzy commands via interactions with the user or environment. It can extract the intended information from a sentence (command) given in natural language with fuzzy predicates. The intended information includes a meaningful semantic action and the fuzzy linguistic information of that action. The proposed AFCAN has three important features. First, no restrictions whatever are placed on the fuzzy command input used to specify the desired information, and the network requires no acoustic, prosodic, grammatical, or syntactic structure. Second, the linguistic information of an action is learned adaptively and is represented by fuzzy numbers based on α-level sets. Third, the network can learn during the course of performing the task, both off-line and online. For off-line learning, the mutual-information (MI) supervised learning scheme and the fuzzy backpropagation (FBP) learning scheme are employed when training data are available in advance; the former is used to learn meaningful semantic actions and the latter to learn linguistic information. The AFCAN can also learn online interactively while in use for fuzzy command acquisition: the MI-reinforcement learning scheme and the fuzzy reinforcement learning scheme are developed for the online learning of meaningful actions and linguistic information, respectively. An experimental system is constructed to illustrate the performance and applicability of the proposed AFCAN.

7.
This paper proposes a novel artificial neural network called the fast learning network (FLN). In the FLN, input weights and hidden-layer biases are randomly generated, while the weights connecting the output layer to the hidden layer and those connecting the output nodes directly to the input nodes are analytically determined by least squares. To test the validity of the FLN, it is applied to nine regression applications; experimental results show that, compared with support vector machines, back propagation, and extreme learning machines, the FLN achieves, with much more compact networks, very good generalization performance and stability at a very fast training speed and with quick reaction of the trained network to new observations. In addition, the FLN is applied to model the thermal efficiency and NOx emissions of a 330 MW coal-fired boiler, achieving very good prediction precision and generalization ability at a high learning speed.
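The training procedure reduces to a few lines of linear algebra. Below is a hedged numpy sketch of an FLN-style fit: random hidden parameters, then a single least-squares solve for all output weights over the concatenated direct (input) and hidden features. The function names and the tanh activation are illustrative choices.

```python
import numpy as np

def train_fln(X, Y, n_hidden=30, rng=np.random.default_rng(0)):
    """Random hidden weights/biases, one least-squares solve for the
    output weights on the [input, hidden-activation] features."""
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                 # hidden-layer output
    G = np.hstack([X, H])                     # direct + hidden links
    W_out, *_ = np.linalg.lstsq(G, Y, rcond=None)
    return W_in, b, W_out

def predict_fln(X, W_in, b, W_out):
    return np.hstack([X, np.tanh(X @ W_in + b)]) @ W_out
```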

8.
In relevance feedback algorithms, selective sampling is often used to reduce labeling cost and explore the unlabeled data. In this paper, we propose an active learning algorithm, Co-SVM, to improve the performance of selective sampling in image retrieval. In the Co-SVM algorithm, color and texture are naturally treated as sufficient and uncorrelated views of an image. SVM classifiers are learned in the color and texture feature subspaces, respectively, and the two classifiers are then used to classify the unlabeled data. The unlabeled samples that are classified differently by the two classifiers are chosen for labeling. Experimental results show that the proposed algorithm is beneficial to image retrieval.
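A minimal sketch of the two-view selection rule, assuming precomputed color and texture feature matrices and scikit-learn's SVC; the contention set (samples the two views classify differently) is what gets sent for labeling.

```python
import numpy as np
from sklearn.svm import SVC

def co_svm_select(Xc_lab, Xt_lab, y, Xc_unlab, Xt_unlab):
    """Train one SVM per view (color, texture) and return the indices
    of unlabeled images on which the two views disagree."""
    svm_color = SVC(kernel="rbf").fit(Xc_lab, y)
    svm_texture = SVC(kernel="rbf").fit(Xt_lab, y)
    pred_c = svm_color.predict(Xc_unlab)
    pred_t = svm_texture.predict(Xt_unlab)
    return np.where(pred_c != pred_t)[0]
```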

9.
In active learning, the learner must measure the importance of the unlabeled samples in a large dataset and iteratively select the best one. This sample-selection process can be treated as a decision-making problem, which evaluates, ranks, and makes choices from a finite set of alternatives. Decision-making problems usually apply multiple criteria, since this performs better than using a single criterion. Motivated by these facts, an active learning model based on multi-criteria decision making (MCDM) is proposed in this paper. After pairwise comparison of the unlabeled samples, a preference preorder is determined for each criterion. The dominated index and the dominating index are then defined and calculated to evaluate the informativeness of the unlabeled samples, providing an effective metric for sample selection. On the other hand, in the multiple-instance learning (MIL) setting, instances/samples are grouped into bags: a bag is negative only if all of its instances are negative, and positive otherwise. Multiple-instance active learning (MIAL) aims to select and label the most informative bags from numerous unlabeled ones, and to learn a MIL classifier that accurately predicts unseen bags while requesting as few labels as possible. It adopts a MIL algorithm as the base classifier and follows an active learning procedure. To balance learning efficiency and generalization capability, the proposed active learning model is instantiated with a specific algorithm in the MIL setting. Experimental results demonstrate the effectiveness of the proposed method.
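The dominated/dominating indices can be illustrated with a Pareto-style preorder over per-criterion informativeness scores. This is a simplified reading of the paper's construction, with all names hypothetical.

```python
import numpy as np

def dominance_indices(scores):
    """scores: (n_samples, n_criteria), larger = more informative.
    Sample i dominates j if i >= j on every criterion and > on at
    least one; count, for each sample, how many it dominates and
    how many dominate it."""
    n = scores.shape[0]
    dominating = np.zeros(n, int)
    dominated = np.zeros(n, int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(scores[i] >= scores[j]) and np.any(scores[i] > scores[j]):
                dominating[i] += 1
                dominated[j] += 1
    return dominating, dominated  # query: high dominating, low dominated
```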

10.
In multi-agent reinforcement learning systems, it is important to share a reward among all agents. We focus on the Rationality Theorem of Profit Sharing 5) and analyze how to share a reward among all profit-sharing agents. When an agent gets a direct reward R (R>0), an indirect reward μR (μ≥0) is given to the other agents. We have derived the necessary and sufficient condition on μ that preserves rationality, where M and L are the maximum numbers of conflicting rules and of rational rules for the same sensory input, W and W_o are the maximum episode lengths of a direct-reward and an indirect-reward agent, and n is the number of agents. The theorem is derived by avoiding the least desirable situation, in which the expected reward per action is zero. Using this theorem, we can therefore exploit several efficient aspects of reward sharing. Through numerical examples, we confirm the effectiveness of the theorem. Kazuteru Miyazaki, Dr. Eng.: He is an associate professor in the Faculty of Assessment and Research for Degrees at the National Institution for Academic Degrees. He obtained his B.Eng. from Meiji University in 1991 and his Dr. Eng. from the Tokyo Institute of Technology in 1996. His research interests are in machine learning and robotics. He has published over 30 research papers and received several awards. He is a member of the Japan Society of Mechanical Engineers (JSME), the Japanese Society for Artificial Intelligence (JSAI), and the Society of Instrument and Control Engineers of Japan (SICE). Shigenobu Kobayashi, Dr. Eng.: He received his Dr. Eng. from the Tokyo Institute of Technology in 1974. He is a professor in the Dept. of Computational Intelligence and Systems Science, Tokyo Institute of Technology. His research interests include artificial intelligence, emergent systems, evolutionary computation, and reinforcement learning.

11.
12.
In many classification tasks there are large numbers of unlabeled samples, and obtaining labels is time-consuming and expensive. Active learning algorithms identify the key samples that most need labeling, so that a high-accuracy classifier can be built while minimizing labeling cost. This paper proposes a PageRank-based active learning algorithm (PAL) that makes full use of data-distribution information for effective sample selection. PageRank is used to compute, in turn, the neighborhoods, the score matrix, and the ranking vector from the pairwise similarities among samples; representative samples are selected and organized into a binary tree according to their similarity relations, and this tree is used to cluster, label, and predict the representative samples; the representative samples then serve as the training set to classify the remaining samples. Experiments on 8 public data sets, against 5 traditional classification algorithms and 3 popular active learning algorithms, show that PAL achieves better classification results.
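A hedged sketch of the PageRank step PAL builds on: power iteration over a row-normalized sample-similarity matrix, yielding a ranking vector for selecting representative samples. The binary-tree clustering stage is omitted, and all names are illustrative.

```python
import numpy as np

def pagerank_scores(S, d=0.85, iters=100):
    """S: symmetric (n, n) nonnegative similarity matrix.
    Returns a ranking vector; higher = more representative sample."""
    S = np.array(S, dtype=float, copy=True)
    np.fill_diagonal(S, 0.0)
    row_sums = np.maximum(S.sum(axis=1, keepdims=True), 1e-12)
    P = S / row_sums                          # row-stochastic transitions
    n = S.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)
    return r

# query order: most representative unlabeled samples first, e.g.
# order = np.argsort(-pagerank_scores(similarity_matrix))
```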

13.
14.
Transmission-line tree-obstacle clearing tasks place high demands on the stability, translational motion, and disturbance rejection of an aerial robot platform. To overcome the drawback of conventional planar multirotor UAVs, which must tilt their attitude to translate, this paper, inspired by fully actuated multirotor design, proposes and designs a non-planar working multirotor aerial robot that can translate forward and backward without attitude coupling. Kinematic and dynamic models of its attitude are first established, and position and attitude tracking control laws are then designed using active disturbance rejection control (ADRC). Multiple simulations and prototype experiments show that the designed non-planar multirotor aerial robot exhibits good stability, translational motion, and disturbance rejection under the contact-force disturbances encountered during operation.
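ADRC's core component is the extended state observer. The sketch below shows a textbook bandwidth-parameterized linear ESO and the matching disturbance-compensating control law for one second-order axis; it illustrates the technique the paper uses, not the paper's actual controller, and all gains are illustrative.

```python
def leso_step(z, u, y, dt, b0, w_o):
    """One Euler step of a linear extended state observer:
    z = [position est., velocity est., total-disturbance est.],
    observer bandwidth w_o, input gain b0."""
    e = y - z[0]
    z1 = z[0] + dt * (z[1] + 3 * w_o * e)
    z2 = z[1] + dt * (z[2] + 3 * w_o**2 * e + b0 * u)
    z3 = z[2] + dt * (w_o**3 * e)
    return [z1, z2, z3]

def adrc_control(z, ref, kp, kd, b0):
    # PD on the estimated state plus disturbance compensation
    u0 = kp * (ref - z[0]) - kd * z[1]
    return (u0 - z[2]) / b0
```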

15.

Learning from patient records may aid medical knowledge acquisition and decision making. Decision tree induction, based on ID3, is a well-known approach to learning from examples. In this article we introduce a new data representation formalism that extends the original ID3 algorithm. We propose a new algorithm, ID+, which adopts this representation scheme. ID+ provides the capability of modeling dependencies between attributes or attribute values and of handling multiple values per attribute. We demonstrate our work via a series of medical knowledge acquisition experiments based on a real-world application: acute abdominal pain in children. In the context of these experiments, we compare ID+ with C4.5, NewId, and a naive Bayesian classifier. Results demonstrate that the rules acquired via ID+ improve the clinical comprehensibility of the decision trees and complement the explanations supported by the naive Bayesian classifier, while the decrease in classification accuracy is marginal.
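ID+ builds on ID3's information-gain measure; as a reference point, here is a minimal implementation of that baseline measure. The ID+ extensions for attribute dependencies and multi-valued attributes are not shown, and the data layout (a list of dicts) is an assumption.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr, label_key="class"):
    """ID3-style gain: split `examples` on `attr` and measure the
    entropy reduction of the class label."""
    base = entropy([e[label_key] for e in examples])
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e[label_key] for e in examples if e[attr] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return base - remainder
```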

16.
Learning from rewards generated by a human trainer observing an agent in action has proven to be a powerful method for teaching autonomous agents to perform challenging tasks, especially for non-technical users. Since the efficacy of this approach depends critically on the reward the trainer provides, we consider how the interaction between the trainer and the agent should be designed to increase the efficiency of the training process. This article investigates the influence of the agent's socio-competitive feedback on the human trainer's training behavior and on the agent's learning. The results of our user study with 85 participants suggest that the agent's passive socio-competitive feedback—showing the performance and scores of agents trained by other trainers in a leaderboard—substantially increases the participants' engagement in the game task and improves the agents' performance, even though the participants do not play the game directly but instead train the agent to do so. Moreover, making this feedback active—sending the trainer her agent's performance relative to others—induces still more participants to train agents longer and further improves the agents' learning. Our further analysis shows that agents whose trainers received both the passive and the active social feedback obtained higher performance under a score mechanism that can be optimized from the trainer's perspective, and that the additional active social feedback kept participants training their agents toward policies that score higher under such a mechanism.

17.
In classification problems, many different active learning techniques are adopted to find the most informative samples for labeling, in order to save human labor. Among them, active learning with support vector machines (SVMs) is one of the most representative approaches, in which the model parameter is usually fixed at a default value during the whole learning process. Note, however, that the model parameter is closely related to the training set, so a dynamic parameter is desirable for satisfactory learning performance. To address this issue, we propose a novel algorithm, called active learning SVM with regularization path, which can fit the entire solution path of the SVM for every value of the model parameter. In this algorithm, we first trace the entire solution path of the current classifier to find a series of candidate model parameters and then use the unlabeled samples to select the best one. Besides, in the initial phase of training, we construct the training sample set using an improved K-medoids clustering algorithm. Experimental results on real-world data sets show the effectiveness of the proposed algorithm for image classification problems.
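A hedged sketch of the K-medoids-style initialization (a plain PAM-like alternation, not the paper's improved variant): because medoids are actual data points, they can be queried for labels directly to form the initial training set.

```python
import numpy as np

def k_medoids_init(X, k, iters=20, rng=np.random.default_rng(0)):
    """Pick k medoid indices by alternating assignment and
    within-cluster medoid updates on the pairwise distance matrix."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise dists
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        assign = np.argmin(D[:, medoids], axis=1)         # nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.where(assign == j)[0]
            if len(members):
                costs = D[np.ix_(members, members)].sum(axis=1)
                new[j] = members[np.argmin(costs)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids          # indices of the initial samples to label
```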

18.
An Intelligent Remote Controller with Learning Capability
安颖, 刘丽娜. 《微计算机信息》, 2005, 21(3): 23, 63
Infrared remote controllers come in many varieties, each controlling a single device. The learning remote controller described here is built around a single-chip microcontroller: it can memorize the codes of other remote controllers and reproduce their transmissions, so that one remote can control several appliances or substitute for a particular remote. It is an intelligent control tool.

19.
As a recently proposed machine learning method, active learning of Gaussian processes can effectively use a small number of labeled examples to train a classifier, which in turn is used to select the most informative examples from the unlabeled data for manual labeling. However, in the example-selection process, active learning usually considers all the unlabeled data without exploiting the structural space connectivity among them. This can decrease classification accuracy, since the selected points may not be the most informative. To overcome this shortcoming, we present a method that applies the manifold-preserving graph reduction (MPGR) algorithm to the traditional active learning method of Gaussian processes. MPGR is a simple and efficient example-sparsification algorithm that constructs a subset representing the global structure while eliminating the influence of noisy points and outliers. Thereby, when actively selecting examples to label, we choose only from the subset constructed by MPGR instead of the whole unlabeled data. We report experimental results on multiple data sets demonstrating that our method obtains better classification performance than the original active learning method of Gaussian processes.
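A simplified sketch of MPGR-style subset construction, assuming a k-NN heat-kernel graph and greedy selection by remaining connectivity; the published algorithm's details may differ, and all names are illustrative.

```python
import numpy as np

def mpgr_subset(X, m, n_neighbors=10):
    """Build a k-NN similarity graph and greedily keep the m vertices
    with the largest connectivity to the still-active vertices."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        W[i, nbrs] = np.exp(-D[i, nbrs] ** 2)   # heat-kernel weights
    W = np.maximum(W, W.T)                      # symmetrize
    selected, active = [], np.ones(n, bool)
    for _ in range(m):
        degree = (W * active).sum(axis=1)       # edges to active vertices
        degree[~active] = -np.inf
        i = int(np.argmax(degree))
        selected.append(i)
        active[i] = False                       # remove picked vertex
    return selected        # candidate pool for active learning queries
```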

20.
Activity recognition in smart environments has been investigated rigorously in recent years, and researchers keep enhancing the underlying activity discovery and recognition process with new dimensions and functionalities. One significant barrier still persists, however: collecting ground-truth information, which is essential for initializing supervised learning of activities. Because Activities of Daily Living (ADLs) are so varied, labeling them in a supervised way is a non-trivial research problem. Most previous research addresses a subset of ADLs and initializes its models with a vast amount of informative labeled training data; at the same time, human intervention is indispensable for collecting ground truth and differentiating ADLs, so gathering a reasonable amount of labeled data takes immense effort and raises privacy concerns. In this paper, we propose to use active learning to alleviate the labeling effort and ground-truth data collection in the activity recognition pipeline. We investigate and analyze different active learning strategies for scaling activity recognition and propose a dynamic k-means-clustering-based active learning approach. Experimental results on real data traces from a retirement community (IRB #HP-00064387) help validate the early promise of our approach.
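A minimal sketch of the clustering-driven query step, using scikit-learn's KMeans with a fixed k (the paper adapts k dynamically): query the sample nearest each centroid so the requested labels cover diverse activities.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_queries(X_unlab, k):
    """Cluster unlabeled sensor-feature vectors and return the index
    of the sample closest to each cluster centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_unlab)
    queries = []
    for j in range(k):
        members = np.where(km.labels_ == j)[0]
        d = np.linalg.norm(X_unlab[members] - km.cluster_centers_[j], axis=1)
        queries.append(int(members[np.argmin(d)]))
    return queries
```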

