Similar Documents
20 similar documents found.
1.
The reinforcement and imitation learning paradigms have the potential to revolutionise robotics. Many successful developments have been reported in the literature; however, these approaches have not been explored widely in robotics for construction. The objective of this paper is to consolidate, structure, and summarise research knowledge at the intersection of robotics, reinforcement learning, and construction. A two-strand approach to the literature review was employed: a bottom-up approach that analysed a selected number of relevant publications in detail, and a top-down approach in which a large number of papers were analysed to identify common themes and research trends. This study found that research on robotics for construction has not increased significantly since the 1980s in terms of the number of publications. Also, robotics for construction lacks the development of dedicated systems, which limits their effectiveness. Moreover, unlike manufacturing, construction's unstructured and dynamic characteristics pose a major challenge for reinforcement and imitation learning approaches. This paper provides a useful starting point for understanding research on robotics for construction by (i) identifying the strengths and limitations of the reinforcement and imitation learning approaches, and (ii) contextualising the construction robotics problem; both of which will help kick-start research on the subject or boost existing research efforts.

2.
This paper describes a syntactic approach to imitation learning that captures important task structures in the form of probabilistic activity grammars from a reasonably small number of samples under noisy conditions. We show that these learned grammars can be recursively applied to help recognize unforeseen, more complicated tasks that share underlying structures. The grammars force an observation to be consistent with previously observed behaviors, which can correct unexpected, out-of-context actions due to errors of the observer and/or demonstrator. To achieve this goal, our method (1) actively searches for frequently occurring action symbols that are subsets of input samples to uncover the hierarchical structure of the demonstration, and (2) considers the uncertainties of input symbols due to imperfect low-level detectors. We evaluate the proposed method using both synthetic data and two sets of real-world humanoid robot experiments. In our Towers of Hanoi experiment, the robot learns the important constraints of the puzzle after observing demonstrators solving it. In our Dance Imitation experiment, the robot learns three types of dances from human demonstrations. The results suggest that, under a reasonable amount of noise, our method is capable of capturing reusable task structures and generalizing them to cope with recursions.
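As a loose illustration of step (1) — promoting frequently repeated action subsequences to probabilistic rules — consider the toy Python sketch below. It is not the authors' algorithm (which also handles noise, hierarchy, and recursion); the function, thresholds, and demonstration data are invented for illustration.

```python
from collections import Counter

def induce_rules(sequences, min_count=2, max_len=3):
    """Toy grammar induction: find frequently repeated action
    subsequences and promote each to a probabilistic rule."""
    counts = Counter()
    for seq in sequences:
        for n in range(2, max_len + 1):
            for i in range(len(seq) - n + 1):
                counts[tuple(seq[i:i + n])] += 1
    frequent = {s: c for s, c in counts.items() if c >= min_count}
    total = sum(frequent.values())
    # Each rule maps a fresh nonterminal to a subsequence, with a
    # probability proportional to how often the subsequence occurred.
    return {f"N{k}": (list(s), c / total)
            for k, (s, c) in enumerate(sorted(frequent.items(),
                                              key=lambda x: -x[1]))}

demos = [["grasp", "lift", "move", "place"],
         ["grasp", "lift", "rotate", "move", "place"]]
for lhs, (rhs, p) in induce_rules(demos).items():
    print(f"{lhs} -> {' '.join(rhs)}  [p={p:.2f}]")
```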

3.
To address the problem that the results of gate reassignment algorithms rarely match the operating habits of different operators, a gate reassignment recommendation algorithm that conforms to the habits of actual operations staff is proposed. First, a decision environment space model is built from flight feature attributes and the resource occupancy states of the gates, and manual operation records are converted into multi-channel spatio-temporal matrices; a generative adversarial network (GAN) built on convolutional neural networks then fits the sequential decision-making policy behind these operations. Simulation results show that adjustment actions with a reliability above 90% account for up to 84.4% of all actions. Tests on three datasets show that the model discriminates well between operation data from different sources. Comparing dynamic adjustment results under different disturbances, the algorithm produces adjustment schemes whose flight–gate attribute features are close to those of the original manual operations.

4.
Imitation learning combines reinforcement learning with supervised learning; its goal is to learn an expert policy by observing expert demonstrations and thereby accelerate reinforcement learning. By introducing additional task-related information, imitation learning can optimize policies faster than reinforcement learning alone, offering a remedy for low sample efficiency. It has become a popular framework for solving reinforcement learning problems, with many algorithms and techniques emerging to improve learning performance. Combined with recent advances in graphics and image processing, imitation learning already plays an important role in game artificial intelligence (AI), robot control, and autonomous driving. This paper reviews the year's developments in imitation learning from the perspectives of behavior cloning, inverse reinforcement learning, adversarial imitation learning, imitation learning from observation, and cross-domain imitation learning; it surveys the latest practical applications, compares domestic and international research, and discusses future directions in the field. The aim is to give researchers and practitioners an up-to-date account of imitation learning as a reference for their own work.
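Of the families surveyed, behavior cloning is the simplest to state: it reduces imitation to supervised learning on expert state–action pairs. A minimal sketch, assuming PyTorch, with all dimensions and the random stand-in "expert" data purely illustrative:

```python
import torch
import torch.nn as nn

# Behavior cloning: fit a policy to expert (state, action) pairs
# by plain supervised learning. Dimensions are illustrative.
STATE_DIM, N_ACTIONS = 8, 4
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in expert demonstrations; a real dataset replaces these.
states = torch.randn(256, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (256,))

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(policy(states), actions)  # imitate the expert labels
    loss.backward()
    opt.step()
```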

5.
张立华  刘全  黄志刚  朱斐 《软件学报》2023,34(10):4772-4803
Inverse reinforcement learning (IRL), also known as inverse optimal control (IOC), is an important research method in reinforcement learning and imitation learning. It solves for a reward function from expert samples and then derives the optimal policy under that reward function, so as to imitate the expert policy. In recent years, IRL has produced rich results in imitation learning and has been widely applied to problems such as vehicle navigation, route recommendation, and optimal robot control. This paper first introduces the theoretical foundations of IRL. Then, starting from how the reward function is constructed, it discusses and analyzes IRL algorithms based on linear and nonlinear reward functions, including maximum-margin IRL, maximum-entropy IRL, maximum-entropy deep IRL, and generative adversarial imitation learning. It then surveys frontier research directions in the field, comparing and analyzing representative algorithms, including IRL with incomplete state–action information, multi-agent IRL, IRL from non-optimal demonstrations, and guided IRL. Finally, it summarizes the key open problems and discusses future directions from both theoretical and applied perspectives.
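Among the algorithms listed, maximum-entropy IRL (Ziebart et al., 2008) has a compact standard formulation worth recalling; the following is the textbook form, not a result specific to this survey:

```latex
% Trajectories are exponentially preferred by cumulative reward;
% \theta parameterizes the reward, Z(\theta) is the partition function.
P(\tau \mid \theta) = \frac{\exp\!\big(\sum_{t} r_\theta(s_t, a_t)\big)}{Z(\theta)},
\qquad
\theta^{*} = \arg\max_{\theta} \sum_{\tau \in \mathcal{D}} \log P(\tau \mid \theta)
```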

6.
This paper introduces a model-free reinforcement learning technique that is used to solve a class of dynamic games known as dynamic graphical games. The graphical game results from multi-agent dynamical systems, where pinning control is used to make all the agents synchronize to the state of a command generator or a leader agent. Novel coupled Bellman equations and Hamiltonian functions are developed for the dynamic graphical games. The Hamiltonian mechanics are used to derive the necessary conditions for optimality. The solution for the dynamic graphical game is given in terms of the solution to a set of coupled Hamilton-Jacobi-Bellman equations developed herein. The Nash equilibrium solution for the graphical game is given in terms of the solution to the underlying coupled Hamilton-Jacobi-Bellman equations. An online model-free policy iteration algorithm is developed to learn the Nash solution for the dynamic graphical game. This algorithm does not require any knowledge of the agents' dynamics. A proof of convergence for this multi-agent learning algorithm is given under mild assumptions about the inter-connectivity properties of the graph. A gradient descent technique with critic network structures is used to implement the policy iteration algorithm to solve the graphical game online in real time.
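For orientation, the policy-iteration scheme underlying such algorithms alternates two steps. The template below is the generic single-agent, discrete-time form; the paper itself develops coupled, per-agent versions of these steps for the graphical game:

```latex
% Generic policy iteration; U is the stage cost, V the value,
% u the policy, and k the iteration index.
\text{Evaluation:}\quad V^{k}(x_t) = U\!\big(x_t, u^{k}(x_t)\big) + V^{k}(x_{t+1})
\\
\text{Improvement:}\quad u^{k+1}(x_t) = \arg\min_{u}\Big[\, U(x_t, u) + V^{k}(x_{t+1}) \,\Big]
```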

7.
Bearings and tools are important parts of a machine tool, and automatically monitoring bearing faults and tool wear under different working conditions is a necessary capability of an intelligent manufacturing system. In this paper, a multi-label imitation learning (MLIL) framework is proposed to monitor tool wear and bearing faults under different working conditions. Specifically, multi-label samples with multiple sub-labels are transformed into imitation objects, and MLIL develops a discriminator and a deep reinforcement learning (DRL) component to imitate the features of the imitation objects. In detail, the DRL component is implemented without a hand-set reward function, to enhance the feature-extraction ability of the deep neural networks, while the discriminator distinguishes the generations of the DRL component from the imitation objects. As a result, the MLIL framework can not only deal with the correlation between multiple working conditions, including different speeds and loads, but can also distinguish compound faults composed of coinstantaneous bearing fault and tool wear. Two cases jointly demonstrate the imitation ability of the MLIL framework in monitoring tool wear and bearing faults under different working conditions.

8.
Recently, a novel probabilistic model-building evolutionary algorithm (a so-called estimation of distribution algorithm, or EDA), named probabilistic model building genetic network programming (PMBGNP), has been proposed. PMBGNP uses graph structures for its individual representation, which gives it higher expressive ability than classical EDAs. Hence, it extends EDAs to solve a range of problems, such as data mining and agent control. This paper proposes a continuous version of PMBGNP for continuous optimization in agent control problems. Unlike other continuous EDAs, the proposed algorithm evolves the continuous variables by reinforcement learning (RL). We compare its performance with several state-of-the-art algorithms on a real mobile robot control problem. The results show that the proposed algorithm outperforms the others with statistically significant differences.

9.
The multimodal perception of intelligent robots is essential for achieving collision-free and efficient navigation. Autonomous navigation is enormously challenging when perception is acquired using only vision or LiDAR sensor data, due to the lack of complementary information from different sensors. This paper proposes a simple yet efficient deep reinforcement learning (DRL) approach with sparse rewards and hindsight experience replay (HER) to achieve multimodal navigation. By adopting the depth images and pseudo-LiDAR data generated by an RGB-D camera as input, a multimodal fusion scheme is used to enhance perception of the surrounding environment compared with a single sensor. To keep dense rewards from misleading the agent's navigation, sparse rewards are used to specify its tasks. Additionally, the HER technique is introduced to address the sparse-reward navigation issue and accelerate optimal policy learning. The results show that the proposed model achieves state-of-the-art performance in terms of success, crash, and timeout rates, as well as generalization capability.
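The HER trick the paper adopts is standard: transitions from an episode are relabeled with goals that were actually achieved later, so sparse rewards still produce a learning signal. A minimal sketch of the "future" relabeling strategy, with the episode format and reward function invented for illustration:

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight experience replay: relabel transitions with goals
    actually achieved later in the episode ('future' strategy).
    `episode` is a list of (state, action, next_state, achieved_goal)."""
    relabeled = []
    for t, (s, a, s2, _) in enumerate(episode):
        future = episode[t:]
        for _ in range(min(k, len(future))):
            # Pretend a later achieved state was the goal all along.
            g = random.choice(future)[3]
            r = reward_fn(s2, g)  # sparse: 0 if s2 reaches g, else -1
            relabeled.append((s, a, r, s2, g))
    return relabeled

# Example sparse reward for a 1-D toy task (illustrative only).
reward_fn = lambda s2, g: 0.0 if abs(s2 - g) < 0.05 else -1.0
episode = [(0.0, +1, 0.1, 0.1), (0.1, +1, 0.3, 0.3)]
print(her_relabel(episode, reward_fn, k=2))
```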

10.
《Automatica》2014,50(12):3038-3053
This paper introduces a new class of multi-agent discrete-time dynamic games, known in the literature as dynamic graphical games. A local performance index is defined for each agent that depends only on the local information available to that agent. Nash equilibrium policies and best-response policies are given in terms of the solutions to the discrete-time coupled Hamilton–Jacobi equations. Since in these games the interactions between the agents are prescribed by a communication graph structure, a new notion of Nash equilibrium has to be introduced; it is proved that this notion holds if all agents are in Nash equilibrium and the graph is strongly connected. A novel reinforcement learning value iteration algorithm is given to solve the dynamic graphical games in an online manner, along with its proof of convergence. The policies of the agents form a Nash equilibrium when all the agents in the neighborhood update their policies, and a best-response outcome when the agents in the neighborhood are kept constant. The paper brings together discrete Hamiltonian mechanics, distributed multi-agent control, optimal control theory, and game theory to formulate and solve these multi-agent dynamic graphical games. A simulation example shows the effectiveness of the proposed approach in a leader-synchronization case, along with optimality guarantees.

11.
To achieve self-balancing control of a two-wheeled robot, a fuzzy operant conditioning probabilistic automaton (OCPA) bionic autonomous learning system is constructed, drawing on Skinner's operant conditioning mechanism, using a probabilistic automaton as the platform, and incorporating fuzzy inference. The learning system is a stochastic mapping from a set of states to a set of operant actions; using an operant conditioning learning mechanism, it stochastically learns, from the action set, the optimal action to serve as the control signal of the control system, and uses the learned action orientation-value information to adjust the operant conditioning learning algorithm. In addition, the learning system introduces a behavior entropy to verify its self-learning and self-organization abilities. Simulation results of its application to two-wheeled robot self-balancing control verify the feasibility of the fuzzy OCPA learning system.

12.
Zero-shot learning (ZSL) is an important branch of transfer learning in image recognition. Its main approach is to recognize samples of unseen classes, without using them in training, by learning the mapping between the semantic attributes and visual attributes of seen classes; it is a current hot topic in image recognition. Existing ZSL models suffer from an information asymmetry between semantic and visual attributes: the semantic information cannot fully describe the visual information, giving rise to the domain shift problem. During the synthesis of visual attributes from the semantic attributes of unseen classes, some visual feature information is not synthesized, which hurts recognition accuracy. To address the missing semantic features of unseen classes and the matching and synthesis of their visual features, this paper designs a ZSL model that fuses and enhances attribute semantics and knowledge-graph semantics to improve ZSL performance. During learning, the model uses a knowledge graph to relate visual features while also considering the attribute relationships among samples, enhancing the semantic information of both seen and unseen classes, and strengthens visual feature synthesis through adversarial learning. The method performs well in experiments on four typical datasets and can synthesize finer visual features, outperforming existing ZSL methods.

13.
It is assumed that future robots must coexist with human beings and behave as their companions; consequently, the complexity of their tasks will increase. To cope with this complexity, scientists are inclined to adopt the anatomical functions of the brain for mapping and navigation in robotics. While acknowledging the ongoing work on improving brain models and cognitive mapping for robot navigation, we show in this paper that learning by imitation has a positive effect not only on human behavior but also on the behavior of a multi-robot system. We present the value of a low-level imitation strategy at the individual and social levels in the case of robots. In particular, we show that adding a simple imitation capability to the brain model used for building a cognitive map improves individual cognitive map building and boosts information sharing in an unknown environment. Taking the notion of imitative behavior into account, we also show that individual discoveries (i.e. goals) can have an effect at the social level and thereby induce the learning of new behaviors at the individual level. To analyze and validate our hypothesis, a series of experiments was performed with and without a low-level imitation strategy in the multi-robot system.

14.
赵海妮  焦健 《计算机应用》2022,42(6):1689-1694
The core problem in penetration testing is planning the penetration path. Manual planning depends on the tester's experience, while automatic path generation relies mainly on prior network-security knowledge and specific vulnerabilities or network scenarios, which is costly and inflexible. To address these problems, a reinforcement-learning-based penetration path recommendation model, QLPT, is proposed; through multiple rounds of vulnerability selection and reward feedback, it ultimately yields the best penetration path for the target. Penetration experiments on open-source cyber ranges show that the paths recommended by the model agree closely with manually tested penetration paths, verifying the model's feasibility and accuracy; compared with the automated penetration testing framework Metasploit, the model is also more flexible in adapting to all penetration scenarios.
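The abstract does not spell out the underlying algorithm, but the name QLPT and the round-based vulnerability selection with reward feedback suggest a tabular Q-learning loop. The sketch below is therefore a hypothetical reconstruction; the state/action encoding, hyperparameters, and function names are entirely invented:

```python
import random
from collections import defaultdict

# Hypothetical tabular Q-learning for penetration-path selection:
# states are host footholds, actions are candidate vulnerabilities.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = defaultdict(float)

def choose(state, actions):
    if random.random() < EPS:                         # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state, next_actions):
    best_next = max((Q[(next_state, a)] for a in next_actions),
                    default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])
```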

15.
On the actual work site, equipment often operates under different working conditions, and the manufacturing system is rather complicated. However, in the fault diagnosis domain, traditional multi-label learning methods must use a pre-defined label sequence or predict all labels of the input sample simultaneously. Deep reinforcement learning (DRL) combines the perception ability of deep learning with the decision-making ability of reinforcement learning. Moreover, the curriculum learning mechanism follows the human approach of learning from easy to complex. Consequently, an improved proximal policy optimization (PPO) method, a typical DRL algorithm, is proposed in this paper as a novel method for multi-label classification. The improved PPO method builds a relationship among the several predicted labels of an input sample by designing an action history vector, which encodes all the actions selected by the agent up to the current time step. In two rolling bearing experiments, the diagnostic results demonstrate that the proposed method provides higher accuracy than traditional multi-label methods for fault recognition under complicated working conditions. In addition, compared with the same network using a pre-defined label sequence, the proposed method distinguishes the multiple labels of input samples following the curriculum mechanism from easy to complex.
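The action history vector is the paper's key design; a plausible minimal encoding (not the paper's exact one — the dimensions and feature source are illustrative) concatenates the sample's features with a one-hot record of the sub-labels already predicted:

```python
import numpy as np

# Illustrative action-history encoding: the agent predicts one
# sub-label per step, and the observation is the sample's feature
# vector concatenated with a record of past actions.
N_LABELS = 5

def make_observation(features, history):
    hist_vec = np.zeros(N_LABELS)
    for a in history:          # actions already taken this episode
        hist_vec[a] = 1.0
    return np.concatenate([features, hist_vec])

features = np.random.randn(16)     # stand-in vibration features
obs = make_observation(features, history=[2, 0])
print(obs.shape)                   # (21,) = 16 features + 5 history slots
```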

16.
Richly formatted documents, such as financial disclosures, scientific articles, and government regulations, are widespread on the Web. However, since most of these documents are intended only for public reading, the styling information inside them is usually missing, making them hard or even burdensome to display and edit in different formats and platforms. In this study we formulate the task of document styling restoration as an optimization problem, which aims to identify the styling settings on the document elements, e.g., lines, table cells, and text, so that rendering with the output styling settings yields a document in which each element holds (nearly) the exact position of its counterpart in the original document. Considering that each styling setting is a decision, this problem can be transformed into a multi-step decision-making task over all the document elements and then solved by reinforcement learning. Specifically, Monte-Carlo Tree Search (MCTS) is leveraged to explore the different styling settings, and the policy function is learnt under the supervision of the delayed rewards. As a case study, we restore the styling information inside tables, where structural and functional data in documents are usually presented. Experiments show that our best reinforcement method successfully restores the stylings in 87.65% of the tables, a 25.75% absolute improvement over the greedy method. We also discuss the trade-off between inference time and restoration success rate, and argue that although the reinforcement methods cannot be used in real-time scenarios, they are suitable for offline tasks with high quality requirements. Finally, this model has been applied in a PDF parser to support cross-format display.
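The MCTS selection step that drives such exploration is standard UCT: children (candidate styling settings) are scored by average reward plus an exploration bonus. A minimal sketch with an invented `Node` type, not tied to the paper's implementation:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    visits: int = 0
    value: float = 0.0
    children: list = field(default_factory=list)

def ucb1_select(node, c=1.4):
    """UCT child selection: balance average reward (how closely a
    styling choice reproduced element positions so far) against
    exploration of rarely tried settings."""
    def score(child):
        if child.visits == 0:
            return float("inf")        # try unvisited settings first
        exploit = child.value / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=score)

root = Node(visits=10, children=[Node(visits=4, value=3.0), Node()])
best = ucb1_select(root)   # the unvisited child is selected first
```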

17.
In this paper, a new formulation for the optimal tracking control problem (OTCP) of continuous-time nonlinear systems is presented. This formulation extends the integral reinforcement learning (IRL) technique, a method for solving optimal regulation problems, to learn the solution to the OTCP. Unlike existing solutions to the OTCP, the proposed method does not need to have or to identify knowledge of the system drift dynamics, and it also takes the input constraints into account a priori. An augmented system composed of the error system dynamics and the command generator dynamics is used to introduce a new nonquadratic discounted performance function for the OTCP. This encodes the input constraints into the optimization problem. A tracking Hamilton–Jacobi–Bellman (HJB) equation associated with this nonquadratic performance function is derived, which gives the optimal control solution. An online IRL algorithm is presented to learn the solution to the tracking HJB equation without knowing the system drift dynamics. Convergence to a near-optimal control solution and stability of the whole system are shown under a persistence of excitation condition. Simulation examples are provided to show the effectiveness of the proposed method.
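The nonquadratic discounted performance function referred to here has, in this line of work, the general shape below, where e is the tracking error of the augmented system, γ the discount rate, and W a nonquadratic penalty encoding the input bounds; the paper's exact definitions should be taken from the paper itself:

```latex
V\big(e(t)\big) = \int_{t}^{\infty} e^{-\gamma(\tau - t)}
  \Big[\, e^{\top}(\tau)\, Q\, e(\tau) + W\big(u(\tau)\big) \,\Big]\, d\tau
```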

18.
This paper studies the hovering control problem of an underwater operation robot propelled by undulating fins. First, the robot's kinematic model, dynamic model, and the parameter-to-force mapping model of the undulating fins are given, and a hovering-control training framework based on a Markov decision process is established. Next, based on the model structure and training strategy, a reinforcement learning method is used to train the network and obtain the best hovering controller. Finally, hovering control experiments with the undulating-fin underwater operation robot are completed in an indoor pool, and the results verify the effectiveness of the proposed method.

19.
Research progress on methods combining reinforcement learning and generative adversarial networks
Reinforcement learning and generative adversarial networks are two hot topics in artificial intelligence in recent years and have performed remarkably well in many fields. Recently, a growing body of work has combined the two, fusing the interactive-learning strengths of reinforcement learning with the self-play-inspired ideas of generative adversarial networks. This paper reviews, compares, and experimentally analyzes the latest progress on their combination. It outlines the theory of reinforcement learning and of generative adversarial networks; surveys and compares two research directions, using reinforcement learning to improve generative adversarial networks and using generative adversarial networks to improve reinforcement learning; analyzes, through experiments, the applications of these methods in natural language processing and machine control; and discusses likely development trends.

20.
A survey of image data augmentation methods for deep learning
As the driving force of deep learning, data are crucial to model training. Sufficient training data not only alleviate overfitting during training but also enlarge the parameter search space, helping the model optimize further toward the global optimum. However, in many domains and tasks, obtaining sufficient training samples is difficult and costly, so data augmentation has become a common means of enlarging the training set. This paper surveys image data augmentation methods in deep learning, organizing the various methods proposed to alleviate model overfitting. According to their underlying principles, the methods are divided into four categories: single-sample transformation, multi-sample mixing, learning the data distribution, and learning augmentation policies. Taking image data as the main subject, each category is further subdivided by core idea, and the principles, applicable scenarios, and strengths and weaknesses of the methods are compared and analyzed, helping researchers choose augmentation methods suited to their data and providing a foundation for the future application and development of data augmentation research. For images, single-sample transformation comprises five sub-types: geometric transformation, color-space transformation, sharpness transformation, noise injection, and local erasing. Multi-sample mixing can be divided into mixing in image space and mixing in feature space. Methods that learn the data distribution are mainly based on generative adversarial networks and image style transfer, and typical methods that learn augmentation policies are based on meta-learning or reinforcement learning. Data augmentation has become an important technique for advancing deep learning across application domains: it effectively alleviates the overfitting caused by insufficient training data and further improves model accuracy. In practice, the most suitable methods can be selected and combined according to the characteristics of the data and task to form an effective augmentation scheme. Promising future directions include exploring optimal combination policies with reinforcement learning, adaptively learning optimal transformations and mixings with meta-learning, further fitting the true data distribution with generative adversarial networks to sample high-quality unseen data, and exploring cross-modal data conversion applications with style transfer.
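The single-sample transformation category is easy to make concrete. A minimal NumPy sketch of three of the five sub-types named above (geometric transformation, noise injection, local erasing), with all sizes and parameters illustrative:

```python
import numpy as np

# Minimal examples of "single-sample transformation" augmentations
# on an image array: geometric flip, noise injection, local erasing.
rng = np.random.default_rng(0)

def hflip(img):
    return img[:, ::-1]                            # geometric transformation

def add_noise(img, sigma=0.05):
    return img + rng.normal(0, sigma, img.shape)   # noise injection

def random_erase(img, size=8):
    out = img.copy()                               # local erasing (cutout-style)
    y = rng.integers(0, img.shape[0] - size)
    x = rng.integers(0, img.shape[1] - size)
    out[y:y + size, x:x + size] = 0
    return out

img = rng.random((32, 32))
augmented = [hflip(img), add_noise(img), random_erase(img)]
```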
