Similar documents
20 similar documents retrieved (search time: 15 ms)
1.
We present a novel method for a robot to interactively learn, while executing, a joint human–robot task. We consider collaborative tasks realized by a team of a human operator and a robot helper that adapts to the human’s task execution preferences. Different human operators can have different abilities, experiences, and personal preferences, so that a particular allocation of activities in the team is preferred over another. Our main goal is to have the robot learn the task and the preferences of the user to provide a more efficient and acceptable joint task execution. We cast concurrent multi-agent collaboration as a semi-Markov decision process and show how to model the team behavior and learn the expected robot behavior. We further propose an interactive learning framework, and we evaluate it both in simulation and on a real robotic setup to show that the system can effectively learn and adapt to human expectations.

2.
To enable a mobile robot to complete obstacle-avoidance tasks efficiently and politely in complex, crowded environments, this paper proposes a deep-reinforcement-learning-based obstacle avoidance algorithm for mobile robots in crowd environments. First, to address the limited learning capacity of the value-function network in deep reinforcement learning, the value network is improved based on crowd interaction: an angular pedestrian grid extracts the interaction information between pedestrians, and an attention mechanism extracts the temporal features of each pedestrian, learning the relative importance of the current state versus historical trajectory states and their joint influence on the robot's avoidance policy, which provides prior knowledge for the subsequent multilayer perceptron. Second, the reinforcement-learning reward function is designed according to human spatial behavior, and states with excessively large changes in the robot's heading are penalized, meeting the requirement of comfortable avoidance. Finally, simulation experiments verify the feasibility and effectiveness of the proposed algorithm in dense, complex crowd environments.
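As an illustration of the comfort-oriented reward design described above, the following Python sketch combines goal, collision, discomfort-distance, and heading-change terms; the function name `avoidance_reward`, all weights, and all thresholds are illustrative assumptions rather than the paper's actual values.

```python
import numpy as np

def avoidance_reward(robot_pos, goal_pos, pedestrian_positions, heading, prev_heading,
                     goal_radius=0.3, collision_radius=0.3, discomfort_dist=0.5,
                     w_goal=1.0, w_collision=-0.25, w_discomfort=-0.1, w_heading=-0.05):
    """Reward combining goal arrival, collision/discomfort penalties, and a
    heading-change penalty that discourages abrupt turns (comfortable avoidance).
    All weights and thresholds here are assumptions, not the paper's values."""
    dists = [np.linalg.norm(robot_pos - p) for p in pedestrian_positions]
    closest = min(dists) if dists else np.inf

    if np.linalg.norm(robot_pos - goal_pos) < goal_radius:
        return w_goal                       # reached the goal
    if closest < collision_radius:
        return w_collision                  # collided with a pedestrian
    reward = 0.0
    if closest < discomfort_dist:           # intruded into a pedestrian's comfort zone
        reward += w_discomfort * (discomfort_dist - closest)
    # penalize large heading changes between consecutive steps
    heading_change = abs(np.arctan2(np.sin(heading - prev_heading),
                                    np.cos(heading - prev_heading)))
    reward += w_heading * heading_change
    return reward
```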

3.
This paper presents a new reinforcement learning algorithm that enables collaborative learning between a robot and a human. The algorithm, which is based on the Q(λ) approach, expedites the learning process by taking advantage of human intelligence and expertise. The algorithm, denoted CQ(λ), provides the robot with self-awareness to adaptively switch its collaboration level from autonomous (self-performing: the robot decides which actions to take according to its learning function) to semi-autonomous (a human advisor guides the robot and the robot incorporates this knowledge into its learning function). This awareness is represented by a self-test of its learning performance. The approach of variable autonomy is demonstrated and evaluated using a fixed-arm robot tasked with finding the optimal shaking policy to empty the contents of a plastic bag. A comparison between CQ(λ) and the traditional Q(λ) reinforcement learning algorithm showed faster convergence for the collaborative CQ(λ) algorithm.
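The autonomy-switching idea can be sketched as follows; the class `CQLambdaAgent`, the performance window, and the one-step value update (eligibility traces omitted) are simplifying assumptions, not the published CQ(λ) implementation.

```python
import random

class CQLambdaAgent:
    """Sketch of the CQ(lambda) switching idea: the agent self-tests its recent
    learning performance and, when the test fails and a human advisor is present,
    follows the advisor instead of acting autonomously."""

    def __init__(self, actions, epsilon=0.1, window=10, threshold=0.0):
        self.q = {}                      # (state, action) -> value
        self.actions = actions
        self.epsilon = epsilon
        self.returns = []                # episode returns used for the self-test
        self.window = window
        self.threshold = threshold

    def self_test_ok(self):
        # "self-awareness": stay autonomous only if recent returns are improving
        if len(self.returns) < self.window:
            return False
        recent = self.returns[-self.window:]
        return (recent[-1] - recent[0]) >= self.threshold

    def choose_action(self, state, human_advice=None):
        if human_advice is not None and not self.self_test_ok():
            return human_advice          # semi-autonomous: follow the advisor
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state, alpha=0.1, gamma=0.95):
        # one-step update; the eligibility traces of Q(lambda) are omitted here
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        td = reward + gamma * best_next - self.q.get((state, action), 0.0)
        self.q[(state, action)] = self.q.get((state, action), 0.0) + alpha * td

    def end_episode(self, episode_return):
        self.returns.append(episode_return)
```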

4.
Conversational recommender systems (CRS) acquire users' real-time information interactively and can achieve better recommendation performance than traditional methods such as collaborative filtering. However, existing CRSs suffer from inaccurate capture of user preferences, too many required dialogue turns, and poorly timed recommendations. To address these problems, a conversational recommendation algorithm based on deep reinforcement learning that considers multi-granularity user feedback is proposed. Unlike existing CRSs, the proposed algorithm considers, in every dialogue turn, the user's feedback on both the items themselves and finer-grained item attributes; the user, item, and attribute features are then updated online according to the collected multi-granularity feedback, and a Deep Q-Network (DQN) analyzes the environment state after each turn to help the system take reasonably appropriate decision actions, so that it can analyze why users buy items within fewer dialogue turns and mine users' real-time preferences more comprehensively. Compared with the conversational path reasoning (SCPR) algorithm on the real-world Last.fm dataset, the proposed algorithm improves the recommendation success rate at 15 turns by 46.5% and shortens the average number of recommendation turns by 0.314; on the real-world Yelp dataset it maintains the same level of success rate while shortening the average number of recommendation turns by 0.51.
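A minimal sketch of the per-turn decision component is shown below: a small Q-network scores the actions available in each dialogue turn (asking about an attribute or recommending items) and an epsilon-greedy rule picks one. The class `ConversationDQN`, the layer sizes, and the state encoding are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ConversationDQN(nn.Module):
    """Illustrative DQN policy for a conversational recommender: the state encodes
    the current user/item/attribute features and each action is either to ask about
    a candidate attribute or to recommend an item. Sizes are assumptions."""

    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),   # one Q-value per per-turn decision action
        )

    def forward(self, state):
        return self.net(state)

def select_action(dqn, state, epsilon=0.1):
    # epsilon-greedy over the per-turn decision actions
    if torch.rand(1).item() < epsilon:
        return torch.randint(dqn.net[-1].out_features, (1,)).item()
    with torch.no_grad():
        return dqn(state).argmax().item()
```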

5.
The genetic robot has many configurable genes that contribute to defining the robot’s personality. The large number of genes allows for a highly complex system; however, it becomes increasingly difficult and time-consuming to ensure reliability, variability, and consistency of the robot’s personality while manually initializing values for the individual genes. To overcome this difficulty, this paper proposes MBTI-EAGRP, a fully autonomic gene-generative algorithm for a genetic robot’s personality in a mobile phone. After identifying the user’s preferences through an MBTI assessment using a neural network algorithm, an evolutionary algorithm generates and evolves a gene pool that customizes the robot’s genome so that it closely matches a simplified set of personality features preferred by the user. Finally, an evaluation procedure for individuals is carried out in a virtual environment using tailored perception scenarios and real MBTI measurements.

6.
The motor-learning mechanisms of mammals have been widely studied: canines can learn motor skills autonomously from guiding feedback in the environment, and providing more specific training guidance speeds up their learning of the corresponding task. Inspired by this, a desired-state-reward-guided reinforcement learning algorithm (DSG-SAC) is proposed on the basis of the Soft Actor-Critic (SAC) algorithm. It uses a state-feedback mechanism in the environment to guide the quadruped robot's exploration, improving the learning of bionic quadruped gaits and increasing training efficiency. In this algorithm, the policy and critic networks first approximately fit the error between the desired state observation and the current state, and then, after positive feedback from the current state, output the value function and the action, driving the quadruped robot to act in the desired direction. The proposed algorithm is validated on a quadruped robot, and the experimental results show that it can accomplish bionic gait learning. Furthermore, ablation experiments are designed to examine the influence of the temperature coefficient and discount factor hyperparameters; the results show that the improved algorithm outperforms the plain SAC algorithm.
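One way to realize the desired-state guidance described above is a shaping term that rewards progress toward a reference state observation; the sketch below, including the function `desired_state_guided_reward` and its Euclidean error and weighting, is an assumption-laden illustration rather than the DSG-SAC formulation.

```python
import numpy as np

def desired_state_guided_reward(state, next_state, desired_state,
                                task_reward, w_guide=0.5):
    """Illustrative desired-state guidance: add a shaping term that rewards the
    robot for reducing its error with respect to a desired state observation
    (e.g., a reference gait pose). The Euclidean error and the weight w_guide
    are assumptions, not the paper's formulation."""
    err_before = np.linalg.norm(np.asarray(state) - np.asarray(desired_state))
    err_after = np.linalg.norm(np.asarray(next_state) - np.asarray(desired_state))
    guidance = err_before - err_after   # positive if the robot moved toward the desired state
    return task_reward + w_guide * guidance
```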

7.
Learning via human feedback in continuous state and action spaces   (Cited: 1; self-citations: 1; citations by others: 0)
This paper considers the problem of extending Training an Agent Manually via Evaluative Reinforcement (TAMER) to continuous state and action spaces. Research using the TAMER framework enables a non-technical human to train an agent through a natural form of human feedback (negative or positive). The advantages of TAMER have been shown on tasks of training agents using only human feedback or combining human feedback with environment rewards. However, these methods were originally designed for discrete state–action problems, or continuous-state, discrete-action problems. This paper proposes an extension of TAMER, called ACTAMER, that allows both continuous states and actions. The new framework utilizes any general function approximation of a human trainer’s feedback signal. Moreover, the combined capability of ACTAMER and reinforcement learning is also investigated and evaluated. The combination of human feedback and reinforcement learning is studied in both settings: sequential and simultaneous. Our experimental results demonstrate that the proposed method successfully allows a human to train an agent in two continuous state–action domains: Mountain Car and Cart-pole (balancing).
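A minimal sketch of the TAMER-style core, learning a model of the human feedback signal over continuous states and actions and acting greedily on it, is given below; the class `ActamerSketch`, its linear feature model, and the sampling-based action selection are simplifying assumptions, not the ACTAMER design.

```python
import numpy as np

class ActamerSketch:
    """Learn a function approximation H(s, a) of the human trainer's feedback and
    act greedily on it over a continuous action range. Linear features and
    sampling-based action selection are assumptions of this sketch."""

    def __init__(self, state_dim, action_dim, lr=0.05, n_action_samples=50):
        self.w = np.zeros(state_dim + action_dim + state_dim * action_dim)
        self.lr = lr
        self.n_action_samples = n_action_samples

    def _features(self, state, action):
        s, a = np.asarray(state, float), np.asarray(action, float)
        return np.concatenate([s, a, np.outer(s, a).ravel()])   # simple joint features

    def predict(self, state, action):
        return float(self.w @ self._features(state, action))

    def update(self, state, action, human_feedback):
        # move the predicted reinforcement toward the human's feedback value
        phi = self._features(state, action)
        self.w += self.lr * (human_feedback - self.w @ phi) * phi

    def act(self, state, action_low, action_high):
        # approximate greedy action by sampling candidates in the continuous range
        low, high = np.asarray(action_low, float), np.asarray(action_high, float)
        candidates = np.random.uniform(low, high, size=(self.n_action_samples, low.size))
        scores = [self.predict(state, a) for a in candidates]
        return candidates[int(np.argmax(scores))]
```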

8.
This paper proposes an end-to-end learning-from-demonstration framework for teaching force-based manipulation tasks to robots. The strengths of this work are manifold. First, we deal with the problem of learning through force perceptions exclusively. Second, we propose to exploit haptic feedback both as a means for improving teacher demonstrations and as a human–robot interaction tool, establishing a bidirectional communication channel between the teacher and the robot, in contrast to works using kinesthetic teaching. Third, we address the well-known “what to imitate?” problem from a different point of view, based on the mutual information between perceptions and actions. Lastly, the teacher’s demonstrations are encoded using a Hidden Markov Model, and the robot execution phase is developed by implementing a modified version of Gaussian Mixture Regression that uses implicit temporal information from the probabilistic model, needed when tackling tasks with ambiguous perceptions. Experimental results show that the robot is able to learn and reproduce two different manipulation tasks, with performance comparable to the teacher’s.

9.
A compound control algorithm combining CMAC control and PID control is proposed for a robot joint actuated by McKibben muscles. The CMAC feedforward compensator models the joint system’s dynamics, while the PID controller provides feedback control to guarantee the system’s stability; the sum of the two controllers’ outputs drives the system. As the CMAC learning process proceeds, the PID output tends to zero and the control action is ultimately directed by the CMAC controller. Simulation results show that this compound control algorithm achieves high tracking accuracy, good disturbance rejection, and fast system response.
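The compound scheme can be sketched as a PID feedback loop plus a tile-coded CMAC feedforward term that is trained toward the total control signal, so that the PID share shrinks as learning proceeds. The classes and gains below (`TinyCMAC`, `compound_control_step`) are illustrative assumptions, not the paper's controller.

```python
import numpy as np

class TinyCMAC:
    """Very small CMAC-style tile-coded function approximator for a 1-D input.
    Tiling sizes and learning rate are illustrative assumptions."""
    def __init__(self, n_tilings=8, n_tiles=32, x_min=-1.0, x_max=1.0, lr=0.1):
        self.w = np.zeros((n_tilings, n_tiles))
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.x_min, self.x_max, self.lr = x_min, x_max, lr

    def _indices(self, x):
        span = (self.x_max - self.x_min) / (self.n_tiles - 1)
        for t in range(self.n_tilings):
            offset = t * span / self.n_tilings
            idx = int(np.clip((x - self.x_min + offset) / span, 0, self.n_tiles - 1))
            yield t, idx

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._indices(x))

    def train(self, x, target):
        error = target - self.predict(x)
        for t, i in self._indices(x):
            self.w[t, i] += self.lr * error / self.n_tilings

def compound_control_step(cmac, pid_state, ref, measured,
                          kp=2.0, ki=0.1, kd=0.05, dt=0.01):
    """One step of the compound scheme: PID feedback stabilizes the joint while the
    CMAC feedforward term is trained toward the total control, so the PID share
    shrinks as the CMAC learns. Gains are illustrative assumptions."""
    error = ref - measured
    pid_state["integral"] += error * dt
    derivative = (error - pid_state["prev_error"]) / dt
    pid_state["prev_error"] = error
    u_pid = kp * error + ki * pid_state["integral"] + kd * derivative
    u_ff = cmac.predict(ref)        # feedforward from the reference signal
    u_total = u_ff + u_pid
    cmac.train(ref, u_total)        # CMAC learns to supply the whole control signal
    return u_total
```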

10.
《Knowledge》2006,19(5):324-332
We present a system for visual robotic docking using an omnidirectional camera coupled with the actor–critic reinforcement learning algorithm. The system enables a PeopleBot robot to locate and approach a table so that it can pick an object from it using the pan-tilt camera mounted on the robot. We use a staged approach to solve this problem, as there are distinct subtasks and different sensors used. The robot first wanders randomly until the table is located via a landmark; a network trained by reinforcement then allows the robot to turn towards and approach the table. Once at the table, the robot picks the object from it. We argue that our approach has considerable potential, as it allows robot control for navigation to be learned and removes the need for internal maps of the environment. This is achieved by allowing the robot to learn couplings between motor actions and the position of a landmark.
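A minimal tabular actor–critic update of the kind used to couple the landmark position to motor actions is sketched below; the class `ActorCriticSketch`, the state discretization, and the learning rates are assumptions for illustration.

```python
import numpy as np

class ActorCriticSketch:
    """Tabular actor-critic sketch: the critic learns state values via TD errors,
    and the same TD error adjusts the actor's action preferences."""

    def __init__(self, n_states, n_actions, alpha_actor=0.1, alpha_critic=0.1, gamma=0.95):
        self.prefs = np.zeros((n_states, n_actions))   # actor: action preferences
        self.values = np.zeros(n_states)               # critic: state values
        self.aa, self.ac, self.gamma = alpha_actor, alpha_critic, gamma

    def policy(self, s):
        e = np.exp(self.prefs[s] - self.prefs[s].max())   # softmax over preferences
        return e / e.sum()

    def act(self, s):
        return np.random.choice(len(self.prefs[s]), p=self.policy(s))

    def update(self, s, a, reward, s_next, done):
        target = reward + (0.0 if done else self.gamma * self.values[s_next])
        td_error = target - self.values[s]
        self.values[s] += self.ac * td_error     # critic update
        self.prefs[s, a] += self.aa * td_error   # actor update
```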

11.
Recently, robot learning through deep reinforcement learning has tackled various robot tasks with deep neural networks, without using task-specific control or recognition algorithms. However, this learning method is difficult to apply to the contact tasks of a robot, due to the excessive forces exerted during the random search process of reinforcement learning. Therefore, when applying reinforcement learning to contact tasks, solving the contact problem using an existing force controller is necessary. This study proposes a neural-network-based movement primitive (NNMP) that generates a continuous trajectory which can be transmitted to the force controller, and which is learned through a deep deterministic policy gradient (DDPG) algorithm. In addition, an imitation learning algorithm suitable for the NNMP is proposed such that trajectories similar to the demonstration trajectory are stably generated. The performance of the proposed algorithms was verified using a square peg-in-hole assembly task with a tolerance of 0.1 mm. The results confirm that the complicated assembly trajectory can be learned stably through the NNMP by the proposed imitation learning algorithm, and that the assembly trajectory is improved by learning the proposed NNMP through the DDPG algorithm.

12.
We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach utilises the human's sensorimotor learning ability: the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During the tutoring, the robot simultaneously learns the action policy of the tutor and over time gains full autonomy. We demonstrate our approach in an experiment in which we taught a robot to perform a wood-sawing task with a human partner using a two-person cross-cut saw. The challenge of this experiment is that it requires precise coordination of the robot’s motion and compliance according to the partner’s actions. To transfer the sawing skill from the tutor to the robot we used Locally Weighted Regression for trajectory generalisation, and adaptive oscillators for adaptation of the robot to the partner’s motion.

13.
This paper addresses a new method for combining supervised learning and reinforcement learning (RL). Applying supervised learning to robot navigation encounters serious challenges such as inconsistent and noisy data, difficulty in gathering training data, and high error in the training data. RL capabilities such as training by only a single evaluative scalar signal and a high degree of exploration have encouraged researchers to use RL for the robot navigation problem. However, RL algorithms are time-consuming and suffer from a high failure rate in the training phase. Here, we propose Supervised Fuzzy Sarsa Learning (SFSL) as a novel idea for exploiting the advantages of both supervised and reinforcement learning algorithms. A zero-order Takagi–Sugeno fuzzy controller with several candidate actions for each rule forms the main module of the robot's controller, and the aim of training is to find the best action for each fuzzy rule. In the first step, a human supervisor drives an E-puck robot within the environment and the training data are gathered. In the second step, as a hard tuning, the training data are used to initialize the value (worth) of each candidate action in the fuzzy rules. Afterwards, the fuzzy Sarsa learning module, as a critic-only fuzzy reinforcement learner, fine-tunes the parameters of the conclusion parts of the fuzzy controller online. The proposed algorithm is used for driving the E-puck robot in an environment with obstacles. The experimental results show that the proposed approach decreases the learning time and the number of failures, and improves the quality of the robot's motion in the testing environments.
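The two-step scheme, supervised initialization of candidate-action values followed by online Sarsa fine-tuning, can be sketched as below; the class `SupervisedSarsaSketch` collapses fuzzy rules to discrete indices, which is a simplification of the actual fuzzy controller.

```python
import numpy as np
from collections import defaultdict

class SupervisedSarsaSketch:
    """Sketch of the SFSL idea: the value of each candidate action of each rule is
    first initialized from supervisor demonstrations (hard tuning), then fine-tuned
    online with a Sarsa update. Treating rules as discrete indices is an assumption."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95, init_bonus=1.0):
        self.q = defaultdict(lambda: np.zeros(n_actions))
        self.alpha, self.gamma, self.init_bonus = alpha, gamma, init_bonus

    def hard_tune(self, demonstrations):
        # supervised phase: raise the worth of actions the human supervisor chose
        for rule, action in demonstrations:
            self.q[rule][action] += self.init_bonus

    def act(self, rule, epsilon=0.1):
        if np.random.rand() < epsilon:
            return np.random.randint(len(self.q[rule]))
        return int(np.argmax(self.q[rule]))

    def sarsa_update(self, rule, action, reward, next_rule, next_action):
        # on-policy fine-tuning of the candidate-action values
        target = reward + self.gamma * self.q[next_rule][next_action]
        self.q[rule][action] += self.alpha * (target - self.q[rule][action])
```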

14.
Reinforcement learning (RL) is a popular method for solving the path planning problem of autonomous mobile robots in unknown environments. However, the primary difficulty faced by learning robots using the RL method is that they learn too slowly in obstacle-dense environments. To solve the path planning problem of autonomous mobile robots in such environments more efficiently, this paper presents a novel approach in which the robot’s learning process is divided into two phases. The first phase accelerates the learning process for obtaining an optimal policy by extending the well-known Dyna-Q algorithm to train the robot in learning actions for avoiding obstacles while following the vector direction. In this phase, the robot’s position is represented on a uniform grid; at each time step, the robot performs an action to move to one of its eight adjacent cells, so the path obtained from the optimal policy may be longer than the true shortest path. The second phase trains the robot in learning a collision-free smooth path that decreases the number of heading changes of the robot. The simulation results show that the proposed approach is efficient for the path planning problem of autonomous mobile robots in unknown environments with dense obstacles.
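The first phase builds on Dyna-Q, whose model-based replay is what accelerates learning in obstacle-dense grids; a minimal sketch follows, with the class `DynaQSketch` and all parameters being illustrative assumptions.

```python
import random
from collections import defaultdict

class DynaQSketch:
    """Minimal Dyna-Q sketch for grid path planning: every real transition also
    feeds a learned model that is replayed for extra planning updates."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, planning_steps=20):
        self.q = defaultdict(float)       # (state, action) -> value
        self.model = {}                   # (state, action) -> (reward, next_state)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.planning_steps = planning_steps

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def _update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    def learn(self, s, a, r, s_next):
        self._update(s, a, r, s_next)          # direct RL step from real experience
        self.model[(s, a)] = (r, s_next)       # model learning
        for _ in range(self.planning_steps):   # planning with simulated experience
            (ps, pa), (pr, pnext) = random.choice(list(self.model.items()))
            self._update(ps, pa, pr, pnext)
```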

15.
To enable a natural and fluent human–robot collaboration flow, it is critical for a robot to comprehend its human peers’ ongoing actions, predict their behaviors in the near future, and plan its actions correspondingly. Specifically, the capability of making early predictions is important, so that the robot can foresee the precise timing of a turn-taking event and start motion planning and execution early enough to smooth the turn-taking transition. Such proactive behavior reduces the human’s waiting time, increases efficiency, and enhances naturalness in collaborative tasks. To that end, this paper presents the design and implementation of an early turn-taking prediction algorithm catered for physical human–robot collaboration scenarios. Specifically, a robotic scrub nurse system which can comprehend the surgeon’s multimodal communication cues and perform turn-taking prediction is presented. The developed algorithm was tested on a collected data set of simulated surgical procedures in a surgeon–nurse tandem. The proposed turn-taking prediction algorithm is found to be significantly superior to its algorithmic counterparts, and is more accurate than the human baseline when little partial input is given (less than 30% of the full action). After observing more information, the algorithm achieves performance comparable to humans, with an F1 score of 0.90.

16.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces that enable the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to higher-level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and of using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or the encountered situation. The robot then exploits this information for skill refinement in the policy parameter space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.

17.
An improved deep reinforcement learning algorithm (NDQN) is proposed to overcome the curse of dimensionality that the traditional Q-learning algorithm faces in path planning for mobile robots in complex terrain. Deep learning is embedded into the Q-learning framework so that a network output replaces the Q-value table. To address the severe overestimation problem of the deep Q-network, a correction function is used to improve its evaluation function. The improved deep reinforcement learning algorithm is then ...
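The abstract does not specify the correction function, so the sketch below uses a Double-DQN-style target, where the online network selects the next action and the target network evaluates it, purely as an illustrative stand-in for overestimation correction; `QNet`, `corrected_td_target`, and the layer sizes are assumptions, not NDQN itself.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP whose output replaces the Q-value table; sizes are assumptions."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

def corrected_td_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    """Double-DQN-style target as a stand-in for the paper's (unspecified)
    correction function: the online network picks the next action and the
    target network evaluates it, which reduces overestimation bias."""
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=-1, keepdim=True)
        next_q = target_net(next_state).gather(-1, next_action).squeeze(-1)
    return reward + gamma * (1.0 - done) * next_q
```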

18.
We describe a new preteaching method for reinforcement learning using a self-organizing map (SOM). The purpose is to increase the learning rate using a small amount of teaching data generated by a human expert. In our proposed method, the SOM is used to generate the initial teaching data for the reinforcement learning agent from a small amount of teaching data. The reinforcement learning function of the agent is initialized using the teaching data generated by the SOM in order to increase the probability of selecting the optimal actions it estimates. Because the agent can get high rewards from the start of reinforcement learning, the learning rate is expected to increase. The results of a mobile robot simulation showed that the learning rate increased even though the human expert had provided only a small amount of teaching data. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
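The preteaching idea can be sketched as training a small SOM on the expert's teaching examples and biasing the initial Q-values of the matching map nodes; the classes `TinySOM` and `preteach_q`, the 1-D neighborhood, and the bonus value are illustrative assumptions.

```python
import numpy as np

class TinySOM:
    """Small self-organizing map used to generalize a handful of teaching examples
    (state -> expert action) over the whole state space. Map size, learning rate,
    and neighborhood schedule are illustrative assumptions."""

    def __init__(self, n_nodes=16, dim=2, lr=0.3, sigma=2.0, epochs=200, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(-1, 1, size=(n_nodes, dim))
        self.lr, self.sigma, self.epochs = lr, sigma, epochs

    def bmu(self, x):
        # index of the best-matching unit for input x
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, samples):
        for epoch in range(self.epochs):
            frac = epoch / self.epochs
            lr, sigma = self.lr * (1 - frac), max(self.sigma * (1 - frac), 0.5)
            for x in samples:
                b = self.bmu(x)
                dist = np.abs(np.arange(len(self.w)) - b)   # 1-D neighborhood on the map
                h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
                self.w += lr * h[:, None] * (x - self.w)

def preteach_q(som, teaching_data, n_actions, bonus=1.0):
    """Initialize Q so that the expert action demonstrated near each SOM node starts
    with a higher value; states mapped to the same node inherit this bias."""
    q_init = np.zeros((len(som.w), n_actions))
    for state, action in teaching_data:
        q_init[som.bmu(np.asarray(state)), action] += bonus
    return q_init
```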

19.
A walking-aid robot is an assistive device for enabling safe, stable and efficient locomotion in elderly or disabled individuals. In this paper, we propose a reinforcement-learning-based shared control (RLSC) algorithm for an intelligent walking-aid robot to address existing control problems in cooperative walking-aid robot systems. First, the intelligent walking-aid robot and the human walking intention estimation algorithm are introduced. Due to the limited physical and cognitive capabilities of elderly and disabled people, robot control input assistance is provided to maintain tactile comfort and a sense of stability. Then, considering the robot’s ability to autonomously adapt to different user operation habits and motor abilities, the RLSC algorithm is proposed. By dynamically adjusting the user control weight according to different user control efficiencies and walking environments, the robot can improve the user’s comfort when using the device and automatically adapt to the user’s behaviour. Finally, the effectiveness of our algorithm is verified by experiments in a specified environment.
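A minimal sketch of the shared-control blend, where the command is a weighted sum of user and robot inputs and the user's weight is adapted from a measured control efficiency, is shown below; the function `shared_control_step` and its update rule and bounds are assumptions, not the RLSC algorithm.

```python
import numpy as np

def shared_control_step(u_human, u_robot, weight, efficiency, lr=0.05,
                        w_min=0.2, w_max=0.9):
    """Illustrative shared-control blend: the command sent to the walking-aid robot
    is a weighted sum of the user's input and the robot's autonomous input, and the
    user's weight is nudged up or down according to a measured control efficiency
    (e.g., how well recent user commands progressed toward the goal). The update
    rule and bounds are assumptions."""
    u = weight * np.asarray(u_human) + (1.0 - weight) * np.asarray(u_robot)
    # efficiency in [0, 1]: high efficiency -> trust the user more
    new_weight = float(np.clip(weight + lr * (efficiency - 0.5), w_min, w_max))
    return u, new_weight
```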

20.
Q-learning and its application to local path planning for intelligent robots   (Cited: 9; self-citations: 3; citations by others: 6)
The term "reinforcement learning" comes from behavioral psychology, which views behavior learning as a trial-and-error process that maps environment states to corresponding actions. In designing intelligent robots, how can this behaviorist idea be realized so that action behaviors are learned through interaction with the environment? In this paper, the actions a robot takes to avoid obstacles in an unknown environment are regarded as behaviors, and reinforcement learning is used to realize collision-avoidance behavior learning for intelligent robots. Q-learning is a reinforcement learning method similar to dynamic programming. After introducing the basic Q-learning algorithm, the paper proposes a Q-learning neural network learning algorithm with a competitive mechanism and self-organization, then studies its application to local path planning for intelligent robots, and gives detailed simulation results at the end.
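For reference, a minimal tabular Q-learning loop of the kind the paper builds on is sketched below; `env_step` and `reset` are user-supplied stand-ins for the robot's environment and are assumptions of this sketch.

```python
import random
from collections import defaultdict

def q_learning_episode(env_step, reset, actions, q=None,
                       alpha=0.1, gamma=0.9, epsilon=0.1, max_steps=200):
    """Basic tabular Q-learning episode for collision-avoidance behavior learning.
    `env_step(state, action)` returns (next_state, reward, done) and `reset()`
    returns an initial state; both are supplied by the caller."""
    q = q if q is not None else defaultdict(float)
    state = reset()
    for _ in range(max_steps):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env_step(state, action)
        # Q-learning update toward the greedy bootstrap target
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break
    return q
```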
