Similar Documents
20 similar documents retrieved.
1.
To address the slow convergence of existing reinforcement learning algorithms for robot path planning, an initialization method for mobile-robot reinforcement learning based on an artificial potential field is proposed. The robot's workspace is modeled as an artificial potential field, and prior knowledge is used to assign a potential value to every point in the field; this value represents the maximum cumulative reward attainable under the optimal policy. For example, obstacle regions have zero potential and the goal point has the global maximum. The initial Q-value is then defined as the immediate reward at the current point plus the maximum discounted cumulative reward of the successor point. By initializing the Q-values in this way, the improved algorithm converges faster and more stably. Finally, the improved algorithm is validated on robot paths in a grid map; the results show that the method raises learning efficiency in the initial stage and improves overall performance.
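A minimal sketch of the Q-value initialization this abstract describes, for a grid-map setting: the potential of a cell stands in for the maximum discounted return attainable from it, and Q0(s, a) is the immediate reward plus the discounted potential of the successor. The grid size, reward values, and discount factor below are illustrative assumptions, not values from the paper.

```python
import numpy as np

GAMMA = 0.9          # discount factor (assumed value)
GOAL_REWARD = 100.0  # reward at the goal cell (assumed value)
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def build_potential(shape, goal, obstacles):
    """Potential of a cell = maximum discounted return attainable from it:
    zero inside obstacles, global maximum at the goal."""
    potential = np.zeros(shape)
    for r in range(shape[0]):
        for c in range(shape[1]):
            if (r, c) in obstacles:
                continue                                    # obstacle region: zero potential
            d = abs(r - goal[0]) + abs(c - goal[1])         # Manhattan distance to the goal
            potential[r, c] = (GAMMA ** d) * GOAL_REWARD
    return potential

def init_q(shape, goal, obstacles, step_cost=-1.0):
    """Q0(s, a) = immediate reward of taking a in s + gamma * potential of the successor."""
    potential = build_potential(shape, goal, obstacles)
    q = {}
    for r in range(shape[0]):
        for c in range(shape[1]):
            for name, (dr, dc) in ACTIONS.items():
                nr, nc = r + dr, c + dc
                if not (0 <= nr < shape[0] and 0 <= nc < shape[1]) or (nr, nc) in obstacles:
                    q[((r, c), name)] = 0.0                 # blocked moves start at zero
                else:
                    reward = GOAL_REWARD if (nr, nc) == goal else step_cost
                    q[((r, c), name)] = reward + GAMMA * potential[nr, nc]
    return q

# Example: 10x10 grid, goal in the corner, a small obstacle block
q0 = init_q((10, 10), goal=(9, 9), obstacles={(4, 4), (4, 5), (5, 4)})
```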

2.
Neural-network-based Q-learning over continuous state spaces has been applied to robot navigation. Since neural networks easily get trapped in local minima, a mobile-robot navigation method combining support vector machines with Q-learning is proposed. First, the reward function for Q-learning is determined using the in-house CASIA-I mobile robot and its working environment as the experimental platform. Then a support vector machine estimates the Q-values of state-action pairs online; to speed up this estimation, a rolling time-window mechanism is introduced. Finally, experiments show that the proposed method enables the robot to reach its destination without collisions.
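A minimal sketch of estimating Q-values of state-action pairs online with a support vector regressor over a rolling time window, as the abstract describes. The feature encoding, hyperparameters, and window size are assumptions; the paper's CASIA-I reward function and features are not reproduced here.

```python
from collections import deque
import numpy as np
from sklearn.svm import SVR

GAMMA = 0.9
WINDOW = 200   # rolling time window: only the most recent samples are kept (size assumed)

window = deque(maxlen=WINDOW)   # (state-action feature vector, Q target) pairs
svr = SVR(kernel="rbf", C=10.0, gamma="scale")
fitted = False

def features(state, action, n_actions=4):
    """Concatenate the state with a one-hot action code (encoding is an assumption)."""
    one_hot = np.eye(n_actions)[action]
    return np.concatenate([np.asarray(state, dtype=float), one_hot])

def q_value(state, action):
    if not fitted:
        return 0.0
    return float(svr.predict(features(state, action).reshape(1, -1))[0])

def observe(state, action, reward, next_state, n_actions=4):
    """Add one transition to the rolling window and refit the regressor on the window only."""
    global fitted
    target = reward + GAMMA * max(q_value(next_state, a) for a in range(n_actions))
    window.append((features(state, action), target))
    X = np.stack([x for x, _ in window])
    y = np.array([t for _, t in window])
    svr.fit(X, y)          # refitting on a bounded window keeps the online estimation fast
    fitted = True
```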

3.
Path planning and obstacle avoidance are two challenging problems in the study of intelligent robots. In this paper, we develop a new method to alleviate these problems based on deep Q-learning with experience replay and heuristic knowledge. In this method, a neural network is used to resolve the "curse of dimensionality" issue of the Q-table in reinforcement learning. When a robot is walking in an unknown environment, it collects experience data, which is used to train a neural network; this process is called experience replay. Heuristic knowledge helps the robot avoid blind exploration and provides more effective data for training the neural network. The simulation results show that, in comparison with existing methods, our method converges to an optimal action strategy in less time and explores a path in an unknown environment with fewer steps and a larger average reward.
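For concreteness, a minimal sketch of an experience-replay buffer and an epsilon-greedy policy whose random branch is replaced by goal-directed heuristic knowledge, in the spirit of the abstract. The grid-style action coding and the toward_goal heuristic are illustrative assumptions only.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay: store transitions, sample minibatches for training."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

def toward_goal(state, goal):
    """Greedy heuristic: pick the axis move that most reduces Manhattan distance to the goal
    (grid coordinates and the action coding 0=up, 1=down, 2=left, 3=right are assumptions)."""
    dr, dc = goal[0] - state[0], goal[1] - state[1]
    if abs(dr) >= abs(dc):
        return 1 if dr > 0 else 0
    return 3 if dc > 0 else 2

def heuristic_action(state, goal, n_actions, q_of, epsilon=0.1):
    """Epsilon-greedy where the exploratory branch uses heuristic knowledge
    instead of a uniformly random move, avoiding blind exploration."""
    if random.random() < epsilon:
        return toward_goal(state, goal)
    return max(range(n_actions), key=lambda a: q_of(state, a))
```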

4.
A problem related to the use of reinforcement learning (RL) algorithms in real robot applications is the difficulty of measuring the learning level reached after some experience. Among the different RL algorithms, Q-learning is the most widely used for accomplishing robotic tasks. The aim of this work is to evaluate a priori the optimal Q-values for problems where the distance between the current state and the goal state of the system can be computed. Starting from the Q-learning update formula, the equations for the maximum Q-weights have been derived for optimal and non-optimal actions, considering both delayed and immediate rewards. Deterministic and non-deterministic grid-world environments have also been considered to test the obtained equations in simulation. In addition, the convergence rates of the Q-learning algorithm have been compared using different learning-rate parameters.
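To illustrate the kind of closed-form expression involved (my notation, not necessarily the authors'), assume a deterministic world, a single delayed reward R received only on reaching the goal, discount factor γ, and let d(s) be the number of steps from state s to the goal under an optimal policy. The converged Q-values then satisfy

```latex
Q^{*}(s, a_{\mathrm{opt}}) = \gamma^{\,d(s)-1} R,
\qquad
Q^{*}(s, a_{\mathrm{non\text{-}opt}}) = \gamma^{\,d(s')} R \le \gamma^{\,d(s)} R,
```

where s' is the successor reached by the non-optimal action and d(s') ≥ d(s). Comparing a learned Q-table against bounds of this form gives an a priori measure of how far learning has progressed.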

5.
A value-recursive Q-learning algorithm with finite samples and a proof of its convergence [Total citations: 5; self: 0; others: 5]
A reinforcement learning agent solves problems by learning an optimal policy that maps states to actions. Optimal decisions are generally obtained in one of two ways: by maximizing reward or by minimizing cost. Using the optimal-cost-function approach, a new Q-learning algorithm is presented. Q-learning is an effective reinforcement learning method for solving Markov decision problems with incomplete information. Watkins proposed the basic Q-learning algorithm and proved the convergence of its iterative update under certain conditions, but his algorithm does not account for how the choice of initial state and initial action during iteration affects subsequent learning. The proposed value-recursive Q-learning algorithm therefore improves on the original Q-learning algorithm and has better convergence properties. Starting from the optimal-cost-function approach, a value-recursive formulation of Q-learning is derived; establishing this formulation allows many results from dynamic programming (DP) to be applied directly to the study of Q-learning.

6.
Traditional control algorithms for coal-gangue sorting manipulators, such as the grasping-function method and the Ferrari-method-based dynamic target grasping algorithm, rely on an accurate environment model and lack adaptivity in the control process, while intelligent control algorithms such as the conventional deep deterministic policy gradient (DDPG) suffer from excessively large output actions and sparse rewards that are easily drowned out. To address these problems, the neural network structure and reward function of the conventional DDPG algorithm are improved, yielding an improved reinforcement-learning-based DDPG algorithm suited to a six-degree-of-freedom coal-gangue sorting manipulator. Once a gangue piece enters the manipulator's workspace, the improved DDPG algorithm makes decisions from the gangue position returned by the sensors and the manipulator's state, and outputs a set of joint-angle control quantities to the motion controller; the manipulator is driven according to the gangue position and these joint-angle commands so that it moves close to the gangue and sorts it. Simulation results show that, compared with conventional DDPG, the improved algorithm is model-free and more general, can adaptively learn grasping poses while interacting with the environment, and converges first to the maximum reward encountered during exploration; the policy learned by the manipulator under the improved algorithm generalizes better, outputs smaller joint-angle control quantities, and sorts gangue more efficiently.

7.
An improved deep reinforcement learning algorithm (NDQN) is proposed to overcome the curse of dimensionality faced by traditional Q-learning when planning paths for mobile robots in complex terrain. Deep learning is integrated into the Q-learning framework, replacing the Q-table with the output of a network. To counter the severe overestimation problem of deep Q-networks, a correction function is used to improve the evaluation function of the deep Q-network. The improved deep reinforcement learning algorithm is then compared with ...
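The truncated abstract does not spell out the correction function; for reference only, the standard remedy for overestimation in deep Q-networks is a double-DQN-style target, which decouples action selection (online network θ) from action evaluation (target network θ⁻):

```latex
y = r + \gamma \, Q_{\theta^{-}}\!\bigl(s',\ \arg\max_{a'} Q_{\theta}(s', a')\bigr)
```

Whether NDQN's correction takes exactly this form is not stated above.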

8.
An optimal-cost value-recursive Q-learning algorithm based on finite samples [Total citations: 4; self: 2; others: 4]
A reinforcement learning agent solves decision problems by learning an optimal policy that maps states to actions. Optimal decisions are generally obtained in one of two ways: by maximizing reward or by minimizing cost. This paper presents a new Q-learning algorithm based on the optimal-cost-function approach. Q-learning is an effective reinforcement learning method for solving Markov decision problems with incomplete information. Starting from the optimal-cost-function approach, a value-recursive formulation of Q-learning is derived; establishing this formulation allows many results from dynamic programming (DP) to be applied directly to the study of Q-learning.

9.
Mobile robots struggle to find good paths in complex environments. Q-learning, based on Markov processes, can obtain near-optimal paths through trial-and-error learning, but it converges slowly, needs many iterations, and its trial-and-error nature cannot be applied directly in real environments. An attractive potential field is added to Q-learning as prior environmental information for initialization; on this basis, the environment is searched layer by layer for trap regions, and Q-value iteration over concave trap regions is pruned, which speeds up convergence of the path planning. Trial-and-error learning against obstacles is also removed, so the algorithm avoids obstacles effectively from the initial state and is suitable for learning directly in real environments. Complex maps built with Python and the pygame module are used to verify the path-planning performance of the improved Q-learning algorithm with initial attractive potential field and trap search. Simulations show that the improved algorithm reaches the goal quickly and effectively after fewer iterations and yields a better path.

10.
Reinforcement learning is a learning scheme for finding the optimal policy to control a system, based on a scalar signal representing a reward or a punishment. If the controller's observation of the system is sufficiently rich to represent the system's internal state, the controller can achieve the optimal policy simply by learning reactive behavior. However, if the state of the controlled system cannot be assessed completely from current sensory observations, the controller must learn a dynamic behavior to achieve the optimal policy. In this paper, we propose a dynamic controller scheme which utilizes memory to uncover hidden states from past system outputs and makes control decisions based on that memory. This scheme integrates Q-learning, as proposed by Watkins, with recurrent neural networks of several types. It performs favorably in simulations involving a task with hidden states. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.
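A minimal modern sketch (in PyTorch, which of course postdates this 1996 work) of the kind of memory-based Q-network such a scheme combines with Q-learning: an LSTM hidden state accumulates information from past observations, so Q-values can reflect state that is hidden from the current observation. Layer sizes and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    """Q-network with an LSTM memory: past observations shape the hidden state,
    so Q-values can depend on information not visible in the current observation."""
    def __init__(self, obs_dim, n_actions, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_size, batch_first=True)
        self.q_head = nn.Linear(hidden_size, n_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); hidden carries the memory between calls
        out, hidden = self.lstm(obs_seq, hidden)
        return self.q_head(out), hidden

# One decision step: feed the latest observation, keep the recurrent state for the next step
net = RecurrentQNet(obs_dim=8, n_actions=4)
obs = torch.zeros(1, 1, 8)            # a single observation at one time step
q_values, memory = net(obs)           # pass `memory` back in at the next step
action = int(q_values[0, -1].argmax())
```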

11.
Robot path planning in unknown environments via rolling Q-learning with prior knowledge [Total citations: 1; self: 0; others: 1]
Hu Jun, Zhu Qingbao. 控制与决策 (Control and Decision), 2010, 25(9): 1364–1368
A rolling Q-learning path-planning algorithm with prior knowledge is proposed for robots in unknown environments. When initializing the Q-values, the algorithm injects prior knowledge of the environment as heuristic search information, which avoids blind exploration in the early learning stage and speeds up convergence. A rolling-window learning scheme is also used to cope with the robot's limited field of view in large environments and with the curse of dimensionality caused by the growth of the Q-learning state space. Simulations show that with this algorithm a robot can quickly plan an optimized obstacle-avoiding path from start to goal in a complex unknown environment, with satisfactory results.

12.
Traditional Q-learning defines the robot's reward function rather loosely, so the robot learns inefficiently. To address this, a reward-detailed-classification Q-learning (RDC-Q) algorithm is presented. Combining the readings of all of the robot's sensors, the robot's states are divided, according to distance from obstacles, into 20 reward states and 15 penalty states; the reward received at each time step is graded by the safety level of the state, steering the robot toward states with higher safety levels and thereby helping it learn faster and better. Simulations in an obstacle-dense environment show that the algorithm converges markedly faster than Q-learning with a traditional reward.
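A minimal sketch of grading the reward by the safety level of the state, as the abstract describes. The distance thresholds and reward magnitudes below are illustrative assumptions; only the counts of 20 reward levels and 15 penalty levels come from the abstract.

```python
import numpy as np

SAFE_DIST = 1.0       # distances beyond this are "reward" states (assumed, in metres)
MIN_DIST = 0.2        # closer than this is the most dangerous penalty state (assumed)
N_REWARD_LEVELS = 20  # from the abstract: 20 reward states
N_PENALTY_LEVELS = 15 # from the abstract: 15 penalty states

def graded_reward(sensor_distances):
    """Map the closest obstacle distance to a reward graded by safety level:
    the safer the state, the larger the reward; the closer the obstacle, the larger the penalty."""
    d = float(np.min(sensor_distances))           # worst-case reading over all sensors
    if d >= SAFE_DIST:
        # coarse split of the safe range into reward levels (level index capped)
        level = min(int(d - SAFE_DIST), N_REWARD_LEVELS - 1)
        return (level + 1) / N_REWARD_LEVELS
    # split [MIN_DIST, SAFE_DIST) into penalty levels: level 0 is the most dangerous
    frac = max(d - MIN_DIST, 0.0) / (SAFE_DIST - MIN_DIST)
    level = min(int(frac * N_PENALTY_LEVELS), N_PENALTY_LEVELS - 1)
    return -(N_PENALTY_LEVELS - level) / N_PENALTY_LEVELS
```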

13.
Existing neural-network fuzz-testing techniques usually mutate initial samples at random during test-case generation, which yields low-quality samples and thus low test coverage. To address this, a reinforcement-learning-based neural-network fuzzing technique is proposed that models the fuzzing process as a Markov decision process: test samples are treated as environment states, the different mutation methods form the action space, and neuron coverage serves as the reward signal. A reinforcement learning algorithm learns the optimal mutation policy, which guides the generation of test samples that achieve the highest neuron coverage. Comparative experiments against mainstream neural-network fuzzing methods show that the technique improves neuron coverage at different granularities.
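A minimal sketch of the MDP framing described above, with tabular Q-learning over mutation operators and the gain in neuron coverage as the reward. The mutation operators, the coverage oracle, and the state-hashing function are placeholders passed in as callables; they are not the paper's implementation.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2                      # learning parameters (assumed)
MUTATIONS = ["flip_bytes", "add_noise", "crop", "blur"]    # placeholder action space

q_table = defaultdict(float)                               # Q[(state_key, action)]

def fuzz_step(sample, coverage_of, mutate, state_key):
    """One fuzzing step: pick a mutation epsilon-greedily, apply it,
    and reward the resulting gain in neuron coverage."""
    s = state_key(sample)
    if random.random() < EPSILON:
        action = random.choice(MUTATIONS)
    else:
        action = max(MUTATIONS, key=lambda a: q_table[(s, a)])

    new_sample = mutate(sample, action)
    reward = coverage_of(new_sample) - coverage_of(sample)  # coverage gain as reward

    s_next = state_key(new_sample)
    best_next = max(q_table[(s_next, a)] for a in MUTATIONS)
    q_table[(s, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(s, action)])
    return new_sample
```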

14.
Research on an ART2-based Q-learning algorithm [Total citations: 1; self: 0; others: 1]
To overcome the "curse of dimensionality" that Q-learning faces in intelligent systems with continuous state spaces, an ART2-based Q-learning algorithm is proposed. By introducing an ART2 neural network, the Q-learning agent learns a task-appropriate incremental clustering of state-space patterns, so that without any prior knowledge it can carry out two-level online learning in an unknown environment (action decision making and state-space pattern clustering), continually improving its control policy through interaction with the environment and thereby raising learning accuracy. Simulations show that a mobile robot using the ARTQL algorithm steadily improves its navigation performance by learning through interaction with the environment.

15.
Quad-Q-learning     
This paper develops the theory of quad-Q-learning, a learning algorithm that evolved from Q-learning. Quad-Q-learning is applicable to problems that can be solved by "divide and conquer" techniques. Quad-Q-learning concerns an autonomous agent that learns without supervision to act optimally to achieve specified goals. The learning agent acts in an environment that can be characterized by a state. In the Q-learning environment, when an action is taken, a reward is received and a single new state results. The objective of Q-learning is to learn a policy function that maps states to actions so as to maximize a function of the rewards, such as the sum of rewards. In quad-Q-learning, by contrast, taking an action from a state either yields an immediate reward and no new state, or yields no reward and four new states. The environment in which quad-Q-learning operates can thus be viewed as a hierarchy of states in which lower-level states are the children of higher-level states. This hierarchical aspect leads to a bottom-up view of learning that improves the efficiency of learning at higher levels in the hierarchy. The objective of quad-Q-learning is to maximize the sum of rewards obtained from each of the environments that result as actions are taken. Two versions of quad-Q-learning are discussed: discrete-state, and mixed discrete- and continuous-state quad-Q-learning. The discrete-state version is only applicable to problems with small numbers of states; scaling up to practical numbers of states requires a continuous-state learning method, which can be accomplished using function approximation. The application of quad-Q-learning to image compression is briefly described.
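One plausible reading of the recursion the abstract describes (notation mine, not the paper's): an action a taken in state x either terminates with an immediate reward, or spawns four child states c_1(x,a), ..., c_4(x,a) whose returns are summed:

```latex
Q(x,a) =
\begin{cases}
\mathbb{E}\bigl[r(x,a)\bigr] & \text{if } a \text{ terminates at } x,\\[4pt]
\displaystyle\sum_{i=1}^{4} \max_{a_i} Q\bigl(c_i(x,a),\, a_i\bigr) & \text{if } a \text{ spawns four child states.}
\end{cases}
```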

16.
To tackle the high-dimensional nonlinear programming problem in walking control of multi-degree-of-freedom biped robots and to exploit their potential for autonomous motion in uncertain environments, an improved gait-planning scheme based on the deep deterministic policy gradient (DDPG) algorithm is proposed. The control problem over the biped's many joint degrees of freedom is recast as a multi-objective optimization of nonlinear functions and solved with DDPG. To remedy the slow convergence of a global approximation network, a radial basis function (RBF) neural network computes the nonlinear function values, gradient descent updates the network weights, and a SumTree is used to select high-quality samples. The biped robot was trained in simulation on a joint ROS, Gazebo, and TensorFlow platform. Simulation results show that the improved DDPG algorithm reaches the maximum cumulative reward 45.7% earlier on average, raises the success rate by 8.9%, and produces smoother joint-angle trajectories after training.

17.
For reinforcement learning control in continuous spaces, a Q-learning method based on a self-organizing fuzzy RBF network is proposed. The network takes the state as input and outputs a continuous action together with its Q-value, realizing a mapping from continuous states to continuous actions. The continuous action space is first discretized into a fixed number of discrete actions, and a fully greedy policy selects the discrete action with the largest Q-value as the local winning action of each fuzzy rule. A command-fusion mechanism then weights the winning discrete actions by their utility values to obtain the continuous action actually applied to the system. In addition, to simplify the network structure and speed up learning, an improved RAN algorithm and gradient descent adaptively adjust the network structure and parameters online. Simulations of inverted-pendulum balance control verify the effectiveness of the proposed Q-learning method.
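A minimal sketch of the command-fusion step described above: each fuzzy rule contributes its locally winning discrete action, and the continuous action applied to the system is their utility-weighted average. The non-negativity shift on the weights is an assumption made so that the weighting stays well defined.

```python
import numpy as np

def fuse_commands(winning_actions, utilities):
    """Command fusion: weight each rule's winning discrete action by its utility (Q) value
    to obtain the single continuous action applied to the system."""
    a = np.asarray(winning_actions, dtype=float)   # one winning discrete action per fuzzy rule
    w = np.asarray(utilities, dtype=float)
    w = w - w.min()                                # shift so all weights are non-negative (assumption)
    if w.sum() == 0.0:
        return float(a.mean())                     # all utilities equal: plain average
    return float(np.dot(w, a) / w.sum())

# Example: three rules propose torques -1.0, 0.5, 1.0 with utilities 0.2, 0.9, 0.6
u = fuse_commands([-1.0, 0.5, 1.0], [0.2, 0.9, 0.6])
```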

18.
Mahadevan, Sridhar. Machine Learning, 1996, 22(1-3): 159–195
This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced, and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.
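For reference, the R-learning update studied in this paper (notation mine) maintains action values R(s, a) and a running average-reward estimate ρ. After taking action a in state s, observing immediate reward r and next state s':

```latex
R(s,a) \leftarrow R(s,a) + \beta\Bigl[r - \rho + \max_{a'} R(s',a') - R(s,a)\Bigr],
```

and, when the action taken was greedy with respect to R,

```latex
\rho \leftarrow \rho + \alpha\Bigl[r + \max_{a'} R(s',a') - \max_{a} R(s,a) - \rho\Bigr].
```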

19.
To address the weak exploration ability and the sparse rewards over the environment state space that traditional deep reinforcement learning suffers from in mobile-robot path planning for unknown indoor environments, an improved deep reinforcement learning algorithm based on depth-image information is proposed. Depth images acquired directly by a Kinect vision sensor and the target position are used as the network's input, and the robot's linear and angular velocities are output as the next action command. An improved reward-penalty function is designed that raises the algorithm's reward values and refines the state space, alleviating the reward-sparsity problem to some extent. Simulations show that the improved algorithm strengthens the robot's exploration ability, smooths the path trajectory, lets the robot avoid obstacles effectively, and plans shorter paths: the average path length is 21.4% shorter than that of DQN in a simple environment and 11.3% shorter in a complex environment.
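A minimal sketch of the kind of improved reward-penalty function the abstract describes: dense, distance-shaped feedback that rewards per-step progress toward the goal and penalizes proximity to obstacles, mitigating reward sparsity. All thresholds and coefficients are illustrative assumptions; the paper's exact function is not reproduced.

```python
import numpy as np

GOAL_RADIUS = 0.3        # metres; reaching this counts as success (assumed)
COLLISION_DIST = 0.25    # metres; closer than this to an obstacle counts as a crash (assumed)
PROGRESS_GAIN = 2.0      # weight on per-step progress toward the goal (assumed)

def shaped_reward(depth_image, dist_to_goal, prev_dist_to_goal):
    """Dense reward: terminal bonuses/penalties plus a per-step progress term,
    so the agent gets feedback even far from the goal."""
    nearest_obstacle = float(np.min(depth_image))      # closest depth reading
    if dist_to_goal < GOAL_RADIUS:
        return 10.0, True                              # reached the goal
    if nearest_obstacle < COLLISION_DIST:
        return -10.0, True                             # collided with an obstacle
    progress = prev_dist_to_goal - dist_to_goal        # positive when moving toward the goal
    return PROGRESS_GAIN * progress - 0.01, False      # small time penalty each step
```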

20.
Intelligent mobile robots learning in unknown environments currently show poor learning initiative and poor real-time performance, and cannot accumulate learned knowledge and experience online. Inspired by intrinsic motivation in psychology, an intrinsically motivated online autonomous learning method for mobile robots in unknown environments is proposed to partly remedy these problems. Within a mobile-robot Q-learning framework, the method replaces the reward mechanism with psychologically inspired intrinsic motivation to increase the robot's initiative in learning the unknown environment; at the same time, it replaces the lookup table of classical Q-learning with an incremental self-organizing neural network that realizes the input-output mapping, so the robot can learn the unknown environment incrementally online. Experiments show that with the intrinsically motivated approach, the mobile robot's learning initiative in unknown environments is improved and its level of intelligence is clearly higher.
