Similar Documents
19 similar documents found, search time 187 ms
1.
To adapt to increasingly complex unstructured environments, a cable-driven continuum manipulator combining ball-joint links with flexible support rods was designed. A kinematic model of the continuum robot was established under the constant-curvature assumption, the actuation mapping was studied, and both were simulated in MATLAB; the results demonstrate the robot's advantage in workspace dexterity. A three-joint continuum robot prototype platform was built, a tip-follows-handle teleoperation mode was designed around the robot's characteristics, and experiments on the prototype confirmed the correctness of the kinematic model and actuation mapping and the feasibility of the control scheme.
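The constant-curvature assumption makes a single segment's forward kinematics closed-form: a curvature κ, a bending-plane angle φ, and an arc length ℓ determine the tip position. A minimal sketch of that mapping (the paper's exact model and parameters are not given here, so this is illustrative only):

```python
import math

def segment_tip(kappa, phi, ell):
    """Tip position of one constant-curvature segment.

    kappa: curvature (1/m), phi: bending-plane angle (rad), ell: arc length (m).
    """
    if abs(kappa) < 1e-9:          # straight-segment limit as kappa -> 0
        return (0.0, 0.0, ell)
    # radius of the in-plane chord from base to tip
    r = (1.0 - math.cos(kappa * ell)) / kappa
    return (r * math.cos(phi), r * math.sin(phi), math.sin(kappa * ell) / kappa)
```

For a quarter-circle bend (κℓ = π/2 with ℓ = 1) the tip lands at (2/π, 0, 2/π); chaining such segments gives the multi-joint arm's kinematics.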

2.
To address the limited working capability of fixed-base robots in complex environments, an electro-hydraulic quadruped dual-arm robot was developed. It combines the advantages of a floating base with a dual-arm system and can replace human workers in emergency response, engineering operations, and similar tasks in complex environments. The paper details the design and implementation of the robot's mechanical structure, onboard electro-hydraulic power system, distributed control system, and simulation and operator-training platform. A whole-body virtual-model-based foot-force distribution method and a leg-arm coordinated motion planning method are proposed, linking the floating trunk with the dual-arm system and greatly improving working capability and efficiency. Single-arm and cooperative dual-arm operations were simulated on the training platform, verifying the effectiveness of the proposed control methods, and robot operators were trained on it. In prototype experiments, single-arm grasping and cooperative dual-arm grasping were tested, proving that the quadruped dual-arm robot meets the needs of mobile operation in complex environments.

3.
闫赟 《现代计算机》2022,(11):117-120
To raise the intelligence level of mobile grasping robots, an indoor mobile grasping robot system prototype was studied and implemented on ROS. The system perceives its surroundings with a lidar, performs simultaneous localization and mapping with the Gmapping algorithm, navigates autonomously with the move_base package, estimates the target's pose with the deep-learning-based DOPE algorithm, and finally controls the manipulator to grasp the target. Experimental results show that the system completes mobile grasping tasks in indoor environments with good practical effect.

4.
To accomplish Kinect-based multi-arm coordinated fine manipulation, a dual-manipulator system was built. A Kinect serves as the vision sensor for real-time scene detection, and a workspace-based RRT algorithm plans the end-effector path of a 7-DOF manipulator for autonomous grasping of the target. Using hand-eye coordination control, the motion of a second, 6-DOF manipulator is driven by the image error measured by a camera on its end-effector, while a particle filter tracks the target in real time. An experimental system in which the two arms cooperatively hand over an object was designed, completing the multi-arm cooperation task and verifying the reliability of the method.
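The workspace-based RRT mentioned above can be sketched in its generic 2-D form; the step size, goal bias, and sampling bounds here are illustrative assumptions, not the paper's values:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iter=5000, seed=0):
    """Minimal 2-D RRT: grow a tree from start toward random samples
    (with a 10% goal bias) until a node lands within goal_tol of the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # steer one step from the nearest node toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):       # collision check supplied by the caller
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:   # walk parents back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

In the paper the same loop runs over the 7-DOF arm's workspace with a real collision checker in place of `is_free`.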

5.
Rolling Q-learning robot path planning with prior knowledge in unknown environments
胡俊  朱庆保 《控制与决策》2010,25(9):1364-1368
A rolling Q-learning path planning algorithm with prior knowledge is proposed for robots in unknown environments. When the Q-values are initialized, prior knowledge of the environment is injected as heuristic search information, avoiding blind exploration in the early learning phase and improving convergence speed. Meanwhile, rolling (receding-horizon) learning copes with the robot's limited sensing range in large-scale environments and with the curse of dimensionality caused by the growth of the Q-learning state space. Simulation results show that, with this algorithm, a robot can quickly plan an optimized obstacle-avoiding path from start to goal in complex unknown environments, with satisfactory results.
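A minimal grid-world sketch of seeding the Q-table with prior knowledge: here the "prior" is a Manhattan-distance heuristic toward the goal, an assumed stand-in for the paper's environment knowledge, and the rolling-window mechanism is omitted for brevity:

```python
import random

def plan(grid, start, goal, episodes=300, alpha=0.5, gamma=0.95, eps=0.2, seed=1):
    """Tabular Q-learning on a grid; Q is initialised from a distance
    heuristic instead of zeros, so early exploration is already guided."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def h(s, a):  # prior knowledge: prefer actions that cut the distance to goal
        r, c = s[0] + a[0], s[1] + a[1]
        return -float(abs(r - goal[0]) + abs(c - goal[1]))

    Q = {((r, c), a): h((r, c), a) for r in range(rows) for c in range(cols) for a in moves}
    rng = random.Random(seed)
    for _ in range(episodes):
        s = start
        for _ in range(4 * rows * cols):
            a = rng.choice(moves) if rng.random() < eps else max(moves, key=lambda m: Q[(s, m)])
            nr, nc = s[0] + a[0], s[1] + a[1]
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                r, s2 = -5.0, s                    # wall or obstacle: stay put
            elif (nr, nc) == goal:
                Q[(s, a)] += alpha * (10.0 - Q[(s, a)])
                break                              # terminal update, episode ends
            else:
                r, s2 = -1.0, (nr, nc)             # small step cost
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in moves) - Q[(s, a)])
            s = s2
    path, s = [start], start                       # greedy rollout of the learned policy
    for _ in range(rows * cols):
        a = max(moves, key=lambda m: Q[(s, m)])
        nr, nc = s[0] + a[0], s[1] + a[1]
        if not (0 <= nr < rows and 0 <= nc < cols):
            break
        s = (nr, nc)
        path.append(s)
        if s == goal:
            break
    return path
```

Because the initial Q already ranks distance-reducing moves highest, the agent is never blind even before its first update, which is the effect the abstract claims.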

6.
For autonomous excavator operation, a reinforcement-learning-based time-optimal trajectory planning method is proposed. First, a simulation environment is built to generate data: the angles and angular velocities of the boom, stick, and bucket joints are the state observations and the joint angular accelerations are the actions, so the environment interacts with the learning algorithm through these observations. Then a reward function is designed from whether the boom, stick, and bucket motions exceed their allowed ranges, the total task-completion time, and the relative distance to the target, and it is used to train the policy-network parameters. Finally, an improved proximal policy optimization (PPO) algorithm realizes the excavator's time-optimal trajectory planning. Comparative experiments against other continuous-action reinforcement learning algorithms show that the proposed algorithm is more efficient, converges faster, and yields smoother working trajectories, effectively avoiding large joint impacts and helping the excavator work efficiently and smoothly.
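The three reward terms named above (joint-range violations, elapsed time, distance to the target) might be combined as follows; the weights and the helper name `step_reward` are illustrative assumptions, not the paper's tuned values:

```python
def step_reward(joint_angles, limits, t_elapsed, dist_to_goal, reached):
    """Hypothetical shaping of the paper's three reward terms."""
    # count boom/stick/bucket joints outside their allowed [lo, hi] ranges
    violation = sum(1 for q, (lo, hi) in zip(joint_angles, limits) if not lo <= q <= hi)
    r = -10.0 * violation        # penalize leaving the allowed joint ranges
    r -= 0.1 * t_elapsed         # penalize elapsed time -> time-optimal motion
    r -= dist_to_goal            # pull the bucket toward the target pose
    if reached:
        r += 100.0               # terminal bonus on task completion
    return r
```

The policy network is then trained to maximize the discounted sum of this signal under PPO's clipped surrogate objective.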

7.
To improve manipulator flexibility and reduce joint-angle control error, an embedded manipulator angle controller based on the bacterial foraging algorithm is designed. First, a manipulator dynamics model is built to obtain the arm's flexibility characteristics and joint positions, from which the controller's hardware logic structure and algorithm are determined. Then, using an ARM-microprocessor embedded operating system, the controller hardware, comprising a mobile control terminal and a manipulator control side, is designed. Finally, the bacterial foraging algorithm is used to optimize the controller parameters, and the implementation achieves precise tracking control of the manipulator's joint angles. Simulation results show that the method delivers high pose-tracking accuracy, small angle-control error, and strong stability, guarantees joint angles without overshoot, and has high engineering application value.

8.
To achieve autonomous manipulator grasping against non-specific, cluttered backgrounds, an RGB-D camera is used to detect objects in the scene in real time, a deep-learning-based object detection and localization method is adopted, and the camera-manipulator-object 3D calibration model is studied. The object's 3D coordinates are published to the manipulator over the ROS topic mechanism, and grasp motions are planned with MoveIt. A ROS-based vision detection and manipulator grasping system was designed, integrating computer vision detection and manipulator motion planning on the ROS platform. Experimental results show that the system can operate the robot efficiently in real time to complete assigned tasks, improves adaptability to the environment, grasps accurately with a high object-recognition rate, and remedies the shortcomings of traditional manipulator control.

9.
To inspect and maintain high-voltage switchgear more precisely and reduce the influence of complex environments, a learning-automata-based coordinated manipulation perception control method is proposed. The deformation of the manipulator's flexible links is described quantitatively, and the dynamics and vibration equations of the robot system are derived via the Lagrangian formulation. Admittance theory is used to relate the arm's applied force to the desired velocity, yielding the coordination constraints. The coordinated perception control task is cast as a quadratic programming problem, and a learning automaton governs the motion, ensuring the robot completes the coordinated control of switchgear opening/closing and circuit-breaker replacement in a substation within the required time. Simulation results show the method achieves high control accuracy and efficiency and improves the compliance and synchronization of the robot's motion.

10.
Continuum robots, with their compliant large deformations and dexterous motion, represent a development trend toward safer, more interactive robots, and the digital twin is a key technological enabler of robot-environment-human coexistence. Taking a tensegrity continuum flexible arm as the research object and combining digital-twin and virtual-simulation techniques, the flexible arm is deeply fused across virtual space and physical space. A data-communication architecture provides real-time data transmission and driving, improving the efficiency of human-arm collaboration and enabling dynamic obstacle avoidance via collision-detection feedback in complex environments. Further, a dynamics-based digital-twin system for the tensegrity flexible arm was developed, and bidirectional virtual-physical operation verified the system's effectiveness, providing a reference for remote intelligent monitoring and control of robots.

11.
Quadruped robots have many joints and complex motion patterns, and gait planning is the foundation of their motion control. Traditional algorithms are mostly based on bionic principles and lack broad adaptability. Building on the kinematic equations, a gait planning algorithm based on an improved ant colony algorithm is proposed. Exploiting the linear independence of the four legs' motions, the gait planning problem is transformed into finding the longest path in a four-dimensional space. Simulation results show that the algorithm yields all gaits satisfying the constraints, and prototype tests verify the validity and reasonableness of the results.

12.
When basic Q-learning is applied to path planning, random action selection makes the early search inefficient and time-consuming, sometimes failing to find a complete feasible path. A robot path planning algorithm fusing an improved ant colony algorithm with dynamic Q-learning is therefore proposed. Drawing on the pheromone-increment mechanisms of the elitist-ant and rank-based-ant models, a new pheromone-increment update rule is designed to improve exploration efficiency; the improved ant colony's pheromone matrix is used to initialize the Q-table, reducing ineffective early exploration; and a dynamic action-selection strategy raises both convergence speed and algorithm stability. Simulations on 2D static grid maps with different obstacle densities show that the method effectively reduces the iteration count and planning time.
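Seeding the Q-table from the ant colony's pheromone matrix could look like the following sketch; the linear mapping from pheromone level to initial Q-value is an assumption, not the paper's exact rule:

```python
def init_q_from_pheromone(tau, actions, scale=1.0):
    """Seed a Q-table from an ant-colony pheromone field so that early
    exploration follows pheromone trails instead of being uniformly random.

    tau[r][c] is the pheromone level of grid cell (r, c);
    actions are (dr, dc) moves."""
    Q = {}
    for r, row in enumerate(tau):
        for c, _ in enumerate(row):
            for a in actions:
                nr, nc = r + a[0], c + a[1]
                if 0 <= nr < len(tau) and 0 <= nc < len(row):
                    Q[((r, c), a)] = scale * tau[nr][nc]  # value of moving onto that cell
                else:
                    Q[((r, c), a)] = -scale               # discourage leaving the map
    return Q
```

Standard Q-learning updates then run on top of this table, so cells the ants favoured are tried first.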

13.
To overcome the curse of dimensionality that traditional Q-learning suffers in mobile-robot path planning in complex environments, an improved method is proposed. Deep learning is embedded in the Q-learning framework, with a network output replacing the Q-value table to resolve the dimensionality problem. An experience-replay memory and a two-network structure break the correlations in the data and improve convergence. Finally, simulation environments are built with the grid method, and experiments on maps of varying complexity show that traditional Q-learning struggles to plan paths over large state spaces, while deep reinforcement learning plans well in complex state environments.
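The two stabilizing tricks the abstract names, experience replay and a separate target network, can be sketched framework-free; the class and function names here are illustrative:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience replay: sampling random minibatches breaks the temporal
    correlation that destabilises Q-network training."""
    def __init__(self, capacity=10000, seed=0):
        self.buf = deque(maxlen=capacity)   # old transitions fall off the back
        self.rng = random.Random(seed)

    def push(self, s, a, r, s2, done):
        self.buf.append((s, a, r, s2, done))

    def sample(self, batch_size):
        return self.rng.sample(self.buf, batch_size)

def sync_target(online_weights, target_weights, tau=1.0):
    """Copy (tau=1) or soft-blend (tau<1) the online network's weights into
    the slowly moving target network used to compute TD targets."""
    return [tau * w + (1.0 - tau) * t for w, t in zip(online_weights, target_weights)]
```

During training, TD targets are computed with the target network while gradients update only the online network, and `sync_target` is called periodically.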

14.
This paper presents a new path planning algorithm for mobile robots that exploits the characteristics of obstacle borders and fuzzy logical reasoning. The environment topology, or working space, is described by a time-variable grid method that also captures moving obstacles and variations in path safety. Based on this algorithm, a new path planning approach for mobile robots in unknown environments has been developed, which lets a mobile robot find a safe path from its current position to the goal using a sensor system. Two types of machine learning, advancing learning and exploitation (trial) learning, are explored, and both are applied to learning the path planning algorithm. Comparison with an A* path planning approach and various simulation results demonstrate the efficiency of the algorithm. The approach can also be applied to computer games.

15.
Thanks to its strong self-learning ability, reinforcement learning has gradually become a research focus in robot navigation, but complex unknown environments challenge its running efficiency and convergence speed. A new Q-learning navigation algorithm is proposed: the environment state space is defined by three discrete variables, and a two-part reward function is designed that incorporates knowledge helpful for reaching the goal to guide the robot's learning process. Experiments on the Simbad simulation platform show that the proposed algorithm completes navigation tasks in unknown environments well, with superior convergence performance.
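The two-part reward described above might look like the following sketch; the thresholds and weights are illustrative assumptions, not the paper's values:

```python
def nav_reward(dist_to_goal, prev_dist, min_obstacle_dist, safe_dist=0.3):
    """Hypothetical two-part shaping reward: a goal-progress term that
    encodes knowledge useful for navigation, plus an obstacle-safety term."""
    r = 1.0 if dist_to_goal < prev_dist else -1.0   # part 1: progress toward the goal
    if min_obstacle_dist < safe_dist:
        r -= 5.0                                    # part 2: too close to an obstacle
    return r
```

Feeding this shaped signal into ordinary Q-learning is what steers exploration toward the goal from the very first episodes.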

16.
A dual-arm reconfigurable robot is a new type of robot: it adapts to different tasks by exchanging end-effector modules that have standard connectors. Especially in fast, flexible assembly, it is important to study collision-free planning for dual-arm reconfigurable robots, i.e., finding a continuous, collision-free path in an environment containing obstacles. This paper uses a new approach to real-time collision-free motion planning for dual-arm reconfigurable robots, based on configuration space (C-space). The C-space method and the concepts of reachable manifold and contact manifold are successfully applied to dual-arm collision-free motion planning, reducing the complexity of the problem to a search in a discretized C-space. With this algorithm a real-time optimal path is found: once the start and end points of the dual-arm robot are specified, the algorithm obtains the collision-free path in real time. The algorithm was verified on the dual-arm horizontal articulated robot SCARATES, and simulation and experiment confirm that it is feasible and effective.
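The search in a discretized C-space can be illustrated with a breadth-first search over a two-joint configuration grid; the real system searches a higher-dimensional space with manifold-based pruning, so this sketch only shows the discrete-search core:

```python
from collections import deque

def cspace_path(n_steps, in_collision, start, goal):
    """Breadth-first search over an n_steps x n_steps discretised two-joint
    C-space grid. in_collision(i, j) marks configurations where the arms
    would collide; BFS returns a shortest collision-free configuration path."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    parent = {start: None}                 # doubles as the visited set
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:         # walk parents back to the start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for di, dj in moves:
            nxt = (cur[0] + di, cur[1] + dj)
            if (0 <= nxt[0] < n_steps and 0 <= nxt[1] < n_steps
                    and nxt not in parent and not in_collision(*nxt)):
                parent[nxt] = cur
                frontier.append(nxt)
    return None                            # goal unreachable
```

Each grid index pair stands for a sampled joint configuration; the contact manifold corresponds to the cells `in_collision` rejects.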

17.
Cooperative strategy based on adaptive Q-learning for robot soccer systems
The objective of this paper is to develop a self-learning cooperative strategy for robot soccer systems. The strategy enables robots to cooperate and coordinate with each other to achieve the objectives of offense and defense. Through the learning mechanism, the robots can learn from both successes and failures and use these experiences to improve performance gradually. The cooperative strategy is built on a hierarchical architecture. The first layer assigns the roles, that is, how many defenders and sidekicks should play given the positional states. The second layer performs the role assignment that follows from the previous layer's decision; two algorithms are developed for assigning the roles of attacker, defenders, and sidekicks. The last layer is the behavior layer, in which robots execute their behavior commands and tasks according to their roles: the attacker chases the ball and attacks, the sidekicks seek good positions, and the defenders prevent the opposing team from scoring. The robots' roles are not fixed; they can dynamically exchange roles with each other. For learning, we develop an adaptive Q-learning method modified from traditional Q-learning. A simple experiment shows that it is more effective than the traditional techniques, and it is also successfully applied to learning the cooperative strategy.

18.
A modular robot can be built with a shape and function that match the working environment. We developed a four-arm modular robot system that can be configured in a planar structure, with a learning mechanism incorporated in each module constituting the robot. We aim to control the overall shape of the robot through the accumulation of autonomous actions produced by the individual learning functions. Since the overall shape of a modular robot depends on the learning conditions in each module, this control method can be treated as a distributed learning control method. The learning objective is cooperative motion between adjacent modules, and the learning process proceeds by trial and error based on Q-learning. We confirmed the effectiveness of the proposed technique by computer simulation.

19.
《Advanced Robotics》2013,27(1):83-99
Reinforcement learning can be an adaptive and flexible control method for autonomous systems. It needs no a priori knowledge; behaviors that accomplish given tasks are obtained automatically by repeated trial and error. However, as system complexity increases, the learning costs grow exponentially, so application to complex systems, such as highly redundant robots and multi-agent systems, is very difficult. Previous work in this field was restricted to simple robots and small multi-agent systems, and because such simple systems have little redundancy, the effectiveness of reinforcement learning was limited. In our previous work we addressed these problems and proposed a new reinforcement learning algorithm, 'Q-learning with dynamic structuring of exploration space based on GA (QDSEGA)', whose effectiveness for redundant robots was demonstrated on a 12-legged robot and a 50-link manipulator. However, that work was restricted to redundant robots, and QDSEGA could not be applied to multiple mobile robots. In this paper we extend QDSEGA by combining it with rule-based distributed control and propose a hybrid autonomous control method for multiple mobile robots. To demonstrate the effectiveness of the proposed method, simulations of a transportation task by 10 mobile robots were carried out, and effective behaviors were obtained.
