Similar Documents
20 similar documents were found (search time: 62 ms).
1.
Learning to Recognize and Grasp Objects   (cited 1 time: 0 self-citations, 1 by others)
Pauli, Josef. Machine Learning, 1998, 31(1-3): 239-258
We apply techniques of computer vision and neural network learning to get a versatile robot manipulator. All work conducted follows the principle of autonomous learning from visual demonstration. The user must demonstrate the relevant objects, situations, and/or actions, and the robot vision system must learn from those. For approaching and grasping technical objects three principal tasks have to be done—calibrating the camera-robot coordination, detecting the desired object in the images, and choosing a stable grasping pose. These procedures are based on (nonlinear) functions, which are not known a priori and therefore have to be learned. We uniformly approximate the necessary functions by networks of gaussian basis functions (GBF networks). By modifying the number of basis functions and/or the size of the gaussian support the quality of the function approximation changes. The appropriate configuration is learned in the training phase and applied during the operation phase. All experiments are carried out in real world applications using an industrial articulation robot manipulator and the computer vision system KHOROS.
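Below is a minimal sketch, not taken from the paper, of function approximation with a network of Gaussian basis functions (GBF network). The basis-function centers, the Gaussian support size, and the training data are hypothetical placeholders; changing the number of basis functions or the support size changes the approximation quality, as the abstract describes.

```python
import numpy as np

def gbf_features(X, centers, sigma):
    """Evaluate Gaussian basis functions for each input sample."""
    # X: (n_samples, n_dims), centers: (n_basis, n_dims)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_gbf_network(X, y, centers, sigma):
    """Learn output weights by linear least squares (training phase)."""
    Phi = gbf_features(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, sigma, w):
    """Apply the learned network (operation phase)."""
    return gbf_features(X, centers, sigma) @ w

# Hypothetical example: approximate a smooth mapping from 2D inputs
# (e.g. image coordinates) to a grasp parameter.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 2))
y_train = np.sin(2 * np.pi * X_train[:, 0]) + X_train[:, 1]
centers = rng.uniform(0, 1, size=(25, 2))   # number of basis functions
sigma = 0.2                                  # size of the Gaussian support
w = fit_gbf_network(X_train, y_train, centers, sigma)
print(predict(X_train[:3], centers, sigma, w))
```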

2.
Traditional robotic arms are limited to mechanically grasping specific objects in fixed poses according to predefined procedures. To address this, an intelligent machine-vision-based grasping system for non-specific objects is designed. The system locates the target in images captured by a depth camera using a dedicated convolutional neural network and predicts a reliable grasp position for the target in the image. The grasp position information is then fed back to the robotic arm, which completes the grasping operation accordingly. The system is built on the Robot Operating System (ROS), and the hardware components exchange the necessary information through ROS topics. Repeated experiments show that, with an improved rapidly-exploring random tree (RRT) motion planning algorithm, the desktop robotic arm can grasp non-specific objects in varying poses in real time based on the marked positions fed back by the neural network model, which improves the arm's autonomy to some extent and compensates for the shortcomings of traditional robotic arms.
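As one illustration of the ROS topic mechanism mentioned above, here is a minimal rospy sketch in which a vision node publishes a predicted grasp position for the arm node to consume. The node name, topic name, frame, and coordinates are assumptions for illustration, not the system's actual interfaces.

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) sketch: a vision node publishes a predicted grasp
# pose on a topic; the arm-control node would subscribe to the same topic.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_grasp(x, y, z):
    pub = rospy.Publisher("/grasp_pose", PoseStamped, queue_size=1)  # topic name is an assumption
    rospy.init_node("grasp_vision_node")
    rospy.sleep(1.0)  # give the publisher time to connect
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "base_link"  # placeholder frame
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
    msg.pose.orientation.w = 1.0  # identity orientation as a placeholder
    pub.publish(msg)

if __name__ == "__main__":
    publish_grasp(0.4, 0.0, 0.05)
```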

3.
于涵, 李一染, 毕书博, 刘迎圆, 安康. Computer Engineering (计算机工程), 2021, 47(1): 298-304, 311
In traditional fixed-camera grasping systems for explosive-ordnance-disposal (EOD) robots, the camera view is easily occluded and image sharpness cannot be guaranteed. Based on follow-up (eye-in-hand) vision, an autonomous grasping system for EOD robots is proposed in which a depth camera is mounted on the end of the manipulator and moves with it. The depth camera computes the three-dimensional coordinates of the target object, and a coordinate transformation converts the object's position into the robot's global coordinate frame in real time. The dynamic mapping among the camera frame, the robot's global frame, and the end-effector gripper tool frame is studied to realize autonomous grasping by the EOD robot. Experimental results show that, compared with the traditional fixed-camera approach, the follow-up vision method allows the gripper to reach the target position with an error within 2 cm, and grasping performance is best when the robot is 100 cm to 150 cm from the target object.
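The coordinate-conversion step can be illustrated with a short sketch, assuming hypothetical transforms: a point measured in the moving camera frame is chained through a fixed camera-to-gripper (hand-eye) transform and the current gripper pose to obtain its coordinates in the robot's global frame.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical fixed hand-eye transform: camera frame -> gripper (tool) frame.
T_tool_cam = homogeneous(np.eye(3), np.array([0.0, 0.0, 0.10]))

# Current gripper pose in the robot's global (base) frame, e.g. from forward kinematics.
T_base_tool = homogeneous(np.eye(3), np.array([0.50, 0.10, 0.30]))

# A target point measured by the depth camera, in the camera frame (meters).
p_cam = np.array([0.02, -0.01, 0.45, 1.0])

# Chain the transforms: camera -> tool -> base (global frame).
p_base = T_base_tool @ T_tool_cam @ p_cam
print(p_base[:3])
```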

4.
李昕, 刘路. Computer Engineering (计算机工程), 2012, 38(23): 158-161, 165
To enable flexible self-localization and accurate object grasping, a robot self-localization and grasping algorithm based on vision and radio-frequency identification (RFID) is proposed. A gridded environment is constructed, and RFID is used to determine the robot's initial position, travel route, and heading. The vision system obtains the spatial coordinates of the object, which are transformed into the arm coordinate frame; the arm is modeled with an improved D-H model, and an inverse-kinematics grasping algorithm for the manipulator is given. Experimental results show that the algorithm achieves a localization success rate of 76.7% and a grasping success rate of up to 90%.
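For reference, here is a minimal sketch of the standard Denavit-Hartenberg link transform that underlies this kind of arm model; the link parameters below are placeholders and do not reproduce the paper's improved D-H model or its inverse-kinematics solution.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H transform from link frame i-1 to link frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Hypothetical 2-joint planar chain: end-effector pose = product of link transforms.
T = dh_transform(np.pi / 4, 0.0, 0.30, 0.0) @ dh_transform(-np.pi / 6, 0.0, 0.25, 0.0)
print(T[:3, 3])  # end-effector position in the base frame
```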

5.
An intelligent robotic-arm grasping and sorting system with three-dimensional stereo vision was developed using an industrial camera, an industrial projector, an ordinary webcam, a computer, and a robotic arm. Custom software automatically controls and synchronizes the industrial camera and projector. The object's height information is obtained with a dual-wavelength fringe-projection 3D profilometry method proposed in earlier work and, combined with OpenCV processing of the 2D planar information captured by the ordinary webcam, objects are automatically recognized and classified. The processed data are sent to the robotic arm over a serial communication protocol; the system solves for the geometric pose to perform intelligent grasping, and the gripper opening is automatically adjusted according to pressure feedback from the gripper fingers to achieve adaptive grasping. Experiments show that, with its built-in fast 3D profile acquisition device, the system can accurately and quickly grasp objects of arbitrary shape within its working range and sort them intelligently.
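A minimal pyserial sketch of the serial hand-off of the solved grasp data to the arm controller is shown below, under an assumed port, baud rate, and invented message format; a real controller defines its own protocol.

```python
import serial  # pyserial

# Port, baud rate, and message format are placeholders for illustration.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    x_mm, y_mm, z_mm, grip_mm = 120, 45, 30, 35
    # Simple comma-separated command carrying the target position and gripper opening.
    port.write(f"GRASP,{x_mm},{y_mm},{z_mm},{grip_mm}\n".encode("ascii"))
    reply = port.readline()  # optional acknowledgement from the arm controller
    print(reply)
```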

6.
We present an approach for controlling robotic interactions with objects, using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution to the problem consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially-occluded or deformable objects to planning robotic grasping based on human demonstration.

7.
In this paper, we present a strategy for fast grasping of unknown objects by mobile robots through automatic determination of the number of robots. An object handling system consisting of a Gripper robot and a Lifter robot is designed. The Gripper robot moves around an unknown object to acquire partial shape information for determination of grasping points. The object is transported if it can be lifted by the Gripper robot. Otherwise, if all grasping trials fail, a Lifter robot is used. In order to maximize use of the Gripper robot’s payload, the detected grasping points that apply the largest force to the gripper are selected for the Gripper robot when the object is grasped by two mobile robots. The object is measured using odometry and scanned data acquired while the Gripper robot moves around the object. Then, the contact point for calculating the insert position for the Lifter robot can be acquired quickly. Finally, a strategy for fast grasping of known objects by considering the transition between stable states is used to realize grasping of unknown objects. The proposed approach is tested in experiments, which find that a wide variety of objects can be grasped quickly with one or two mobile robots.

8.
During a match, the vision system of a humanoid soccer robot must recognize the ball, the goal, and the robots of both teams. For speed and effectiveness, a color-based algorithm is used to recognize the ball and the goal: color thresholds extract candidate positions of the ball and goal in the image, and their correct positions are then confirmed from background-color or area information. To recognize the robots of both teams, robot features are first extracted, a set of cascade classifiers is then trained with an online, real-time supervised learning method, and the trained classifiers detect the robots of both sides. Experiments show that the algorithm recognizes on-field targets quickly and effectively and is fairly robust.
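The color-threshold step can be sketched with OpenCV as below; the HSV bounds and the minimum contour area are hypothetical values, and the follow-up background-color/area check from the abstract is only indicated in a comment.

```python
import cv2
import numpy as np

def find_ball_candidates(bgr_image, lower_hsv, upper_hsv, min_area=100):
    """Return bounding boxes of regions whose color falls inside the threshold."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return boxes  # background-color / area checks would then confirm the true position

# Hypothetical orange-ball threshold in HSV.
image = cv2.imread("frame.jpg")
if image is not None:
    print(find_ball_candidates(image, np.array([5, 120, 120]), np.array([20, 255, 255])))
```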

9.
崔涛, 李凤鸣, 宋锐, 李贻斌. Control and Decision (控制与决策), 2022, 37(6): 1445-1452
To address the grasp decision problem for robots handling multiple object categories under different tasks, a grasping strategy learning method based on multiple constraints is proposed. The method takes the features of the grasped object and the attributes of the grasping task as constraints on the robot's grasping strategy, plans grasp modes by mapping human grasping habits, and uses the object's oriented bounding box (OBB) to build grasping rules, forming a multi-constraint grasping model. A deep radial basis function (DRBF) network combined with a subtractive clustering method (SCM) is used to learn the grasping strategy; the combination of the two algorithms aims to improve the robustness and accuracy of learning. An experimental platform consisting of a ReFlex 1 dexterous hand and an AUBO six-degree-of-freedom manipulator was built, and grasping experiments were conducted on objects of multiple categories. The results show that the proposed method lets the robot effectively learn optimal grasping strategies for different tasks on multiple objects, giving it good grasp decision-making ability.
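As a simple 2D analogue of the OBB-based grasping rule, the sketch below fits an oriented bounding box to an object contour with OpenCV; the segmentation mask is synthetic, and the paper's actual rule construction is not reproduced.

```python
import cv2
import numpy as np

def oriented_bounding_box(mask):
    """Fit a 2D oriented bounding box to the largest contour in a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    # The box center, extents, and angle can drive simple grasp rules,
    # e.g. grasp across the shorter side, aligned with the box orientation.
    return (cx, cy), (w, h), angle

# Hypothetical binary segmentation mask of the target object.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(mask, (100, 80), (220, 140), 255, -1)
print(oriented_bounding_box(mask))
```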

10.
In this paper, we present a strategy for fast grasping of unknown objects based on the partial shape information from range sensors for a mobile robot with a parallel-jaw gripper. The proposed method can realize fast grasping of an unknown object without needing complete information of the object or learning from grasping experience. Information regarding the shape of the object is acquired by a 2D range sensor installed on the robot at an inclined angle to the ground. Features for determining the maximal contact area are extracted directly from the partial shape information of the unknown object to determine the candidate grasping points. Note that since the shape and mass are unknown before grasping, a successful and stable grasp cannot be in fact guaranteed. Thus, after performing a grasping trial, the mobile robot uses the 2D range sensor to judge whether the object can be lifted. If a grasping trial fails, the mobile robot will quickly find other candidate grasping points for another trial until a successful and stable grasp is realized. The proposed approach has been tested in experiments, which found that a mobile robot with a parallel-jaw gripper can successfully grasp a wide variety of objects using the proposed algorithm. The results illustrate the validity of the proposed algorithm in terms of the grasping time.

11.
This paper proposes an integrated learning algorithm for dynamics-based control of industrial robots, in which previously learned information is used for new control inputs. The control method requires no prior knowledge of the robot dynamics, is easy to apply to particular control problems or to modify to accommodate changes in the real system, is time-efficient, and is well suited to fixed-point implementation. The effectiveness of the learning control algorithm is verified by computer simulation of the first two joints of a four-degree-of-freedom direct-drive robot performing repetitive motions.
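The idea of reusing information learned on earlier trials for new control inputs during repetitive motion can be illustrated with a generic P-type iterative-learning update on a toy first-order system; this is only a sketch of the general idea, not the paper's specific algorithm.

```python
import numpy as np

# Toy repetitive task: a first-order system y[t+1] = a*y[t] + b*u[t] should track
# a reference over N steps; each trial reuses the previous trial's input plus a
# correction from the previous trial's tracking error (P-type learning update).
N, trials, gain, a, b = 50, 8, 0.8, 0.9, 0.5
ref = np.sin(np.linspace(0, np.pi, N))
u = np.zeros(N)
for k in range(trials):
    y = np.zeros(N)
    for t in range(N - 1):
        y[t + 1] = a * y[t] + b * u[t]
    e = ref - y
    print(f"trial {k}: max |error| = {np.abs(e).max():.4f}")
    u[:-1] = u[:-1] + gain * e[1:]  # u_{k+1}(t) = u_k(t) + gain * e_k(t+1)
```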

12.
Active Learning for Vision-Based Robot Grasping   (cited 1 time: 0 self-citations, 1 by others)
Salganicoff, Marcos; Ungar, Lyle H.; Bajcsy, Ruzena. Machine Learning, 1996, 23(2-3): 251-278
Reliable vision-based grasping has proved elusive outside of controlled environments. One approach towards building more flexible and domain-independent robot grasping systems is to employ learning to adapt the robot's perceptual and motor system to the task. However, one pitfall in robot perceptual and motor learning is that the cost of gathering the learning set may be unacceptably high. Active learning algorithms address this shortcoming by intelligently selecting actions so as to decrease the number of examples necessary to achieve good performance and also avoid separate training and execution phases, leading to higher autonomy. We describe the IE-ID3 algorithm, which extends the Interval Estimation (IE) active learning approach from discrete to real-valued learning domains by combining IE with a classification tree learning algorithm (ID3). We present a robot system which rapidly learns to select the grasp approach directions using IE-ID3 given simplified superquadric shape approximations of objects. Initial results on a small set of objects show that a robot with a laser scanner system can rapidly learn to pick up new objects, and simulation studies show the superiority of the active learning approach for a simulated grasping task using larger sets of objects. Extensions of the approach and future areas of research incorporating more sophisticated perceptual and action representation are discussed.
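A minimal sketch of the basic Interval Estimation selection rule over a small discrete set of candidate approach directions (the full IE-ID3 extension to real-valued domains is not shown); the confidence level and the simulated success rates are placeholders.

```python
import math
import random

def ie_choose(successes, trials, z=1.96):
    """Interval Estimation: pick the action with the highest upper confidence
    bound on its success probability (normal approximation)."""
    best, best_ucb = None, -1.0
    for a in range(len(trials)):
        if trials[a] == 0:
            return a  # untried actions get priority
        p = successes[a] / trials[a]
        ucb = p + z * math.sqrt(p * (1 - p) / trials[a])
        if ucb > best_ucb:
            best, best_ucb = a, ucb
    return best

# Toy example: 4 candidate grasp approach directions with hidden success rates.
true_p = [0.2, 0.5, 0.8, 0.4]
successes, trials = [0] * 4, [0] * 4
random.seed(0)
for _ in range(100):
    a = ie_choose(successes, trials)
    trials[a] += 1
    successes[a] += random.random() < true_p[a]
print(trials, successes)  # sampling concentrates on the best direction
```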

13.
丁祥峰, 孙怡宁, 卢朝洪, 骆敏舟. Control Engineering (控制工程), 2005, 12(4): 302-304, 309
For an underactuated space gripper that fuses multiple sensors such as vision, slip, and angular displacement, adaptive grasping control of objects with different shapes and textures is studied. Sensor feedback is used to control the gripper's motion and grasping force, improving the gripper's autonomy. For grasp-mode selection, expert-system-based grasp planning chooses different grasp modes according to the shape and size of the object; for grasp-force control, feedback from a slip sensor made of PVDF is used with a fuzzy control method based on the slip signal, and different control parameters are chosen for objects of different textures. Experiments verify that the multi-sensing control method can reliably grasp a variety of objects.

14.
In this paper, we present an affordance learning system for robotic grasping. The system involves three important aspects: the affordance memory, synergy-based exploration, and a grasping control strategy using local sensor feedback. The affordance memory is modeled with a modified growing neural gas network that allows affordances to be learned quickly from a small dataset of human grasping and object features. After being trained offline, the affordance memory is used in the system to generate online motor commands for reaching and grasping control of the robot. When grasping new objects, the system can explore various grasp postures efficiently in the low dimensional synergy space because the synergies automatically avoid abnormal postures that are more likely to lead to failed grasps. Experimental results demonstrated that the affordance memory can generalize to grasp new objects and predict the effect of the grasp (i.e., the tactile patterns).

15.
On-line computation of forward and inverse Jacobian matrices is essential in robot manipulator controllers, where high-speed robot motion is required. The complexity of Jacobian calculation is such that the computational burden is large, and parallel processing is necessary if on-line computation is to be achieved. Various algorithms and parallel-processing networks suitable for this are considered. All algorithms have been implemented on transputer networks and computation times measured. The paper emphasises the importance of including communication overheads in comparisons of the computational efficiency of alternative algorithms and processor networks. Theoretical processing times based on computer cycle times and arithmetic operation counts are shown to be a false basis for comparison. Whilst considering the specific case of computation of Jacobian matrices for a robot manipulator, the paper provides a useful example of the considerations and constraints involved in distributing any algorithm across a multi-processor network.
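For concreteness, here is the forward kinematics and analytic Jacobian of a planar two-link arm, a minimal stand-in for the manipulator Jacobian whose on-line computation the paper parallelizes; the link lengths and velocity command are hypothetical, and the transputer-based parallel implementation is not reproduced.

```python
import numpy as np

def planar_2link_fk(q, l1=0.3, l2=0.25):
    """Forward kinematics of a planar two-link arm: joint angles -> tip position."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def planar_2link_jacobian(q, l1=0.3, l2=0.25):
    """Analytic Jacobian relating joint velocities to tip velocities."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

q = np.array([0.4, 0.7])
J = planar_2link_jacobian(q)
# The inverse (here: square) Jacobian maps a desired tip velocity to joint velocities.
qdot = np.linalg.solve(J, np.array([0.05, 0.0]))
print(planar_2link_fk(q), qdot)
```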

16.
Fuentes, Olac; Nelson, Randal C. Machine Learning, 1998, 31(1-3): 223-237
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.
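A minimal sketch of a plain (1+1) evolution strategy with the 1/5 success rule on a toy cost function; the authors' modified, noise-robust variant and the evaluation on the physical Utah/MIT hand are not reproduced here.

```python
import numpy as np

def one_plus_one_es(objective, x0, sigma=0.5, iters=200, seed=0):
    """(1+1) evolution strategy with 1/5-success-rule step-size adaptation (minimization)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx = objective(x)
    successes = 0
    for k in range(1, iters + 1):
        candidate = x + sigma * rng.standard_normal(x.shape)
        fc = objective(candidate)
        if fc < fx:               # keep the better of parent and offspring
            x, fx = candidate, fc
            successes += 1
        if k % 20 == 0:           # adapt the mutation step size every 20 iterations
            sigma *= 1.5 if successes / 20 > 0.2 else 0.6
            successes = 0
    return x, fx

# Toy stand-in for a manipulation cost (e.g. distance of the object from a goal pose).
cost = lambda p: float(np.sum((np.asarray(p) - np.array([1.0, -0.5, 0.25])) ** 2))
print(one_plus_one_es(cost, [0.0, 0.0, 0.0]))
```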

17.
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.

18.
GripSee: A Gesture-Controlled Robot for Object Perception and Manipulation   (cited 3 times: 0 self-citations, 3 by others)
We have designed a research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots. In order to operate semi-autonomously, these require a capacity for learning about their environment and tasks, and will have to interact directly with their human operators. Thus, they must be supplied with skills in the fields of human-computer interaction, vision, and manipulation. GripSee is able to autonomously grasp and manipulate objects on a table in front of it. The choice of object, the grip to be used, and the desired final position are indicated by an operator using hand gestures. Grasping is performed similar to human behavior: the object is first fixated, then its form, size, orientation, and position are determined, a grip is planned, and finally the object is grasped, moved to a new position, and released. As a final example for useful autonomous behavior we show how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous. The integration concepts developed at our institute have led to a flexible library of robot skills that can be easily recombined for a variety of useful behaviors.
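Learning an image-to-world transform from experience can be illustrated, under simplifying assumptions (a planar table and an affine map), by a least-squares fit to corresponding pixel/world points; the correspondences below are invented, and this is not the calibration procedure the paper describes.

```python
import numpy as np

def fit_image_to_world(pixels, world):
    """Least-squares fit of an affine image-to-world map for a table plane:
    [X, Y] = [u, v, 1] @ M, from corresponding (pixel, world) point pairs."""
    P = np.hstack([pixels, np.ones((len(pixels), 1))])  # [u, v, 1] rows
    M, *_ = np.linalg.lstsq(P, world, rcond=None)       # 3x2 parameter matrix
    return M

def image_to_world(M, uv):
    return np.append(uv, 1.0) @ M

# Hypothetical correspondences gathered from experience (observed pixel, known world position).
pixels = np.array([[100, 80], [400, 90], [110, 300], [390, 310]], dtype=float)
world = np.array([[0.20, 0.10], [0.50, 0.10], [0.20, 0.35], [0.50, 0.35]])
M = fit_image_to_world(pixels, world)
print(image_to_world(M, np.array([250.0, 200.0])))
```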

19.
To address the fixed workpiece positions and low grasping efficiency caused by teach-pendant robot programming, the application of neural networks to robot visual recognition and grasp planning is studied, and a vision-guidance scheme is established. A visual recognition system is developed with the YOLOV5 neural network model to identify object categories and obtain the coordinates of the localization point of the object to be grasped. A six-point hand-eye calibration principle for the robot is presented along with calibration experiments, and a localization method is proposed for objects whose top view is circular or rectangular. Finally, 180 grasping trials were carried out on three kinds of objects, with an overall average grasping success rate of about 92.8%, verifying that the visual recognition and grasping robot system is viable in practice and effectively improves grasping efficiency.
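A minimal sketch of the detection step using a pretrained YOLOv5 model through torch.hub (the public ultralytics/yolov5 interface, not necessarily the authors' exact setup); the image path and confidence threshold are placeholders, and the first call downloads the model over the network.

```python
import torch

# Load a pretrained YOLOv5 model from the ultralytics hub (downloads on first use).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.5  # confidence threshold (placeholder)

results = model("workspace.jpg")          # path, URL, numpy array, or PIL image
detections = results.pandas().xyxy[0]     # one DataFrame per input image
for _, det in detections.iterrows():
    # The box center in pixels could serve as the object's localization point.
    u = (det["xmin"] + det["xmax"]) / 2
    v = (det["ymin"] + det["ymax"]) / 2
    print(det["name"], det["confidence"], (u, v))
```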

20.
The diversity of grasp targets and the randomness of their poses severely limit the task adaptability of robotic grasping. To improve the grasping success rate, a robot grasp pose estimation method that fuses multi-scale features is proposed. The method takes RGD information as input, uses a ResNet-50 backbone network, and fuses FPN (feature pyramid networks) to obtain multi-scale features as the input to a grasp generation network, which produces grasp candidate boxes; the grasp orientation is then ...
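Extracting multi-scale features from a ResNet-50 backbone with an FPN can be sketched with torchvision as below; the random input stands in for the RGD image, and the grasp generation network itself is not shown.

```python
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-50 backbone with a feature pyramid network on top (torchvision >= 0.13 API).
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)
backbone.eval()

# A dummy 3-channel input stands in for the RGD image described above.
x = torch.randn(1, 3, 480, 640)
with torch.no_grad():
    features = backbone(x)  # ordered dict of multi-scale feature maps ('0'..'3', 'pool')
for name, fmap in features.items():
    print(name, tuple(fmap.shape))  # these maps would feed the grasp generation network
```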
