Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
We present an approach for controlling robotic interactions with objects using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution consists of two parts. First, within a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasps based on human demonstration.
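The morph-energy dissimilarity described above can be sketched minimally as follows: resample both contours to the same number of points by arc length, then sum the squared displacements needed to move one contour onto the other. This is an illustrative assumption, not the paper's actual morphing algorithm, and all function names are invented for the example.

```python
import math

def resample(contour, n):
    """Resample a closed 2D contour (list of (x, y)) to n evenly spaced points by arc length."""
    pts = contour + [contour[0]]  # close the contour
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seg)
    out, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / n
        while acc + seg[i] < target:
            acc += seg[i]
            i += 1
        t = (target - acc) / seg[i]  # linear interpolation within the segment
        out.append((pts[i][0] + t * (pts[i + 1][0] - pts[i][0]),
                    pts[i][1] + t * (pts[i + 1][1] - pts[i][1])))
    return out

def morph_energy(a, b, n=64):
    """Dissimilarity: total squared displacement moving contour a onto contour b."""
    ra, rb = resample(a, n), resample(b, n)
    return sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for p, q in zip(ra, rb))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
big_square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(morph_energy(square, square))        # 0.0 for identical contours
print(morph_energy(square, big_square) > 0)  # True
```

A lower energy means a cheaper morph, so the template minimizing this value would be selected as the recognized shape.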

2.
In this paper, we present a strategy for fast grasping of unknown objects, based on partial shape information from range sensors, for a mobile robot with a parallel-jaw gripper. The proposed method can realize fast grasping of an unknown object without requiring complete information about the object or learning from grasping experience. Information about the shape of the object is acquired by a 2D range sensor installed on the robot at an angle inclined to the ground. Features for determining the maximal contact area are extracted directly from the partial shape information of the unknown object to determine the candidate grasping points. Note that since the shape and mass are unknown before grasping, a successful and stable grasp cannot in fact be guaranteed. Thus, after performing a grasping trial, the mobile robot uses the 2D range sensor to judge whether the object can be lifted. If a grasping trial fails, the mobile robot quickly finds other candidate grasping points for another trial until a successful and stable grasp is realized. The proposed approach has been tested in experiments, which found that a mobile robot with a parallel-jaw gripper can successfully grasp a wide variety of objects using the proposed algorithm. The results illustrate the validity of the proposed algorithm in terms of grasping time.
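One common way to enumerate candidate grasping points for a parallel-jaw gripper from partial 2D scan data is to pair surface points with roughly opposing normals that fit within the jaw opening. The sketch below illustrates that generic antipodal test under invented names and tolerances; it is not the paper's specific contact-area feature.

```python
import math

def antipodal_candidates(points, normals, max_width, angle_tol_deg=15.0):
    """Return index pairs (i, j) that a parallel-jaw gripper could pinch:
    opposing unit surface normals (within tolerance) and jaw opening <= max_width."""
    cos_tol = math.cos(math.radians(180.0 - angle_tol_deg))
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dot = normals[i][0] * normals[j][0] + normals[i][1] * normals[j][1]
            # Nearly opposite normals give a dot product near -1.
            if dot <= cos_tol and math.dist(points[i], points[j]) <= max_width:
                pairs.append((i, j))
    return pairs

# Two opposite faces of a small box seen by the range sensor.
pts = [(0.0, 0.0), (0.05, 0.0), (0.0, 0.03), (0.05, 0.03)]
nrm = [(-1.0, 0.0), (1.0, 0.0), (-1.0, 0.0), (1.0, 0.0)]
print(antipodal_candidates(pts, nrm, max_width=0.06))
# [(0, 1), (0, 3), (1, 2), (2, 3)]
```

After a failed lifting trial, the robot would simply move on to the next pair in this candidate list.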

3.
The ability to grasp unknown objects remains an unsolved problem in the robotics community. One of the challenges is to choose an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. In this paper, we introduce an algorithm based on the assumption that similarly shaped objects can be grasped in a similar way. It synthesizes good grasp poses for unknown objects by finding the best-matching object shape templates associated with previously demonstrated grasps. The grasp selection algorithm improves over time by using information from previous grasp attempts to adapt the ranking of the templates to new situations. We tested our approach on two different platforms, the Willow Garage PR2 and the Barrett WAM robot, which have very different hand kinematics. Furthermore, we compared our algorithm with other grasp planners and demonstrated its superior performance. The results presented in this paper show that the algorithm is able to find good grasp configurations for a large set of unknown objects from a relatively small set of demonstrations, and that it does improve its performance over time.
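Adapting a template ranking from grasp-attempt feedback can be done in many ways; one minimal sketch is to rank templates by their Laplace-smoothed empirical success rate, so that new outcomes shift the ordering over time. The names, data, and smoothing choice here are illustrative assumptions, not the paper's method.

```python
def rerank(templates, outcomes):
    """Rank shape templates by Laplace-smoothed grasp success rate, so the
    ranking adapts as new attempt outcomes (1 = success, 0 = failure) arrive."""
    def score(name):
        tried = outcomes.get(name, [])
        return (sum(tried) + 1) / (len(tried) + 2)  # smoothing avoids 0/0 for untried templates
    return sorted(templates, key=score, reverse=True)

templates = ["mug_side", "box_top", "ball_pinch"]
outcomes = {"mug_side": [1, 1, 0], "box_top": [0, 0], "ball_pinch": [1]}
print(rerank(templates, outcomes))
# ['ball_pinch', 'mug_side', 'box_top']
```

Each new trial appends a 0 or 1 to the relevant outcome list, which is all it takes for the ranking to track experience.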

4.
Neuro-psychological findings have shown that human perception of objects is based on part decomposition. Most objects are made of multiple parts, which are likely to be the entities actually involved in grasp affordances. Therefore, automatic object recognition and robot grasping should take advantage of 3D shape segmentation. This paper presents an approach to planning robot grasps across similar objects by part correspondence. The novelty of the method lies in the topological decomposition of objects, which enables high-level semantic grasp planning. In particular, given a 3D model of an object, the representation is initially segmented by computing its Reeb graph. Then, automatic object recognition and part annotation are performed by applying a shape retrieval algorithm. After the recognition phase, queries are accepted for planning grasps on individual parts of the object. Finally, a robot grasp planner is invoked to find stable grasps on the selected part of the object. Grasps are evaluated according to a widely used quality measure. Experiments performed in a simulated environment on a reasonably large dataset show the potential of topological segmentation to highlight candidate parts suitable for grasping.

5.
6.
For unknown objects in arbitrary poses in unstructured environments, this paper proposes a point-cloud-feature-based method for detecting six-degree-of-freedom robot grasp poses, addressing the difficulty of obtaining target grasp poses directly from point clouds. First, grasp candidates are generated from the basic geometric information of the point cloud and refined using methods such as force balance. Then, the candidates are evaluated with ConvPoint, a convolutional neural network that operates directly on point clouds, and the highest-scoring grasp is executed; both grasp-pose sampling and the evaluation network take the raw point cloud as input. Finally, the method is tested in simulation and in real grasping experiments. The results show a grasp success rate of 88.33% on common objects, and the method extends effectively to grasping unknown objects of other shapes.

7.
Humans have an incredible capacity to manipulate objects using dexterous hands. A large number of studies indicate that robot learning by demonstration is a promising strategy for improving robotic manipulation and grasping performance. On this subject we can ask: how does a robot learn to grasp? This work presents a method, based on neural network retraining, that allows a robot to learn new grasps. With this approach we aim to enable a robot to learn new grasps through a supervisor. The proposed method can be applied in both 2D and 3D cases. Extensive object databases were generated to evaluate the method's performance in both cases: a total of 8100 abstract shapes for the 2D case and 11700 abstract shapes for the 3D case. Simulation results with a computational supervisor show that a robotic system can learn new grasps and improve its performance through the proposed HRH (Hopfield-RBF-Hopfield) grasp learning approach.

8.
This paper presents a simple grasp planning method for a multi-fingered hand. Its purpose is to compute a context-independent and dense set, or list, of grasps, instead of just a small set of grasps regarded as optimal with respect to a given criterion. By context-independent, we mean that only the robot hand and the object to grasp are considered; the environment and the position of the robot base with respect to the object are considered at a later stage. Such a dense set can be computed offline and then used to let the robot quickly choose a grasp adapted to a specific situation. This can be useful for manipulation planning of pick-and-place tasks. Another application is human-robot interaction, when the human and robot have to hand objects over to each other. If the human and robot have to work together with a predefined set of objects, grasp lists can be employed to allow fast interaction. The proposed method uses a dense sampling of the possible hand approaches based on a simple but efficient shape feature. As this leads to many finger inverse kinematics tests, hierarchical data structures are employed to reduce computation times. The data structures allow fast determination of the points where the fingers can make contact with the object surface. The grasps are ranked according to a grasp quality criterion, so that the robot parses the list from best to worst quality until it finds a grasp that is valid for the particular situation.
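The final step, parsing a precomputed grasp list from best to worst quality until one is valid for the current scene, can be sketched in a few lines. The data and validity predicate here are invented placeholders; in practice validity would involve collision and reachability checks.

```python
def first_valid_grasp(grasps, qualities, is_valid):
    """Parse a precomputed, context-independent grasp list from best to worst
    quality and return the first grasp valid in the current situation, or None."""
    ranked = sorted(zip(grasps, qualities), key=lambda gq: gq[1], reverse=True)
    for grasp, _quality in ranked:
        if is_valid(grasp):
            return grasp
    return None

grasps = ["top", "side", "handle"]
qualities = [0.4, 0.9, 0.7]
# Suppose the highest-quality side approach is blocked by an obstacle in this scene.
blocked = {"side"}
print(first_valid_grasp(grasps, qualities, lambda g: g not in blocked))  # handle
```

Because the expensive ranking is done offline, the online step reduces to this cheap linear scan.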

9.
Grasp synthesis for unknown objects is a challenging problem, as the algorithms are expected to cope with missing object shape information. This missing information is a function of the vision sensor's viewpoint. The majority of grasp synthesis algorithms in the literature synthesize a grasp using one single image of the target object and making assumptions about the missing shape information. By contrast, this paper proposes using the robot's depth sensor actively: we propose an active vision methodology that optimizes the viewpoint of the sensor to increase the quality of the synthesized grasp over time. In this way, we aim to relax the assumptions about the sensor's viewpoint and boost the success rates of grasp synthesis algorithms. A reinforcement learning technique is employed to obtain a viewpoint optimization policy, and a training process and an automated training data generation procedure are presented. The methodology is applied to a simple force-moment balance-based grasp synthesis algorithm, and a thousand simulations with five objects are conducted with random initial poses for which the grasp synthesis algorithm was not able to obtain a good grasp from the initial viewpoint. In 94% of these cases, the policy managed to find a successful grasp.
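A force-moment balance criterion of the kind mentioned above can be illustrated, in its simplest planar form, by the gravity-induced moment about the grasp axis: the further the object's center of mass sits from the line between the two contacts, the larger the torque the gripper must resist. This toy formulation and its numbers are illustrative assumptions, not the paper's algorithm.

```python
def grasp_moment(p1, p2, com, mass, g=9.81):
    """Magnitude (N*m) of the gravity-induced moment about the midpoint of a
    planar antipodal grasp, with gravity along -y; lower is better balanced."""
    mx = (p1[0] + p2[0]) / 2.0
    # The moment arm is the horizontal offset of the center of mass
    # from the grasp midpoint.
    return abs(com[0] - mx) * mass * g

# Grasping a 0.5 kg bar: contacts centered on the COM vs. offset toward one end.
print(grasp_moment((0.0, 0.1), (0.0, -0.1), com=(0.0, 0.0), mass=0.5))      # 0.0
print(grasp_moment((0.3, 0.1), (0.3, -0.1), com=(0.0, 0.0), mass=0.5) > 0)  # True
```

An active-vision policy would then pick viewpoints that reveal surface regions where this moment (and the shape uncertainty around the contacts) is small.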

10.
Data-driven grasp synthesis using shape matching and task-based pruning
Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.
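The core idea of matching feature collections by "similar relative placements and surface normals" can be sketched with a pairwise descriptor: the distance between two surface points together with the angle between their normals, compared within tolerances. This is a minimal 2D illustration under invented names and tolerances, not the paper's full matching algorithm.

```python
import math

def pair_descriptor(p, q, n_p, n_q):
    """Descriptor of a point pair: inter-point distance and angle between normals."""
    d = math.dist(p, q)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n_p, n_q))))
    return (d, math.acos(dot))

def match_score(desc_a, desc_b, d_tol=0.01, a_tol=0.2):
    """1 if two pair descriptors agree within distance/angle tolerances, else 0."""
    return int(abs(desc_a[0] - desc_b[0]) <= d_tol and
               abs(desc_a[1] - desc_b[1]) <= a_tol)

# A thumb/finger contact pair from a captured grasp vs. a candidate pair on a new object.
hand = pair_descriptor((0.0, 0.0), (0.08, 0.0), (0.0, 1.0), (0.0, -1.0))
obj = pair_descriptor((0.5, 0.2), (0.585, 0.2), (0.0, 1.0), (0.0, -1.0))
print(match_score(hand, obj))  # 1: similar relative placement and normals
```

Aggregating such pairwise agreements over larger feature collections yields the many grasp candidates that the paper then clusters and prunes by task.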

11.
In this paper, we present a strategy for fast grasping of unknown objects by mobile robots through automatic determination of the number of robots. An object handling system consisting of a Gripper robot and a Lifter robot is designed. The Gripper robot moves around an unknown object to acquire partial shape information for determination of grasping points. The object is transported if it can be lifted by the Gripper robot. Otherwise, if all grasping trials fail, a Lifter robot is used. In order to maximize use of the Gripper robot’s payload, the detected grasping points that apply the largest force to the gripper are selected for the Gripper robot when the object is grasped by two mobile robots. The object is measured using odometry and scanned data acquired while the Gripper robot moves around the object. Then, the contact point for calculating the insert position for the Lifter robot can be acquired quickly. Finally, a strategy for fast grasping of known objects by considering the transition between stable states is used to realize grasping of unknown objects. The proposed approach is tested in experiments, which find that a wide variety of objects can be grasped quickly with one or two mobile robots.

12.
In this paper, we present visibility-based spatial reasoning techniques for real-time object manipulation in cluttered environments. When a robot is requested to manipulate an object, a collision-free path should be determined to access, grasp, and move the target object. This often requires processing of time-consuming motion planning routines, making real-time object manipulation difficult or infeasible, especially for a robot with high DOF and/or in a highly cluttered environment. This paper places special emphasis on developing real-time motion planning, in particular for accessing and removing an object in a cluttered workspace, as a local planner that can be integrated with a general motion planner for improved overall efficiency. In the proposed approach, the access direction for grasping the object is determined through a visibility query, and the removal direction for retrieving the object grasped by the gripper is computed using an environment map. The experimental results demonstrate that the proposed approach, when implemented on graphics hardware, is fast and robust enough to manipulate 3D objects in real-time applications.
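A visibility query for access directions can be caricatured in 2D: an approach direction is usable if the ray toward the target is not blocked by any obstacle before reaching it. The geometry below (point obstacles with a clearance radius) is a crude stand-in for the paper's hardware-accelerated visibility queries, with all names and numbers invented.

```python
def accessible_directions(target, obstacles, radius, directions):
    """Return approach directions (unit 2D vectors pointing toward the target)
    whose incoming ray is not blocked by any obstacle disk of given radius."""
    free = []
    for dx, dy in directions:
        blocked = False
        for ox, oy in obstacles:
            rel = (ox - target[0], oy - target[1])
            # Distance of the obstacle "in front of" the target on the approach side,
            # and its perpendicular distance from the approach ray.
            along = -(rel[0] * dx + rel[1] * dy)
            perp = abs(rel[0] * dy - rel[1] * dx)
            if along > 0 and perp <= radius:
                blocked = True
                break
        if not blocked:
            free.append((dx, dy))
    return free

dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(accessible_directions((0, 0), obstacles=[(1, 0)], radius=0.3, directions=dirs))
# [(1, 0), (0, 1), (0, -1)]: the (-1, 0) approach is blocked by the obstacle at (1, 0)
```

The removal direction would be chosen analogously, but against an environment map built while grasping.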

13.
This paper discusses scale-dependent grasping. Suppose that a human approaches an object initially placed on a table and finally achieves an enveloping grasp. Under such initial and final conditions, he or she unconsciously changes grasp strategy according to the size of the object, even for objects with similar geometry. We call this kind of grasp planning scale-dependent grasping. We find that grasp patterns also change according to surface friction and the geometry of the cross section, in addition to the scale of the object. Focusing on columnar objects, we first classify the grasp patterns and extract the essential motions so that we can construct grasp strategies applicable to multifingered robot hands. The grasp strategies constructed for robot hands are verified by experiments. We also consider how a robot hand can recognize a failure mode and how it can switch from one strategy to another.
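The strategy switching by object scale can be caricatured as a simple threshold rule on the object diameter relative to finger length. The categories loosely echo the idea of scale-dependent patterns, but the thresholds and names are illustrative assumptions, not taken from the paper.

```python
def grasp_pattern(diameter, finger_length=0.1):
    """Choose a grasp pattern for a columnar object from its scale alone,
    mirroring scale-dependent strategy switching (thresholds are illustrative)."""
    if diameter < 0.2 * finger_length:
        return "fingertip pinch"       # tiny objects: precision grasp
    if diameter < 1.5 * finger_length:
        return "enveloping grasp"      # hand-sized objects: wrap the fingers around
    return "two-handed / regrasp"      # oversized objects: a single hand cannot envelop

print(grasp_pattern(0.01))  # fingertip pinch
print(grasp_pattern(0.08))  # enveloping grasp
print(grasp_pattern(0.30))  # two-handed / regrasp
```

A real implementation would also condition on surface friction and cross-section geometry, as the abstract notes.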

14.
Grasping is a fundamental skill for robots performing manipulation tasks. Grasping of unknown objects remains a major challenge, and precision grasping of unknown objects is even harder. Due to imperfect sensor measurements and the lack of prior knowledge of objects, robots have to handle the uncertainty effectively. In previous work (Chen and Wichert 2015), we used a probabilistic framework to tackle precision grasping of model-based objects. In this paper, we extend the probabilistic framework to tackle the problem of precision grasping of unknown objects. We first propose an object model called the probabilistic signed distance function (p-SDF) to represent unknown object surfaces. p-SDF models measurement uncertainty explicitly and allows measurements from multiple sensors to be fused in real time. Based on this surface representation, we propose a model that evaluates the likelihood of grasp success for antipodal grasps, using four heuristics to model the conditions of force closure and perceptual uncertainty. A two-step simulated annealing approach is further proposed to search for and optimize a precision grasp. We use the object representation as a bridge to unify grasp synthesis and grasp execution. Grasp execution is performed in closed loop, so that the robot can actively reduce uncertainty and react to external perturbations during the grasping process. We perform extensive grasping experiments using challenging real-world objects and demonstrate that our method achieves high robustness and accuracy in grasping unknown objects.
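Fusing signed-distance measurements from multiple sensors in real time is the key mechanic behind representations like the p-SDF. The sketch below shows the simplest version of that idea, a per-cell weighted running average in the spirit of a TSDF; the paper's p-SDF is a richer probabilistic model, so treat this as an assumption-laden analogue with invented names.

```python
class FusedSDF:
    """Minimal 1D grid SDF that fuses noisy signed-distance measurements
    per cell with a weighted running average; confidence grows with weight."""

    def __init__(self, n):
        self.value = [0.0] * n   # fused signed distance per cell
        self.weight = [0.0] * n  # accumulated measurement weight per cell

    def integrate(self, cell, measurement, w=1.0):
        """Fold one measurement into a cell; higher w means a more trusted sensor."""
        total = self.weight[cell] + w
        self.value[cell] = (self.value[cell] * self.weight[cell]
                            + measurement * w) / total
        self.weight[cell] = total

sdf = FusedSDF(4)
# Two noisy readings of the same surface cell average toward the true distance.
sdf.integrate(1, 0.02)
sdf.integrate(1, 0.00)
print(sdf.value[1])   # 0.01
print(sdf.weight[1])  # 2.0
```

The accumulated weight doubles as a crude uncertainty proxy: cells seen often are trusted more during grasp evaluation.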

15.
The Programming by Demonstration (PbD) technique aims at teaching a robot to accomplish a task by learning from a human demonstration. In a manipulation context, recognizing the demonstrator's hand gestures, specifically when and how objects are grasped, plays a significant role. Here, a system is presented that uses both hand shape and contact-point information obtained from a data glove and tactile sensors to recognize continuous human-grasp sequences. The sensor fusion, grasp classification, and task segmentation are made by a hidden Markov model recognizer. Twelve different grasp types from a general, task-independent taxonomy are recognized. An accuracy of up to 95% could be achieved for a multiple-user system.
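HMM-based grasp classification typically scores an observation sequence under one trained model per grasp type and picks the most likely. The forward-algorithm sketch below illustrates that on two toy 2-state models over binarized sensor readings; the models, states, and numbers are invented for illustration and are far smaller than the paper's twelve-class recognizer.

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (unscaled; fine for short sequences)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
                 for s in range(n)]
    p = sum(alpha)
    return math.log(p) if p > 0 else float("-inf")

# Toy 2-state HMMs for two grasp types over binarized glove/tactile readings {0, 1}.
power = dict(start=[0.9, 0.1], trans=[[0.8, 0.2], [0.2, 0.8]],
             emit=[[0.9, 0.1], [0.5, 0.5]])
pinch = dict(start=[0.1, 0.9], trans=[[0.8, 0.2], [0.2, 0.8]],
             emit=[[0.5, 0.5], [0.1, 0.9]])
seq = [0, 0, 0, 1, 0]
scores = {name: forward_loglik(seq, **m)
          for name, m in [("power", power), ("pinch", pinch)]}
print(max(scores, key=scores.get))  # power
```

Task segmentation then falls out of the same machinery by tracking where the best-scoring model changes along the sequence.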

16.
刘亚欣, 王斯瑶, 姚玉峰, 杨熹, 钟鸣. Control and Decision (《控制与决策》), 2020, 35(12): 2817-2828
As one of the most common basic actions performed by robots in factories, homes, and other environments, autonomous robotic grasping has broad application prospects and has received considerable attention from researchers over the past decade. However, accurately grasping arbitrary objects in arbitrary poses in unstructured environments remains a challenging and complex problem. Robotic grasping involves three main aspects: detection, planning, and control. As the first step, detecting the object and generating a grasp pose is the prerequisite for a successful grasp and supports the subsequent planning of the grasp path and execution of the complete grasping action. This survey therefore focuses on detection, reviewing grasp-detection techniques from the two perspectives of analytical and empirical methods. Based on whether prior knowledge of the grasped object is available, empirical methods are divided into grasping of known objects and grasping of unknown objects, and the typical grasp-detection methods and characteristics of each category of unknown-object grasping are described in detail. Finally, future directions for robotic grasp-detection technology are discussed as a reference for related research.

17.
In this paper, we develop a soft climbing robot made of silicone. Octopus-like behaviour is realized by a simple mechanism utilizing the dynamics of the soft body, and the robot can grasp various objects of unknown shape. In addition, by inching its trunk, it can climb various columnar objects. Experiments using pipes, long balloons, and natural trees are conducted to evaluate the effectiveness of the proposed robot.

18.
Recently, robots have been introduced into warehouses and factories for automation, and are expected to execute dual-arm manipulation as humans do and to manipulate large, heavy, and unbalanced objects. We focus on the target picking task in cluttered environments and aim to realize a robotic picking system in which the robot selects and executes the proper grasping motion from single-arm and dual-arm options. In this paper, we propose a few-experience, learning-based target picking system with selective dual-arm grasping. In our system, a robot first learns grasping points and object semantic and instance labels from an automatically synthesized dataset. The robot then executes and collects grasp trials in the real world and retrains the grasping point prediction model with the collected trial experiences. Finally, the robot evaluates candidate combinations of grasped object instance, strategy, and grasping points, and selects and executes the optimal grasping motion. In the experiments, we evaluated our system by conducting target picking experiments with the dual-arm humanoid robot Baxter in a cluttered, warehouse-like environment.

19.
A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users with the ability to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, where an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and the object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, since in the PbD approach robot-environment interactions are no longer explicitly programmed, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the poses of objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and the computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.

20.
Low-level cues in an image not only make it possible to infer higher-level information, such as the presence of an object; the inverse is also true. Category-level object recognition has now reached a level of maturity and accuracy that allows its output to be successfully fed back to other processes. This is what we refer to as cognitive feedback. In this paper, we study one particular form of cognitive feedback, where the ability to recognize objects of a given category is exploited to infer different kinds of meta-data annotations for images of previously unseen object instances, in particular information on 3D shape. Meta-data can be discrete, real-, or vector-valued. Our approach builds on the Implicit Shape Model of Leibe and Schiele [B. Leibe, A. Leonardis, B. Schiele, Robust object detection with interleaved categorization and segmentation, International Journal of Computer Vision 77 (1-3) (2008) 259-289], and extends it to transfer annotations from training images to test images. We focus on the inference of approximate 3D shape information about objects in a single 2D image. In experiments, we illustrate how our method can infer depth maps, surface normals, and part labels for previously unseen object instances.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号