Similar Documents
1.
Recently, robots have been introduced to warehouses and factories for automation and are expected to execute dual-arm manipulation as humans do, handling large, heavy, and unbalanced objects. We focus on the target-picking task in cluttered environments and aim to realize a robotic picking system in which the robot selects and executes the proper grasping motion from single-arm and dual-arm candidates. In this paper, we propose a target-picking system with selective dual-arm grasping that learns from few experiences. In our system, a robot first learns grasping points and object semantic and instance labels from an automatically synthesized dataset. The robot then executes and collects grasp-trial experiences in the real world and retrains the grasping-point prediction model with the collected experiences. Finally, the robot evaluates candidate pairs of grasping object instance, strategy, and points, and selects and executes the optimal grasping motion. In the experiments, we evaluated our system by conducting target-picking experiments with the dual-arm humanoid robot Baxter in a cluttered, warehouse-like environment.
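The final selection step this abstract describes, evaluating candidate tuples of object instance, grasp strategy, and grasping points and executing the best one, can be sketched as a scored argmax. The data class, scores, and feasibility flag below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of grasp selection: score each candidate
# (object instance, grasp strategy, grasp point) tuple and execute
# the highest-scoring feasible one. Values are illustrative only.
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    instance: str          # object instance label
    strategy: str          # "single_arm" or "dual_arm"
    point_score: float     # grasp-point prediction confidence
    reachable: bool        # kinematic feasibility check

def select_grasp(candidates):
    """Return the feasible candidate with the highest confidence, or None."""
    feasible = [c for c in candidates if c.reachable]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c.point_score)

candidates = [
    GraspCandidate("box_large", "dual_arm", 0.91, True),
    GraspCandidate("box_large", "single_arm", 0.55, True),
    GraspCandidate("bottle", "single_arm", 0.97, False),  # blocked
]
best = select_grasp(candidates)
print(best.strategy)  # dual_arm
```

Retraining the grasp-point predictor from trial outcomes would then shift `point_score` over time, which is how the system's choices improve with experience.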

2.
A rotorcraft aerial manipulator is a new type of robot, combining a rotorcraft with a multi-degree-of-freedom manipulator, proposed to meet the demand for autonomous aerial operation. During operation, the dynamic relative motion among the rotorcraft, the manipulator, and the task target, together with unmodeled force and torque disturbances arising from contact with the target, pose great challenges to autonomous control. This paper surveys the structural evolution and key technologies of rotorcraft aerial manipulators as well as the integration of their operating mechanisms. A preliminary theoretical framework for autonomous aerial operation is constructed from five aspects: dynamic modeling and analysis of dynamic characteristics; coordinated planning under dynamic motion and force constraints; motion and operation control in unstructured environments; environment perception for task-oriented dynamic manipulation; and task-oriented construction and experimental validation of experimental systems.

3.
Humans excel in manipulation tasks, a basic skill for our survival and a key feature in our man-made world of artefacts and devices. In this work, we study how humans manipulate simple daily objects, and construct a probabilistic representation model of the tasks and objects that is useful for autonomous grasping and manipulation by robotic hands. Human demonstrations of predefined object manipulation tasks are recorded from both the human-hand and object points of view. The multimodal data acquisition system records human gaze, hand and finger 6D pose, finger flexure, tactile forces distributed on the inside of the hand, colour images and stereo depth maps, as well as object 6D pose and object tactile forces using instrumented objects. From the acquired data, relevant features are detected concerning motion patterns, tactile forces, and hand-object states. This enables modelling a class of tasks from sets of repeated demonstrations of the same task, so that a generalised probabilistic representation is derived for task planning in artificial systems. An object-centred probabilistic volumetric model is proposed to fuse the multimodal data and map contact regions, gaze, and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadrics and overlaying the observed contact points while taking the task context into account. Results show that the extracted features are sufficient to distinguish key patterns that characterise each stage of the manipulation tasks, ranging from simple object displacement, where the same grasp is employed throughout (homogeneous manipulation), to more complex interactions such as object reorientation, fine positioning, and sequential in-hand rotation (dexterous manipulation).
The framework presented retains the relevant data from human demonstrations, concerning both the manipulation and object characteristics, to be used by future grasp planning in artificial systems performing autonomous grasping.

4.
Underwater intervention is an attractive yet difficult task for AUVs. To realize underwater manipulation with the small spherical underwater robot SUR-II, our group proposed a father-son underwater intervention robotic system (FUIRS). The FUIRS employs a novel biomimetic microrobot, inspired by an octopus, to carry out underwater manipulation tasks. The son robot can perform basic underwater motions, i.e. grasping, object detection, and swimming. To increase the payload, a novel buoyancy-force adjustment method is proposed that provides 11.8 mN of additional buoyancy to overcome the weight of the object in water. Finally, three underwater manipulation experiments were carried out to verify the performance of the son robot: one combining swimming motion with buoyancy adjustment, and the other two using buoyancy adjustment alone. The experimental results show that the son robot can successfully manipulate objects of different shapes and sizes underwater, and that the swimming motion remarkably reduces the time cost of underwater manipulation.
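As a rough sanity check of the stated 11.8 mN buoyancy margin, the sketch below computes an object's net weight in water and tests whether the adjustment alone could lift it. The object masses and volumes are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope payload check for the buoyancy-adjustment method,
# assuming the stated 11.8 mN of extra buoyancy. An object can be lifted
# passively if its weight in water (dry weight minus displaced-water
# buoyancy) does not exceed that margin.
RHO_WATER = 1000.0   # density of fresh water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def weight_in_water(mass_kg, volume_m3):
    """Net downward force (N) on a fully submerged object."""
    return mass_kg * G - RHO_WATER * volume_m3 * G

def liftable(mass_kg, volume_m3, extra_buoyancy_n=11.8e-3):
    """True if the buoyancy margin alone can support the object."""
    return weight_in_water(mass_kg, volume_m3) <= extra_buoyancy_n

# A 2 g, 1.9 cm^3 plastic part weighs about 1 mN in water: liftable.
print(liftable(0.002, 1.9e-6))   # True
# A 10 g, 2 cm^3 metal part weighs about 78 mN in water: too heavy.
print(liftable(0.010, 2.0e-6))   # False
```

This also makes concrete why near-neutrally buoyant objects are the natural targets for such a small adjustment mechanism.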

5.
The emerging field of service robots demands new systems with increased flexibility. The flexibility of a robot system can be increased in many different ways. Mobile manipulation, the coordinated use of manipulation capabilities and mobility, is an approach to increasing robots' flexibility with regard to their motion capabilities. Most mobile manipulators currently under development use a single arm on a mobile platform. A two-arm manipulator system allows increased manipulation capabilities, especially when large, heavy, or non-rigid objects must be manipulated. This article is concerned with motion control for mobile two-arm systems, which require new schemes for motion coordination and control. A coordination scheme called transparent coordination is presented that allows for an arbitrary number of manipulators on a mobile platform. Furthermore, a reactive control scheme is proposed to enable the platform to support sensor-guided manipulator motion. Finally, this article introduces a collision avoidance scheme for mobile two-arm robots that monitors the vehicle motion to avoid platform collisions and arm collisions caused by self-motion of the robot. © 1996 John Wiley & Sons, Inc.

6.
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.
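A minimal (1+1)-style evolution strategy of the kind this abstract describes can be sketched as below: elitist selection over Gaussian mutations, which tolerates moderate evaluation noise. The noisy quadratic stands in for a real grasp-trial cost and is purely illustrative.

```python
# Minimal (1+1) evolution strategy: mutate the best-so-far parameter
# vector and keep the child only if it evaluates better. The noisy
# fitness mimics the stochasticity of physical robot trials.
import random

def evolution_strategy(fitness, x0, sigma=0.5, iters=200, seed=0):
    """Minimize `fitness` over a real-valued parameter vector."""
    rng = random.Random(seed)
    best, best_f = list(x0), fitness(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in best]
        f = fitness(cand)
        if f < best_f:               # elitist: keep only improvements
            best, best_f = cand, f
    return best

# Noisy quadratic as a placeholder for a trial-and-error manipulation cost.
noise_rng = random.Random(1)
def noisy_cost(x):
    return sum(xi * xi for xi in x) + noise_rng.gauss(0.0, 0.01)

result = evolution_strategy(noisy_cost, [2.0, -3.0])
print(result)
```

With elitist acceptance, occasional noisy underestimates of the best cost can stall progress near the optimum, which is why practical variants re-evaluate the incumbent or adapt sigma.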

7.
Fuentes  Olac  Nelson  Randal C. 《Machine Learning》1998,31(1-3):223-237
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.

8.
We present an approach for controlling robotic interactions with objects using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.

9.
The ability to grasp unknown objects still remains an unsolved problem in the robotics community. One of the challenges is to choose an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. In this paper, we introduce an algorithm that is based on the assumption that similarly shaped objects can be grasped in a similar way. It is able to synthesize good grasp poses for unknown objects by finding the best matching object shape templates associated with previously demonstrated grasps. The grasp selection algorithm is able to improve over time by using the information of previous grasp attempts to adapt the ranking of the templates to new situations. We tested our approach on two different platforms, the Willow Garage PR2 and the Barrett WAM robot, which have very different hand kinematics. Furthermore, we compared our algorithm with other grasp planners and demonstrated its superior performance. The results presented in this paper show that the algorithm is able to find good grasp configurations for a large set of unknown objects from a relatively small set of demonstrations, and does improve its performance over time.
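The core idea of this abstract, matching an unknown object's shape to stored templates with demonstrated grasps and re-ranking templates by past outcomes, can be sketched as follows. The toy descriptor vectors and the distance-over-success-rate score are illustrative assumptions, not the paper's actual template matching.

```python
# Illustrative template library: nearest shape template wins, but the
# ranking is penalized by that template's empirical failure rate so the
# library adapts as grasp attempts accumulate.
import math

class GraspTemplateLibrary:
    def __init__(self):
        # each entry: [descriptor, grasp_pose, successes, trials]
        self.templates = []

    def add(self, descriptor, grasp_pose):
        self.templates.append([descriptor, grasp_pose, 1, 1])

    def best_match(self, descriptor):
        """Rank by shape distance divided by empirical success rate."""
        def score(t):
            dist = math.dist(descriptor, t[0])
            success_rate = t[2] / t[3]
            return dist / success_rate
        return min(self.templates, key=score)

    def record_outcome(self, template, success):
        """Update counts after a grasp attempt with this template."""
        template[3] += 1
        if success:
            template[2] += 1

lib = GraspTemplateLibrary()
lib.add([0.9, 0.1], "top_grasp")
lib.add([0.1, 0.8], "side_grasp")
t = lib.best_match([0.85, 0.15])
print(t[1])  # top_grasp
```

Repeated failures recorded against a template push its score up, so a slightly worse-matching but more reliable template can eventually be preferred, which is the "improves over time" behavior the abstract claims.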

10.
In this paper, we present visibility-based spatial reasoning techniques for real-time object manipulation in cluttered environments. When a robot is requested to manipulate an object, a collision-free path should be determined to access, grasp, and move the target object. This often requires processing of time-consuming motion planning routines, making real-time object manipulation difficult or infeasible, especially in a robot with a high DOF and/or in a highly cluttered environment. This paper places special emphasis on developing real-time motion planning, in particular, for accessing and removing an object in a cluttered workspace, as a local planner that can be integrated with a general motion planner for improved overall efficiency. In the proposed approach, the access direction of the object to grasp is determined through visibility query, and the removal direction to retrieve the object grasped by the gripper is computed using an environment map. The experimental results demonstrate that the proposed approach, when implemented by graphics hardware, is fast and robust enough to manipulate 3D objects in real-time applications.

11.
This study deals with motion optimization of robot arms that must transfer mobile objects grasped while in motion. The approach aims at performing repetitive transfer tasks at a rapid rate without interrupting the dynamics of either the manipulator or the moving object. The junction location of the robot gripper with the object, together with the grasp conditions, is partly defined by a set of local constraints. Thus, optimizing the robot motion in the approach phase of the transfer task leads to the statement of an optimal junction problem between the robot and the moving object. This optimal control problem is characterized by a constrained final state and an unknown traveling time. In such a case, Pontryagin's maximum principle is a powerful mathematical tool for solving the optimization problem. Three simulated results of removing a mobile object from a conveyor belt are presented; the object is grasped in motion by a planar three-link manipulator.
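In generic notation (not the paper's own symbols), the free-final-time optimal junction problem described above can be sketched as:

```latex
\min_{u(\cdot),\,t_f}\; J=\int_{0}^{t_f} L\bigl(x(t),u(t)\bigr)\,dt
\quad\text{s.t.}\quad \dot{x}=f(x,u),\qquad x(0)=x_0,\qquad
\psi\bigl(x(t_f),t_f\bigr)=0,
```

where the final-state constraint $\psi$ encodes the rendezvous of the gripper with the moving object. Pontryagin's maximum principle then supplies the Hamiltonian $H = L + \lambda^{\top} f$, the adjoint dynamics $\dot{\lambda} = -\partial H/\partial x$, a control $u$ minimizing $H$ pointwise, and, because $t_f$ is free, a transversality condition fixing the value of $H$ at the junction time.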

12.
We investigate the problem of a robot searching for an object. This requires reasoning about both perception and manipulation: some objects are moved because the target may be hidden behind them, while others are moved because they block the manipulator’s access to other objects. We contribute a formulation of the object search by manipulation problem using visibility and accessibility relations between objects. We also propose a greedy algorithm and show that it is optimal under certain conditions. We propose a second algorithm which takes advantage of the structure of the visibility and accessibility relations between objects to quickly generate plans. Our empirical evaluation strongly suggests that our algorithm is optimal under all conditions. We support this claim with a partial proof. Finally, we demonstrate an implementation of both algorithms on a real robot using a real object detection system.
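A toy version of the greedy strategy over visibility and accessibility relations can be sketched as below: repeatedly remove the accessible visible object that occludes the most, until the target becomes visible. The relations in the example scene are hand-made assumptions, not data from the paper.

```python
# Greedy object search by manipulation over two relations:
#   occludes[o] = set of objects hidden behind o (visibility relation)
#   blocks[o]   = objects that must be removed before o is graspable
#                 (accessibility relation)
def greedy_search(occludes, blocks, target):
    """Return a removal plan that makes `target` visible, or None."""
    # initially visible objects are those occluded by nothing
    visible = {o for o in occludes
               if not any(o in occludes[p] for p in occludes)}
    removed, plan = set(), []
    while target not in visible:
        candidates = [o for o in visible - removed if blocks[o] <= removed]
        if not candidates:
            return None                       # no feasible plan
        best = max(candidates, key=lambda o: len(occludes[o]))
        plan.append(best)
        removed.add(best)
        visible |= occludes[best]             # reveal what was behind it
    return plan

# Tiny scene: the box hides a cup, the cup hides the target, and the cup
# cannot be reached until the box is out of the way.
occludes = {"box": {"cup"}, "book": set(), "cup": {"target"}, "target": set()}
blocks = {"box": set(), "book": set(), "cup": {"box"}, "target": set()}
print(greedy_search(occludes, blocks, "target"))  # ['box', 'cup']
```

Note the greedy choice ignores the book entirely, which is the intended behavior: objects that neither hide the target nor block access should never be touched.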

13.
This paper describes an object rearrangement system for an autonomous mobile robot. The objective of the robot is to autonomously explore and learn about an environment, to detect changes in the environment on a later visit after object disturbances and finally, to move objects back to their original positions. In the implementation, it is assumed that the robot does not have any prior knowledge of the environment and the positions of the objects. The system exploits Simultaneous Localisation and Mapping (SLAM) and autonomous exploration techniques to achieve the task. These techniques allow the robot to perform localisation and mapping which is required to perform the object rearrangement task autonomously. The system includes an arrangement change detector, object tracking and map update that work with a Polar Scan Match (PSM) Extended Kalman Filter (EKF) SLAM system. In addition, a path planning technique for dragging and pushing an object is also presented in this paper. Experimental results of the integrated approach are shown to demonstrate that the proposed approach provides real-time autonomous object rearrangements by a mobile robot in an initially unknown real environment. Experiments also show the limits of the system by investigating failure modes.

14.
This paper describes an intuitive approach to cognitive grasping by a robot. Cognitive grasping refers to the chain of processes that enables a robot to learn and execute grasping methods for unknown objects as a human does. In the learning step, the robot looks around a target object to estimate its 3D shape and learns the grasp type for the object through a human demonstration. In the execution step, the robot relates an unknown object to one of the known grasp types by comparing the shape similarity of the target object against previously learned models. For this cognitive grasp, we mainly deal with two functionalities: reconstructing an unknown 3D object and classifying the object by grasp type. In the experiment, we evaluate the performance of object classification according to grasp type for 20 objects via human demonstration.

15.
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian network, which can be used to improve the recognition reliability of both objects and human actions and to generate proper manipulation motions for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as control goals to drive a robot to perform manipulation tasks.

16.
In this work, we present WALK-MAN, a humanoid platform developed to operate in realistic unstructured environments and to demonstrate new skills including powerful manipulation, robust balanced locomotion, high-strength capabilities, and physical sturdiness. To enable these capabilities, WALK-MAN's design and actuation are based on the most recent advances in series elastic actuator drives, with unique performance features that differentiate the robot from previous state-of-the-art compliantly actuated robots. Physical interaction performance benefits from both active and passive adaptation, thanks to WALK-MAN's actuation, which combines customized high-performance modules having tuned torque/velocity curves with transmission elasticity for high-speed adaptation response and motion reactions to disturbances. The WALK-MAN design also includes innovative design-optimization features that consider the selection of the kinematic structure and the placement of the actuators within the body structure to maximize robot performance. Physical robustness is ensured by the integration of elastic transmission, proprioceptive sensing, and control. The WALK-MAN hardware was designed and built in 11 months, and the prototype was ready four months before the DARPA Robotics Challenge (DRC) Finals. The motion generation of WALK-MAN is based on a unified motion-generation framework of whole-body locomotion and manipulation (termed loco-manipulation). WALK-MAN is able to execute simple loco-manipulation behaviors synthesized by combining different primitives defining the behavior of the center of gravity; the motion of the hands, legs, and head; the body attitude and posture; and the constrained body parts such as joint limits and contacts. The motion-generation framework, including the specific motion modules and software architecture, is discussed in detail.
A rich perception system allows the robot to perceive and generate 3D representations of the environment as well as detect contacts and sense physical interaction force and moments. The operator station that pilots use to control the robot provides a rich pilot interface with different control modes and a number of teleoperated or semiautonomous command features. The capability of the robot and the performance of the individual motion control and perception modules were validated during the DRC in which the robot was able to demonstrate exceptional physical resilience and execute some of the tasks during the competition.

17.
A key challenge in autonomous mobile manipulation is the ability to determine, in real time, how to safely execute complex tasks in an unknown or changing world. Addressing this issue for Intervention Autonomous Underwater Vehicles (I-AUVs) operating in potentially unstructured environments is becoming essential. Our research focuses on using motion planning to increase the autonomy of I-AUVs, and on addressing three major challenges: (a) producing consistent deterministic trajectories, (b) handling the high dimensionality of the system and its impact on real-time response, and (c) coordinating the motion between the floating vehicle and the arm. The last challenge is of high importance for achieving the accuracy required for manipulation, especially considering the floating nature of the AUV and the control challenges that come with it. In this study, for the first time, we demonstrate experimental results performing manipulation in an unknown environment. The Multirepresentation, Multiheuristic A* (MR-MHA*) search-based planner, previously tested only in simulation and in an a priori known environment, is now extended to control the Girona500 I-AUV performing a valve-turning intervention in a water tank. To this aim, the AUV was upgraded with an in-house-developed laser scanner that gathers three-dimensional (3D) point clouds for building, in real time, an occupancy grid map (octomap) of the environment. The MR-MHA* motion planner uses this octomap to plan collision-free trajectories in real time. To achieve the accuracy required to complete the task, a vision-based navigation method was employed. In addition, to reinforce safety while accounting for localization uncertainty, a cost function was introduced to maintain minimum clearance during planning. Moreover, a visual-servoing method was implemented to complete the last step of the manipulation with the desired accuracy.
Lastly, we further analyzed the approach performance from both loose-coupling and clearance perspectives. Our results show the success and efficiency of the approach to meet the desired behavior, as well as the ability to adapt to unknown environments.

18.
A time-of-flight camera can help a service robot sense its 3D environment. In this paper, we introduce methods for sensor calibration and 3D data segmentation that allow a service robot to automatically plan grasps and manipulation actions. Impedance control is used intensively to compensate for modeling error and to apply the computed forces. The methods are demonstrated in three service-robot applications. Sensor-based motion planning allows the robot to move within a dynamic and cluttered environment without collision, and unknown objects can be detected and grasped. In the autonomous ice-cream-serving scenario, the robot captures the surface of the ice cream and plans a manipulation trajectory to scoop a portion of it.
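A minimal impedance-control law of the kind the abstract relies on for compensating modeling error can be sketched as a virtual spring-damper between desired and measured end-effector states. The gains and the one-dimensional setting are illustrative assumptions, not the paper's controller.

```python
# 1-D impedance law: commanded force is a virtual spring-damper acting
# on the error between desired and measured position and velocity.
#   F = k * (x_des - x) + d * (v_des - v)
def impedance_force(x_des, x, v_des, v, k=300.0, d=40.0):
    """Commanded force (N) for stiffness k (N/m) and damping d (N·s/m)."""
    return k * (x_des - x) + d * (v_des - v)

# Commanding the tool 5 mm past a surface at rest yields a gentle
# contact force of k * 0.005 = 1.5 N rather than a rigid position fight.
print(round(impedance_force(0.005, 0.0, 0.0, 0.0), 3))  # 1.5
```

This is why impedance control suits tasks like scooping ice cream: position error maps to a bounded contact force instead of a hard position constraint.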

19.
The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017 has defined ambitious new benchmarks to advance the state-of-the-art in autonomous operation of ground-based and flying robots. In this study, we describe our winning entry to MBZIRC Challenge 2: the mobile manipulation robot Mario. It is capable of autonomously solving a valve manipulation task using a wrench tool detected, grasped, and finally used to turn a valve stem. Mario’s omnidirectional base allows both fast locomotion and precise close approach to the manipulation panel. We describe an efficient detector for medium-sized objects in three-dimensional laser scans and apply it to detect the manipulation panel. An object detection architecture based on deep neural networks is used to find and select the correct tool from grayscale images. Parametrized motion primitives are adapted online to percepts of the tool and valve stem to turn the stem. We report in detail on our winning performance at the challenge and discuss lessons learned.

20.
《Advanced Robotics》2013,27(5):527-546
Prediction of dynamic features is an important task for determining manipulation strategies for an object. This paper presents a technique for predicting the dynamics of objects relative to the robot's motion from visual images. During the training phase, the authors use a recurrent neural network with parametric bias (RNNPB) to self-organize the dynamics of objects manipulated by the robot into the PB space. The acquired PB values, static images of objects, and robot motor values are input into a hierarchical neural network that links the images to dynamic features (PB values). The neural network extracts the prominent features that induce each object's dynamics. To predict the motion sequence of an unknown object, its static image and the robot motor value are input into the neural network to calculate the PB values. By feeding the PB values into the closed-loop RNNPB, the predicted movements of the object relative to the robot motion are calculated recursively. Experiments were conducted with the humanoid robot Robovie-IIs pushing objects at different heights. The results of the experiment predicting the dynamics of target objects show that the technique is effective for predicting object dynamics.
