Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Recently, robots have been introduced into warehouses and factories for automation, where they are expected to perform dual-arm manipulation as humans do and to handle large, heavy and unbalanced objects. We focus on the target-picking task in cluttered environments and aim to realize a robot picking system in which the robot selects and executes the proper grasping motion from single-arm and dual-arm candidates. In this paper, we propose a few-experiential learning-based target picking system with selective dual-arm grasping. In our system, the robot first learns grasping points and object semantic and instance labels from an automatically synthesized dataset. The robot then executes and collects grasp trials in the real world and retrains the grasping-point prediction model with the collected trial experiences. Finally, the robot evaluates candidate pairs of grasping object instance, strategy and points, and selects and executes the optimal grasping motion. In the experiments, we evaluated our system on target-picking tasks with a dual-arm humanoid robot, Baxter, in a cluttered warehouse-like environment.
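The final selection step the abstract describes, choosing among scored candidate (instance, strategy, points) pairs, can be sketched minimally; the candidate fields and scores below are illustrative assumptions, not the paper's actual representation.

```python
# Hypothetical sketch: pick the grasp candidate (object instance +
# single-arm/dual-arm strategy) with the highest predicted success score.
# All names and values are illustrative, not from the paper.

def select_grasp(candidates):
    """Return the candidate with the highest predicted success score."""
    return max(candidates, key=lambda c: c["score"])

candidates = [
    {"instance": "box_3", "strategy": "single_arm", "score": 0.62},
    {"instance": "box_3", "strategy": "dual_arm", "score": 0.87},
    {"instance": "bottle_1", "strategy": "single_arm", "score": 0.55},
]
best = select_grasp(candidates)   # the dual-arm grasp on box_3 wins
```

In the paper's system the scores would come from the retrained grasping-point prediction model rather than being fixed constants.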

2.
For the last decade, we have been developing a vision-based architecture for mobile robot navigation. Using our bio-inspired model of navigation, robots can perform sensory-motor tasks in real time in unknown indoor as well as outdoor environments. We address here the problem of autonomous incremental learning of a sensory-motor task demonstrated by an operator guiding a robot. The proposed system allows for semi-supervision of task learning and is able to adapt the environmental partitioning to the complexity of the desired behavior. A real dialogue based on actions emerges from the interactive teaching. The interaction leads the robot to autonomously build precise sensory-motor dynamics that approximate the behavior of the teacher. The usability of the system is highlighted by experiments on real robots, in both indoor and outdoor environments. Accuracy measures are also proposed to evaluate the learned behavior against the expected behavioral attractor. These measures, used first in a real experiment and then in a simulated one, demonstrate how a real interaction between the teacher and the robot influences the learning process.

3.
We have recently introduced a neural network mobile robot controller (NETMORC). This controller, based on previously developed neural network models of biological sensory-motor control, autonomously learns the forward and inverse odometry of a differential-drive robot through an unsupervised learning-by-doing cycle. After an initial learning phase, the controller can move the robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. In addition, the forward odometric map allows the robot to reach targets in the absence of sensory feedback. The controller is also able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we review the NETMORC architecture, describe its simplified algorithmic implementation, present new quantitative results on NETMORC's performance and adaptability under noise-free and noisy conditions, compare its performance on a trajectory-following task with that of an alternative controller, and describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER.
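The forward odometry that NETMORC learns has a standard closed-form counterpart for a differential-drive robot; the sketch below shows that textbook kinematic update, not the paper's neural implementation.

```python
import math

# Textbook forward odometry for a differential-drive robot: integrate the
# pose (x, y, theta) from the two wheel linear velocities. The wheel base
# and time step are illustrative.

def forward_odometry(x, y, theta, v_left, v_right, wheel_base, dt):
    v = (v_left + v_right) / 2.0             # forward velocity
    omega = (v_right - v_left) / wheel_base  # angular velocity
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds give straight-line motion along the current heading.
pose = forward_odometry(0.0, 0.0, 0.0, 1.0, 1.0, 0.5, 1.0)
```

NETMORC's contribution is learning this map (and its inverse) from experience rather than assuming the model above.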

4.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments by coupling vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms (Proc. of IEEE Int. Conf. on Robotics and Automation, pp 1794–1799, 2007) are used to plan a grasp on the object suited to the task to be performed. A new vision/force coupling approach (Int. Conf. on Advanced Robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling of these two complementary sensor modalities makes the robot robust against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is being executed and makes the approach easy to integrate into current humanoid robots without hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

5.
We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach exploits human sensorimotor learning ability: the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During tutoring, the robot simultaneously learns the tutor's action policy and over time gains full autonomy. We demonstrate the approach in an experiment in which we taught a robot to perform a wood-sawing task with a human partner using a two-person cross-cut saw. The challenge of this experiment is that it requires precise coordination of the robot's motion and compliance according to the partner's actions. To transfer the sawing skill from the tutor to the robot, we used Locally Weighted Regression for trajectory generalisation and adaptive oscillators for adapting the robot to the partner's motion.
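Locally Weighted Regression, the trajectory-generalisation tool named in the abstract, fits a local linear model around each query point using distance-based weights; the kernel width and demonstration data below are illustrative assumptions.

```python
import numpy as np

# Minimal Locally Weighted Regression (LWR) sketch: a Gaussian kernel
# weights the training points, and a weighted least-squares linear fit
# is evaluated at the query point. Bandwidth and data are illustrative.

def lwr_predict(x_query, X, Y, bandwidth=0.05):
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))  # kernel weights
    A = np.vstack([X, np.ones_like(X)]).T                     # design matrix [x, 1]
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ Y)          # weighted LSQ
    return beta[0] * x_query + beta[1]

X = np.linspace(0, 1, 50)
Y = np.sin(2 * np.pi * X)            # a demonstrated trajectory profile
y_hat = lwr_predict(0.25, X, Y)      # query near the peak, sin(pi/2) = 1
```

In the paper's setting the inputs would be trajectory phase and the outputs demonstrated motion variables rather than this synthetic sine.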

6.
In this paper, we present a novel data-driven design method for human-robot interaction (HRI) systems in which a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance-optimization design and a plant-oriented impedance-controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator's end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the need for end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in task space. Simulation and experiments on a robot manipulator verify the efficacy of the presented HRI design framework.

7.
Team-oriented plans have become a popular tool for operators controlling teams of autonomous robots pursuing complex objectives in complex environments. Such plans allow an operator to specify high-level directives and let the team autonomously determine how to implement them. However, operators will often want to interrupt the activities of individual team members to deal with particular situations, such as a danger to a robot that the robot team cannot perceive. Previously, after such interrupts, the operator would usually need to restart the team plan to ensure its success. In this paper, we present an approach to encoding how interrupts can be handled smoothly within a team plan. Building on a team-plan formalism that uses Colored Petri Nets, we describe a mechanism that allows a range of interrupts to be handled smoothly, allowing the team to continue efficiently with its task after the operator intervention. We validate the approach with an application to robotic watercraft and show improved overall efficiency. In particular, we consider a situation where several platforms should travel through a set of pre-specified locations, and we identify three specific cases that require the operator to interrupt plan execution: (i) a boat must be pulled out; (ii) all boats should stop the plan and move to a pre-specified assembly position; (iii) a set of boats must synchronize to traverse a dangerous area one after the other. Our experiments show that our interrupt mechanism decreases the time to complete the plan (up to a 48% reduction) and decreases the operator load (up to an 80% reduction in the number of user actions). Moreover, we performed experiments with real robotic platforms to validate the applicability of our mechanism in an actual deployment of robotic watercraft.

8.
Over the last few years, physical Human-Robot Interaction (pHRI) has become a particularly interesting topic for industrial tasks. An important issue is allowing people and robots to collaborate in a useful, simple and safe manner. In this work, we propose a new framework that allows a person to collaborate with a robot manipulator while the robot has its own predefined task. To allow the robot to switch smoothly between its own task and acting as a compliant collaborator for the person, a variable admittance control is developed. Furthermore, the task to be accomplished generally requires the robot to carry variable, unknown loads at the end-effector. To include this feature in our framework, a robust control is also included to preserve the performance of the robot despite uncertainties introduced by the unknown load. To validate our approach, experiments were carried out with a Kuka LBR iiwa 14 R820, first to validate both parts of the controller and finally to study a use-case scenario similar to an industrial production line. Results show that this approach efficiently allows the person to collaborate at any moment while the robot performs another task. This flexible framework for object co-manipulation also allows unknown loads of up to 2 kg to be handled without making the task more difficult for the person.
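The admittance-control idea behind the framework can be sketched with the standard virtual mass-damper model, where the measured external force is mapped to a reference velocity; a variable admittance would modulate the damping online. The parameter values below are illustrative, not the paper's.

```python
# Standard admittance model M*dv/dt + D*v = F, integrated with Euler steps:
# the person's push F produces a compliant reference velocity v. Lowering
# the damping D makes the robot more compliant. Values are illustrative.

def admittance_step(v, force, mass, damping, dt):
    """One Euler step of the virtual mass-damper admittance dynamics."""
    dv = (force - damping * v) / mass
    return v + dv * dt

# Under a constant 10 N push the velocity converges to F/D = 0.5 m/s.
v = 0.0
for _ in range(2000):
    v = admittance_step(v, force=10.0, mass=2.0, damping=20.0, dt=0.01)
```

A variable admittance controller, as in the paper, would adjust `damping` (and possibly `mass`) depending on whether the robot is executing its own task or yielding to the person.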

9.
Controlling someone's attention can be defined as shifting his or her attention from its current direction to another. To shift someone's attention, gaining attention and meeting gaze are the two most important prerequisites. If a robot wishes to communicate with a particular person, it should turn its gaze to him or her to make eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be effective in all situations, especially when the robot and the human are not facing each other or the human is intensely engaged in a task. The robot should therefore perform actions that attract the target person and prompt him or her to respond so that their gazes meet. In this paper, we present a robot that can attract a target person's attention by moving its head, make eye contact by showing gaze awareness through blinking its eyes, and direct his or her attention by repeatedly turning its eyes and head from the person to the target object. Experiments with 20 human participants confirm the effectiveness of these robot actions for controlling human attention.

10.
We investigate the application of simulation-based genetic programming to evolve controllers that perform high-level tasks on a service robot. As a case study, we synthesize a controller for a guide robot that manages the visitor traffic flow in an exhibition space so as to maximize the visitors' enjoyment. We used genetic programming in a low-fidelity simulation to evolve a controller for this task, which was then transferred to a service robot. An experimental evaluation of the evolved controller, both in simulation and on the actual service robot, shows that it performs well compared to hand-coded heuristics and comparably to a human operator.

11.
One of the basic skills for autonomous robot grasping is selecting an appropriate grasping point for an object. Several recent works have shown that it is possible to learn grasping points from different types of features extracted from a single image or from more complex 3D reconstructions. In the context of learning through experience this is very convenient, since it does not require a full reconstruction of the object and implicitly incorporates kinematic constraints such as the hand morphology. These learning strategies usually require a large set of labeled examples, which can be expensive to obtain. In this paper, we address the problem of actively learning good grasping points to reduce the number of examples the robot needs. The proposed algorithm computes the probability of successfully grasping an object at a given location, represented by a feature vector. By autonomously exploring different feature values on different objects, the system learns where to grasp each object. The algorithm combines beta-binomial distributions and a non-parametric kernel approach to provide the full distribution of the grasping probability. This information allows an active exploration that efficiently learns good grasping points even across different objects. We tested our algorithm on a real humanoid robot that acquired the examples by experimenting directly on the objects; it therefore deals better with complex (anthropomorphic) hand-object interactions whose results are difficult to model or predict. The results show smooth generalization even with very few data points, as is often the case in learning through experience.
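The beta-binomial part of the algorithm can be sketched with the standard conjugate update: each grasping location keeps a Beta posterior over its success probability that is updated after every trial. The kernel-smoothing across feature values is omitted here, and the prior and outcomes are illustrative.

```python
# Conjugate Beta posterior over a grasp location's success probability,
# updated from binary trial outcomes (the beta-binomial model the
# abstract mentions). Prior and outcomes are illustrative assumptions.

def update_posterior(alpha, beta, success):
    """Beta(alpha, beta) -> Beta(alpha+1, beta) on success, else Beta(alpha, beta+1)."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1), then 3 successful grasps and 1 failure.
a, b = 1.0, 1.0
for outcome in (True, True, False, True):
    a, b = update_posterior(a, b, outcome)
mean = posterior_mean(a, b)   # (1 + 3) / (1 + 3 + 1 + 1) = 4/6
```

The full distribution (not just the mean) is what enables the active-exploration criterion described in the abstract.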

12.
This paper presents a novel enhanced human-robot interaction system based on model reference adaptive control. The presented method delivers guaranteed stability and task performance and has two control loops. A robot-specific inner loop, a neuroadaptive controller, learns the robot dynamics online and makes the robot respond like a prescribed impedance model. This loop uses no task information, including no prescribed trajectory. A task-specific outer loop takes the human operator dynamics into account and adapts the prescribed robot impedance model so that the combined human-robot system has desirable characteristics for task performance. This design is based on model reference adaptive control, but in a nonstandard form. The net result is a controller with both adaptive impedance characteristics and assistive inputs that augment the human operator to improve the task performance of the human-robot team. Simulations verify the performance of the proposed controller in a repetitive point-to-point motion task, and experimental implementations on a PR2 robot further corroborate the effectiveness of the approach.
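The core model-reference-adaptive idea can be illustrated on a toy first-order plant: adjustable gains are adapted from the model-following error so the plant output tracks a prescribed reference model. The plant, model and gains below are illustrative assumptions, not the paper's neuroadaptive design.

```python
# Toy scalar MRAC sketch with Lyapunov-rule adaptation: the control
# u = th1*r - th2*y is tuned online so the "unknown" plant follows the
# reference model. All numerical values are illustrative.

def mrac_simulate(steps=50000, dt=0.001, gamma=2.0):
    a, b = 1.0, 2.0          # "unknown" plant:   dy/dt  = -a*y  + b*u
    am, bm = 2.0, 2.0        # reference model:  dym/dt = -am*ym + bm*r
    y = ym = 0.0
    th1, th2 = 0.0, 0.0      # adjustable gains, u = th1*r - th2*y
    tail_err = []
    for k in range(steps):
        r = 1.0 if (k // 2500) % 2 == 0 else -1.0   # square-wave reference
        u = th1 * r - th2 * y
        e = y - ym                                  # model-following error
        th1 += -gamma * e * r * dt                  # Lyapunov adaptation laws
        th2 += gamma * e * y * dt
        y += (-a * y + b * u) * dt
        ym += (-am * ym + bm * r) * dt
        if k >= steps - 2500:                       # record the final period
            tail_err.append(abs(e))
    return sum(tail_err) / len(tail_err)

final_error = mrac_simulate()   # mean |y - ym| over the last period
```

The paper's nonstandard form adapts an impedance model rather than scalar gains, but the error-driven adaptation principle is the same.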

13.
Advanced Robotics, 2013, 27(1): 93–113
This paper reports on the first gymnastic robot that can perform a back handspring. The robot is a planar, serially connected four-link robot whose joints are actuated by electric servomotors. The paper describes the modeling of the robot and the control framework for the back handspring. The controller is derived from a task-specific reference model and its model matching. Using a reference model described by global physical quantities, such as the center of mass or angular momentum, makes gymnastic motion planning for a multi-body system intuitive, and the model-matching controller can be applied directly to the experimental model without obtaining individual joint trajectories. The controller's effectiveness is confirmed via simulations and experiments of the back handspring. Although the problem of how to systematically design the control parameters remains, the paper shows the strength of the model-based controller for fast gymnastic motions.

14.
This paper addresses the problem of integrating the human operator with autonomous robotic visual tracking and servoing modules. A CCD camera is mounted on the end-effector of a robot, and the task is to servo around a static or moving rigid target. In manual control mode, the human operator, with the help of a joystick and a monitor, commands robot motions to compensate for tracking errors. In shared control mode, the human operator and the autonomous visual tracking modules command motion along orthogonal sets of degrees of freedom. In autonomous control mode, the autonomous visual tracking modules are in full control of the servoing functions. Finally, in traded control mode, control can be transferred from the autonomous visual modules to the human operator and vice versa. This paper presents an experimental setup in which all these schemes have been tested, reports experimental results for all modes of operation, and discusses the related issues. In certain degrees of freedom (DOF), the autonomous modules perform better than the human operator; on the other hand, the human operator can compensate quickly for tracking failures where the autonomous modules fail, their failure being due to the difficulty of encoding an efficient contingency plan.

15.
Target Reaching by Using Visual Information and Q-learning Controllers
This paper presents a solution to a manipulation control problem: target identification and grasping. The proposed controller is designed for a real platform in combination with a monocular vision system. The objective of the controller is to learn an optimal policy to reach and grasp a spherical object of known size, placed randomly in the environment. To accomplish this, the task has been treated as a reinforcement learning problem in which the controller learns the situation-action mapping by trial and error. The optimal policy is found using the Q-learning algorithm, a model-free reinforcement learning technique that rewards actions moving the arm closer to the target. The vision system uses geometric computation to simplify the segmentation of the moving target (a spherical object) and estimates the target parameters. To speed up learning, the knowledge acquired in simulation was ported to the real platform, an industrial PUMA 560 robot manipulator. Experimental results demonstrate the effectiveness of the adaptive controller, which does not require an explicit global target position, using direct perception of the environment.
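The Q-learning update the abstract relies on can be shown on a toy reaching task: the agent moves along a discrete line toward a goal, with small step costs rewarding actions that bring it closer to the target. States, actions and rewards are toy stand-ins for the robot's situation-action mapping.

```python
import random

# Minimal tabular Q-learning sketch of the update rule
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# on a 1-D "reach the target" task. All values are illustrative.

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, n=5, goal=4, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:
                a = rng.choice((-1, 1))                    # explore
            else:
                a = 1 if Q[(s, 1)] >= Q[(s, -1)] else -1   # exploit
            s2 = min(max(s + a, 0), n - 1)                 # move along the line
            r = 1.0 if s2 == goal else -0.01               # step cost, goal reward
            td_target = r + gamma * max(Q[(s2, -1)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (td_target - Q[(s, a)])
            s = s2
    return Q

Q = train()
# After training, the greedy action from every non-goal state is +1
# (step toward the target).
```

In the paper the state would be the visually estimated target parameters and the actions would be arm motions, but the update rule is identical.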

16.
Zhou Wei, Su Jianbo. Acta Automatica Sinica, 2006, 32(5): 819–823
Uncertain delays in data transmission over the Internet prevent a teleoperated networked robot from promptly executing the actions the remote operator intends. We propose a new method: modeling the user's intention and using the mobile robot's autonomy to compensate for the effect of uncertain delays on system performance. After a model of the user's intention in operating the robot is built, Bayesian techniques are used to progressively infer that intention, so that the robot can recognize the task assigned by the user and execute it autonomously, without frequent interaction with the user. This greatly reduces data transmission and improves the efficiency of the overall control system. Experimental results demonstrate the effectiveness and feasibility of the proposed method.
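The progressive Bayesian inference the abstract describes can be sketched as a recursive belief update over a discrete set of candidate intentions; the intents and likelihood values below are illustrative assumptions.

```python
# Recursive Bayesian intent inference: a belief over candidate user
# intentions is updated with each observed command,
#   posterior ∝ likelihood × prior.
# Intent names and likelihoods are illustrative, not from the paper.

def bayes_update(belief, likelihoods):
    """One Bayes step over a discrete set of intents, then normalise."""
    posterior = {i: belief[i] * likelihoods[i] for i in belief}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

# P(observed "turn left" command | intent) for two candidate intents.
belief = {"go_to_door": 0.5, "follow_wall": 0.5}
for _ in range(3):   # three consecutive left-turn commands arrive
    belief = bayes_update(belief, {"go_to_door": 0.8, "follow_wall": 0.3})
```

Once the belief in one intent is high enough, the robot can execute that task autonomously instead of waiting for further delayed commands.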

17.
This article studies collaborative human-robot output tracking when the desired output is known only to the human and not to the robot controller. The main contribution is to propose, and establish convergence conditions for, an iterative learning algorithm that updates the robot input using (i) the effect of the human action on the combined human-robot output tracking (which includes the effect of the human-response dynamics) and (ii) data-inferred human-robot models. This allows the iterative learning control (ILC) to be personalized to each individual human operator. Experimental results illustrate the iterative learning approach: with the proposed method, the robot learns to collaboratively track the output with 10.0% error, close to twice the robot noise of 4.6% of the desired output. Furthermore, the data-inferred models provide evidence of the effect of the human operator's dynamics on the co-tracking task.
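The iterative-learning update at the heart of such algorithms is simple: the input for the next trial is corrected by the previous trial's tracking error scaled by a learning gain. The memoryless "plant" and gain below are toy assumptions, not the paper's identified human-robot model.

```python
import numpy as np

# Minimal iterative learning control (ILC) sketch:
#   u_{k+1} = u_k + L * e_k,  with e_k the tracking error of trial k.
# The static plant gain and learning gain L are illustrative.

def ilc(y_desired, plant_gain=0.8, L=0.9, iterations=30):
    u = np.zeros_like(y_desired)
    for _ in range(iterations):
        y = plant_gain * u           # simplistic memoryless plant response
        e = y_desired - y            # tracking error of this trial
        u = u + L * e                # learning update for the next trial
    return plant_gain * u, u

y_d = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
y_final, _ = ilc(y_d)                # converges since |1 - L*g| < 1
```

The paper's contribution is updating with the combined human-robot response and data-inferred models, so the learning is personalized per operator; the contraction condition `|1 - L*g| < 1` is the scalar analogue of its convergence conditions.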

18.
Stochastic policy gradient methods have been applied to a variety of robot control tasks, such as a robot's acquisition of motor skills, because they can learn in high-dimensional, continuous feature spaces when combined with heuristics such as motor primitives. However, when applying such a method to a real-world task, it is difficult to represent the task well in the design of the policy function and the feature space without sufficient prior knowledge of the task. In this research, we propose a method that autonomously extracts a preferred feature space for achieving a task, using a stochastic policy gradient method with a sample-based policy. We apply our method to the control of a linear dynamical system; computer simulation results show that a desirable controller is obtained and that feature selection improves the controller's performance.

19.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces that enable the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to higher-level imitation of the underlying intent extracted from the demonstrations. Focusing on this last form, we study the problem of extracting an objective function that explains the demonstrations from an over-specified set of candidate reward functions, and of using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot learns the relevance of candidate objective functions with respect to the current phase of the task or the encountered situation. The robot then exploits this information for skill refinement in the policy-parameter space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.
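The reward-weighted flavor of policy refinement can be sketched in one dimension: perturbed policy parameters are sampled, and the new parameter is their reward-weighted average. The quadratic reward, noise scale and seed below are illustrative assumptions, not the paper's context-dependent formulation.

```python
import random

# Reward-weighted policy-parameter update sketch: sample perturbed
# parameters, weight them by reward, and move to their weighted average.
# Reward shape and constants are illustrative assumptions.

def reward(theta):
    return max(0.0, 1.0 - (theta - 2.0) ** 2)   # peaks at theta = 2

def reward_weighted_update(theta, sigma=0.5, n_samples=50, rng=None):
    rng = rng or random.Random(0)
    samples = [theta + rng.gauss(0.0, sigma) for _ in range(n_samples)]
    weights = [reward(s) for s in samples]
    total = sum(weights)
    if total == 0.0:
        return theta                            # no informative samples
    return sum(w * s for w, s in zip(weights, samples)) / total

theta = 0.5                                     # initial policy parameter
for _ in range(20):
    theta = reward_weighted_update(theta)       # climbs toward theta = 2
```

In the paper this weighting is context-dependent: the relevance of each candidate objective is learned per task phase, rather than a single fixed reward as here.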

20.
Exploration of high-risk terrain areas, such as cliff faces, and site-construction operations by autonomous robotic systems on Mars require a control architecture that can autonomously adapt to uncertainties in knowledge of the environment. We report on the development of a software/hardware framework for multiple cooperating robots performing such tightly coordinated tasks. This work builds on our earlier research into autonomous planetary rovers and robot arms. Here, we seek to closely coordinate the mobility and manipulation of multiple robots to perform examples of a cliff traverse for science data acquisition, and site-construction operations including grasping, hoisting, and transporting extended objects, such as large array sensors, over natural, unpredictable terrain. In support of this work we have developed an enabling distributed control architecture called CAMPOUT (control architecture for multirobot planetary outposts), in which integrated multirobot mobility and control mechanisms are derived as group compositions and coordination of more basic behaviors under a task-level multi-agent planner. CAMPOUT includes the group behaviors and communication mechanisms necessary for coordinated/cooperative control of heterogeneous robotic platforms. In this paper, we describe CAMPOUT and its application to ongoing physical experiments with multirobot systems at the Jet Propulsion Laboratory in Pasadena, CA, for the exploration of cliff faces and the deployment of extended payloads.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号