Similar Documents
20 similar documents found (search time: 62 ms)
1.
Nie Xianli, Jiang Ping, Chen Huitang. Robot (《机器人》), 2002, 24(3): 201-208
This paper explores a control method for training robot motion skills directly through natural language. A fuzzy neural network structure serves as the basic behavior control unit; through a teacher's natural-language instructions, motion experience for a particular behavior is acquired and the corresponding controller is trained. This offers a more natural way to construct controllers. Building on such basic motion units, language-based programming and control of complex robot tasks can be further realized. The proposed control method was validated in language-training experiments on a wheeled mobile robot system.

2.
Fuentes  Olac  Nelson  Randal C. 《Machine Learning》1998,31(1-3):223-237
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.
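The abstract does not specify the modified evolution strategy; as a rough illustration of the general idea only, a minimal (1+1) evolution strategy on a toy objective might look like this (all names, settings, and the stand-in fitness are our own assumptions):

```python
import random

def evolution_strategy(fitness, x0, sigma=0.5, iters=200, seed=0):
    """Minimal (1+1) evolution strategy: perturb the current solution with
    Gaussian noise and keep whichever candidate scores better (minimization)."""
    rng = random.Random(seed)
    x, fx = list(x0), fitness(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = fitness(cand)
        if fc <= fx:  # accept improvements (and ties) only
            x, fx = cand, fc
    return x, fx

# toy stand-in for a manipulation-skill score: a 2-D sphere function
best, score = evolution_strategy(lambda v: sum(t * t for t in v), [2.0, -3.0])
```

On a real robot the fitness evaluation would be a physical trial, which is why noise robustness matters to the authors.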

3.

4.
5.
6.
《Advanced Robotics》2013,27(8):835-858
Dexterous manipulation plays an important role in working robots. Manipulator tasks such as assembly and disassembly can generally be divided into several motion primitives. We call these 'skills' and explain how most manipulator tasks can be composed of skill sequences. Skills are also used to compensate for errors both in the geometric model and in manipulator motions. Some of the data describing the shapes, positions and orientations of objects are dispensable when performing skill motions in a task. Therefore, we can simplify geometric models by discarding the dispensable data for a skill motion. We call such robust and simplified models 'false models'. This paper describes our definition of false models used in planning and visual sensing, and shows the effectiveness of our method using examples of tasks involving the manipulation of mechanical and electronic parts. Furthermore, we show the application of false models to objects of indefinite sizes and shapes using examples of the same tasks.

7.
A goal of this research is to accomplish a long distance navigation task by an autonomous mobile manipulator, including a behavior of “Passing through a doorway.” In our approach to this problem, we apply the concept of action primitives to the mobile manipulator control system. Action primitives are defined as unit elements of a complex behavior (such as door opening behavior), which control a robot according to a sequence of planned motion primitives. An important feature of the concept is that each action primitive is designed to include an error adjustment mechanism to cope with the accumulated position error of the mobile base. In this article, we report on the design and implementation of action primitives for a door opening task, and show experimental results for “Passing through a doorway” by an autonomous mobile manipulator using sequences of action primitives. © 1996 John Wiley & Sons, Inc.
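The error-adjustment mechanism inside each action primitive can be sketched abstractly. The toy below is purely illustrative (the drift/snap model and all names are hypothetical, not the paper's implementation): each primitive first compensates accumulated base error, then acts.

```python
def run_primitives(primitives, state):
    """Execute a sequence of action primitives; each primitive first runs an
    error-adjustment step to compensate accumulated base error, then acts."""
    for adjust, act in primitives:
        state = adjust(state)  # compensate accumulated position error
        state = act(state)
    return state

# toy model: state is a 1-D position, every move adds 0.03 drift, and the
# adjustment snaps the believed position back onto a 0.5-spaced landmark grid
snap = lambda s: round(s * 2) / 2

def move(d):
    return lambda s: s + d + 0.03  # commanded motion plus accumulated drift

state = run_primitives([(snap, move(1.0)), (snap, move(1.0))], 0.0)
```

Without the per-primitive adjustment, the drift would compound across the sequence instead of being reset at each step.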

8.
Task demonstration is an effective technique for developing robot motion control policies. As tasks become more complex, however, demonstration can become more difficult. In this work, we introduce an algorithm that uses corrective human feedback to build a policy able to perform a novel task, by combining simpler policies learned from demonstration. While some demonstration-based learning approaches do adapt policies with execution experience, few provide corrections within low-level motion control domains or enable the linking of multiple demonstrated policies. Here we introduce Feedback for Policy Scaffolding (FPS) as an algorithm that first evaluates and corrects the execution of motion primitive policies learned from demonstration. The algorithm next corrects and enables the execution of a more complex task constructed from these primitives. Key advantages of building a policy from demonstrated primitives are the potential for primitive policy reuse within multiple complex policies and the faster development of these policies, in addition to the development of complex policies for which full demonstration is difficult. Policy reuse under our algorithm is assisted by human teacher feedback, which also contributes to the improvement of policy performance. Within a simulated robot motion control domain we validate that, using FPS, a policy for a novel task is successfully built from motion primitives learned from demonstration. We show that feedback both aids and enables policy development, improving policy performance in success rate, speed and efficiency.

9.
The processing of captured motion is an essential task for undertaking the synthesis of high-quality character animation. The motion decomposition techniques investigated in prior work extract meaningful motion primitives that help to facilitate this process. Carefully selected motion primitives can play a major role in various motion-synthesis tasks, such as interpolation, blending, warping, editing or the generation of new motions. Unfortunately, for a complex character motion, finding generic motion primitives by decomposition is an intractable problem due to the compound nature of the behaviours of such characters. Additionally, decomposed motion primitives tend to be too limited for the chosen model to cover a broad range of motion-synthesis tasks. To address these challenges, we propose a generative motion decomposition framework in which the decomposed motion primitives are applicable to a wide range of motion-synthesis tasks. Technically, the input motion is smoothly decomposed into three motion layers. These are base-level motion, a layer with controllable motion displacements and a layer with high-frequency residuals. The final motion can easily be synthesized simply by changing a single user parameter that is linked to the layer of controllable motion displacements or by imposing suitable temporal correspondences to the decomposition framework. Our experiments show that this decomposition provides a great deal of flexibility in several motion synthesis scenarios: denoising, style modulation, upsampling and time warping.
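The three-layer split described above can be approximated with simple smoothing filters. This sketch uses moving averages of two window widths (the filter choice and window sizes are our own, not the paper's method); by construction, base + displacement + residual reconstructs the input exactly.

```python
def decompose(signal, base_w=9, mid_w=3):
    """Split a 1-D motion curve into three layers: a smooth base, a
    controllable displacement band, and a high-frequency residual."""
    def smooth(xs, w):
        # centered moving average with shrinking windows at the edges
        half = w // 2
        out = []
        for i in range(len(xs)):
            lo, hi = max(0, i - half), min(len(xs), i + half + 1)
            out.append(sum(xs[lo:hi]) / (hi - lo))
        return out

    base = smooth(signal, base_w)
    mid = smooth(signal, mid_w)
    displacement = [m - b for m, b in zip(mid, base)]
    residual = [s - m for s, m in zip(signal, mid)]
    return base, displacement, residual

# toy motion curve: a ramp with a small periodic bump
sig = [0.1 * i + (0.5 if i % 4 == 0 else 0.0) for i in range(20)]
base, disp, resid = decompose(sig)
recon = [b + d + r for b, d, r in zip(base, disp, resid)]
```

Scaling the displacement layer before recombining is one way a single user parameter could modulate the synthesized motion.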

10.
Many motor skills in humanoid robotics can be learned using parametrized motor primitives. While successful applications to date have been achieved with imitation learning, most of the interesting motor learning problems are high-dimensional reinforcement learning problems. These problems are often beyond the reach of current reinforcement learning methods. In this paper, we study parametrized policy search methods and apply these to benchmark problems of motor primitive learning in robotics. We show that many well-known parametrized policy search methods can be derived from a general, common framework. This framework yields both policy gradient methods and expectation-maximization (EM) inspired algorithms. We introduce a novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical system motor primitives. We compare this algorithm, both in simulation and on a real robot, to several well-known parametrized policy search methods such as episodic REINFORCE, 'Vanilla' Policy Gradients with optimal baselines, episodic Natural Actor Critic, and episodic Reward-Weighted Regression. We show that the proposed method outperforms them on an empirical benchmark of learning dynamical system motor primitives both in simulation and on a real robot. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM robot arm.
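As a hedged illustration of the EM-inspired family this paper studies (not the paper's specific algorithm), a generic reward-weighted regression step replaces the parameters with the exponentiated-reward-weighted mean of sampled parameter vectors. The toy reward, sampling noise, and beta below are our own assumptions.

```python
import math
import random

def rwr_update(theta, samples, beta=1.0):
    """One reward-weighted regression step: the new parameter vector is the
    exp(beta * reward)-weighted mean of the sampled parameter vectors."""
    weights = [math.exp(beta * r) for _, r in samples]
    z = sum(weights)
    return [sum(w * s[i] for (s, _), w in zip(samples, weights)) / z
            for i in range(len(theta))]

# toy objective: reward peaks at theta = [1.0, -1.0] (a stand-in task)
def reward(th):
    return -((th[0] - 1.0) ** 2 + (th[1] + 1.0) ** 2)

rng = random.Random(1)
theta = [0.0, 0.0]
for _ in range(30):
    samples = []
    for _ in range(20):
        s = [t + rng.gauss(0.0, 0.3) for t in theta]  # exploration noise
        samples.append((s, reward(s)))
    theta = rwr_update(theta, samples, beta=4.0)
```

Because the update is a weighted average of tried parameters, it needs no gradient of the reward, which is part of the appeal of this algorithm family for robot learning.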

11.
To model manipulation tasks, we propose a novel method for learning manipulation skills based on the degree of motion granularity. Even though manipulation tasks usually consist of a mixture of fine-grained and coarse-grained movements, to the best of our knowledge, manipulation skills have so far been modeled without considering their motion granularity. To model such a manipulation skill, Gaussian mixture models (GMMs) have been estimated using several well-known techniques such as principal component analysis, k-means, the Bayesian information criterion, and expectation-maximization (EM) algorithms. However, when a mixture of fine-grained and coarse-grained movements is modeled as a single GMM, the fine-grained movements tend to be poorly represented. To resolve this issue, we measure a continuous degree of motion granularity for every time step of a manipulation task from a GMM. Then, we remodel the GMM by weighting a conventional k-means algorithm with motion granularity. Finally, we also estimate the parameters of the GMM by weighting the conventional EM with motion granularity. To validate our proposed method, we evaluate the GMM estimated using our proposed method by comparing it with those estimated by different GMMs in terms of inference, regression, and generalization using a robot arm that performs two daily tasks, namely decorating a very small area and passing through a narrow tunnel.
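The granularity-weighted k-means idea can be sketched in one dimension: each sample carries a weight standing in for its motion granularity, so heavily weighted (fine-grained) regions pull cluster centers toward themselves. This is a simplification of the paper's method, which also weights the EM step; the data and weights below are invented.

```python
def weighted_kmeans(points, weights, k=2, iters=20):
    """k-means in 1-D where each sample carries a weight; centers are
    updated as weighted means of their assigned samples."""
    centers = list(points[:k])  # naive initialization from the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p, w in zip(points, weights):
            j = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append((p, w))
        for j, cl in enumerate(clusters):
            tw = sum(w for _, w in cl)
            if tw > 0:
                centers[j] = sum(p * w for p, w in cl) / tw
    return sorted(centers)

# toy data: a coarse-grained cluster near 0 and a fine-grained one near 5,
# with the fine-grained samples weighted three times as heavily
pts = [0.0, 0.4, 0.8, 4.9, 5.0, 5.1]
wts = [1.0, 1.0, 1.0, 3.0, 3.0, 3.0]
centers = weighted_kmeans(pts, wts)
```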

12.
The motion of manipulation in a task can be decomposed into several motion primitives called “skills.” Skill-based motion planning gives the possibility of performing tasks as skillfully as human beings do. On the other hand, the backprojection method performed in configuration space has often been used in fine-motion planning. This paper describes fine-motion planning in three-dimensional space using skill-based backprojection. Now that skill-based planning in three-dimensional space has been developed, it becomes possible to plan manipulation motions like the behavior of the human hand. This work was presented, in part, at the Second International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1997.

13.
We describe a novel approach that allows humanoid robots to incrementally integrate motion primitives and language expressions, when there are underlying natural language and motion language modules. The natural language module represents sentence structure using word bigrams. The motion language module extracts the relations between motion primitives and the relevant words. Both the natural language module and the motion language module are expressed as probabilistic models and, therefore, they can be integrated so that the robots can both interpret observed motion in the form of sentences and generate the motion corresponding to a sentence command. Incremental learning is needed for a robot that develops these linguistic skills autonomously. The algorithm is derived from optimization of the natural language and motion language modules under constraints on their probabilistic variables such that the association between motion primitive and sentence in incrementally added training pairs is strengthened. A test based on interpreting observed motion in the form of sentences demonstrates the validity of the incremental statistical learning algorithm.
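A word-bigram sentence model of the kind the natural language module uses can be sketched as follows (the `<s>`/`</s>` sentence markers and the normalization are our own conventions, not the paper's):

```python
from collections import defaultdict

def train_bigrams(sentences):
    """Count word bigrams and normalize each row into conditional
    probabilities P(next word | current word)."""
    counts = defaultdict(lambda: defaultdict(int))
    for words in sentences:
        seq = ["<s>"] + words + ["</s>"]  # sentence boundary markers
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# toy corpus pairing motion-related sentences, as an illustration
model = train_bigrams([["robot", "raises", "arm"],
                       ["robot", "lowers", "arm"]])
```

In the paper's framework, such a probabilistic sentence model is what gets coupled with the motion language module so that sentences and motion primitives constrain each other.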

14.
In this work, we present WALK‐MAN, a humanoid platform that has been developed to operate in realistic unstructured environments, and demonstrate new skills including powerful manipulation, robust balanced locomotion, high‐strength capabilities, and physical sturdiness. To enable these capabilities, WALK‐MAN design and actuation are based on the most recent advancements of series elastic actuator drives with unique performance features that differentiate the robot from previous state‐of‐the‐art compliant actuated robots. Physical interaction performance benefits from both active and passive adaptation, thanks to WALK‐MAN actuation that combines customized high‐performance modules with tuned torque/velocity curves and transmission elasticity for high‐speed adaptation response and motion reactions to disturbances. WALK‐MAN design also includes innovative design optimization features that consider the selection of kinematic structure and the placement of the actuators with the body structure to maximize the robot performance. Physical robustness is ensured with the integration of elastic transmission, proprioceptive sensing, and control. The WALK‐MAN hardware was designed and built in 11 months, and the prototype of the robot was ready four months before the DARPA Robotics Challenge (DRC) Finals. The motion generation of WALK‐MAN is based on the unified motion‐generation framework of whole‐body locomotion and manipulation (termed loco‐manipulation). WALK‐MAN is able to execute simple loco‐manipulation behaviors synthesized by combining different primitives defining the behavior of the center of gravity, the motion of the hands, legs, and head, the body attitude and posture, and the constrained body parts such as joint limits and contacts. The motion‐generation framework including the specific motion modules and software architecture is discussed in detail.
A rich perception system allows the robot to perceive and generate 3D representations of the environment as well as detect contacts and sense physical interaction force and moments. The operator station that pilots use to control the robot provides a rich pilot interface with different control modes and a number of teleoperated or semiautonomous command features. The capability of the robot and the performance of the individual motion control and perception modules were validated during the DRC in which the robot was able to demonstrate exceptional physical resilience and execute some of the tasks during the competition.

15.
Generally, a manipulator task can be divided into several motion primitives called “skills.” Skill-based motion planning is an effective way to execute a complicated task. When planning an assembly process, a technique of fine motion planning such as the backprojection method in configuration space is often used. This paper describes fine motion planning using a skill library, which consists of a pattern of trajectories of skill motions in configuration space. This method gives the initial position and orientation from which the object can reach the goal in skill-based manipulation.

16.
17.
An interactive loop between motion recognition and motion generation is a fundamental mechanism for humans and humanoid robots. We have been developing an intelligent framework for motion recognition and generation based on symbolizing motion primitives. The motion primitives are encoded into Hidden Markov Models (HMMs), which we call “motion symbols”. However, to determine the motion primitives to use as training data for the HMMs, this framework requires a manual segmentation of human motions. Essentially, a humanoid robot is expected to participate in daily life and must learn many motion symbols to adapt to various situations. For this use, manual segmentation is cumbersome and impractical for humanoid robots. In this study, we propose a novel approach to segmentation, the Real-time Unsupervised Segmentation (RUS) method, which comprises three phases. In the first phase, short human movements are encoded into feature HMMs. Seamless human motion can be converted to a sequence of these feature HMMs. In the second phase, the causality between the feature HMMs is extracted. The causality data make it possible to predict movement from observation. In the third phase, movements having a large prediction uncertainty are designated as the boundaries of motion primitives. In this way, human whole-body motion can be segmented into a sequence of motion primitives. This paper also describes an application of RUS to AUtonomous Symbolization of motion primitives (AUS). Each derived motion primitive is classified into an HMM for a motion symbol, and parameters of the HMMs are optimized by using the motion primitives as training data in competitive learning. The HMMs are gradually optimized in such a way that the HMMs can abstract similar motion primitives. We tested the RUS and AUS frameworks on captured human whole-body motions and demonstrated the validity of the proposed framework.
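The third phase, marking high-uncertainty frames as primitive boundaries, can be illustrated with a toy detector over a precomputed prediction-error trace (the HMM-based predictor itself is replaced here by given error values, and the threshold is arbitrary):

```python
def segment_by_uncertainty(errors, threshold):
    """Return the indices of local maxima in a per-frame prediction-error
    trace that exceed a threshold -- a toy stand-in for designating
    high-uncertainty frames as motion primitive boundaries."""
    return [i for i in range(1, len(errors) - 1)
            if errors[i] > threshold
            and errors[i] >= errors[i - 1]
            and errors[i] >= errors[i + 1]]

# toy prediction-error trace with uncertainty spikes at frames 5 and 12
errs = [0.1] * 5 + [0.9] + [0.1] * 6 + [0.8] + [0.1] * 4
bounds = segment_by_uncertainty(errs, 0.5)
```

Everything between consecutive boundary frames would then be treated as one motion primitive and fed to the symbol-learning stage.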

18.
Our focus is on creating interesting and human-like behaviors for humanoid robots and virtual characters. Interactive behaviors are especially engaging. They are also challenging, as they necessitate finding satisfactory real-time solutions for complex systems such as the 30-degree-of-freedom humanoid robot in our laboratory. Here we describe a catching behavior between a person and a robot. We generate ball-hand impact predictions based on the flight of the ball, and human-like motion trajectories to move the hand to the catch position. We use a dynamical systems approach to produce the motion trajectories where new movements are generated from motion primitives as they are needed.
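Under a drag-free assumption, the ball-hand impact prediction reduces to solving the ballistic equation for the time at which the ball crosses the catch height (a generic sketch, not the authors' predictor; the catch-height formulation is our own simplification):

```python
def predict_catch(p0, v0, catch_height, g=9.81):
    """Predict where and when a ball launched from p0 with velocity v0
    crosses a given height, ignoring air drag."""
    px, py, pz = p0
    vx, vy, vz = v0
    # solve pz + vz*t - 0.5*g*t^2 = catch_height for the later (descending) root
    a, b, c = -0.5 * g, vz, pz - catch_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ball never reaches that height
    t = (-b - disc ** 0.5) / (2 * a)
    return (px + vx * t, py + vy * t, catch_height), t

# toy throw: launched from 1 m height at 2 m/s forward, 3 m/s upward
impact, t = predict_catch((0.0, 0.0, 1.0), (2.0, 0.0, 3.0), 1.0)
```

A real catching system would refine such predictions continuously from vision as the ball flies, rather than relying on one launch estimate.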

19.
Solving mobile manipulation tasks in inaccessible and dangerous environments is an important application of robots to support humans. Example domains are construction and maintenance of manned and unmanned stations on the moon and other planets. Suitable platforms require flexible and robust hardware, a locomotion approach that allows for navigating a wide variety of terrains, dexterous manipulation capabilities, and respective user interfaces. We present the CENTAURO system which has been designed for these requirements and consists of the Centauro robot and a set of advanced operator interfaces with complementary strengths, enabling the system to solve a wide range of realistic mobile manipulation tasks. The robot possesses a centaur‐like body plan and is driven by torque‐controlled compliant actuators. Four articulated legs ending in steerable wheels allow for omnidirectional driving as well as for making steps. An anthropomorphic upper body with two arms ending in five‐finger hands enables human‐like manipulation. The robot perceives its environment through a suite of multimodal sensors. The resulting platform complexity goes beyond the complexity of most known systems, which puts the focus on a suitable operator interface. An operator can control the robot through a telepresence suit, which allows for flexibly solving a large variety of mobile manipulation tasks. Locomotion and manipulation functionalities on different levels of autonomy support the operation. The proposed user interfaces enable solving a wide variety of tasks without previous task‐specific training. The integrated system is evaluated in numerous teleoperated experiments that are described along with lessons learned.

20.
In this article, a learning framework that enables robotic arms to replicate new skills from human demonstration is proposed. The learning framework makes use of online human motion data acquired using wearable devices as an interactive interface for providing the anticipated motion to the robot in an efficient and user-friendly way. This approach offers human tutors the ability to control all joints of the robotic manipulator in real time and to achieve complex manipulation. The robotic manipulator is controlled remotely with our low-cost wearable devices for easy calibration and continuous motion mapping. We believe that our approach may improve human-robot skill learning, adaptability, and the sensitivity of the proposed human-robot interaction for flexible task execution, thereby enabling skill transfer and repeatability without complex coding skills.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号