Similar Documents
20 similar documents found (search time: 15 ms)
1.
Human-robot control interfaces have received increased attention in recent decades as a convenient way to bring robots into daily human life. In this paper, a novel human-machine interface (HMI) with two components is developed: one based on surface electromyography (sEMG) signals recorded from the human upper limb, and the other based on the Microsoft Kinect sensor. The proposed interface allows the user to control a mobile humanoid robot arm in 3-D space in real time, by estimating upper-limb motion from the sEMG recordings and the Kinect sensor. The effectiveness of the method is verified by experiments, including random arm motions in 3-D space with variable hand-speed profiles.
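The entry above does not publish its sEMG processing pipeline, but a common first stage in such interfaces, shown here as an assumed sketch, is amplitude-envelope extraction: full-wave rectification followed by a moving-average low-pass filter, whose output can then drive, for example, a proportional velocity command for the arm.

```python
import numpy as np

# Assumed sEMG preprocessing sketch (not the paper's pipeline):
# rectify the raw zero-mean EMG signal, then low-pass it with a
# moving average to obtain a slowly varying amplitude envelope.

def emg_envelope(raw, window=50):
    rect = np.abs(np.asarray(raw, float))   # full-wave rectification
    kernel = np.ones(window) / window       # moving-average low-pass
    return np.convolve(rect, kernel, mode="same")

# synthetic 3 s recording at 1 kHz: a 1 s "contraction" burst
# (high-variance noise) surrounded by rest (low-variance noise)
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 0.05, 3000)
raw[1000:2000] += rng.normal(0.0, 1.0, 1000)
env = emg_envelope(raw)
```

The envelope is markedly larger during the burst than at rest, which is what makes thresholding or proportional mapping to a robot command practical.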

2.
On learning, representing, and generalizing a task in a humanoid robot.  Total citations: 1 (self-citations: 0, external: 1)
We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. A probability-based estimation of the relevance is suggested by first projecting the motion data onto a generic latent space using principal component analysis. The resulting signals are encoded using a mixture of Gaussian/Bernoulli distributions (Gaussian mixture model/Bernoulli mixture model). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian mixture regression. Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts.
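The Gaussian mixture regression (GMR) step in the abstract above can be illustrated with a minimal numpy sketch: a GMM fitted over joint (time, position) space is conditioned on time to produce a generalized trajectory. The two components below are hand-set for illustration; the paper fits them with EM on demonstration data, and the 2-D single-output case here is an assumed simplification.

```python
import numpy as np

# Hand-set two-component GMM over (t, x); a real system learns these by EM.
means = [np.array([0.25, 0.0]), np.array([0.75, 1.0])]   # (t, x) means
covs = [np.array([[0.02, 0.01], [0.01, 0.02]])] * 2      # shared covariance
priors = [0.5, 0.5]

def gmr(t):
    """Conditional expectation E[x | t] under the mixture."""
    h = []   # responsibilities p(k | t)
    m = []   # per-component conditional means E[x | t, k]
    for pk, mu, S in zip(priors, means, covs):
        var_t = S[0, 0]
        lik = np.exp(-0.5 * (t - mu[0]) ** 2 / var_t) / np.sqrt(2 * np.pi * var_t)
        h.append(pk * lik)
        m.append(mu[1] + S[1, 0] / var_t * (t - mu[0]))
    h = np.array(h) / np.sum(h)
    return float(np.dot(h, m))

# regress a smooth generalized trajectory over normalized time
traj = [gmr(t) for t in np.linspace(0.0, 1.0, 50)]
```

Conditioning blends the components' local linear regressions by their responsibilities, which is what lets one model encode and smoothly reproduce spatially varying demonstration statistics.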

3.
The recent increase in technological maturity has empowered robots to assist humans and provide daily services. Voice commands are a popular human-machine interface for communication. Unfortunately, deaf people cannot exchange information with robots through vocal modalities. To interact with deaf people effectively and intuitively, it is desirable that robots, especially humanoids, have manual communication skills, such as performing sign languages. Without ad hoc programming to generate a particular sign language motion, we present an imitation system that teaches the humanoid robot to perform sign languages by directly replicating observed demonstrations. The system symbolically encodes the information of human hand-arm motion from low-cost depth sensors as a skeleton motion time-series that serves to generate initial robot movement by means of perception-to-action mapping. To tackle the body correspondence problem, a virtual impedance control approach is adopted to smoothly follow the initial movement while preventing potential risks due to the difference in physical properties between the human and the robot, such as joint limits and self-collision. In addition, the integration of a leg-joints stabilizer provides better balance for the whole robot. Finally, our developed humanoid robot, NINO, successfully learned by imitation from human demonstration to introduce itself using Taiwanese Sign Language.

4.
5.
We developed a new framework to generate hand and finger grasping motions. The proposed framework provides online adaptation to the position and orientation of objects and can generate grasping motions even when the object shape differs from that used during motion capture. This is achieved by using a mesh model, which we call primitive object grasping (POG), to represent the object grasping motion. The POG model uses a mesh deformation algorithm that keeps the original shape of the mesh while adapting to varying constraints. These characteristics are beneficial for finger grasping motion synthesis that satisfies constraints for mimicking the motion capture sequence and the grasping points reflecting the shape of the object. We verify the adaptability of the proposed motion synthesizer to position/orientation and shape variations across different objects by using motion capture sequences for grasping primitive objects, namely a sphere, a cylinder, and a box. In addition, a different grasp strategy called a three‐finger grasp is synthesized to validate the generality of the POG‐based synthesis framework.

6.
Advanced Robotics, 2013, 27(10): 1073-1091
As a way of automatically programming robot behavior, a method for building a symbolic manipulation task model from a demonstration is proposed. The feature of this model is that it explicitly stores information about the essential parts of a task, i.e., interaction between a hand and an environmental object, or interaction between a grasped object and a target object. Thus, even in different environments, the method reproduces robot motion as similar as possible to the human's to complete the task, while changing the motion during non-essential parts to adapt to the current environment. To automatically determine the essential parts, a method called attention point analysis is proposed; this method searches for the nature of a task using multiple sensors and estimates the parameters that represent the task. A humanoid robot is used to verify the reproduced robot motion based on the generated task model.

7.
To achieve posture control of a humanoid robot with 16 degrees of freedom, a Kinect sensor is used to collect coordinate data of the human posture. From these coordinates, host software based on the SimpleOpenNI library is developed in Processing and a model of the human joints is built. The space-vector method is used to analyze the robot's gait planning and center-of-gravity control algorithms and to resolve the rotation angle of each joint; commands are then sent to the humanoid robot over a wireless WiFi module to drive its servos, achieving control of the robot. A test platform based on the Kinect sensor was built. Test results show that the robot's upper limbs cover their full range of motion without dead zones and that, through center-of-gravity control, the lower limbs can perform simple walking, meeting the expected results.

8.
This paper presents a sensory-motor coordination scheme for a robot hand-arm-head system that provides the robot with the capability to reach an object while pre-shaping the fingers to the required grasp configuration and while predicting the tactile image that will be perceived after grasping. A model for sensory-motor coordination derived from studies in humans inspired the development of this scheme. A peculiar feature of this model is the prediction of the tactile image. The implementation of the proposed scheme is based on a neuro-fuzzy module that, after a learning phase, starting from visual data, calculates the position and orientation of the hand for reaching, selects the best-suited hand configuration, and predicts the tactile feedback. The implementation of the scheme on a humanoid robot allowed experimental validation of its effectiveness in robotics and provided perspectives on applications of sensory predictions in robot motor control.

9.
We present a novel tactile sensor, applied to dexterous grasping with a simple robot gripper. The hardware novelty consists of an array of capacitive sensors, which couple to the object by means of little brushes of fibers. These sensor elements are very sensitive (with a threshold of about 5 mN) but robust enough not to be damaged during grasping. They yield two types of dynamical tactile information corresponding roughly to two types of tactile sensor in the human skin. The complete sensor consists of a foil-based static force sensor, which yields the total force and the center of the two-dimensional force distribution and is surrounded by an array of the dynamical sensor elements. One such sensor has been mounted on each of the two gripper jaws of our humanoid robot and equipped with the necessary read-out electronics and a CAN bus interface. We describe applications to guiding a robot arm on a desired trajectory with negligible force, reflective grip improvement, and tactile exploration of objects to create a shape representation and find stable grips, which are applied autonomously on the basis of visual recognition.

10.
段宝阁, 杨尚尚, 谢啸, 肖晓晖. Robot (《机器人》), 2022, 44(4): 504-512
For composite-fabric layup on doubly curved parts, manual layup is inefficient and its quality uniformity is poor, while existing research on robotic layup has failed to describe the manual layup skill accurately. This paper therefore proposes imitation-learning-based methods for acquiring, describing, and reproducing the layup skill. First, trajectory information for fabric layup is obtained by kinesthetic (drag) teaching, and the trajectories are segmented in a supervised manner using a pressure threshold; then a Gaussian mixture model (Gaussian mixture m...

11.
The recent demographic trend across developed nations shows a dramatic increase in the aging population, falling fertility rates, and a shortage of caregivers. Hence, the demand for service robots to assist with dressing, an essential Activity of Daily Living (ADL), is increasing rapidly. Robotic clothing assistance is a challenging task since the robot has to deal with two demanding tasks simultaneously: (a) non-rigid and highly flexible cloth manipulation and (b) safe human-robot interaction while assisting humans whose posture may vary during the task. Humans, on the other hand, can deal with these tasks rather easily. In this paper, we propose a framework for robotic clothing assistance by imitation learning from a human demonstration to a compliant dual-arm robot. In this framework, we divide the dressing task into three phases: the reaching phase, the arm dressing phase, and the body dressing phase. We model the arm dressing phase as a global trajectory modification using Dynamic Movement Primitives (DMP), while we model the body dressing phase as a local trajectory modification using a Bayesian Gaussian Process Latent Variable Model (BGPLVM). We show that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless shirt dressing task. We also present participants' feedback from a public demonstration at the International Robot Exhibition (iREX) 2017. To our knowledge, this is the first work performing a full dressing of a sleeveless shirt on a human subject with a humanoid robot.
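The "global trajectory modification" role that DMPs play above can be sketched in one dimension: a spring-damper transformation system converges to any goal g, and a learned forcing term, zeroed here as a placeholder, would additionally shape the path to match a demonstration. The gains, time constant, and integration settings are illustrative assumptions, not the paper's values.

```python
# One-dimensional discrete DMP transformation system (illustrative sketch).
# With f(s) = 0 the system is a stable spring-damper converging to g;
# moving g shifts the whole reproduced movement globally.

def rollout(g, y0=0.0, alpha=25.0, beta=6.25, tau=1.0, dt=0.001, T=2.0):
    y, dy, s = y0, 0.0, 1.0
    for _ in range(int(T / dt)):
        f = 0.0                        # placeholder: a learned f(s) would
                                       # reproduce the demonstrated shape
        ddy = (alpha * (beta * (g - y) - dy) + f) / tau
        dy += ddy * dt
        y += dy * dt
        s += (-2.0 * s / tau) * dt     # canonical phase, decays 1 -> 0
    return y

final = rollout(g=1.0)
```

With alpha = 4 * beta the linear part is critically damped, so the reproduced motion approaches the (possibly shifted) goal without overshoot.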

12.
Recently, robots have been introduced into warehouses and factories for automation, and they are expected to execute dual-arm manipulation as humans do and to manipulate large, heavy, and unbalanced objects. We focus on the target picking task in cluttered environments and aim to realize a robot picking system in which the robot selects and executes the proper grasping motion from single-arm and dual-arm motions. In this paper, we propose a few-experience learning-based target picking system with selective dual-arm grasping. In our system, a robot first learns grasping points and object semantic and instance labels from an automatically synthesized dataset. The robot then executes and collects grasp trial experiences in the real world and retrains the grasping point prediction model with the collected trial experiences. Finally, the robot evaluates candidate combinations of grasping object instance, strategy, and points, and selects and executes the optimal grasping motion. In the experiments, we evaluated our system by conducting target picking experiments with the dual-arm humanoid robot Baxter in a cluttered, warehouse-like environment.

13.
This paper presents a remote manipulation method for a mobile manipulator driven by the operator's gestures. In particular, a tracked mobile robot is equipped with a 4-DOF robot arm to grasp objects. The operator uses one hand to control both the motion of the mobile robot and the posture of the robot arm via the gesture polysemy scheme put forward in this paper. A Leap Motion (LM) sensor, which can obtain the position and posture of the hand, is employed in this system. Two filters are employed to estimate the position and posture of the human hand so as to reduce the inherent noise of the sensor: a Kalman filter estimates the position, and a particle filter estimates the orientation. The advantage of the proposed method is that a mobile manipulator can be controlled with just one hand using an LM sensor. The effectiveness of the proposed human-robot interface was verified in the laboratory with a series of experiments. The results indicate that the proposed human-robot interface is able to track the movements of the operator's hand with high accuracy, and that the system can be employed by a non-professional operator for robot teleoperation.
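The Kalman-filtering step for hand position can be sketched as a constant-velocity filter over one coordinate; the process and measurement noise covariances below are illustrative assumptions, not values from the paper, and the particle filter for orientation is omitted.

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of the hand position.
dt = 0.02
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def kalman(measurements):
    x = np.zeros((2, 1))                # initial state: at rest at origin
    P = np.eye(2)                       # large initial uncertainty
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with position measurement z
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

truth = [0.1 * dt * k for k in range(100)]   # hand moving at 0.1 m/s
est = kalman(truth)
```

Because a constant-velocity ramp lies inside the model class, the filter locks onto the hand's velocity after a brief transient and then tracks with negligible lag.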

14.
In this paper, we present a strategy for fast grasping of unknown objects based on partial shape information from range sensors, for a mobile robot with a parallel-jaw gripper. The proposed method can realize fast grasping of an unknown object without needing complete information about the object or learning from grasping experience. Information regarding the shape of the object is acquired by a 2D range sensor installed on the robot at an inclined angle to the ground. Features for determining the maximal contact area are extracted directly from the partial shape information of the unknown object to determine the candidate grasping points. Note that since the shape and mass are unknown before grasping, a successful and stable grasp cannot in fact be guaranteed. Thus, after performing a grasping trial, the mobile robot uses the 2D range sensor to judge whether the object can be lifted. If a grasping trial fails, the mobile robot quickly finds other candidate grasping points for another trial until a successful and stable grasp is realized. The proposed approach has been tested in experiments, which show that a mobile robot with a parallel-jaw gripper can successfully grasp a wide variety of objects using the proposed algorithm. The results illustrate the validity of the proposed algorithm in terms of grasping time.

15.
An approach to the task of programming by demonstration (PbD) of grasping skills is introduced, in which a mobile service robot is taught by a human instructor how to grasp a specific object. In contrast to other approaches, the instructor demonstrates the grasping action several times to the robot to increase reconstruction performance. Only the robot's stereoscopic vision system is used to track the instructor's hand. The developed tracking algorithm is designed to require neither artificial markers nor data gloves, and not to be restricted to fixed or hard-to-calibrate sensor installations, while at the same time being real-time capable on a mobile service robot with limited resources. Because of the instructor's low repetition accuracy, every demonstrated grasp is performed slightly differently. To compensate for these variations, and also for tracking errors, the use of a Self-Organizing Map (SOM) with a one-dimensional topology is proposed. This SOM is used to generalize over the differently demonstrated grasping actions and to reconstruct the intended approach trajectory of the instructor's hand while grasping an object. The approach is implemented and evaluated on the service robot TASER using synthetically generated data as well as real-world data.
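A minimal version of the 1-D-topology SOM idea can be sketched as follows: a chain of nodes is trained on 2-D points pooled from several noisy demonstrations of the same approach, and after training, walking the chain yields one generalized trajectory. Node count, learning-rate and neighborhood schedules, and the synthetic straight-line demonstrations are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(points, n_nodes=10, epochs=40):
    """Train a chain-topology SOM on 2-D points; returns ordered nodes."""
    # initialize nodes along the data's bounding segment
    lo, hi = points.min(axis=0), points.max(axis=0)
    nodes = np.linspace(lo, hi, n_nodes)
    for ep in range(epochs):
        lr = 0.5 * (1.0 - ep / epochs)                    # decaying rate
        radius = max(1.0, (n_nodes / 2) * (1.0 - ep / epochs))
        for p in rng.permutation(points):
            w = np.argmin(np.linalg.norm(nodes - p, axis=1))  # winner node
            for i in range(n_nodes):
                h = np.exp(-((i - w) ** 2) / (2 * radius ** 2))
                nodes[i] += lr * h * (p - nodes[i])       # pull neighborhood
    return nodes

# three noisy demonstrations of the same straight-line approach y = 2x
t = np.linspace(0.0, 1.0, 30)[:, None]
demos = [np.hstack([t, 2 * t]) + rng.normal(0, 0.02, (30, 2)) for _ in range(3)]
chain = train_som(np.vstack(demos))
```

The 1-D neighborhood is what averages out the demonstration-to-demonstration variation: neighboring nodes are dragged together, so the chain settles on a smoothed consensus path rather than memorizing any single run.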

16.
When a user and a robot share the same physical workspace the robot may need to keep an updated 3D representation of the environment. Indeed, robot systems often need to reconstruct relevant parts of the environment where the user executes manipulation tasks. This paper proposes a spatial attention approach for a robot manipulator with an eye-in-hand Kinect range sensor. Salient regions of the environment, where user manipulation actions are more likely to have occurred, are detected by applying a clustering algorithm based on Gaussian Mixture Models applied to the user hand trajectory. A motion capture sensor is used for hand tracking. The robot attentional behavior is driven by a next-best view algorithm that computes the most promising range sensor viewpoints to observe the detected salient regions, where potential changes in the environment have occurred. The environment representation is built upon the PCL KinFu Large Scale project [1], an open source implementation of KinectFusion. KinFu has been modified to support the execution of the next-best view algorithm directly on the GPU and to properly manage voxel data. Experiments are reported to illustrate the proposed attention based approach and to show the effectiveness of GPU-based next-best view planning compared to the same algorithm executed on the CPU.

17.
Humanoid robots need human-like motions and appearance in order to be well accepted by humans. Mimicking is a fast and user-friendly way to teach them human-like motions. However, direct assignment of observed human motions to the robot's joints is not possible due to their physical differences. This paper presents a real-time, inverse-kinematics-based human mimicking system that maps human upper-limb motions to the robot's joints safely and smoothly. It considers both main definitions of motion similarity: similarity between end-effector motions and similarity between angular configurations. A Microsoft Kinect sensor is used for natural perception of human motions. Additional constraints are proposed and solved in the projected null space of the Jacobian matrix. They consider not only the workspace and the valid motion ranges of the robot's joints to avoid self-collisions, but also the similarity between end-effector motions and angular configurations, bringing highly human-like motions to the robot. The performance of the proposed human mimicking system is quantitatively and qualitatively assessed and compared with state-of-the-art methods in a human-robot interaction task using the Nao humanoid robot. The results confirm the applicability and ability of the proposed system to properly mimic various human motions.
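The null-space machinery mentioned above can be sketched on a redundant 3-link planar arm: the projector N = I - J⁺J maps a secondary posture objective into motions that do not disturb the end-effector, which is how joint-range and configuration-similarity constraints can be layered under the primary tracking task. The arm, gains, target, and reference posture below are illustrative, not the paper's robot.

```python
import numpy as np

L = np.array([1.0, 1.0, 1.0])   # link lengths of an illustrative planar arm

def fk(q):
    """End-effector position of the 3-link planar arm."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jac(q):
    """2x3 task Jacobian of fk."""
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(a[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(a[j:]))
    return J

def step(q, target, q_ref, dt=0.05):
    J = jac(q)
    Jp = np.linalg.pinv(J)
    N = np.eye(3) - Jp @ J                       # null-space projector
    dq = Jp @ (target - fk(q)) + 0.5 * N @ (q_ref - q)
    return q + dq * dt

q = np.array([0.5, 0.5, 0.5])
target = np.array([1.5, 1.0])
for _ in range(600):
    q = step(q, target, q_ref=np.zeros(3))       # secondary: stay near q_ref
err = np.linalg.norm(fk(q) - target)
```

Because J @ N = 0 by construction, the secondary term produces pure self-motion: the elbow posture drifts toward the reference while the hand stays on target.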

18.
Advanced Robotics, 2013, 27(15): 1687-1707
In robotics, recognition of human activity has been used extensively for robot task learning through imitation and demonstration. However, there has not been much work performed on modeling and recognition of activities that involve object manipulation and grasping. In this work, we deal with single arm/hand actions which are very similar to each other in terms of arm/hand motions. The approach is based on the hypothesis that actions can be represented as sequences of motion primitives. Given this, a set of five different manipulation actions of different levels of complexity are investigated. To model the process, we use a combination of discriminative support vector machines and generative hidden Markov models. The experimental evaluation, performed with 10 people, investigates both the definition and structure of primitive motions, as well as the validity of the modeling approach taken.

19.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to manipulate, using an external camera (i.e., the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on Robotics and Automation, pp 1794-1799, 2007) are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on Advanced Robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and second to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

20.
This paper addresses real-time grasp synthesis for multi-fingered robot hands, finding grasp configurations that satisfy the force-closure condition for arbitrarily shaped objects. We propose a fast and efficient grasp synthesis algorithm for planar polygonal objects, which yields the contact locations on a given polygonal object that produce a force-closure grasp by a multi-fingered robot hand. For an optimum grasp and real-time computation, we develop the preference and hibernation processes and assign the physical constraints of a humanoid hand to the motion of each finger. The preferences consist of sublayers reflecting primitive preferences similar to humans' conditional behaviors for given objectives, and their arrangement is adjusted by the heuristics of human grasping. The proposed method reduces the computational time significantly at the cost of global optimality, and enables the grasp posture to change between 2-finger and 3-finger grasps. The performance of the presented algorithm is evaluated via simulation studies obtaining force-closure fingertip grasps of polygonal objects. The suggested architecture is verified through experimental implementation on our developed robot hand system by solving 2- and 3-finger grasp synthesis.
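For the planar two-finger case, the force-closure condition reduces to an antipodal friction-cone test, which can be sketched directly: the grasp is force closure when the line joining the two contacts lies inside both friction cones. This is only the basic test underlying such synthesis; the paper's full algorithm with preferences and hibernation is more elaborate, and the friction coefficient mu below is an assumed value.

```python
import numpy as np

def force_closure_2finger(p1, n1, p2, n2, mu=0.5):
    """Planar antipodal test: p1, p2 are contact points, n1, n2 the
    inward unit normals at those contacts, mu the (assumed) friction
    coefficient.  True iff the contact line lies in both cones."""
    half_angle = np.arctan(mu)                  # friction-cone half-angle
    d = np.asarray(p2, float) - np.asarray(p1, float)
    d = d / np.linalg.norm(d)                   # unit line of contact
    ang1 = np.arccos(np.clip(np.dot(n1, d), -1.0, 1.0))
    ang2 = np.arccos(np.clip(np.dot(n2, -d), -1.0, 1.0))
    return bool(ang1 <= half_angle and ang2 <= half_angle)

# opposite faces of a unit square: the contact line is along both
# normals, so the grasp is force closure
ok = force_closure_2finger([0.0, 0.5], [1.0, 0.0], [1.0, 0.5], [-1.0, 0.0])

# contacts on adjacent faces: the contact line sits 45 degrees off each
# normal, outside the ~26.6 degree cone for mu = 0.5, so no force closure
bad = force_closure_2finger([0.5, 0.0], [0.0, 1.0], [1.0, 0.5], [-1.0, 0.0])
```

A synthesis loop would sweep candidate contact pairs along the polygon's edges and keep only those passing this test, which is where pruning heuristics such as the paper's preferences pay off.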


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号