Similar Documents
20 similar documents found.
1.
2.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher-level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and of using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skill refinement in the policy parameter space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.
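To make the idea concrete, the following is a minimal sketch of context-dependent reward-weighted refinement. The candidate-objective interface, the phase-dependent relevance function, and the Gaussian exploration with softmax weighting are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of context-dependent reward-weighted skill refinement.
# The relevance() function (phase -> weights over candidate objectives)
# and the rollout() interface are hypothetical stand-ins.
import numpy as np

def refine(theta, candidate_rewards, relevance, rollout, n_iters=50,
           n_samples=20, sigma=0.05, temperature=5.0):
    """theta: policy parameter vector (ndarray).
    candidate_rewards: list of functions r_k(trajectory) -> float.
    relevance: function phase -> weight vector over the candidates.
    rollout: function theta -> (trajectory, phase)."""
    for _ in range(n_iters):
        samples, weights = [], []
        for _ in range(n_samples):
            eps = sigma * np.random.randn(*theta.shape)   # exploration noise
            traj, phase = rollout(theta + eps)
            w = relevance(phase)                          # context-dependent relevance
            r = sum(wk * rk(traj) for wk, rk in zip(w, candidate_rewards))
            samples.append(eps)
            weights.append(np.exp(temperature * r))       # softmax reward weighting
        weights = np.array(weights) / np.sum(weights)
        # Move theta toward the reward-weighted mean of the perturbations
        theta = theta + sum(wi * ei for wi, ei in zip(weights, samples))
    return theta
```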

3.
Robot assistants need to interact with people in a natural way in order to be accepted into people’s day-to-day lives. We have been researching robot assistants with capabilities that include visually tracking humans in the environment, identifying the context in which humans carry out their activities, understanding spoken language (with a fixed vocabulary), participating in spoken dialogs to resolve ambiguities, and learning task procedures. In this paper, we describe a robot task learning algorithm in which the human explicitly and interactively instructs a series of steps to the robot through spoken language. The training algorithm fuses the robot’s perception of the human with the understood speech data, maps the spoken language to robotic actions, and follows the human to gather the action applicability state information. The robot represents the acquired task as a conditional procedure and engages the human in a spoken-language dialog to fill in information that the human may have omitted.
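As an illustration of the "conditional procedure" representation, here is a hypothetical Python sketch; the field names, actions, and the dialog-filled else branch are assumptions for illustration, not the paper's actual data structure.

```python
# Hypothetical sketch: a task learned from spoken instruction stored as a
# conditional procedure. The else_branch is what the spoken-language dialog
# would fill in when the human omitted a step.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str                   # robot action mapped from speech
    precondition: dict            # applicability state gathered while following
    else_branch: list = field(default_factory=list)  # filled in via dialog

procedure = [
    Step("go_to(kitchen)", {"location": "hallway"}),
    Step("pick_up(cup)", {"cup_visible": True},
         else_branch=[Step("ask_user('Where is the cup?')", {})]),
]
```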

4.
For a long time, robot assembly programming has been carried out in two environments: on-line and off-line. On-line robot programming uses the actual robot for the experiments performing a given task; off-line robot programming develops a robot program in either an autonomous system with a high-level task planner and simulation or a 2D graphical user interface linked to other system components. This paper presents a whole-hand interface for more easily performing robotic assembly tasks in the virtual environment. The interface is composed of both static hand shapes (states) and continuous hand motions (modes). Hand shapes are recognized as discrete states that trigger the control signals and commands, and hand motions are mapped to the movements of a selected instance in real-time assembly. Hand postures are also used for specifying the alignment constraints and axis mapping of the hand-part coordinates. The basic virtual-hand functions are constructed through the states and modes developing the robotic assembly program. The assembling motion of the object is guided by the user immersed in the environment along a path such that no collisions will occur. The fine motion controlling the contact and ending position/orientation is handled automatically by the system using prior knowledge of the parts and assembly reasoning. One assembly programming case using this interface is described in detail in the paper.
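The states-and-modes split can be sketched as a small dispatcher; the hand-shape names and the scene API below are hypothetical, not the paper's implementation.

```python
# Sketch of a states-and-modes whole-hand interface: discrete hand shapes
# switch modes, continuous hand motion is routed to the selected instance.
class WholeHandInterface:
    MODES = {"fist": "grab", "point": "select", "flat": "idle"}  # assumed shapes

    def __init__(self):
        self.mode = "idle"
        self.selected = None

    def on_hand_shape(self, shape):
        # Static hand shapes: discrete states triggering commands/mode switches
        self.mode = self.MODES.get(shape, self.mode)

    def on_hand_motion(self, delta, scene):
        # Continuous hand motions: mapped to the selected instance per mode
        if self.mode == "grab" and self.selected is not None:
            self.selected.move(delta)
        elif self.mode == "select":
            self.selected = scene.pick_nearest(delta)
```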

5.
As the manufacturing industry becomes more agile, the use of collaborative robots capable of safely working with humans is becoming more prevalent, while adaptable and natural interaction is a goal yet to be achieved. This work presents a cognitive architecture composed of perception and reasoning modules that allows a robot to adapt its actions while collaborating with humans in an assembly task. Human action recognition is performed using convolutional neural network models with inertial measurement unit and skeleton tracking data. The action predictions are used for task status reasoning, which predicts the time left for each action in a task, allowing the robot to plan future actions. The task status reasoning uses a recurrent neural network method developed for transferability to new actions and tasks. Updateable input parameters allowing the system to optimise for each user and task with each trial performed are also investigated. Finally, the complete system is demonstrated with the collaborative assembly of a small chair and a wooden box, along with a solo robot task of stacking objects performed when the robot would otherwise be idle. The human actions recognised are using a screwdriver, an Allen key, a hammer and hand screwing, with online accuracies between 83% and 92%. User trials demonstrate the robot deciding when to start collaborative actions in order to synchronise with the user, as well as deciding when it has time to complete an action on its solo task before a collaborative action is required.
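The time-left reasoning might be sketched as follows (assuming PyTorch): a recurrent network consumes the per-frame action probabilities from the recognizer and regresses the time remaining per action. The layer sizes and the single GRU layer are illustrative assumptions.

```python
# Sketch of a recurrent time-left estimator for task status reasoning.
import torch
import torch.nn as nn

class TimeLeftEstimator(nn.Module):
    def __init__(self, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=n_actions, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, n_actions)   # seconds left per action

    def forward(self, action_probs):
        # action_probs: (batch, time, n_actions) stream from the recognizer
        h, _ = self.gru(action_probs)
        return self.head(h)                        # (batch, time, n_actions)

# The robot can start a collaborative action once the predicted time left on
# the human's current action drops below the robot's own motion duration.
```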

6.
A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users with the ability to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, where an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, as robot–environment interactions are no longer explicitly programmed in the PbD approach, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the pose of the objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.

7.
The interaction between humans and robot teams is highly relevant in many application domains, for example in collaborative manufacturing, search and rescue, and logistics. It is well-known that humans and robots have complementary capabilities: Humans are excellent in reasoning and planning in unstructured environments, while robots are very good in performing tasks repetitively and precisely. In consequence, one of the key research questions is how to combine human and robot team decision making and task execution capabilities in order to exploit their complementary skills. From a controls perspective this question boils down to how control should be shared among them. This article surveys advances in human-robot team interaction with special attention devoted to control sharing methodologies. Additionally, aspects affecting the control sharing design, such as human behavior modeling, level of autonomy and human-machine interfaces are identified. Open problems and future research directions towards joint decision making and task execution in human-robot teams are discussed.

8.
The limited understanding of the surrounding environment still restricts the capabilities of robotic systems in real world applications. Specifically, the acquisition of knowledge about the environment typically relies only on perception, which requires intensive ad hoc training and is not sufficiently reliable in a general setting. In this paper, we aim at integrating new acquisition devices, such as tangible user interfaces, speech technologies and vision-based systems, with established AI methodologies, to present a novel and effective knowledge acquisition approach. A natural interaction paradigm is presented, where humans move within the environment with the robot and easily acquire information by selecting relevant spots, objects, or other relevant landmarks. The synergy between novel interaction technologies and semantic knowledge leverages humans’ cognitive skills to support robots in acquiring and grounding knowledge about the environment; such richer representation can be exploited in the realization of robot autonomous skills for task accomplishment.

9.
A knowledge-based framework to support task-level programming and operational control of robots is described. Our basic intention is to enhance the intelligence of a robot control system so that it may carefully coordinate the interactions among discrete, asynchronous and concurrent events under the constraints of action precedence and resource allocation. We do this by integrating both off-line and on-line planning capabilities in a single framework. The off-line phase is equipped with proper languages for describing workbenches, specifying tasks, and soliciting knowledge from the user to support the execution of robot tasks. A static planner is included in this phase to conduct static planning, which develops local plans for various specific tasks. The on-line phase is designed as a dynamic control loop for the robot system. It employs a dynamic planner to tackle any contingent situations during robot operations. It is responsible for developing proper working paths and motion plans to achieve the task goals within designated temporal and resource constraints. It is implemented in a distributed and cooperative blackboard system, which facilitates the integration of various types of knowledge. Finally, any failures from the on-line phase are fed back to the off-line phase. This forms the interaction between the off-line and on-line phases and opportunistically introduces an extra closed loop to tune the dynamic planner to the variation of the working environment over the long term.

10.
Robots are important in high-mix low-volume manufacturing because of their versatility and repeatability in performing manufacturing tasks. However, robots have not been widely used due to cumbersome programming effort and lack of operator skill. One significant factor prohibiting the widespread application of robots by small and medium enterprises (SMEs) is the high cost and necessary skill of programming and re-programming robots to perform diverse tasks. This paper discusses an Augmented Reality (AR) assisted robot programming system (ARRPS) that provides faster and more intuitive robot programming than conventional techniques. ARRPS is designed to allow users with little robot programming knowledge to program tasks for a serial robot. The system transforms the work cell of a serial industrial robot into an AR environment. With an AR user interface and a handheld pointer for interaction, users are free to move around the work cell to define 3D points and paths for the real robot to follow. Sensor data and algorithms are used for robot motion planning, collision detection and plan validation. The proposed approach enables fast and intuitive robotic path and task programming, and allows users to focus only on the definition of tasks. The implementation of this AR-assisted robot system is presented, and specific methods to enhance the performance of the users in carrying out robot programming using this system are highlighted.

11.
The increasing demand for one-of-a-kind and small-batch production requires that automation be carried out by programmable equipment, so that this equipment can be rapidly and easily adjusted and reconfigured for ever-changing production tasks. For the adjustment and reconfiguration of the production equipment to be carried out quickly, reliably and consistently with the state of the production plant at the time of task execution, the equipment tasks must be specifiable in a language far more abstract than the task-specification languages commercially available today for machine-tool and robot programming, and the corresponding compiler must be controllable by the system that sequences the operations of the assembly plant. To meet this need, a task-level robot programming language for assembly operations has been developed, with a level of abstraction corresponding to the command: Mount part A. The compiler of the language retrieves information about the placement and orientation of the parts to be assembled from a database. This information is supplied to the database by sensors mounted on the assembly plant, so that the generated program corresponds to the state of the physical plant at execution time. Additionally, making information about permitted assembly sequences available has enabled the compiler to be controlled by a system that optimizes the sequence in which a particular assembly should be carried out, respecting the current availability of the resources needed for the assembly tasks. In this paper the designed robot language, the corresponding compiler and the necessary information structures are presented.
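To illustrate the level of abstraction, a toy sketch of how a "Mount part A" command could be expanded into primitive commands from a sensor-updated pose database follows; the database schema and the primitive command set are assumptions, not the paper's language.

```python
# Hypothetical expansion of a task-level "Mount part A" command at execution
# time, using a pose database kept current by sensors on the assembly plant.
PART_DB = {
    "A": {"pick_pose": (0.40, 0.10, 0.02, 0.0),
          "mount_pose": (0.55, 0.30, 0.05, 1.57)},
}

def compile_mount(part):
    """Expand 'Mount part <part>' into primitive commands."""
    entry = PART_DB[part]                      # reflects plant state *now*
    px, py, pz, pyaw = entry["pick_pose"]
    mx, my, mz, myaw = entry["mount_pose"]
    return [
        ("move_to", px, py, pz + 0.10, pyaw),  # approach above the part
        ("move_to", px, py, pz, pyaw),
        ("close_gripper",),
        ("move_to", mx, my, mz + 0.10, myaw),  # approach the mount location
        ("move_to", mx, my, mz, myaw),
        ("open_gripper",),
    ]

print(compile_mount("A"))
```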

12.
This paper focuses on the problem of grasp stability and grasp quality analysis. An elegant way to evaluate the stability of a grasp is to model its wrench space. However, classical grasp quality measures suffer from several disadvantages, the main drawback being that they are not task related. Indeed, constructive approaches for approximating the wrench space that also include task information have rarely been considered. This work presents an effective method for task-oriented grasp quality evaluation based on a novel grasp quality measure. We address the general case of multifingered grasps with point contacts with friction. The proposed approach is based on the concept of programming by demonstration and interactive teaching, wherein an expert user provides, in a teaching phase, a set of exemplar grasps appropriate for the task. Following this phase, a representation of task-related grasps is built. During task planning and execution, a grasp can be either submitted interactively for evaluation by a non-expert user or synthesized by an automatic planning system. Grasp quality is then assessed based on the proposed measure, which takes into account grasp stability along with suitability for the task. To enable real-time evaluation of grasps, a fast algorithm for computing an approximation of the quality measure is also proposed. Finally, a local grasp optimization technique is described which can amend uncertainties arising in grasps supplied by non-expert users or assist in planning more valuable grasps in the neighborhood of candidate ones. The paper reports experiments performed in virtual reality with both an anthropomorphic virtual hand and a three-fingered robot hand. These experiments suggest the effectiveness and task relevance of the proposed grasp quality measure.
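For context, the sketch below computes the classical epsilon quality metric that wrench-space approaches build on: discretize each friction cone, take the convex hull of the contact wrenches, and measure the radius of the largest origin-centred ball inside it. This is the standard stability measure, not the paper's task-related one.

```python
# Classical epsilon grasp-quality metric for point contacts with friction.
import numpy as np
from scipy.spatial import ConvexHull

def epsilon_quality(contacts, normals, mu=0.5, m=8):
    """contacts/normals: (n, 3) arrays; mu: friction coefficient;
    m: edges used to approximate each friction cone."""
    wrenches = []
    for p, n in zip(contacts, normals):
        n = n / np.linalg.norm(n)
        # Build two tangents spanning the contact plane
        t1 = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(t1) < 1e-6:
            t1 = np.cross(n, [0.0, 1.0, 0.0])
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)
        for k in range(m):
            a = 2 * np.pi * k / m
            f = n + mu * (np.cos(a) * t1 + np.sin(a) * t2)  # cone edge force
            wrenches.append(np.hstack([f, np.cross(p, f)]))  # 6-D wrench
    hull = ConvexHull(np.array(wrenches))
    # Hull facets satisfy A x + b <= 0; origin-to-facet distance is -b
    offsets = hull.equations[:, -1]
    return -offsets.max() if (offsets < 0).all() else 0.0  # 0: no force closure
```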

13.
Robots that are capable of learning new tasks from humans need the ability to transform gathered abstract task knowledge into their own representation and dimensionality. New task knowledge that has been collected, e.g. with Programming by Demonstration approaches by observing a human, does not a priori contain any robot-specific knowledge and actions, and is defined in the workspace of the human demonstrator. This article presents a new approach for mapping abstract human-centered task knowledge to a robot execution system based on the target system's properties. To this end, the required background knowledge about the target system is examined and defined explicitly.

14.
Advanced Robotics, 2013, 27(2): 137–163
This paper focuses on dexterity and versatility in pinching a rectangular object by a pair of robot fingers based on sensory feedback. In the pinching motion of humans, it is possible to execute concurrent pinching and orientation control quickly and precisely by using only the thumb and index finger. However, it is not easy for robot fingers to perform such imposed tasks agilely and simultaneously. In the case of robotic grasping, to perform concurrently such plural tasks retards the convergence speed in the execution of the overall task. This means that in order to increase versatility by imposing additional tasks, dexterity in the execution of each task may deteriorate. In this paper it is shown that both dexterity and versatility in the execution of such imposed tasks can be enhanced remarkably, without any deterioration in dexterity in the execution of each task, by using a sensory feedback method based on the idea of role-sharing joint control which comes from observation of the functional role of each human finger joint.

15.
Complex robot tasks are usually described as high level goals, with no details on how to achieve them. However, details must be provided to generate primitive commands to control a real robot. A sensor explication concept that makes details explicit from general commands is presented. We show how the transformation from high-level goals to primitive commands can be performed at execution time and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the sensors and actuators available. Our approach is based on two premises: 1) plan execution is an information gathering process where determining what information is relevant is a great part of the process; and 2) plan execution requires that many details are made explicit. We show how our approach is used in solving the task of moving a robot to and through an unknown, and possibly narrow, doorway; where sonic range data is used to find the doorway, walls, and obstacles. We illustrate the difficulty of such a task using data from a large number of experiments we conducted with a real mobile robot. The laboratory results illustrate how the proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of plan execution.
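One way the "find the doorway" detail could be made explicit from range data is sketched below; the thresholds and the single angular sweep are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: detect a doorway as an angular run of long range readings wide
# enough for the robot to pass through.
import math

def find_doorway(ranges, angles, wall_dist=1.0, robot_width=0.5):
    """ranges/angles: aligned lists from a range-sensor sweep (metres/radians).
    Returns (start_angle, end_angle) of the first sufficiently wide gap."""
    gap = []
    for r, a in zip(ranges, angles):
        if r > wall_dist:            # reading passes through the opening
            gap.append(a)
        else:
            gap = []                 # wall or obstacle interrupts the gap
        if len(gap) >= 2:
            # chord width of the gap at the wall distance
            width = 2 * wall_dist * math.sin((gap[-1] - gap[0]) / 2)
            if width >= robot_width:
                return gap[0], gap[-1]
    return None
```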

16.
A visuo-haptic augmented reality system is presented for object manipulation and task learning from human demonstration. The proposed system consists of a desktop augmented reality setup where users operate a haptic device for object interaction. Users of the haptic device are not co-located with the environment where real objects are present. A three degrees of freedom haptic device, providing force feedback, is adopted for object interaction by pushing, selection, translation and rotation. The system also supports physics-based animation of rigid bodies. Virtual objects are simulated in a physically plausible manner and seem to coexist with real objects in the augmented reality space. Algorithms for calibration, object recognition, registration and haptic rendering have been developed. Automatic model-based object recognition and registration are performed from 3D range data acquired by a moving laser scanner mounted on a robot arm. Several experiments have been performed to evaluate the augmented reality system in both single-user and collaborative tasks. Moreover, the potential of the system for programming robot manipulation tasks by demonstration is investigated. Experiments show that a precedence graph, encoding the sequential structure of the task, can be successfully extracted from multiple user demonstrations and that the learned task can be executed by a robot system.
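A minimal sketch of extracting a precedence graph from multiple demonstrated action sequences: an edge a → b is kept only when a precedes b in every demonstration containing both. It assumes each action occurs at most once per demonstration; the transitive reduction such a pipeline would typically apply afterwards is omitted for brevity.

```python
# Sketch: precedence-graph extraction from demonstrated action sequences.
from itertools import combinations

def precedence_graph(demonstrations):
    """demonstrations: list of action sequences (each action at most once)."""
    actions = set(a for demo in demonstrations for a in demo)
    edges = set()
    for a, b in combinations(actions, 2):
        co_occurring = [d for d in demonstrations if a in d and b in d]
        if co_occurring and all(d.index(a) < d.index(b) for d in co_occurring):
            edges.add((a, b))
        elif co_occurring and all(d.index(b) < d.index(a) for d in co_occurring):
            edges.add((b, a))
    return edges

demos = [["pick_leg", "pick_seat", "screw"],
         ["pick_seat", "pick_leg", "screw"]]
print(precedence_graph(demos))   # both picks precede 'screw'; picks unordered
```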

17.
We present a novel method for a robot to interactively learn, while executing, a joint human–robot task. We consider collaborative tasks realized by a team of a human operator and a robot helper that adapts to the human’s task execution preferences. Different human operators can have different abilities, experiences, and personal preferences so that a particular allocation of activities in the team is preferred over another. Our main goal is to have the robot learn the task and the preferences of the user to provide a more efficient and acceptable joint task execution. We cast concurrent multi-agent collaboration as a semi-Markov decision process and show how to model the team behavior and learn the expected robot behavior. We further propose an interactive learning framework and we evaluate it both in simulation and on a real robotic setup to show the system can effectively learn and adapt to human expectations.
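Casting collaboration as a semi-Markov decision process suggests an update of the following shape, where variable action durations tau enter the discount. The states, actions, and the preference-encoding reward here are illustrative assumptions, not the paper's model.

```python
# Sketch of semi-Markov Q-learning: actions take variable durations tau,
# so the discount factor is gamma ** tau.
import random
from collections import defaultdict

Q = defaultdict(float)

def smdp_q_update(s, a, r, s_next, tau, actions, alpha=0.1, gamma=0.95):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + (gamma ** tau) * best_next - Q[(s, a)])

def choose_action(s, actions, eps=0.1):
    if random.random() < eps:                        # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])     # exploit learned preference
```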

18.
On learning, representing, and generalizing a task in a humanoid robot.
We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. A probability-based estimation of the relevance is suggested by first projecting the motion data onto a generic latent space using principal component analysis. The resulting signals are encoded using a mixture of Gaussian/Bernoulli distributions (Gaussian mixture model/Bernoulli mixture model). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian mixture regression. Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts.
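Gaussian mixture regression, the generalization step named above, can be sketched compactly: fit a GMM over joint (time, pose) data, then condition each component on time. scikit-learn's GaussianMixture is assumed for the fitting step; this is the standard GMR recipe, not the paper's exact code.

```python
# Sketch of Gaussian mixture regression over trajectories encoded as [t, x].
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr(gmm, t):
    """Condition a GMM over [t, x] on scalar time t; return E[x | t]."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component at time t (up to a constant factor)
    h = np.array([w * np.exp(-0.5 * (t - m[0])**2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    x = np.zeros(means.shape[1] - 1)
    for hk, m, c in zip(h, means, covs):
        x += hk * (m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]))  # conditional mean
    return x

# Fit on demonstrations stacked as rows [t, x1, x2, ...]:
# gmm = GaussianMixture(n_components=5, covariance_type='full').fit(data)
# trajectory = np.array([gmr(gmm, t) for t in np.linspace(0, 1, 100)])
```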

19.
Current rumor-detection methods for microblogs (Weibo) are based mainly on the text of the post, supplemented by features such as user comments and propagation patterns. However, existing methods do not consider that the quality of user comments directly affects detection performance: low-quality comments can introduce useless or even misleading features, harming rumor detection further. To address this problem, and exploiting the correlation between user comments and rumor detection, this paper is the first to propose a rumor-detection method that accounts for comment validity, based on multi-task joint learning. Rumor detection is taken as the main task and user-comment relevance detection as an auxiliary task; gating and attention mechanisms are then used to filter and select effective user-comment features. Experiments on a self-built dataset of 30,000 epidemic-related Weibo rumors show that filtering user comments not only improves rumor-detection performance but also allows the quality of user comments to be assessed.
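A minimal sketch (assuming PyTorch) of the gated multi-task idea: a main rumor head and an auxiliary comment-relevance head share comment features, with a sigmoid gate and attention down-weighting low-quality comments. The encoders are reduced to single linear layers here; the paper's architecture is not specified beyond gating and attention.

```python
# Sketch of multi-task rumor detection with gated, attended comment pooling.
import torch
import torch.nn as nn

class MultiTaskRumor(nn.Module):
    def __init__(self, post_dim, comment_dim, hidden=128):
        super().__init__()
        self.post_enc = nn.Linear(post_dim, hidden)
        self.comment_enc = nn.Linear(comment_dim, hidden)
        self.gate = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())
        self.rumor_head = nn.Linear(2 * hidden, 2)      # main task
        self.relevance_head = nn.Linear(hidden, 2)      # auxiliary task

    def forward(self, post, comments):
        p = torch.tanh(self.post_enc(post))             # (B, H)
        c = torch.tanh(self.comment_enc(comments))      # (B, N, H)
        g = self.gate(c)                                # (B, N, 1) comment validity
        attn = torch.softmax(c @ p.unsqueeze(-1), dim=1)  # (B, N, 1)
        c_pooled = (g * attn * c).sum(dim=1)            # gated, attended pooling
        rumor = self.rumor_head(torch.cat([p, c_pooled], dim=-1))
        relevance = self.relevance_head(c).mean(dim=1)  # per-comment, pooled here
        return rumor, relevance
```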

20.
Advanced Robotics, 2013, 27(5): 499–517
We are developing a helper robot that carries out tasks ordered by users through speech. The robot needs a vision system to recognize the objects mentioned in the orders. However, conventional vision systems cannot recognize objects in complex scenes: they may find many objects and cannot determine which is the target. This paper proposes a method of using a conversation with the user to solve this problem. The robot asks a question that the user can easily answer and whose answer can efficiently reduce the number of candidate objects. It considers characteristics of the features used for object identification, such as how easily humans can specify them in words, to generate a user-friendly and efficient sequence of questions. Experimental results show that the robot can detect target objects by asking the questions generated by the method.
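The question-selection idea can be sketched as picking the attribute whose value distribution over the remaining candidates has maximum entropy, i.e. whose answer prunes the most; the ease-of-answering weighting the paper also considers is omitted here, and the attributes are illustrative.

```python
# Sketch: choose the most informative attribute question via entropy.
import math
from collections import Counter

def best_question(candidates, attributes):
    """candidates: list of dicts, e.g. {'color': 'red', 'size': 'small'}.
    Returns the attribute whose answer splits the candidates the most."""
    def entropy(attr):
        counts = Counter(obj[attr] for obj in candidates)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())
    return max(attributes, key=entropy)

objs = [{"color": "red", "size": "small"}, {"color": "red", "size": "large"},
        {"color": "blue", "size": "small"}, {"color": "green", "size": "large"}]
print(best_question(objs, ["color", "size"]))  # -> 'color' (splits 4 into 3)
```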
