Similar documents
20 similar documents found.
1.
A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users with the ability to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, where an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, since in the PbD approach robot-environment interactions are no longer explicitly programmed, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the pose of the objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.

2.
Dexterous manipulation is an important function for working robots. Manipulator tasks such as assembly and disassembly can generally be divided into several motion primitives. We call such motion primitives “skills,” and explain how most manipulator tasks can be composed of sequences of these skills. We are currently planning to construct a maintenance robot for household electrical appliances. We considered establishing a hierarchy of the manipulation tasks of this robot since the maintenance of such appliances has become more complex than ever before. In addition, as errors seem likely to increase in complex tasks, it is important to implement an effective error recovery technology. This article presents our proposal for a new type of error recovery that uses the concepts of task stratification and error classification.

3.
A visuo-haptic augmented reality system is presented for object manipulation and task learning from human demonstration. The proposed system consists of a desktop augmented reality setup where users operate a haptic device for object interaction. Users of the haptic device are not co-located with the environment where real objects are present. A three-degree-of-freedom haptic device providing force feedback is adopted for object interaction by pushing, selection, translation and rotation. The system also supports physics-based animation of rigid bodies. Virtual objects are simulated in a physically plausible manner and seem to coexist with real objects in the augmented reality space. Algorithms for calibration, object recognition, registration and haptic rendering have been developed. Automatic model-based object recognition and registration are performed from 3D range data acquired by a moving laser scanner mounted on a robot arm. Several experiments have been performed to evaluate the augmented reality system in both single-user and collaborative tasks. Moreover, the potential of the system for programming robot manipulation tasks by demonstration is investigated. Experiments show that a precedence graph, encoding the sequential structure of the task, can be successfully extracted from multiple user demonstrations and that the learned task can be executed by a robot system.
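To make the precedence-graph idea concrete, here is a minimal sketch (not the authors' implementation) that derives ordering constraints from several demonstrated action sequences; the demonstration format and the pairwise-ordering rule are assumptions made here for illustration.

```python
from itertools import combinations

def precedence_graph(demonstrations):
    """Build a precedence graph from demonstrated action sequences.

    An edge (a, b) means action a was observed before action b in every
    demonstration containing both, i.e. a is a plausible precondition of b.
    """
    actions = sorted({a for demo in demonstrations for a in demo})
    edges = set()
    for a, b in combinations(actions, 2):
        before, after = 0, 0
        for demo in demonstrations:
            if a in demo and b in demo:
                if demo.index(a) < demo.index(b):
                    before += 1
                else:
                    after += 1
        if before and not after:
            edges.add((a, b))
        elif after and not before:
            edges.add((b, a))
        # if both orders were observed, the two actions remain unordered
    return edges

# Three demonstrations of the same task, differing only in the order
# of the unconstrained steps.
demos = [
    ["pick_base", "place_base", "insert_peg", "screw_cap"],
    ["pick_base", "place_base", "screw_cap", "insert_peg"],
    ["pick_base", "place_base", "insert_peg", "screw_cap"],
]
print(precedence_graph(demos))
```

Steps observed in both orders stay unordered, which is what lets the learned graph admit alternative but valid execution sequences at run time.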

4.
The development of flexible assembly is closely related to the introduction of robots in assembly automation. It has long been recognized that automatic parts assembly by robots is one of the most delicate and most difficult tasks in industrial robotics. This task involves two control problems: trajectory planning for the whole automatic assembly process, and reduction of the reaction forces appearing between the parts being assembled. This paper addresses both aspects of this control task. The strategic control level for the manipulation of robots and various approaches to trajectory planning tasks in assembly processes are discussed. A new approach to the determination of the strategic control level, including various models (geometric, kinematic and dynamic) for manipulation robots, is briefly described. The last and most delicate phase of the assembly process is parts mating, which is rather like inserting a peg in a hole. In order to reduce the reaction forces appearing between the parts being assembled, force feedback control is applied. The experimental results of the industrial robot insertion process with force feedback are also presented in the paper.
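A hedged sketch of the second control problem, reducing reaction forces during parts mating: the admittance-style loop below is only an illustration of force-feedback insertion, and the sensor values, gains and function names are hypothetical rather than taken from the paper.

```python
import numpy as np

def insertion_step(z_step, lateral_force, compliance=0.002, force_limit=5.0):
    """One control cycle of a compliant peg-in-hole insertion.

    The peg advances along -z; measured lateral reaction forces (fx, fy)
    are turned into small corrective displacements so that contact forces
    stay below force_limit (admittance-style compliance).
    """
    fx, fy = lateral_force
    dx = -compliance * fx            # move away from the contact force
    dy = -compliance * fy
    f_mag = np.hypot(fx, fy)
    dz = -z_step if f_mag < force_limit else 0.0   # pause insertion if jammed
    return np.array([dx, dy, dz])

# Simulated sensor reading: the peg is rubbing against the +x side of the hole.
print(insertion_step(z_step=0.001, lateral_force=(3.0, -0.5)))
```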

5.
The control of soft continuum robots is challenging owing to their mechanical elasticity and complex dynamics. An additional challenge emerges when we want to apply Learning from Demonstration (LfD) and need to collect the necessary demonstrations despite the inherent control difficulty. In this paper, we provide a multi-level architecture, from low-level control to high-level motion planning, for the Bionic Handling Assistant (BHA) robot. We deploy learning across all levels to enable the application of LfD for a real-world manipulation task. To record the demonstrations, an actively compliant controller is used. A variant of dynamical systems able to encode both position and orientation then maps the recorded 6D end-effector pose data into a virtual attractor space. A recent LfD method encodes the pose attractors within the same model for point-to-point motion planning. In the proposed architecture, hybrid models that combine an analytical approach and machine learning techniques are used to overcome the inherent slow dynamics and model imprecision of the BHA. The performance and generalization capability of the proposed multi-level approach are evaluated in simulation and with the real BHA robot in an apple-picking scenario, which requires accurate control of the pose of the robot's end-effector.
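The attractor-space idea can be illustrated with a critically damped point attractor. This is an assumption-laden simplification of the dynamical-systems variant used for the BHA: it covers position only, and the orientation encoding described in the abstract is omitted.

```python
import numpy as np

def simulate_attractor(x0, goal, k=25.0, steps=300, dt=0.01):
    """Critically damped point attractor: x converges to `goal` without overshoot."""
    goal = np.asarray(goal, float)
    x = np.array(x0, float)
    v = np.zeros_like(x)
    d = 2.0 * np.sqrt(k)             # critical damping
    for _ in range(steps):
        a = k * (goal - x) - d * v   # spring-damper acceleration toward the attractor
        v += a * dt
        x += v * dt
    return x

print(simulate_attractor(x0=[0.0, 0.0, 0.0], goal=[0.3, -0.1, 0.5]))
```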

6.
Robot task teaching on a real work cell is expensive and sometimes risky. This cost and risk can be avoided by using virtual reality technology. Using the simulated environment in virtual reality (VR), the operator can practise, explore and preview the operations for possible problems that might occur during implementation. It is therefore of practical importance to build a virtual robot work cell in VR that can facilitate the study of the performance of robotic tasks such as robotic assembly. This paper describes our work in incorporating physical behaviours of virtual objects into VR for robot task teaching. To facilitate the task teaching, we developed visual and audio cues which help visualise the dynamic interactions between virtual objects. Dynamic sensing capability is incorporated in the simulated environment, and a simplified force sensor is modelled and simulated. The physical behaviours of the virtual objects are simulated using a physics-based approach. A virtual robot work cell is built incorporating the developed features, and an example of task teaching is given. The implementation includes view tracking using a virtual camera, visual and audio rendering, and the user interface developed in the VR. The current implementation was carried out on a PC-based VR platform, with the programs developed using Watcom C++.

7.
8.
Two-handed assembly with immersive task planning in virtual reality
Assembly modelling is the process of capturing entities and activity information related to assembling and assembly. Currently, most CAD systems have been developed to ease the design of individual components, but are limited in their support for assembly design and planning capability, which are crucial for reducing the cost and processing time in complex design, constraint analysis and assembly task planning. This paper presents a framework for a two-handed virtual assembly (VA) planner for assembly tasks, which coordinates two hands jointly for feature-based manipulation, assembly analysis and constraint-based task planning. Feature-based manipulation highlights the important assembly features (e.g. dynamic reference frames, moving arrows, mating features) to guide users so that assembly can be carried out easily, efficiently and fluidly. The users can freely navigate and move the mating pair along a collision-free path. The free motion of two-handed input in assembly is further restricted to the allowable motion guided by the constraints recognised on-line. The allowable motion in assembly is planned by logic steps derived from the analysis of constraints and their translation as the assembly progresses. No preprocessing or predefined assembly sequence is necessary, since the planning is produced in real time from the two-handed interactions. Mating features and constraints in databases are automatically updated after each assembly to simplify the planning process. The two-handed task planner has been developed and tested on several assembly examples including a drill (12 parts) and a robot (17 parts). The system can be generally applied for the interactive task planning of assembly-type applications.

9.
The development of a realistic virtual assembly environment is challenging because of the complexity of the physical processes and the limitations of available VR technology. Many research activities in this domain have primarily focused on particular aspects of the assembly task, such as the feasibility of assembly operations in terms of interference between the manipulated parts. The virtual assembly environment reported in this research focuses on mechanical part assembly. The approach presented addresses the problem of part-to-part contacts during the mating phase of assembly tasks. The system described calculates contact force sensations by making their intensity dependent on the depth of penetration. However, the penetration is not visible to the user, who sees a separate model that does not intersect the mating part model. The two 3D models of the part, the off-screen rendered model and the on-screen rendered model, are connected by a spring-damper arrangement. The force calculated is felt by the operator through the haptic interface when parts come into contact during the mating phase of the assembly task. An evaluation study investigating the effect of contact force sensation on user performance during part-to-part interaction was conducted. The results showed a statistically significant effect of contact force sensation on user performance in terms of task completion time. The subjective evaluation, based on feedback from users, confirmed that contact force sensation is a useful cue for the operator to find the relative positions of components in the final assembly state.
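A minimal sketch of the penetration-dependent force computation described above; the gains and the linear spring-damper law are assumptions for illustration, not the parameters of the reported system.

```python
def contact_force(pen_depth, pen_velocity, k=800.0, d=5.0):
    """Spring-damper contact force model.

    The on-screen model is kept outside the mating part while the device
    position (off-screen model) may penetrate it; the force fed back to the
    operator grows with the penetration depth and damps its rate of change.
    """
    if pen_depth <= 0.0:
        return 0.0
    return k * pen_depth + d * pen_velocity

# 2 mm of penetration, still moving 1 cm/s deeper into the mating part.
print(contact_force(pen_depth=0.002, pen_velocity=0.01))
```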

10.
《Advanced Robotics》2013,27(8):835-858
Dexterous manipulation plays an important role in working robots. Manipulator tasks such as assembly and disassembly can generally be divided into several motion primitives. We call these 'skills' and explain how most manipulator tasks can be composed of skill sequences. Skills are also used to compensate for errors both in the geometric model and in manipulator motions. There are dispensable data in the shapes, positions and orientations of objects when achieving skill motions in a task. Therefore, we can simplify geometric models by considering the dispensable data in a skill motion. We call such robust and simplified models 'false models'. This paper describes our definition of false models used in planning and visual sensing, and shows the effectiveness of our method using examples of tasks involving the manipulation of mechanical and electronic parts. Furthermore, we show the application of false models to objects of indefinite sizes and shapes using examples of the same tasks.

11.
Assembly in Virtual Reality (VR) enables users to fit virtual parts into existing 3D models immersively. However, users cannot physically feel haptic feedback when connecting the parts with the virtual model. This work presents a robot-enabled tangible interface that dynamically moves a physical structure with a robotic arm to provide physical support for a handheld proxy in VR. This enables the system to provide force feedback during virtual assembly. The cooperation between the physical support and the handheld proxy produces realistic physical force feedback, providing a tangible experience for various virtual parts in virtual assembly scenarios. We developed a prototype system that allowed the operator to place a virtual part onto other models in VR by placing the proxy onto the matched structure attached to a robotic arm. We conducted a user evaluation to explore user performance and system usability in a virtual assembly task. The results indicated that the robot-enabled tangible support increased the task completion time but significantly improved the system usability and sense of presence with a more realistic haptic experience.

12.
This paper presents a novel constraint-based 3D manipulation approach to interactive constraint-based solid modelling. This approach employs a constraint recognition process to automatically recognise assembly relationships and geometric constraints between entities from 3D manipulation. A technique referred to as allowable motion is used to achieve accurate 3D positioning of a solid model by automatically constraining its 3D manipulation without menu interaction. A set of virtual design tools, which can be used to construct constraint-based solid models within a virtual environment, is also supported. These tools have been implemented as functional 3D objects associated with several pre-defined modelling functions to simulate physical tools such as a drilling tool and a T-square. They can be directly manipulated by the user and precisely positioned relative to other solid models through the constraint-based 3D manipulation approach. Their modelling functions can be automatically triggered, depending upon their associated constraints and the user's manipulation manner. A prototype system has been implemented to demonstrate the feasibility of these techniques for model construction and assembly operations.
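The allowable-motion idea can be sketched as projecting a free-hand displacement onto the subspace a recognised constraint still permits; the constraint encoding below is an assumption made for illustration, not the paper's data structures.

```python
import numpy as np

def project_to_allowable_motion(delta, constraint):
    """Project a free-hand 3D displacement onto the motion a constraint allows.

    constraint = ("axis", direction)  -> sliding along a line (e.g. peg in hole)
    constraint = ("plane", normal)    -> sliding on a face (planar contact)
    """
    kind, vec = constraint
    v = np.asarray(vec, float)
    v = v / np.linalg.norm(v)
    delta = np.asarray(delta, float)
    if kind == "axis":
        return np.dot(delta, v) * v          # keep only the along-axis component
    if kind == "plane":
        return delta - np.dot(delta, v) * v  # remove the along-normal component
    return delta                             # unconstrained

hand_motion = [0.02, 0.01, -0.03]
print(project_to_allowable_motion(hand_motion, ("axis", [0, 0, 1])))
print(project_to_allowable_motion(hand_motion, ("plane", [0, 0, 1])))
```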

13.
Humans excel in manipulation tasks, a basic skill for our survival and a key feature in our man-made world of artefacts and devices. In this work, we study how humans manipulate simple daily objects, and construct a probabilistic representation model of the tasks and objects useful for autonomous grasping and manipulation by robotic hands. Human demonstrations of predefined object manipulation tasks are recorded from both the human hand and object points of view. The multimodal data acquisition system records human gaze, the 6D pose of the hand and fingers, finger flexure, tactile forces distributed on the inside of the hand, colour images and stereo depth maps, as well as the object 6D pose and object tactile forces using instrumented objects. From the acquired data, relevant features are detected concerning motion patterns, tactile forces and hand-object states. This enables modelling a class of tasks from sets of repeated demonstrations of the same task, so that a generalised probabilistic representation is derived to be used for task planning in artificial systems. An object-centred probabilistic volumetric model is proposed to fuse the multimodal data and map contact regions, gaze, and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadrics, and overlaying the contact points used, taking into account the task context. Results show that the features extracted are sufficient to distinguish key patterns that characterise each stage of the manipulation tasks, ranging from simple object displacement, where the same grasp is employed during manipulation (homogeneous manipulation), to more complex interactions such as object reorientation, fine positioning, and sequential in-hand rotation (dexterous manipulation). The framework presented retains the relevant data from human demonstrations, concerning both the manipulation and object characteristics, to be used by future grasp planning in artificial systems performing autonomous grasping.
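As an illustration of the object-centred volumetric idea, the sketch below accumulates demonstrated contact points into a voxel grid of contact likelihood. The grid resolution, bounds and synthetic data are assumptions, and the superquadric refinement step described in the abstract is not shown.

```python
import numpy as np

def contact_probability_grid(contact_points, bounds, resolution=0.01):
    """Object-centred volumetric map of contact likelihood.

    contact_points: (N, 3) contact locations in the object frame, pooled over
    repeated demonstrations of the same task. Returns a voxel grid whose cells
    hold the fraction of all contacts they received.
    """
    pts = np.asarray(contact_points, float)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    shape = np.ceil((hi - lo) / resolution).astype(int)
    grid = np.zeros(shape)
    idx = np.floor((pts - lo) / resolution).astype(int)
    idx = np.clip(idx, 0, shape - 1)
    for i in idx:
        grid[tuple(i)] += 1
    return grid / max(len(pts), 1)

# Synthetic contacts clustered around one region of a cup-sized object.
rng = np.random.default_rng(0)
contacts = rng.normal(loc=[0.04, 0.0, 0.05], scale=0.005, size=(200, 3))
grid = contact_probability_grid(contacts, bounds=([0, -0.05, 0], [0.08, 0.05, 0.1]))
print(grid.shape, grid.max())
```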

14.
Current computer-aided assembly systems provide engineers with a variety of spatial snapping and alignment techniques for interactively defining the positions and attachments of components. With the advent of haptics and its integration into virtual assembly systems, users now have the potential advantage of tactile information. This paper reports research that aims to quantify how the provision of haptic feedback in an assembly system affects user performance. To investigate human–computer interaction processes in assembly modeling, performance of a peg-in-hole manipulation was studied to determine the extent to which haptics and stereovision impact task completion time. The results support two important conclusions: first, it is apparent that small (i.e. visually insignificant) assembly features (e.g. chamfers) affect overall task completion times only when haptic feedback is provided; and second, that the difference is comparable to the values reported for equivalent real-world peg-in-hole assembly tasks.

15.
Trajectory learning is a fundamental component in a robot Programming by Demonstration (PbD) system, where often the very purpose of the demonstration is to teach complex manipulation patterns. However, human demonstrations are inevitably noisy and inconsistent. This paper highlights the trajectory learning component of a PbD system for manipulation tasks, encompassing the ability to cluster, select, and approximate human-demonstrated trajectories. The proposed technique provides some advantages with respect to alternative approaches and is suitable for learning from both individual and multiple user demonstrations.
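A hedged sketch of a cluster/select/approximate pipeline: demonstrations are resampled to a common length, inconsistent ones are discarded by a simple distance test, and the remainder is averaged. The distance measure and outlier rule here are assumptions, not the paper's technique.

```python
import numpy as np

def resample(traj, n=50):
    """Resample a (T, D) trajectory to n points by linear interpolation."""
    traj = np.asarray(traj, float)
    s = np.linspace(0, 1, len(traj))
    s_new = np.linspace(0, 1, n)
    return np.stack([np.interp(s_new, s, traj[:, d]) for d in range(traj.shape[1])], axis=1)

def select_and_average(demos, n=50, outlier_factor=1.5):
    """Discard inconsistent demonstrations and approximate the rest.

    Demonstrations whose mean distance to the others exceeds
    outlier_factor * median distance are treated as noisy outliers;
    the remaining ones are averaged into a single reference trajectory.
    """
    R = np.stack([resample(d, n) for d in demos])          # (K, n, D)
    dists = np.array([[np.linalg.norm(a - b) for b in R] for a in R])
    mean_d = dists.sum(axis=1) / (len(R) - 1)
    keep = mean_d <= outlier_factor * np.median(mean_d)
    return R[keep].mean(axis=0), keep

demos = [np.column_stack([np.linspace(0, 1, m), np.sin(np.linspace(0, 3, m))])
         for m in (40, 55, 60)]
demos.append(demos[0] + 0.5)          # an inconsistent demonstration
ref, kept = select_and_average(demos)
print(kept, ref.shape)
```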

16.
17.
Automatic 3D animation generation techniques are becoming increasingly popular in different areas related to computer graphics, such as video games and animated movies. They help even non-professionals automate the filmmaking process with little or no intervention from animators and computer graphics programmers. Based on specified cinematographic principles and filming rules, they plan the sequence of virtual cameras that best renders a 3D scene. In this paper, we present an approach for automatic movie generation using linear temporal logic to express these filming and cinematography rules. We consider the filming of a 3D scene as a sequence of shots satisfying given filming rules, conveying constraints on the desirable configuration (position, orientation, and zoom) of virtual cameras. The selection of camera configurations at different points in time is understood as a camera plan, which is computed using a temporal-logic based planning system (TLPlan) to obtain a 3D movie. The camera planner is used within an automated planning application for generating 3D task demonstrations involving a teleoperated robot arm on the International Space Station (ISS). A typical task demonstration involves moving the robot arm from one configuration to another. The main challenge is to automatically plan the configurations of virtual cameras to film the arm in a manner that conveys the best awareness of the robot trajectory to the user. The robot trajectory is generated using a path planner. The camera planner is then invoked to find a sequence of configurations of virtual cameras to film the trajectory.

18.
高钦和  邓刚锋 《计算机应用》2012,32(11):3232-3239
Existing assembly modelling methods suffer from large information redundancy and complex modelling procedures when describing the disassembly process of assemblies in virtual maintenance training. Building on a study of existing assembly modelling methods and an analysis of the differences between assembly and disassembly processes, a modelling method oriented towards the disassembly process in virtual maintenance is proposed. The method describes the assembly model with matrices and the part disassembly process with matrix operations, which simplifies the modelling process and reduces the amount of model data, providing an approach for the rapid construction of virtual maintenance training systems.
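One possible reading of the matrix-based disassembly description is sketched below: a blocking matrix encodes which parts must be removed first, and removing a part clears the constraints it imposed. The matrix formulation and part names are illustrative assumptions and may differ from the paper's.

```python
import numpy as np

# Blocking matrix for a 4-part assembly: B[i, j] = 1 means part j must be
# removed before part i can be taken out.
parts = ["cover", "gasket", "pump", "housing"]
B = np.array([
    [0, 0, 0, 0],   # cover: nothing blocks it
    [1, 0, 0, 0],   # gasket: blocked by cover
    [1, 1, 0, 0],   # pump: blocked by cover and gasket
    [1, 1, 1, 0],   # housing: blocked by everything above
])

removed = []
while len(removed) < len(parts):
    for i, name in enumerate(parts):
        if name not in removed and B[i].sum() == 0:
            removed.append(name)
            B[:, i] = 0          # removing the part clears the constraints it imposed
            break
    else:
        break                    # remaining parts block each other; no feasible order
print(removed)                   # a feasible disassembly sequence
```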

19.
Physical interaction requires robots to accurately follow kinematic trajectories while modulating the interaction forces to accomplish tasks and remain safe for the environment. However, current approaches rely on accurate physical models or iterative learning procedures. We present a versatile approach for physical interaction tasks, based on Movement Primitives (MPs), that can learn physical interaction tasks solely from demonstrations, without explicitly modeling the robot or the environment. We base our approach on Probabilistic Movement Primitives (ProMPs), which utilize the variance of the demonstrations to provide better generalization of the encoded skill, to combine skills, and to derive a controller that exactly follows the encoded trajectory distribution. However, the ProMP controller requires the system dynamics to be known. We present a reformulation of the ProMPs that allows accurate reproduction of the skill without modeling the system dynamics and, further, we extend our approach to incorporate external sensors such as force/torque sensors. Our approach learns physical interaction tasks solely from demonstrations and online adapts the movement to force–torque sensor input. We derive a variable-stiffness controller in closed form that reproduces the trajectory distribution and the interaction forces present in the demonstrations. We evaluate our approach in simulated and real-robot tasks.
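A minimal one-dimensional ProMP sketch, showing how a weight distribution fitted to demonstrations yields a trajectory distribution (mean and per-step variance). The controller derivation and the force-torque coupling described in the abstract are not reproduced here, and the basis settings are assumptions.

```python
import numpy as np

def rbf_features(ts, n_basis=15, width=0.02):
    """Normalised Gaussian basis functions over phase ts in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(ts[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def learn_promp(demos, n_basis=15, ridge=1e-6):
    """Fit the weight distribution of a 1-D ProMP from demonstrations.

    Each demo is a 1-D array sampled over the same phase; the distribution
    over basis weights w ~ N(mu_w, Sigma_w) captures both the mean motion
    and the demonstrators' variability.
    """
    W = []
    for y in demos:
        ts = np.linspace(0, 1, len(y))
        Phi = rbf_features(ts, n_basis)
        w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_basis), Phi.T @ y)
        W.append(w)
    W = np.stack(W)
    return W.mean(axis=0), np.cov(W.T)

# Noisy demonstrations of the same reaching motion.
rng = np.random.default_rng(1)
ts = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * ts) + 0.02 * rng.standard_normal(100) for _ in range(8)]
mu_w, Sigma_w = learn_promp(demos)
Phi = rbf_features(ts)
mean_traj = Phi @ mu_w
std_traj = np.sqrt(np.einsum("tb,bc,tc->t", Phi, Sigma_w, Phi))
print(mean_traj.shape, std_traj.max())
```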

20.
This paper divides human grasping of a virtual object into two phases and simulates the haptic feedback of each phase separately, giving the user a realistic sense of operation. The motion of a part is then constrained according to the constraints it is subject to during assembly; in addition, a method that aligns the motion axis of the virtual tool with the axis of the fastener is proposed to constrain the motion of the virtual tool, thereby achieving a natural and intuitive means of interaction.
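The axis-alignment constraint can be sketched as computing the rotation that brings the tool axis onto the fastener axis (Rodrigues' formula); once aligned, the remaining free motions are translation along and rotation about that shared axis. Function names and the handling of degenerate cases are assumptions made here.

```python
import numpy as np

def align_tool_to_fastener(tool_axis, fastener_axis):
    """Rotation matrix that aligns the virtual tool's axis with the fastener axis."""
    a = np.asarray(tool_axis, float); a /= np.linalg.norm(a)
    b = np.asarray(fastener_axis, float); b /= np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.allclose(v, 0):
        if c > 0:
            return np.eye(3)                 # already aligned
        # antiparallel: rotate 180 degrees about any axis perpendicular to a
        p = np.array([1.0, 0, 0]) if abs(a[0]) < 0.9 else np.array([0, 1.0, 0])
        p = p - np.dot(p, a) * a
        p /= np.linalg.norm(p)
        return 2 * np.outer(p, p) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / np.dot(v, v))   # Rodrigues' formula

R = align_tool_to_fastener(tool_axis=[1, 0, 0], fastener_axis=[0, 0, 1])
print(np.round(R @ np.array([1.0, 0, 0]), 3))   # tool axis now points along +z
```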
