Similar Documents
 A total of 20 similar documents were found (search time: 578 ms).
1.
Augmented reality has become a research focus in human-computer interaction in recent years. Adding haptic perception to an augmented reality environment allows users to both see and feel virtual objects within a real scene. To enable more natural interaction with virtual objects in augmented reality, a 3D registration method fusing vision and haptics is proposed. The 3D registration matrix is obtained with image-based vision techniques; the transformation between the haptic space and the image space is then solved through spatial transformation relations; combining both with their relations to the camera space yields an augmented reality interaction scene with fused visual and haptic feedback. To validate the method, a robot-assembly application based on visuo-haptic augmented reality was developed. Users can touch and move robot parts in the real environment and feel a feedback force while touching them, making the interaction more realistic.
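The transform chaining described in this abstract (haptic space to image space via the shared camera space) can be sketched with 4x4 homogeneous matrices. The matrix values and names below are illustrative placeholders, not data from the paper:

```python
import numpy as np

def compose(T_ab, T_bc):
    """Chain two 4x4 homogeneous transforms: a->b followed by b->c gives a->c."""
    return T_bc @ T_ab

# Hypothetical example transforms: haptic space -> camera space,
# and camera space -> image/marker space (values are illustrative only).
T_haptic_to_camera = np.array([
    [1.0, 0.0, 0.0, 0.10],
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, 0.25],
    [0.0, 0.0, 0.0, 1.00],
])
T_camera_to_image = np.array([
    [0.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.5],
    [0.0,  0.0, 0.0, 1.0],
])

# Haptic -> image transform, obtained by chaining through the camera space.
T_haptic_to_image = compose(T_haptic_to_camera, T_camera_to_image)
```

Once this transform is known, a point touched in the haptic workspace can be mapped directly into the registered image space of the augmented scene.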

2.
A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, where an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and to the object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, since robot-environment interactions are no longer explicitly programmed in the PbD approach, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the pose of the objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.

3.
4.
Grasping and manipulating objects with robotic hands depends largely on the features of the object being handled. In particular, features such as softness and deformability are crucial to take into account during manipulation tasks: the finger positions and the forces applied by the robot hand when manipulating an object must be adapted to the deformation they cause. For unknown objects, a prior recognition stage is usually needed to obtain the object's features, and the manipulation strategy must be adapted according to that recognition stage. To obtain precise control in a manipulation task, a complex object model is usually built, for example using the Finite Element Method. However, such models require a complete discretization of the object and are too time-consuming for the execution of manipulation tasks. For that reason, this paper presents a new control strategy, based on a minimal spring model of the objects, and uses it to control the robot hand. The paper also presents an adaptable tactile-servo control scheme that can be used in in-hand manipulation tasks of deformable objects. Tactile control is based on achieving and maintaining a force value at the contact points that changes according to the object's softness, a feature estimated in an initial recognition stage.
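The spring-model idea can be sketched in two parts: a linear-spring stiffness estimate from the recognition stage (Hooke's law, k = F / delta), and one step of a tactile-servo law whose contact-force target depends on the estimated softness. All function names, gains, and the target-scaling rule below are hypothetical illustrations, not the authors' implementation:

```python
def estimate_stiffness(force, deformation):
    """Recognition stage: linear-spring stiffness estimate k = F / delta,
    from an applied probe force and the measured deformation it causes."""
    return force / deformation

def tactile_servo_step(f_measured, k, f_rigid=2.0, k_ref=500.0, gain=0.1):
    """One tactile-servo control step: soft objects (small stiffness k)
    get a proportionally reduced target contact force, and the command
    moves the measured force toward that target (hypothetical gains)."""
    f_target = f_rigid * min(1.0, k / k_ref)   # softness-dependent target
    return f_measured + gain * (f_target - f_measured)

# Example: a soft object (1 N causes 10 mm deformation) gets a low target.
k_soft = estimate_stiffness(1.0, 0.01)
f_next = tactile_servo_step(0.0, k_soft)
```

The design point is that the force setpoint itself, not just the servo gain, adapts to the estimated softness, so fragile objects are never driven toward the rigid-object target force.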

5.
Haptic Direct-Drive Robot Control Scheme in Virtual Reality
This paper explores the use of a 2-D direct-drive arm manipulator for mechanism design applications based on virtual reality (VR). The system comprises a user interface, a simulator, and a robot control scheme. The user interface combines a virtual clay environment with human arm dynamics through a robot end-effector handle. The model of the VR system is built on a haptic interface device whose behavior enables the operator to feel actual force feedback from the virtual environment just as s/he would from the real environment. A primary stabilizing controller is used to develop a haptic interface device for which realistic simulation of the dynamic interaction forces between a human operator and the simulated virtual object/mechanism is required. The stability and performance of the system are studied and analyzed based on the Nyquist stability criterion. Experiments on cutting virtual clay are used to validate the theoretical developments. The experimental and theoretical results are shown to be in good agreement, and the designed controller is robust in both constrained and unconstrained environments.

6.
Haptic rendering: introductory concepts
Haptic rendering allows users to "feel" virtual objects in a simulated environment. We survey current haptic systems and discuss some basic haptic-rendering algorithms. In the past decade we've seen an enormous increase in interest in the science of haptics. Haptics broadly refers to touch interactions (physical contact) that occur for the purpose of perception or manipulation of objects. These interactions can be between a human hand and a real object; a robot end-effector and a real object; a human hand and a simulated object (via haptic interface devices); or a variety of combinations of human and machine interactions with real, remote, or virtual objects. Rendering refers to the process by which desired sensory stimuli are imposed on the user to convey information about a virtual haptic object.
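As a concrete instance of the kind of basic haptic-rendering algorithm such surveys discuss, a common penalty-based scheme computes the feedback force from the probe's penetration depth into a virtual surface. The sphere geometry and stiffness value below are illustrative choices, not taken from the article:

```python
import numpy as np

def penalty_force(probe_pos, sphere_center, sphere_radius, stiffness=800.0):
    """Penalty-based haptic rendering against a virtual sphere: if the
    haptic probe penetrates the surface, push back along the outward
    surface normal with a spring force F = k * depth * n; otherwise
    render zero force (free-space motion)."""
    offset = np.asarray(probe_pos, float) - np.asarray(sphere_center, float)
    dist = np.linalg.norm(offset)
    depth = sphere_radius - dist
    if depth <= 0.0 or dist == 0.0:
        return np.zeros(3)            # no contact (or degenerate center hit)
    normal = offset / dist            # outward unit normal at the contact
    return stiffness * depth * normal

# Probe 1 cm inside a 10 cm sphere: force pushes outward along +z.
f_contact = penalty_force([0.0, 0.0, 0.09], [0.0, 0.0, 0.0], 0.1)
f_free = penalty_force([0.0, 0.0, 0.20], [0.0, 0.0, 0.0], 0.1)
```

In a real device loop this force would be recomputed at the haptic update rate (typically around 1 kHz) and sent to the actuators.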

7.
The paper reports on the development and evaluation of a virtual reality system to support training in on-line programming of industrial robots. The system was evaluated by running training experiments with three groups of engineering students under real, virtual, and augmented-virtual robot conditions. Results suggest that the group with prior training in the virtual reality system augmented with cognitive/perceptual aids clearly outperformed the group that executed the tasks on the real robot only. The group trained in the non-augmented virtual reality system did not demonstrate the same results. It is concluded that the cognitive/perceptual aids embedded in the augmented virtual reality system had a positive effect on all task performance metrics and on the consistency of results across participants on the real robot. Virtual training environments need not be designed to be as close as possible to the real ones; specifically designed augmented cognitive/perceptual aids may foster skill development that can be transferred to the real task. The suggested training environment is simple and cost-effective for training novices in an entry-level task.

8.
A Vision-Based Motion Tracking Algorithm for Augmented Reality
An augmented reality system has not only the characteristics of virtual reality but also the new property of combining the virtual with the real. To merge virtual objects seamlessly with real ones, the relative position and orientation between the camera and the real objects must be tracked dynamically in real time to build an observation model, so that virtual objects can then be rapidly overlaid onto real objects through dynamic 3D display techniques. However, in most current augmented reality systems the registered objects are all static; registration and tracking of moving objects has rarely been addressed. The proposed algorithm estimates the motion parameters of a moving object in the real environment from the optical flow field of marker points, and determines the relative position and orientation between the camera and the moving object based on the principles of perspective projection and rigid-body motion, thereby achieving moving-target tracking and registration for augmented reality systems. The algorithm has a simple structure, runs in real time, and is easy to implement, extending the range of applications of augmented reality systems.
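The pose-recovery step this abstract describes (relative position and orientation of a rigid object from tracked marker points) can be illustrated with the standard Kabsch/SVD least-squares fit between marker positions before and after the motion. This is a generic sketch of rigid-pose estimation, not the paper's optical-flow formulation:

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~= P @ R.T + t,
    from Nx3 marker positions before (P) and after (Q) the motion.
    Kabsch/SVD method; a generic stand-in for pose estimation from
    tracked marker points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate marker points 90 degrees about z, then translate.
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_motion(P, P @ R_true.T + t_true)
```

In a marker-tracking pipeline, P and Q would come from triangulated marker positions in consecutive frames, and the recovered (R, t) updates the registration of the virtual object.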

9.
Aiming at guiding product assembly in an augmented reality environment, a unified information model for the augmented assembly process is established to manage visual guidance information such as text, geometry, and product assembly features. Marker-based visual tracking is used to register virtual parts against the real parts in the video, and depth maps of the virtual and real assembly scenes are built to handle occlusion between virtual and real objects in the augmented assembly scene. Assembly guidance information is overlaid onto the assembly video scene using the registered positions of the virtual and real parts. A demonstration system was developed, and the process of guided product assembly in an augmented reality environment is analyzed and illustrated.

10.
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian Network, which can be used to improve the recognition reliability of both objects and human actions, and to generate a proper manipulation motion for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as control goals to drive a robot to perform manipulation tasks.

11.
In this paper, we introduce the concept of Extended VR (extending viewing space and interaction space of back-projection VR systems), by describing the use of a hand-held semi-transparent mirror to support augmented reality tasks with back-projection systems. This setup overcomes the problem of occlusion of virtual objects by real ones linked with such display systems. The presented approach allows an intuitive and effective application of immersive or semi-immersive virtual reality tasks and interaction techniques to an augmented surrounding space. Thereby, we use the tracked mirror as an interactive image-plane that merges the reflected graphics, which are displayed on the projection plane, with the transmitted image of the real environment. In our implementation, we also address traditional augmented reality problems, such as real-object registration and virtual-object occlusion. The presentation is complemented by a hypothesis of conceivable further setups that apply transflective surfaces to support an Extended VR environment.

12.
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.
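The noise-robust evolution-strategy search the abstract describes can be sketched as a minimal (1+1)-ES with Gaussian mutation, where noisy fitness measurements are averaged over several re-evaluations. Structure and parameters are a simplified illustration, not the authors' modified variant:

```python
import random

def evolution_strategy(fitness, x0, sigma=0.5, iters=200, resamples=3, seed=1):
    """Minimal (1+1)-evolution strategy minimizing `fitness`. Each
    candidate is evaluated `resamples` times and the scores averaged,
    a simple way to cope with noisy real-world fitness measurements."""
    rng = random.Random(seed)
    avg = lambda x: sum(fitness(x) for _ in range(resamples)) / resamples
    best, best_f = list(x0), avg(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in best]
        cand_f = avg(cand)
        if cand_f < best_f:          # keep the candidate only if it improves
            best, best_f = cand, cand_f
    return best, best_f

# Example: minimize a simple quadratic stand-in for a manipulation cost.
best, score = evolution_strategy(lambda x: sum(v * v for v in x), [2.0, -2.0])
```

On a physical robot, `fitness` would be a measured task outcome (e.g. final object pose error), which is exactly where the re-evaluation averaging pays off.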

13.
Fuentes, Olac; Nelson, Randal C. Machine Learning, 1998, 31(1-3): 223-237
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time. We also show the application of the learned primitives to perform an assembly task and how the primitives generalize to objects that are different from those used during the learning phase.

14.
A solution for interaction using finger tracking in a cubic immersive virtual reality system (or immersive cube) is presented. Rather than using a traditional wand device, users can manipulate objects with fingers of both hands in a close-to-natural manner for moderately complex, general purpose tasks. Our solution couples finger tracking with a real-time physics engine, combined with a heuristic approach for hand manipulation, which is robust to tracker noise and simulation instabilities. A first study has been performed to evaluate our interface, with tasks involving complex manipulations, such as balancing objects while walking in the cube. The user’s finger-tracked manipulation was compared to manipulation with a 6 degree-of-freedom wand (or flystick), as well as with carrying out the same task in the real world. Users were also asked to perform a free task, allowing us to observe their perceived level of presence in the scene. Our results show that our approach provides a feasible interface for immersive cube environments and is perceived by users as being closer to the real experience compared to the wand. However, the wand outperforms direct manipulation in terms of speed and precision. We conclude with a discussion of the results and implications for further research.

15.
In conventional haptic devices for virtual reality (VR) systems, a user interacts with a scene by handling a tool (such as a pen) attached to a mechanical device (i.e. an end-effector-type haptic device). If the device can ‘mimic’ a VR object, the user can instead interact directly with the VR object without the mechanical constraint of a device (i.e. an encounter-type haptic device). A new challenge for an encounter-type haptic device is displaying visual and haptic information simultaneously on a single device. We propose a new desktop encounter-type haptic device with an actively driven pen-tablet LCD panel. The proposed device is capable of providing pseudo-3D visuals and haptic information on a single device. As a result, the system gives the user a sense of interacting with a real object. To develop a proof-of-concept prototype, a compact parallel mechanism was developed and implemented. The aim of this research is to propose a new concept in haptic research. In this paper, the concept, the prototype, and some preliminary evaluation tests with the proposed system are presented.

16.
Like humans, robots that need semantic perception and accurate estimation of the environment can increase their knowledge through active interaction with objects. This paper proposes a novel method for 3D object modeling with a robot manipulator carrying an eye-in-hand laser range sensor. Since the robot can only perceive the environment from a limited viewpoint, it actively manipulates a target object and generates a complete model by accumulation and registration of partial views. Three registration algorithms are investigated and compared in experiments performed in cluttered environments with complex rigid objects made of multiple parts. A data structure based on a proximity graph, which encodes neighborhood relations in range scans, is also introduced to perform efficient range queries. The proposed method for 3D object modeling is applied to perform task-level manipulation. Indeed, once a complete model is available, the object is segmented into its constituent parts and categorized. Object sub-parts that are relevant for the task and that afford a grasping action are identified and selected as candidate regions for grasp planning.

17.
Ergonomics, 2012, 55(15): 1091-1102
Augmented reality allows the visual perception of object size to be changed while the tangible components remain completely unaltered. It was therefore used in the study reported here to provide the environment needed to observe the effect of visual changes in object size on the programming of fingertip forces when objects are lifted with a precision grip. Twenty-one participants performed repeated lifts of an identical grip apparatus to a height of 20 mm, held each lift for 8 seconds, and then replaced the grip apparatus on the table. While all other properties of the grip apparatus remained unchanged, its visual appearance was altered graphically in a 3-D augmented environment. The grip apparatus measured grip and load forces independently. Both the rates of increase and the peak values of grip and load forces grew significantly as the size of the graphical image increased, even though the available haptic information remained constant throughout the trials. By indicating a human tendency to rely, even unconsciously, on visual input to program forces in the initial lifting phase, this finding further confirms previous research findings obtained in the physical environment, including the possibility of extraneous haptic effects (Gordon et al. 1991a, Mon-Williams and Murray 2000, Kawai et al. 2000). The results also suggest that existing knowledge concerning human manipulation tasks in the physical world may be applied to an augmented environment where physical objects are enhanced by computer-generated visual components.

18.
Distributed Augmented Reality for Collaborative Design Applications
This paper presents a system for constructing collaborative design applications based on distributed augmented reality. Augmented reality interfaces are a natural method for presenting computer-based design by merging graphics with a view of the real world. Distribution enables users at remote sites to collaborate on design tasks. The users interactively control their local view, try out design options, and communicate design proposals. They share virtual graphical objects that substitute for real objects which are not yet physically created or are not yet placed into the real design environment. We describe the underlying augmented reality system and in particular how it has been extended in order to support multi-user collaboration. The construction of distributed augmented reality applications is made easier by a separation of interface, interaction and distribution issues. An interior design application is used as an example to demonstrate the advantages of our approach.

19.
Augmented reality allows the visual perception of object size to be changed while the tangible components remain completely unaltered. It was therefore used in the study reported here to provide the environment needed to observe the effect of visual changes in object size on the programming of fingertip forces when objects are lifted with a precision grip. Twenty-one participants performed repeated lifts of an identical grip apparatus to a height of 20 mm, held each lift for 8 seconds, and then replaced the grip apparatus on the table. While all other properties of the grip apparatus remained unchanged, its visual appearance was altered graphically in a 3-D augmented environment. The grip apparatus measured grip and load forces independently. Both the rates of increase and the peak values of grip and load forces grew significantly as the size of the graphical image increased, even though the available haptic information remained constant throughout the trials. By indicating a human tendency to rely, even unconsciously, on visual input to program forces in the initial lifting phase, this finding further confirms previous research findings obtained in the physical environment, including the possibility of extraneous haptic effects (Gordon et al. 1991a, Mon-Williams and Murray 2000, Kawai et al. 2000). The results also suggest that existing knowledge concerning human manipulation tasks in the physical world may be applied to an augmented environment where physical objects are enhanced by computer-generated visual components.

20.
Haptic feedback is an important component of immersive virtual reality (VR) applications and is often suggested to complement visual information through the sense of touch. This paper investigates the use of a haptic vest in navigation tasks. The haptic vest produces repulsive vibrotactile feedback from nearby static virtual obstacles, augmenting the user's spatial awareness. The tasks require the user to perform complex movements in a cluttered 3D virtual environment, such as avoiding obstacles while walking backwards and pulling a virtual object. The experimental setup consists of a room-scale environment. Our approach is the first study in which a haptic vest is tracked in real time using a motion capture device, so that proximity-based haptic feedback can be conveyed according to the actual movement of the user's upper body. User study experiments have been conducted with and without haptic feedback in virtual environments under both normal and limited visibility conditions. A quantitative evaluation was carried out by measuring task completion time and error (collision) rate. Multiple haptic rendering techniques were also tested. Results show that under limited visibility conditions, proximity-based haptic feedback generated by a wearable haptic vest can significantly reduce the number of collisions with obstacles in the virtual environment.
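Proximity-based vibrotactile rendering of this kind reduces to a distance-to-intensity mapping per motor. The linear ramp and the threshold values below are one illustrative choice, not one of the specific rendering techniques the paper tests:

```python
def vibration_intensity(distance, d_max=1.0, d_min=0.1):
    """Map obstacle distance (meters) to vibrotactile motor intensity
    in [0, 1]: zero beyond d_max, full at or inside d_min, and a linear
    ramp in between, so vibration grows as the obstacle gets closer."""
    if distance >= d_max:
        return 0.0
    if distance <= d_min:
        return 1.0
    return (d_max - distance) / (d_max - d_min)
```

In a tracked-vest setup, each motor's intensity would be driven by the distance from its body-mounted position (known from motion capture) to the nearest virtual obstacle in that motor's direction.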


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号