Similar Documents
20 similar documents found (search time: 31 ms)
1.
Data-driven grasp synthesis using shape matching and task-based pruning   (Cited by: 1; self-citations: 0; others: 1)
Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.
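The matching idea described, comparing relative placements and surface normals of feature collections, can be pictured with a minimal dissimilarity score. This is a sketch under assumed inputs (point/normal arrays and arbitrary weights), not the paper's actual algorithm:

```python
import numpy as np

def match_score(hand_pts, hand_nrms, obj_pts, obj_nrms,
                w_dist=1.0, w_norm=1.0):
    """Dissimilarity between a hand feature set and a candidate object
    feature set (lower is better). Pairwise distances and pairwise normal
    angles are compared, so the score is invariant to rigidly transforming
    either feature set as a whole."""
    d_hand = np.linalg.norm(hand_pts[:, None] - hand_pts[None, :], axis=-1)
    d_obj = np.linalg.norm(obj_pts[:, None] - obj_pts[None, :], axis=-1)
    dist_err = np.abs(d_hand - d_obj).sum()
    # dot products between unit normals encode their relative angles
    norm_err = np.abs(hand_nrms @ hand_nrms.T - obj_nrms @ obj_nrms.T).sum()
    return w_dist * dist_err + w_norm * norm_err
```

A feature set identical (up to a rigid transform) to the hand's contact features scores zero; stretched or re-oriented candidates score higher and can be pruned.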

2.
Neuro-psychological findings have shown that human perception of objects is based on part decomposition. Most objects are made of multiple parts, which are likely to be the entities actually involved in grasp affordances. Therefore, automatic object recognition and robot grasping should take advantage of 3D shape segmentation. This paper presents an approach toward planning robot grasps across similar objects by part correspondence. The novelty of the method lies in the topological decomposition of objects, which enables high-level semantic grasp planning. In particular, given a 3D model of an object, the representation is initially segmented by computing its Reeb graph. Then, automatic object recognition and part annotation are performed by applying a shape retrieval algorithm. After the recognition phase, queries are accepted for planning grasps on individual parts of the object. Finally, a robot grasp planner is invoked for finding stable grasps on the selected part of the object. Grasps are evaluated according to a widely used quality measure. Experiments performed in a simulated environment on a reasonably large dataset show the potential of topological segmentation to highlight candidate parts suitable for grasping.
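The Reeb-graph segmentation step can be pictured with a toy version: bucket mesh vertices by level sets of a scalar function and split each level into connected components, which become the graph's nodes. This is a rough sketch of the idea on an abstract vertex/edge graph, not the paper's implementation:

```python
from collections import defaultdict

def reeb_like_segmentation(edges, f, n_levels=4):
    """Rough Reeb-graph-style decomposition: quantize the scalar function f
    (vertex -> value) into n_levels level sets, then split each level set
    into connected components using only intra-level edges."""
    lo, hi = min(f.values()), max(f.values())
    level = {v: min(int((f[v] - lo) / (hi - lo + 1e-12) * n_levels),
                    n_levels - 1) for v in f}
    adj = defaultdict(set)
    for a, b in edges:
        if level[a] == level[b]:          # keep only intra-level adjacency
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for v in f:                           # flood-fill each unvisited vertex
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

On a real mesh, f would typically be a height or geodesic function, and the components at each level correspond to candidate object parts.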

3.
This paper addresses the problem of designing a practical system able to grasp real objects with a three-fingered robot hand. A general approach for synthesizing two- and three-finger grasps on planar unknown objects is presented. Visual perception is used to reduce uncertainty and to obtain relevant information about the objects. We focus on non-modeled planar extruded objects, which can represent many real-world objects. In addition, particular mechanical constraints of the robot hand are considered. First, a vision processing module that extracts from object images the relevant information for grasp synthesis is developed. This is completed with a set of algorithms for synthesizing two- and three-finger grasps that take into account force-closure and contact stability conditions with low computational effort. Finally, a procedure for constraining these results to the kinematics of the particular hand is also developed. In addition, a set of heuristic metrics for assessing the quality of the computed grasps is described. All these components are integrated in a complete system. Experimental results using the Barrett hand are shown and discussed.

4.
Several authors have observed that spatial dimensions tend to be underestimated in virtual environments. In this study, we hypothesize that the availability of visual cues in virtual environments has an influence on the accuracy of perception. An experiment was conducted to compare spatial perception in real and virtual environments that were modeled differently and visualized using a head-mounted display. Results suggest that the greater the availability of visual cues, the greater the level of accuracy in the estimates, especially for egocentric dimensions (p < 0.001). In the end, this study contributes to a better understanding of how architectural virtual environments should be modeled for use in professional or commercial applications where accurate and reliable simulations are required.

5.
We address a “sticking object” problem for the release of whole-hand virtual grasps. The problem occurs when grasping techniques require fingers to be moved outside an object's boundaries after a user's (real) fingers interpenetrate virtual objects due to a lack of physical motion constraints. This may be especially distracting for grasp techniques that introduce mismatches between tracked and visual hand configurations to visually prevent interpenetration. Our method includes heuristic analysis of finger motion and a transient incremental motion metaphor to manage a virtual hand during grasp release. We integrate the method into a spring model for whole-hand virtual grasping to maintain the physically-based pickup and manipulation behavior of such models. We show that the new spring model improves release speed and accuracy based on pick-and-drop, targeted ball-drop, and cube-alignment experiments. In contrast to a standard spring-based grasping method, measured release quality does not depend notably on object size. Users subjectively prefer the new approach and it can be tuned to avoid potential side effects such as increased drops or visual distractions. We further investigated a convergence speed parameter to find the subjectively good range and to better understand tradeoffs in subjective artifacts on the continuum between pure incremental motion and rubber-band-like convergence behavior.

6.
This paper presents a simple grasp planning method for a multi-fingered hand. Its purpose is to compute a context-independent and dense set or list of grasps, instead of just a small set of grasps regarded as optimal with respect to a given criterion. By context-independent, we mean that only the robot hand and the object to grasp are considered; the environment and the position of the robot base with respect to the object are considered at a later stage. Such a dense set can be computed offline and then used to let the robot quickly choose a grasp adapted to a specific situation. This can be useful for manipulation planning of pick-and-place tasks. Another application is human–robot interaction, when the human and robot have to hand over objects to each other: if human and robot have to work together with a predefined set of objects, grasp lists can be employed to allow fast interaction. The proposed method uses a dense sampling of the possible hand approaches based on a simple but efficient shape feature. As this leads to many finger inverse kinematics tests, hierarchical data structures are employed to reduce the computation times. The data structures allow a fast determination of the points where the fingers can realize a contact with the object surface. The grasps are ranked according to a grasp quality criterion, so that the robot will parse the list from best to worst quality until it finds a grasp that is valid for the particular situation.
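The final parse-until-valid step over a precomputed grasp list can be sketched as follows; the grasp record fields and the validity callback are illustrative assumptions, not the paper's interface:

```python
def select_grasp(ranked_grasps, is_valid):
    """Walk a precomputed grasp list from best to worst quality and return
    the first grasp that passes a situation-specific validity check
    (reachability, environment collisions, ...). Returns None if all fail."""
    for grasp in sorted(ranked_grasps, key=lambda g: g["quality"],
                        reverse=True):
        if is_valid(grasp):
            return grasp
    return None
```

Because the expensive ranking is done offline, the online cost is just the validity checks, which is what makes the approach attractive for fast human-robot handover.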

7.
A virtual reality system enabling high-level programming of robot grasps is described. The system is designed to support programming by demonstration (PbD), an approach aimed at simplifying robot programming and empowering even inexperienced users with the ability to easily transfer knowledge to a robotic system. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, where an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, as robot-environment interactions are no longer explicitly programmed in the PbD approach, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the pose of the objects in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and computation of edge-face graphs is proposed. The modeling algorithm works in real time and supports registration of multiple views. Object recognition and workspace reconstruction features, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and programming of assembly operations. Experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.

8.
This paper describes a set of visual cues of contact designed to improve the interactive manipulation of virtual objects in industrial assembly/maintenance simulations. These visual cues display information about proximity, contact, and effort between virtual objects when the user manipulates a part inside a digital mock-up. The set of visual cues includes the display of glyphs (arrow, disk, or sphere) when the manipulated object is close to or in contact with another part of the virtual environment. Light sources can also be added at the level of the contact points. A filtering technique is proposed to decrease the number of glyphs displayed at the same time. Various effects, such as changes in color, changes in size, and deformation of shape, can be applied to the glyphs as a function of proximity to other objects or the amplitude of the contact forces. A preliminary evaluation was conducted to gather the subjective preferences of a group of participants during the simulation of an automotive assembly operation. The collected questionnaires showed that participants globally appreciated our visual cues of contact. Changes in color were preferred for the display of distance and proximity information. Size changes and deformation effects were preferred in terms of perception of contact forces between the parts. Lastly, light sources were selected to focus the user's attention on the contact areas.

9.
This paper presents an iterative procedure to find locally optimal force-closure grasps on 3D objects, with or without friction and with any number of fingers. The object surface is discretized into a cloud of points, so the approach is applicable to objects of arbitrary shape. The approach finds an initial force-closure grasp that is then iteratively improved through an oriented search procedure. The grasp quality is measured as the largest perturbation wrench that the grasp can resist independently of the direction of perturbation. The efficiency of the algorithm is illustrated through numerical examples.
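The quality measure described (the largest perturbation wrench resisted regardless of direction) is commonly computed as the radius of the largest origin-centered ball that fits inside the convex hull of the primitive contact wrenches. A minimal sketch using SciPy, shown here in low dimension as a stand-in for the full 6-D wrench space:

```python
import numpy as np
from scipy.spatial import ConvexHull

def ball_quality(wrenches):
    """Radius of the largest ball centered at the wrench-space origin that
    fits inside the convex hull of the primitive contact wrenches.
    Positive => force closure; larger => a more robust grasp."""
    hull = ConvexHull(np.asarray(wrenches, dtype=float))
    # qhull facets satisfy n.x + b <= 0 for interior points, with |n| = 1,
    # so the origin's distance to a facet is -b; the ball radius is the
    # smallest such distance over all facets
    return -hull.equations[:, -1].max()
```

If the origin lies outside the hull the value goes negative, which is exactly the "not force-closure" case an iterative improvement step would move away from.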

10.
This paper considers tactile augmentation, the addition of a physical object within a virtual environment (VE) to provide haptic feedback. The resulting mixed-reality environment is limited in terms of the ease with which the haptic properties of objects within it can be changed. Therefore, sensory enhancements, or illusions, that use visual cues to alter the perceived hardness of a physical object, and thereby allow variation in haptic properties, are considered. Experimental work demonstrates that a single physical surface can be made to ‘feel’ both softer and harder than it is in reality through the accompanying visual information presented. The strong impact visual cues have on the overall perception of object hardness indicates that haptic accuracy may not be essential for a realistic virtual experience. The experimental results relate specifically to the development of a VE for surgical training; however, the conclusions drawn are broadly applicable to the simulation of touch and the understanding of haptic perception within VEs.

11.
Diorama artists produce a spectacular 3D effect in a confined space by generating depth illusions that are faithful to the ordering of the objects in a large real or imaginary scene. Indeed, cognitive scientists have discovered that depth perception is mostly affected by depth order and precedence among objects. Motivated by these findings, we employ ordinal cues to construct a model from a single image that, similarly to dioramas, intensifies the depth perception. We demonstrate that such models are sufficient for the creation of realistic 3D visual experiences. The initial step of our technique extracts several relative depth cues that are well known to exist in the human visual system. Next, we integrate the resulting cues to create a coherent surface. We introduce wide slits in the surface, thus generalizing the concept of cardboard cutout layers. Lastly, the surface geometry and texture are extended along the slits, to allow small changes in the viewpoint, which enriches the depth illusion.

12.
With the recent growth in the development of augmented reality (AR) technologies, it is becoming important to study human perception of AR scenes. In order to detect whether users will suffer more from visual and operator fatigue when watching virtual objects through optical see-through head-mounted displays (OST-HMDs), compared with watching real objects in the real world, we propose a comparative experiment including a virtual magic cube task and a real magic cube task. The scores of the subjective questionnaires (SQ) and the values of the critical flicker frequency (CFF) were obtained from 18 participants. In our study, we use several electrooculogram (EOG) and heart rate variability (HRV) measures as objective indicators of visual and operator fatigue. Statistical analyses were performed to deal with the subjective and objective indicators in the two tasks. Our results suggest that participants were very likely to suffer more from visual and operator fatigue when watching virtual objects presented by the OST-HMD. In addition, the present study provides hints that HRV and EOG measures could be used to explore how visual and operator fatigue are induced by AR content. Finally, three novel HRV measures are proposed to be used as potential indicators of operator fatigue.

13.
14.
Implicit surfaces are often used in computer graphics: they are easy to model and render, and many everyday objects can be represented by them. In this paper, based on the concept of virtual objects, a novel method for real-time rendering of reflection and refraction on implicit surfaces is presented. The method constructs virtual objects from real objects quickly, and then renders the virtual objects as if they were real, with one additional step of merging their images with the images of the real objects. Characteristics of implicit surfaces are exploited to compute virtual objects effectively and quickly. GPUs (Graphics Processing Units) are used to compute virtual vertices and further accelerate the computing and rendering processes. As a result, realistic reflection and refraction effects on implicit surfaces are rendered in real time.

15.
When filming the high-speed flight and rotation of objects such as fighter jets, real-world shooting is generally limited to fixed or tracking shots by environmental and other constraints. Simulating high-speed motion and virtual filming in a 3D virtual scene is free of such constraints and offers more ways of conveying speed, but there are currently no widely applicable rules for expressing a sense of speed. Based on theories of human visual perception of motion, this work applies real-camera shooting techniques to a virtual camera and studies how, under different single shots, the high-speed motion of simulated objects in a 3D scene can best be conveyed through virtual-camera shooting and framing. Using a fighter-jet straight-line flight simulation animation and a centrifuge rotation simulation animation as examples, and applying the controlled-variable method together with rank ordering, a set of practical methods for conveying a sense of speed is obtained.

16.
Humans can perceive wide, shallow surface undulations, hundreds of micrometers in height and hundreds of millimeters in width, by scanning the surface with their whole fingers and palm in the distal and proximal directions. We developed a wearable haptic device that presents a surface undulation to the hand. The device is composed of nine independent stimulator units that control the heights of nine finger pads of the index, middle, and ring fingers (three units on each finger) according to the virtual surface. Three experiments were carried out to evaluate haptic perception with the device. The first experiment shows that the perceived dimensions are diminished compared to the dimensions applied by the haptic device; on the basis of this result, the applied dimensions are calibrated to match the virtual surface undulation to the real surface undulation. The second experiment shows that the shape of gently curved surfaces can be estimated with the haptic display. The third experiment shows that the discrimination threshold does not differ between the virtual surface undulation and the real surface undulation. These experimental results show the applicability of the device as a haptic interface.

17.
A fundamental problem in optical see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need both to place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as X-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the X-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at ~ 23 meters, as well as a quantification of how much more difficult the X-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field-of-view, combined with an inability for observers to sc

18.
When giving directions to the location of an object, people typically use other attention-attracting objects as reference, that is, reference objects. With the aim of selecting proper reference objects, useful for locating a target object within a virtual environment (VE), a computational model to identify perceptual saliency is presented. Based on the object features that provide the strongest stimuli to the human visual system, three basic features of a 3D object (i.e., color, size, and shape) are individually evaluated and then combined to obtain a degree of saliency for each 3D object in a virtual scenario. An experiment was conducted to evaluate the extent to which the proposed measure of saliency matches people's subjective perception of saliency; the results showed a good performance of the computational model.
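The evaluate-then-combine step might be sketched like this, with scalar color/size/shape descriptors, distinctiveness-from-the-scene-average as the per-feature measure, and equal weights, all of which are illustrative assumptions rather than the paper's actual model:

```python
def saliency_scores(objects, weights=(1.0, 1.0, 1.0)):
    """Per-object saliency as a weighted sum of feature distinctiveness:
    how far each object's (scalar) color, size, and shape descriptor lies
    from the scene average, normalized by the feature's range."""
    feats = ("color", "size", "shape")
    scores = []
    for obj in objects:
        s = 0.0
        for w, f in zip(weights, feats):
            vals = [o[f] for o in objects]
            mean = sum(vals) / len(vals)
            spread = (max(vals) - min(vals)) or 1.0  # avoid divide-by-zero
            s += w * abs(obj[f] - mean) / spread
        scores.append(s)
    return scores
```

An object whose features stand out from the rest of the scene receives the highest score and would be the preferred reference object.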

19.
Computation of grasps with form/force closure is one of the fundamental problems in the study of multifingered grasping and dexterous manipulation. Based on the geometric condition of the closure property, this paper presents a numerical test to quantify how far a grasp is from losing form/force closure. With the polyhedral approximation of the friction cone, the proposed numerical test can be formulated as a single linear program. An iterative algorithm for computing optimal force-closure grasps, which is implemented by minimizing the proposed numerical test in the grasp configuration space, is also developed. The algorithm is computationally efficient and generally applicable. It can be used for computing form/force-closure grasps on 3-D objects with curved surfaces, and with any number of contact points. Simulation examples are given to show the effectiveness and computational efficiency of the proposed algorithm.
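A single-LP closure test in the spirit described here (not necessarily the paper's exact program) maximizes d subject to W λ = 0, Σ λ = 1, λ_i >= d over the matrix W of primitive contact wrenches: the optimum d is positive exactly when the origin lies strictly inside the convex hull of the wrenches, i.e. when the grasp has force closure, and its magnitude quantifies the margin. A sketch using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def closure_margin(wrenches):
    """Solve: maximize d  s.t.  W @ lam = 0, sum(lam) = 1, lam_i >= d.
    Returns the optimal d; d > 0 indicates force closure, and larger d
    means the grasp is farther from losing it."""
    W = np.asarray(wrenches, dtype=float).T          # shape (dim, n)
    dim, n = W.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                     # maximize d
    A_eq = np.vstack([np.hstack([W, np.zeros((dim, 1))]),
                      np.append(np.ones(n), 0.0)])
    b_eq = np.append(np.zeros(dim), 1.0)
    A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])  # d - lam_i <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1))
    return res.x[-1]
```

Minimizing the negative of this margin over the grasp configuration space is then a natural objective for the iterative optimization the abstract mentions.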

20.
In the real world, vision operates in harmony with self-motion, leading the observer to an unambiguous perception of three-dimensional (3D) space. In laboratory conditions, because of technical difficulties, researchers studying 3D perception have often preferred the substitute of a stationary observer, somewhat neglecting aspects of the action-perception cycle. Recent results in visual psychophysics have shown that self-motion and visual processes interact, leading a moving observer to interpret a 3D virtual scene differently from a stationary observer. In this paper we describe a virtual environment (VE) framework with very interesting characteristics for designing experiments on visual perception during action. These characteristics arise from the design of a unique motion capture device: first, its accuracy and minimal latency in position measurement; second, its ease of use and adaptability to different display interfaces. Such a VE framework enables the experimenter to recreate stimulation conditions characterised by a degree of sensory coherence typical of the real world. Moreover, because of its accuracy and flexibility, the same device can be used as a measurement tool to perform elementary but essential calibration procedures. The VE framework has been used to conduct two studies comparing the perception of 3D variables of the environment by moving and stationary observers under monocular vision. The first study concerns the perception of absolute distance, i.e. the distance separating an object and the observer. The second study concerns the perception of the orientation of a surface, both in the absence and in the presence of conflicts between static and dynamic visual cues. In both cases, the VE framework enabled the design of optimal experimental conditions, shedding light on the role of action in 3D visual perception.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号