Found 20 similar documents (search time: 93 ms)
1.
SmartTouch: electric skin to touch the untouchable   Total citations: 3 (self-citations: 0, citations by others: 3)
Kajimoto H., Kawakami N., Tachi S., Inami M. 《Computer Graphics and Applications, IEEE》2004,24(1):36-43
Augmented haptics lets users touch surface information of any modality. SmartTouch uses optical sensors to gather information and electrical stimulation to translate it into a tactile display. Augmented reality is an engineer's approach to this dream: in AR, sensors capture artificial information from the world, and existing sensing channels display it. Hence, we virtually acquire the sensor's physical ability as our own. Augmented haptics, the result of applying AR to haptics, would allow a person to touch the untouchable. Our system, SmartTouch, uses a tactile display and a sensor. When the sensor contacts an object, electrical stimulation translates the acquired information into a tactile sensation, such as a vibration or pressure, through the tactile display. Thus, an individual not only makes physical contact with an object, but also touches the surface information of any modality, even those that are typically untouchable.
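As a rough illustration of the sensor-to-stimulus coding such a system needs, here is a minimal sketch that linearly maps a normalised optical reading to a stimulation current. The function name and the current range are hypothetical; SmartTouch's actual electrotactile drive parameters are not given above.

```python
def intensity_to_current(intensity, i_min=0.5, i_max=3.0):
    """Map a normalised optical reading (0..1) to a stimulation
    current in mA. Hypothetical linear coding for illustration only;
    the real SmartTouch drive parameters are not specified here.
    """
    intensity = min(max(intensity, 0.0), 1.0)  # clamp to the valid range
    return i_min + intensity * (i_max - i_min)
```

A dark (low-reflectance) patch under the sensor would then produce a weaker stimulus than a bright one.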
2.
This paper presents the development of a sensor system for collecting tactile information. An active sensing system using the piezoelectric and pyroelectric effects of a PVDF (polyvinylidene fluoride) film is proposed. The active sensing is designed with the human motions used in tactile perception in mind. First, as a pretest, a discrimination experiment on six fabrics with different textures is carried out using human tactile perception. Next, the proposed sensor system is assembled. The sensor is composed of a PVDF film and a soft rubber, and its surface can be heated under temperature control. The sensor is attached to the tip of a robot finger driven by a piezoelectric bimorph strip, and the root of the finger is mounted on a linear slider. Two kinds of active sensing are introduced. First, the heated sensor is brought into contact with an object and pyroelectric output signals are collected to obtain information on tactile warmth. Next, the heated sensor is slid over the object and piezoelectric output signals are collected to obtain information on the feeling of vibration. From the analysis of each sensing mode, three indexes representing features of the collected data are extracted and proposed as sensor outputs for evaluating tactile sensation. Measurements with the sensor system are performed on the samples used in the discrimination experiment. Comparison with the human results shows that the sensor system extracts features related to the feelings of vibration and warmth.
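The abstract does not define the three feature indexes, so the sketch below substitutes three plausible stand-ins computed from a 1-D sensor signal: RMS amplitude, dominant vibration frequency, and mean level. All three choices are assumptions for illustration, not the paper's actual indexes.

```python
import numpy as np

def extract_features(signal, fs):
    """Three illustrative features of a 1-D sensor signal: RMS
    amplitude, dominant frequency (Hz), and mean level. Stand-ins
    for the paper's (unspecified) three indexes."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))               # vibration energy
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum)]             # strongest component
    return rms, dominant, signal.mean()               # mean = slow drift level

# A 50 Hz vibration riding on a constant offset, sampled at 1 kHz.
t = np.arange(0, 1.0, 1.0 / 1000)
sig = 0.5 + 0.2 * np.sin(2 * np.pi * 50 * t)
rms, dom, level = extract_features(sig, fs=1000)
```

On this synthetic signal the dominant frequency recovers the 50 Hz vibration and the mean level recovers the 0.5 offset.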
3.
Multi-touch tabletops are currently implemented largely with computer-vision techniques: touch information is obtained by imaging infrared light reflected from the fingers with an infrared camera, then extracting, tracking, and calibrating finger regions in the resulting greyscale images. Because infrared illumination across the tabletop surface is uneven and ambient noise interferes, existing multi-touch toolkits perform poorly in detection and tracking, and they do not account for the effects of finger motion and camera distortion. This paper proposes MTDriver, a multi-touch tabletop toolkit that detects finger contact regions from local extrema in the image, tracks fingers by combining finger motion with information from adjacent frames, and corrects camera distortion. Experimental results show that MTDriver tracks and recognises touches accurately, efficiently, and robustly, and is practical to use.
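The local-extremum detection step can be sketched as a brute-force search for pixels brighter than all eight neighbours. MTDriver's real pipeline additionally tracks fingers across frames and corrects camera distortion, which this toy version omits.

```python
import numpy as np

def local_maxima(img, threshold):
    """Return (row, col) coordinates of pixels that exceed `threshold`
    and are strictly brighter than their 8 neighbours -- a toy version
    of local-extremum fingertip detection in an IR camera frame."""
    img = np.asarray(img, dtype=float)
    peaks = []
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            if (img[r, c] >= threshold
                    and img[r, c] == patch.max()
                    and (patch == img[r, c]).sum() == 1):
                peaks.append((r, c))
    return peaks

# Synthetic IR frame: two bright fingertip spots on a dark background.
frame = np.zeros((10, 10))
frame[3, 3] = 200.0
frame[7, 6] = 180.0
```

`local_maxima(frame, 100)` then reports the two fingertip positions; raising the threshold above both peak intensities reports none.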
4.
Presern S., Gyergyek L. 《IEEE transactions on pattern analysis and machine intelligence》1983,(2):217-220
This paper describes an on-line computer supported tactile sensor that can recognize complex industrial objects, identify a seam location, and track a three-dimensional seam trajectory for an arc welding robot. A one finger tactile sensor with two degrees of freedom is used. The supporting microcomputer system organizes and reduces the useful sensor data from digitized tactile information to a compact symbolic representation which is matched to a knowledge network. A beam search algorithm is used. The hierarchical organization provides that simple features are first detected on the object, and more complex features are examined later. Pruning of the search tree is based on a likelihood function.
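The beam search mentioned above can be sketched generically: at each level only the best few partial interpretations survive, which is how likelihood-based pruning keeps the match against the knowledge network tractable. `expand` and `score` are placeholders standing in for the paper's network operations.

```python
import heapq

def beam_search(start, expand, score, beam_width, steps):
    """Keep only the `beam_width` highest-scoring partial states at
    each level -- the pruning strategy used when matching detected
    features against a knowledge network."""
    beam = [start]
    for _ in range(steps):
        candidates = [nxt for state in beam for nxt in expand(state)]
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return beam

# Toy problem: build 3-digit tuples maximising their sum.
expand = lambda state: [state + (d,) for d in range(10)]
best = beam_search((), expand, score=sum, beam_width=2, steps=3)
```

With a beam width of 2, the search keeps only two hypotheses per level yet still finds the optimum (9, 9, 9) for this toy objective.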
5.
《Ergonomics》2012,55(7):874-894
During laparoscopic surgery video images are used to guide the movements of the hand and instruments, and objects in the operating field often obscure these images. Thus, surgeons often rely heavily on tactile information (sense of touch) to help guide their movements. It is important to understand how tactile perception is affected when using laparoscopic instruments, since many surgical judgements are based on how a tissue ‘feels’ to the surgeon, particularly in situations where visual inputs are degraded. Twelve naïve participants used either their index finger or a laparoscopic instrument to explore sandpaper surfaces of various grits (60, 100, 150 and 220). These movements were generated with either vision or no vision. Participants were asked to estimate the roughness of the surfaces they explored. The normal and tangential forces of either the finger or instrument on the sandpaper surfaces were measured. Results showed that participants were able to judge the roughness of the sandpaper surfaces when using both the finger and the instrument. However, post hoc comparisons showed that perceptual judgements of surface texture were altered in the no vision condition compared to the vision condition. This was also the case when using the instrument, compared to the judgements provided when exploring with the finger. This highlights the importance of the completeness of the video images during laparoscopic surgery. More normal and tangential force was used when exploring the surfaces with the finger as opposed to the instrument. This was probably an attempt to increase the contact area of the fingertip to maximize tactile input. With the instrument, texture was probably sensed through vibrations of the instrument in the hand. Applications of the findings lie in the field of laparoscopic surgery simulation techniques and tactile perception.
6.
During laparoscopic surgery video images are used to guide the movements of the hand and instruments, and objects in the operating field often obscure these images. Thus, surgeons often rely heavily on tactile information (sense of touch) to help guide their movements. It is important to understand how tactile perception is affected when using laparoscopic instruments, since many surgical judgements are based on how a tissue 'feels' to the surgeon, particularly in situations where visual inputs are degraded. Twelve naïve participants used either their index finger or a laparoscopic instrument to explore sandpaper surfaces of various grits (60, 100, 150 and 220). These movements were generated with either vision or no vision. Participants were asked to estimate the roughness of the surfaces they explored. The normal and tangential forces of either the finger or instrument on the sandpaper surfaces were measured. Results showed that participants were able to judge the roughness of the sandpaper surfaces when using both the finger and the instrument. However, post hoc comparisons showed that perceptual judgements of surface texture were altered in the no vision condition compared to the vision condition. This was also the case when using the instrument, compared to the judgements provided when exploring with the finger. This highlights the importance of the completeness of the video images during laparoscopic surgery. More normal and tangential force was used when exploring the surfaces with the finger as opposed to the instrument. This was probably an attempt to increase the contact area of the fingertip to maximize tactile input. With the instrument, texture was probably sensed through vibrations of the instrument in the hand. Applications of the findings lie in the field of laparoscopic surgery simulation techniques and tactile perception.
7.
This paper describes pattern classification with an artificial tactile sense. In this method, an object's shape is determined by touching, groping and grasping it with an artificial hand equipped with a tactile sense.
A simplified experiment classifying cylinders and square pillars was performed with an artificial hand using on-off switches instead of pressure-sensitive elements. Highly reliable results were obtained. In addition, results of a surface-groping experiment are given.
8.
《Advanced Robotics》2013,27(16):2065-2081
This study observed what effects frequency modulation of vibration elements produces when representing a tactile shape. Tactile shapes were rendered by frequency differences at constant amplitude through a tactile feedback array of 30 (5 × 6) pins, which stimulated the finger pad. Experiment I showed that participants perceive height changes when frequency is modulated. In Experiment II, participants were asked to discriminate three basic tactile shape patterns generated with different frequencies at constant amplitude; this confirmed that spatial height information can be represented by modulating temporal information. In Experiment III, the frequency-modulation method was applied to a tactile mouse system: for a more practical application, dynamic frequency modulation under passive touch was used to transmit tactile height-pattern information to the user at the mouse pointer. The results showed that participants were able to discern eight predefined shapes with an accuracy of 98.4% under passive touch.
9.
Human shape recognition performance for 3D tactile display   Total citations: 2 (self-citations: 0, citations by others: 2)
《IEEE transactions on systems, man, and cybernetics. Part A, Systems and humans : a publication of the IEEE Systems, Man, and Cybernetics Society》1999,29(6):637-644
The paper describes the relationship between the pin-matrix density of a tactile display and recognition performance for displayed 3D shapes. Three types of pin-matrix tactile display that generate 3D shapes were used in the experiment, with pin pitches of 2 mm, 3 mm and 5 mm. We assumed that surfaces, edges, and vertices are the primitive 3D shape information, so the tested shapes were classified into these three categories. Two finger-touching modes were assumed: 1) fingertip-only, which allowed full use of the spatial shape information presented to the fingertip; and 2) tracing, which allowed the finger to move over the object. Recognition time and the error rate for each shape category were measured, yielding data on the relationship between pin pitch and recognition performance. Regression curves for pin pitch versus recognition time were plotted, a significance test of recognition time versus pin pitch was performed, and the identification error rate versus pin pitch was described. Our results provide basic knowledge for developing tactile presentation devices.
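The regression of recognition time against pin pitch can be reproduced in outline. The data points below are invented purely for illustration; the measured values are in the paper.

```python
import numpy as np

# Hypothetical (pitch in mm, recognition time in s) pairs,
# standing in for the study's measured data.
pitch = np.array([2.0, 3.0, 5.0])
time_s = np.array([4.0, 6.0, 10.0])

# Least-squares linear regression of recognition time on pin pitch.
slope, intercept = np.polyfit(pitch, time_s, 1)
```

A positive slope indicates that recognition slows as the pin pitch grows coarser.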
10.
11.
Diego R. Faria, Ricardo Martins, Jorge Lobo, Jorge Dias 《Robotics and Autonomous Systems》2012,60(3):396-410
Humans excel in manipulation tasks, a basic skill for our survival and a key feature in our manmade world of artefacts and devices. In this work, we study how humans manipulate simple daily objects, and construct a probabilistic representation model for the tasks and objects useful for autonomous grasping and manipulation by robotic hands. Human demonstrations of predefined object manipulation tasks are recorded from both the human hand and object points of view. The multimodal data acquisition system records human gaze, hand and fingers 6D pose, finger flexure, tactile forces distributed on the inside of the hand, colour images and stereo depth map, and also object 6D pose and object tactile forces using instrumented objects. From the acquired data, relevant features are detected concerning motion patterns, tactile forces and hand-object states. This will enable modelling a class of tasks from sets of repeated demonstrations of the same task, so that a generalised probabilistic representation is derived to be used for task planning in artificial systems. An object centred probabilistic volumetric model is proposed to fuse the multimodal data and map contact regions, gaze, and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadrics, and overlaying the contact points used taking into account the task context. Results show that the features extracted are sufficient to distinguish key patterns that characterise each stage of the manipulation tasks, ranging from simple object displacement, where the same grasp is employed during manipulation (homogeneous manipulation) to more complex interactions such as object reorientation, fine positioning, and sequential in-hand rotation (dexterous manipulation). 
The framework presented retains the relevant data from human demonstrations, concerning both the manipulation and object characteristics, for use in future grasp planning by artificial systems performing autonomous grasping.
12.
《Robotics, IEEE Transactions on》2009,25(4):839-850
13.
This article describes a three-dimensional artificial vision system for robotic applications using an ultrasonic sensor array. The array is placed on the robot gripper so that it is possible to detect the presence of an object, direct the robot tool towards it, and locate the object's position. The system provides visual information about the object's surface by means of superficial scanning and permits reconstruction of the object's shape. It uses an approximation of the ultrasonic radiation and reception beam shape to calculate the first points of contact with the object's surface. In addition, the positions of the array's sensors have been selected to give the sensorial head other useful capabilities, such as edge detection and edge tracking. Furthermore, the article shows how the sensorial head is structured to avoid successive rebounds between the head and the object surface, and to eliminate mechanical vibrations among the sensors.
14.
An integral part of successfully manipulating objects is the sensation of touch or force. Experiments with telerobots (robots controlled at a distance) show that the sensation of force and contact improves the efficiency and accuracy of such tasks. Many believe that the same can be said of tasks in a virtual environment. Unfortunately, it is not possible to grasp a virtual object in the same manner as a real object, because virtual objects are defined in the computer while the user exists in the real world. Thus, there must be some intermediate device that provides the user with the effects of touch, either through the virtual environment itself or through a physical model of the object, which then communicates information to the virtual environment and displays the virtual object to the user. At the University of Tokyo, we are experimenting with surface display, a method that allows users to directly manipulate an object in a virtual environment by touching a physical model of the object's surface as presented by an intermediate device outside the computer. We have created a prototype of such a device that measures the force exerted on the surface. We have also implemented control and calculation methods, and have evaluated both the device and the methods in experiments with users. These experiments have validated the concepts underlying our work, and we are continuing to investigate implementation issues.
15.
Tri Cong Phung, Min Jeong Kim, Hyungpil Moon, Ja Choon Koo, Hyouk Ryeol Choi 《International Journal of Control, Automation and Systems》2012,10(2):383-395
In this paper, we propose a method of exploring the surface geometry of an unknown object by touch. The method is based on the idea that a three-dimensional surface geometry can be reconstructed from two principal curvatures of the object, which are estimated from three concurrent curves. First, the process to minimize the number of contact points is addressed for the approximation of an arbitrary curve, which uses normal vectors at the contact points. Then, an algorithm for reconstructing a three-dimensional local surface from four contact points, two of which can be used to compute a normal curvature, is presented. Lastly, our method is applied to cylindrical, spherical and planar objects in simulation and experiments for validation.
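As a minimal illustration of estimating a curvature from discrete touch samples, the sketch below fits the circle through three contact points on a planar curve. The paper's actual algorithm also exploits surface normals at the contact points, which this toy version omits.

```python
import math

def curvature_from_three_points(p1, p2, p3):
    """Curvature (1/R) of the circle through three contact points --
    a toy estimate of a normal curvature sampled along one curve."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    # Twice the unsigned triangle area, via the cross product.
    area2 = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    if area2 == 0:
        return 0.0          # collinear points: flat, zero curvature
    radius = a * b * c / (2 * area2)
    return 1.0 / radius

# Three contact points on a circle of radius 2 -> curvature 0.5.
pts = [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0)]
k = curvature_from_three_points(*pts)
```

Repeating this along three concurrent curves would yield the curvature estimates from which the local surface patch is reconstructed.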
16.
Schneiter J.L., Sheridan T.B. 《IEEE transactions on pattern analysis and machine intelligence》1990,12(8):775-786
The planning problem associated with tactile exploration for object recognition and localization is addressed. Given that an object has been sensed and is one of a number of modeled objects, and that the data obtained so far are insufficient for recognition and/or localization, the methods developed determine the paths along which a point-contact sensor must be directed in order to obtain further highly diagnostic measurements. Three families of sensor paths are found. The first is the family of paths for which recognition and localization are guaranteed. The second guarantees only that something will be learned. The third represents paths to avoid because nothing new will be learned. The methods are based on a small but powerful set of geometric ideas and are developed for two-dimensional, planar-faced objects. They are conceptually easy to generalize to three-dimensional objects, including objects with through holes.
17.
18.
《Robotics, IEEE Transactions on》2008,24(5):1157-1167
19.
An active tactile search method based on an array tactile sensor, and its simulation   Total citations: 2 (self-citations: 1, citations by others: 1)
Based on a two-phalanx finger model, this paper proposes a new active tactile search strategy and method. The output images of an array tactile sensor are used to continuously adjust the manipulator's posture, so that the feature points of a three-dimensional object are searched out in an orderly sequence, and a three-dimensional graphic of the object is generated from them. A computer-simulation implementation of the method is also discussed.
20.
Multi-oriented touching text character segmentation in graphical documents using dynamic programming
Partha Pratim Roy, Umapada Pal, Josep Lladós, Mathieu Delalandre 《Pattern recognition》2012,45(5):1972-1983
The touching-character segmentation problem becomes complex when touching strings are multi-oriented; moreover, in graphical documents the characters of a single touching string sometimes have different orientations, which makes segmentation even more challenging. In this paper, we present a scheme for segmenting English multi-oriented touching strings into individual characters. When two or more characters touch, they generate a large cavity region in the background portion. Based on convex-hull information, we first use this background information to find initial segmentation points that split a touching string into possible primitives (a primitive consists of a single character or part of a character). Next, the primitives are merged to obtain the optimum segmentation. A dynamic programming algorithm is applied for this purpose, using the total likelihood of characters as the objective function; an SVM classifier supplies the likelihood of each character. To handle multi-oriented touching strings, the features used in the SVM are invariant to character orientation. Experiments were performed on different databases of real and synthetic touching characters, and the results show that the method is efficient in segmenting touching characters of arbitrary orientations and sizes.
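The merge step can be sketched as a standard interval dynamic program: `likelihood(i, j)` stands in for the SVM score of treating primitives i..j-1 as one character, and the recursion picks the cut points that maximise the summed likelihood. The scoring table below is invented for illustration.

```python
from functools import lru_cache

def best_segmentation(n, likelihood):
    """Dynamic-programming merge of `n` primitives: choose cut points
    that maximise the summed per-character likelihood."""
    @lru_cache(maxsize=None)
    def solve(i):
        if i == n:
            return 0.0, []
        best = (float("-inf"), [])
        for j in range(i + 1, n + 1):
            tail_score, tail_cuts = solve(j)
            total = likelihood(i, j) + tail_score
            if total > best[0]:
                best = (total, [(i, j)] + tail_cuts)
        return best
    return solve(0)

# Toy scores: merging primitives 0-1 into one character wins.
table = {(0, 1): 0.4, (1, 2): 0.3, (2, 3): 0.8,
         (0, 2): 0.9, (1, 3): 0.2, (0, 3): 0.5}
score, segments = best_segmentation(3, lambda i, j: table[(i, j)])
```

Here the optimum keeps primitives 0-1 together as one character and primitive 2 as another, for a total likelihood of 1.7.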