20 similar documents found.
1.
Wu Huiyue Wang Yu Liu Jiayi Qiu Jiali Zhang Xiaolong 《Multimedia Tools and Applications》2020,79(1-2):263-288
Multimedia Tools and Applications - Gesture elicitation study, a technique emerging from the field of participatory design, has been extensively applied in emerging interaction and sensing... 相似文献
2.
3.
Tsai Tsung-Han Huang Chih-Chi Zhang Kung-Long 《Multimedia Tools and Applications》2020,79(9-10):5989-6007
Multimedia Tools and Applications - Human-Computer interaction (HCI) with gesture recognition is designed to recognize a number of meaningful human expressions, and has become a valuable and...
4.
5.
《Robotics and Autonomous Systems》2007,55(8):643-657
In human–human communication we can adapt to or learn new gestures and new users using intelligence and contextual information. To achieve natural gesture-based interaction between humans and robots, the system should be adaptable to new users, gestures, and robot behaviors. This paper presents an adaptive visual gesture recognition method for human–robot interaction using a knowledge-based software platform. The system is capable of recognizing users, static gestures comprised of face and hand poses, and dynamic gestures of the face in motion. The system learns new users and poses using a multi-cluster approach, and combines computer vision and knowledge-based approaches in order to adapt to new users, gestures, and robot behaviors. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human–robot interaction. It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). The effectiveness of this method has been demonstrated by an experimental human–robot interaction system using the humanoid robot 'Robovie'.
6.
Rong Wen Wei-Liang Tay Binh P. Nguyen Chin-Boon Chng Chee-Kong Chui 《Computer methods and programs in biomedicine》2014
Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human–robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy.
7.
《Expert systems with applications》2014,41(17):7837-7846
This paper presents a model of maritime safety management and its subareas. Furthermore, the paper links safety management to maritime traffic safety as indicated by accident involvement, incidents reported by Vessel Traffic Service, and the results of Port State Control inspections. Bayesian belief networks are applied as the modeling technique, and the model parameters are based on expert elicitation and learning from historical data. The results from this new application domain of a Bayesian-network-based expert system suggest that, although several of its subareas are functioning properly, the current status of safety management on vessels navigating in Finnish waters has room for improvement; the probability of zero poor safety-management subareas is only 0.13. Furthermore, according to the model, a good IT system for safety management is the strongest safety-management-related signal of an adequate overall safety management level. If no deficiencies have been discovered during a Port State Control inspection, adequate safety management is almost twice as probable as it is without knowledge of the inspection history. The resulting model can be applied to several safety-management-related queries and thus provides support for maritime-safety-related decision making.
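As a rough, back-of-the-envelope illustration of the kind of query reported above (the probability of zero poor subareas), the snippet below assumes the subareas are independent, which the paper's Bayesian network does not; all probability values are invented for illustration.

```python
# Illustrative only: assumes independent subareas, unlike the cited Bayesian
# network, and uses invented probabilities of each subarea being "poor".
import numpy as np

p_poor = np.array([0.25, 0.30, 0.20, 0.35, 0.15])  # hypothetical subarea probabilities
p_zero_poor = np.prod(1.0 - p_poor)                # P(no subarea is poor) under independence
print(round(float(p_zero_poor), 3))                 # ~0.232 with these made-up values
```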
8.
To address the problem that existing telerobotic human-computer interaction systems rely on video information, which makes control performance unstable, a human-computer interaction system based on attitude sensors is designed and implemented. Attitude sensors are used to acquire and resolve the operator's posture information; the posture is determined through an information fusion method based on D-S evidence theory; voice input is used to confirm the sending of action commands; complex actions are combined to simplify operation; and a "dual virtual model" is used to obtain the posture of the remote robot and correct its motion trajectory. Experimental results show that the system is highly stable, highly accurate, and easy to operate.
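A minimal sketch of Dempster's rule of combination, the D-S evidence theory step mentioned above for posture determination; the posture labels and mass values are hypothetical, and the cited system's actual fusion details are not reproduced here.

```python
# Dempster's rule of combination for two mass functions over candidate postures.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset(hypotheses): mass}."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory hypotheses
    # Normalize by the non-conflicting mass (Dempster normalization).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: two attitude sensors supporting the hypothetical postures "bend" and "stand".
m_sensor1 = {frozenset({"bend"}): 0.7, frozenset({"bend", "stand"}): 0.3}
m_sensor2 = {frozenset({"bend"}): 0.6, frozenset({"stand"}): 0.4}
print(combine(m_sensor1, m_sensor2))  # {"bend"}: ~0.83, {"stand"}: ~0.17
```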
9.
Kwang-Hyun Park Sung-Hoon Jeong Christopher Pelczar Z. Zenn. Bien 《Intelligent Service Robotics》2008,1(3):185-193
This paper introduces a piano-playing robot from the perspective of smart-home and assistive robot technology for caring for the affective states of the elderly. We address the current issues in this research area and propose a piano-playing robot as a solution. For affective interaction based on music, we first present a beat gesture recognition method to synchronize the tempo of a robot playing a piano with the desired tempo of the user. To estimate the period of an unstructured beat gesture expressed by any part of the body or an object, we apply an optical flow method and use the trajectories of the center of gravity and the normalized central moments of moving objects in images. In addition, we apply a motion control method by which robotic fingers are trained to follow a set of trajectories. Since the ability to track the trajectories influences the sound a piano generates, we adopt an iterative learning control method to reduce the tracking error.
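A hedged sketch of one way to realise the period estimation described above: dense optical flow isolates moving pixels, the centroid (center of gravity) of that region is tracked per frame, and the dominant lag of its autocorrelation gives the beat period. The Farneback parameters, motion threshold, and use of autocorrelation are assumptions for illustration, not the authors' exact procedure.

```python
# Estimate the period of a beat gesture from the centroid trajectory of
# moving pixels, using dense optical flow and autocorrelation.
import cv2
import numpy as np

def estimate_beat_period(frames, fps, motion_thresh=1.0):
    """frames: list of grayscale images; returns the estimated period in seconds."""
    ys = []  # vertical centroid (center of gravity) of the moving region per frame
    prev = frames[0]
    for cur in frames[1:]:
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        moving = mag > motion_thresh
        if moving.any():
            rows, _ = np.nonzero(moving)
            ys.append(rows.mean())
        else:
            ys.append(ys[-1] if ys else 0.0)  # no motion: repeat last centroid
        prev = cur
    y = np.asarray(ys) - np.mean(ys)
    # The first autocorrelation peak (excluding lag 0) gives the dominant repetition lag.
    ac = np.correlate(y, y, mode="full")[len(y) - 1:]
    lag = int(np.argmax(ac[1:])) + 1
    return lag / fps
```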
10.
11.
In recent years, vision-based gesture adaptation has attracted great attention from many experts in the field of human-robot interaction, and many methods have be...
12.
Sara Bilal Rini Akmeliawati Amir A. Shafie Momoh Jimoh E. Salami 《Artificial Intelligence Review》2013,40(4):495-516
Human hand recognition plays an important role in a wide range of applications, ranging from sign language translators, gesture recognition, augmented reality, surveillance, and medical image processing to various Human-Computer Interaction (HCI) domains. The human hand is a complex articulated object consisting of many connected parts and joints. Therefore, for applications that involve HCI, one can find many challenges in establishing a system with high detection and recognition accuracy for hand posture and/or gesture. A hand posture is defined as a static hand configuration without any movement involved, whereas a hand gesture is a sequence of hand postures connected by continuous motions. During the past decades, many approaches have been presented for hand posture and/or gesture recognition. In this paper, we provide a survey of approaches based on Hidden Markov Models (HMMs) for hand posture and gesture recognition in HCI applications.
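A minimal sketch of the HMM-based recognition scheme this survey covers, assuming the hmmlearn package: one Gaussian HMM is trained per gesture class, and a new feature sequence is assigned to the class with the highest log-likelihood. The feature extraction step and the number of hidden states are assumptions for illustration.

```python
# Per-class Gaussian HMMs for gesture classification by maximum log-likelihood.
import numpy as np
from hmmlearn import hmm  # assumes the hmmlearn package is installed

def train_gesture_models(sequences_by_class, n_states=5):
    """sequences_by_class: {label: [np.ndarray of shape (T_i, n_features), ...]}."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.concatenate(seqs)          # stacked observation sequences
        lengths = [len(s) for s in seqs]  # per-sequence lengths for fitting
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    """Return the label whose HMM gives the highest log-likelihood for the sequence."""
    return max(models, key=lambda lbl: models[lbl].score(sequence))
```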
13.
Ashok N. Srivastava Johann Schumann 《Innovations in Systems and Software Engineering》2013,9(4):219-233
As software and software-intensive systems become increasingly ubiquitous, the impact of failures can be tremendous. In some industries, such as aerospace, medical devices, or automotive, such failures can cost lives or endanger mission success. Software faults can arise from the interaction between the software, the hardware, and the operating environment. Unanticipated environmental changes lead to software anomalies that may have a significant impact on the overall success of the mission. Latent coding errors can trigger faults at any time during system operation, despite the significant effort usually expended in verification and validation (V&V) of the software system. It is becoming increasingly apparent that pre-deployment V&V is not enough to guarantee that a complex software system meets all safety, security, and reliability requirements. Software Health Management (SWHM) is a new field concerned with the development of tools and technologies to enable automated detection, diagnosis, prediction, and mitigation of adverse events due to software anomalies while the system is in operation. The prognostic capability of SWHM to detect and diagnose failures before they happen will yield safer and more dependable systems for the future. This paper addresses the motivation, needs, and requirements of software health management as a new discipline and motivates the need for SWHM in safety-critical applications.
14.
Immersive manipulation of virtual objects through glove-based hand gesture interaction
Immersive visualisation is increasingly being used for comprehensive and rapid analysis of objects in 3D and of object dynamic behaviour in 4D. Challenges are therefore presented to provide natural user interaction that enables effortless virtual object manipulation. Presented in this paper is the development and evaluation of an immersive human–computer interaction system based on stereoscopic viewing and natural hand gestures. The development is based on the integration of a back-projection stereoscopic system for object and hand display, a hybrid inertial and ultrasonic tracking system to provide the absolute positions and orientations of the user's head and hands, and a pair of high degrees-of-freedom data gloves to provide the relative positions and orientations of digit joints and tips on both hands. The evaluation is based on a two-object scene with a virtual cube and a CT (computed tomography) volume created for demonstration of real-time immersive object manipulation. The system is shown to provide a correct user view of objects and hands in 3D with depth, and to enable a user to use a number of simple hand gestures to perform basic object manipulation tasks involving selection, release, translation, rotation and scaling. Also included in the evaluation are some quantitative tests of the system performance in terms of speed and latency.
15.
Accelerometer-based gesture control for a design environment
Juha Kela Panu Korpipää Jani Mäntyjärvi Sanna Kallio Giuseppe Savino Luca Jozzo Sergio Di Marca 《Personal and Ubiquitous Computing》2006,10(5):285-299
Accelerometer-based gesture control is studied as a supplementary or alternative interaction modality. Gesture commands freely trainable by the user can be used for controlling external devices with a handheld wireless sensor unit. Two user studies are presented. The first study concerns finding gestures for controlling a design environment (Smart Design Studio), TV, VCR, and lighting. The results indicate that different people usually prefer different gestures for the same task, and hence it should be possible to personalise them. The second user study concerns evaluating the usefulness of the gesture modality compared to other interaction modalities for controlling a design environment. The other modalities were speech, RFID-based physical tangible objects, a laser-tracked pen, and a PDA stylus. The results suggest that gestures are a natural modality for certain tasks and can augment other modalities. Gesture commands were found to be natural, especially for commands with a spatial association in design environment control.
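One common way to implement user-trainable gesture commands of this kind is dynamic time warping (DTW) against user-recorded accelerometer templates; the following is a generic sketch under that assumption, not the recognizer used in the cited study.

```python
# Template-matching recognizer for user-trained accelerometer gestures using DTW.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two (T, 3) accelerometer sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(templates, sample):
    """templates: {command: [(T, 3) arrays recorded by the user]}; returns best command."""
    best_cmd, best_d = None, np.inf
    for cmd, seqs in templates.items():
        d = min(dtw_distance(sample, s) for s in seqs)
        if d < best_d:
            best_cmd, best_d = cmd, d
    return best_cmd
```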
16.
Diego Q. Leite Julio C. Duarte Luiz P. Neves Jauvane C. de Oliveira Gilson A. Giraldi 《Multimedia Tools and Applications》2017,76(20):20423-20455
This paper presents a real-time framework that combines depth data and infrared laser speckle pattern (ILSP) images, captured from a Kinect device, for static hand gesture recognition to interact with CAVE applications. At the startup of the system, background removal and hand position detection are performed using only the depth map. After that, tracking is started using the hand positions of the previous frames in order to seek the hand centroid of the current one. The obtained point is used as a seed for a region growing algorithm to perform hand segmentation in the depth map. The result is a mask that is used for hand segmentation in the ILSP frame sequence. Next, we apply motion restrictions for gesture spotting in order to mark each image as a 'Gesture' or 'Non-Gesture'. The ILSP counterparts of the frames labeled as 'Gesture' are enhanced by mask subtraction, contrast stretching, median filtering, and histogram equalization. The result is used as the input for feature extraction using the scale-invariant feature transform (SIFT) algorithm, bag-of-visual-words construction, and classification with a multi-class support vector machine (SVM) classifier. Finally, we build a grammar based on the hand gesture classes to convert the classification results into control commands for the CAVE application. The performed tests and comparisons show that the implemented plugin is an efficient solution: we achieve state-of-the-art recognition accuracy as well as efficient object manipulation in a virtual scene visualized in the CAVE.
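A rough sketch of the recognition back end described above (SIFT features, bag-of-visual-words histograms, multi-class SVM), assuming OpenCV and scikit-learn; the vocabulary size and SVM parameters are illustrative, not those of the cited system.

```python
# SIFT descriptors -> visual vocabulary (k-means) -> histogram features -> SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def describe(images):
    """Extract SIFT descriptors for each grayscale image."""
    descs = []
    for img in images:
        _, d = sift.detectAndCompute(img, None)
        descs.append(d if d is not None else np.empty((0, 128), np.float32))
    return descs

def bovw_histograms(descs, codebook):
    """Quantize descriptors against the vocabulary and build normalized histograms."""
    k = codebook.n_clusters
    hists = []
    for d in descs:
        h = np.zeros(k)
        if len(d):
            words = codebook.predict(d.astype(np.float32))
            h = np.bincount(words, minlength=k).astype(float)
            h /= h.sum()
        hists.append(h)
    return np.array(hists)

def train_classifier(images, labels, vocab_size=200):
    """Cluster all training descriptors into a vocabulary, then fit a multi-class SVM."""
    descs = describe(images)
    codebook = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(descs))
    clf = SVC(kernel="rbf", C=10.0)  # SVC handles multi-class one-vs-one by default
    clf.fit(bovw_histograms(descs, codebook), labels)
    return codebook, clf
```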
17.
Pre-collision safety strategies for human-robot interaction
Safe planning and control is essential to bringing human-robot interaction into common experience. This paper presents an integrated human-robot interaction strategy that ensures the safety of the human participant through a coordinated suite of safety strategies that are selected and implemented to anticipate and respond to varying time horizons for potential hazards and varying expected levels of interaction with the user. The proposed planning and control strategies are based on explicit measures of danger during interaction. The level of danger is estimated based on factors influencing the impact force during a human-robot collision, such as the effective robot inertia, the relative velocity and the distance between the robot and the human.
A second key requirement for improving safety is the ability of the robot to perceive its environment, and more specifically, human behavior and reaction to robot movements. This paper also proposes and demonstrates the use of human monitoring information based on vision and physiological sensors to further improve the safety of the human-robot interaction. A methodology for integrating sensor-based information about the user's position and physiological reaction to the robot into medium- and short-term safety strategies is presented. This methodology is verified through a series of experimental test cases where a human and an articulated robot respond to each other based on the human's physical and physiological behavior.
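A hedged sketch of a scalar danger index along the lines described above, combining effective inertia, relative approach velocity, and human-robot distance; the scaling functions and limits are assumptions for illustration, not the exact formulation of the paper.

```python
# Simple normalized danger index from inertia, approach velocity, and distance.
import numpy as np

def danger_index(inertia, rel_velocity, distance,
                 inertia_max=10.0, v_max=2.0, d_min=0.2, d_max=2.0):
    """Return a value in [0, 1]; higher means a more dangerous configuration."""
    f_inertia = np.clip(inertia / inertia_max, 0.0, 1.0)
    # Only motion toward the human (positive approach velocity) raises danger.
    f_vel = np.clip(max(rel_velocity, 0.0) / v_max, 0.0, 1.0)
    # Danger grows as the distance shrinks toward the minimum safe distance.
    f_dist = np.clip((d_max - distance) / (d_max - d_min), 0.0, 1.0)
    return f_inertia * f_vel * f_dist

# Example: a heavy, fast-approaching link 0.5 m from the user yields a high index.
print(danger_index(inertia=8.0, rel_velocity=1.5, distance=0.5))
```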
18.
19.
20.
Touchless interaction has received considerable attention in recent years, with the benefit of removing the barriers of physical contact. Several approaches are available to achieve mid-air interactions. However, most of these techniques cause discomfort when the interaction method is not direct manipulation. In this paper, gestures based on unimanual and bimanual interactions with different tools for exploring a CT volume dataset are designed to perform tasks similar to those in realistic applications. A focus+context approach based on GPU volume ray casting with a trapezoid-shaped transfer function is used for visualization, and a level-of-detail technique is adopted to accelerate interactive rendering. Experiments comparing the effectiveness and intuitiveness of our interaction approach with others show that ours performs better, with shorter completion times. Moreover, the bimanual interaction offers additional advantages and saves time when performing continuous exploration tasks.
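An illustrative sketch of a trapezoid-shaped opacity transfer function of the kind mentioned above: intensities inside [b, c] are fully opaque, with linear ramps on [a, b] and [c, d]; the breakpoints are assumed values, not those used in the cited system.

```python
# Trapezoid opacity transfer function mapping scalar CT intensities to opacity.
import numpy as np

def trapezoid_tf(intensity, a=100.0, b=200.0, c=300.0, d=400.0, max_opacity=1.0):
    """Opacity is 0 outside [a, d], max_opacity inside [b, c], linear in between."""
    x = np.asarray(intensity, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)   # ramp up on [a, b]
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)  # ramp down on [c, d]
    return max_opacity * np.minimum(rising, falling)

# Example: opacity for a few CT intensity values.
print(trapezoid_tf([50, 150, 250, 350, 450]))  # -> [0., 0.5, 1., 0.5, 0.]
```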