20 similar documents retrieved (search time: 46 ms)
1.
International Journal of Human-Computer Studies, 2014, 72(12): 783-795
This paper investigates how social distance can serve as a lens through which we can understand human–robot relationships and develop guidelines for robot design. In two studies, we examine the effects of distance based on physical proximity (proxemic distance), organizational status (power distance), and task structure (task distance) on people's experiences with and perceptions of a humanlike robot. In Study 1, participants (n=32) played a card-matching game with a humanlike robot. We manipulated the power distance (supervisor vs. subordinate) and proxemic distance (close vs. distant) between participants and the robot. Participants who interacted with the supervisor robot reported a more positive user experience when the robot was close than when it was distant, while interactions with the subordinate robot resulted in a more positive experience when the robot was distant than when it was close. In Study 2, participants (n=32) played the game under two different task distances (cooperation vs. competition) and proxemic distances (close vs. distant). Participants who cooperated with the robot reported a more positive experience when the robot was distant than when it was close. In contrast, competing with the robot resulted in a more positive experience when it was close than when it was distant. The findings from the two studies highlight the importance of consistency between the status and proxemic behaviors of the robot, and of task interdependency in fostering cooperation between the robot and its users. This work also demonstrates how social distance may guide efforts toward a better understanding of human–robot interaction and the development of effective design guidelines.
2.
3.
Robotics will be a dominant area in society throughout future generations. Its presence is currently increasing in most daily-life settings, with devices and mechanisms that facilitate the accomplishment of diverse tasks, as well as in work scenarios, where machines perform more and more jobs. This growing presence of autonomous robotic systems in society is due to their efficiency and safety relative to human capacity, owed mainly to the enormous precision of their sensor and actuator systems. Among these, vision sensors are of the utmost importance. Humans and many animals naturally enjoy powerful perception systems; in robotics, achieving comparable perception remains a constant line of research. In addition to a high capacity for reasoning and decision-making, these robots incorporate important advances in their perceptual systems, allowing them to interact effectively in the working environments of this new industrial revolution. Drawing on the most basic interaction between humans, looking at the face, this paper presents an innovative system developed for an autonomous, DIY robot. The system is composed of three modules. First, the face detection component detects human faces in the current image. Second, the scene representation algorithm offers a wider field of view than that of the single camera used, mounted on a servo-pan unit. Third, the active memory component was designed and implemented according to two competing dynamics: life and salience. The algorithm intelligently moves the servo-pan unit with the aim of finding new faces, following existing ones, and forgetting those that no longer appear in the scene. The system was developed and validated on a low-cost platform based on a Raspberry Pi 3 board.
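As a rough illustration of the active-memory idea above, the sketch below (Python) keeps a life value per remembered face that decays when the face is not re-detected, and a salience value that grows, making unseen faces worth revisiting with the servo-pan unit. The names, decay rates, and update rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the "active memory" dynamics: each remembered face
# carries a life value (decays while the face is unseen) and a salience value
# (drives where the servo-pan unit looks next). All constants are assumptions.

LIFE_DECAY = 0.95        # multiplicative decay per update while a face is unseen
LIFE_REFRESH = 1.0       # life restored whenever a face is re-detected
FORGET_THRESHOLD = 0.1   # below this life value, the face is forgotten

class FaceMemory:
    def __init__(self):
        self.faces = {}  # face_id -> {"pan_angle", "life", "salience"}

    def update(self, detections):
        """detections: dict of face_id -> pan angle for faces seen this frame."""
        for face_id, angle in detections.items():
            entry = self.faces.setdefault(
                face_id, {"pan_angle": angle, "life": LIFE_REFRESH, "salience": 0.0})
            entry["pan_angle"] = angle
            entry["life"] = LIFE_REFRESH
            entry["salience"] = 0.0           # just seen: no need to revisit yet
        for face_id in list(self.faces):
            if face_id not in detections:
                entry = self.faces[face_id]
                entry["life"] *= LIFE_DECAY   # fade faces that are out of view...
                entry["salience"] += 0.05     # ...but make them worth revisiting
                if entry["life"] < FORGET_THRESHOLD:
                    del self.faces[face_id]   # forget faces that left the scene

    def next_pan_target(self):
        """Aim the servo at the most salient remembered face, if any."""
        if not self.faces:
            return None
        return max(self.faces.values(), key=lambda e: e["salience"])["pan_angle"]
```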
4.
5.
Although the concept of industrial cobots dates back to 1999, most present-day hybrid human–machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human–robot manufacturing cell for homokinetic joint assembly. The robot alternates between active and passive behaviours during assembly, to lighten the burden on the operator in the former case and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (rather than torque-controlled) robots, which are common in industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Moreover, a complete risk analysis indicates that the proposed setup is compatible with safety standards and could be certified.
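The abstract does not give the control law, but alternating active/passive behaviour on a position-controlled robot is commonly realized with an admittance scheme; the sketch below is a minimal one-dimensional illustration under that assumption, with invented gains and an invented mode argument, not the authors' certified setup.

```python
# One-dimensional admittance sketch (assumed, not the paper's controller):
# in passive mode the measured human force drives a virtual mass-damper whose
# output position is sent to the position-controlled robot; in active mode
# the robot steps toward its own assembly reference.

M, D = 2.0, 20.0       # virtual mass (kg) and damping (N*s/m), invented values
DT = 0.01              # control period (s)
V_ACTIVE = 0.1         # active-mode approach speed (m/s), invented value

def admittance_step(x, v, f_human):
    """Integrate M*a + D*v = f_human once; returns new position and velocity."""
    a = (f_human - D * v) / M
    v = v + a * DT
    return x + v * DT, v

def control_step(mode, x, v, f_human, x_ref):
    """Next commanded position/velocity for one position-controlled axis."""
    if mode == "passive":                      # comply with the operator's pushes
        return admittance_step(x, v, f_human)
    step = max(-V_ACTIVE * DT, min(V_ACTIVE * DT, x_ref - x))
    return x + step, step / DT                 # "active": move toward the reference
```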
6.
Srđan Ž. Savić, Milan Gnjatović, Darko Stefanović, Bojan Lalić, Nemanja Maček, Intelligent Service Robotics, 2020, 13(1): 99-111
This paper introduces an approach to automatic domain modeling for human–robot interaction. The proposed approach is symbolic and intended for semantically...
7.
Microsystem Technologies - With the rapid development of artificial intelligence (AI), human–robot interaction (HRI) has attracted considerable attention in recent years. The...
8.
Gabriele Randelli, Taigo Maria Bonanni, Luca Iocchi, Daniele Nardi, Intelligent Service Robotics, 2013, 6(1): 19-31
The limited understanding of the surrounding environment still restricts the capabilities of robotic systems in real-world applications. Specifically, the acquisition of knowledge about the environment typically relies only on perception, which requires intensive ad hoc training and is not sufficiently reliable in a general setting. In this paper, we aim at integrating new acquisition devices, such as tangible user interfaces, speech technologies, and vision-based systems, with established AI methodologies, to present a novel and effective knowledge acquisition approach. A natural interaction paradigm is presented, in which humans move within the environment with the robot and easily acquire information by selecting relevant spots, objects, or other landmarks. The synergy between novel interaction technologies and semantic knowledge leverages humans' cognitive skills to support robots in acquiring and grounding knowledge about the environment; this richer representation can be exploited in the realization of autonomous robot skills for task accomplishment.
9.
10.
For a robot to cohabit with people, it should be able to learn people's nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner, allowing the number and types of gestures to be left undefined prior to learning. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network and the hidden Markov model. We have added an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing as a result of polysemy, i.e., being assigned more than one meaning. For example, the sentence “Keep on going left slowly” carries three meanings: “keep on” (1), “going left” (2), and “slowly” (3). We experimentally tested the clustering performance of the proposed method against data obtained by measuring gestures with a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches. In addition, we found that the interactive learning function improves the learning performance of HB-SOINN.
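As a loose stand-in for the incremental, unsupervised clustering idea (not HB-SOINN itself, which couples a self-organizing incremental neural network with HMMs), the sketch below grows gesture clusters on the fly: a feature vector joins its nearest node if close enough, otherwise it founds a new node, so the number of gestures need not be fixed in advance. The threshold and learning rate are assumptions.

```python
import numpy as np

# Simplified incremental clusterer in the spirit of SOINN-style learning
# (illustrative only): clusters are created on demand, never predefined.

SIMILARITY_THRESHOLD = 1.5  # maximum distance to join an existing node
LEARNING_RATE = 0.1         # how far a node moves toward a matched sample

class IncrementalClusterer:
    def __init__(self):
        self.nodes = []     # list of [centroid, sample_count]

    def observe(self, x):
        """Assign one gesture feature vector; returns its cluster index."""
        x = np.asarray(x, dtype=float)
        if self.nodes:
            dists = [np.linalg.norm(x - c) for c, _ in self.nodes]
            i = int(np.argmin(dists))
            if dists[i] < SIMILARITY_THRESHOLD:
                c, n = self.nodes[i]
                self.nodes[i] = [c + LEARNING_RATE * (x - c), n + 1]
                return i                     # joined an existing gesture cluster
        self.nodes.append([x, 1])
        return len(self.nodes) - 1           # founded a new gesture cluster
```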
11.
New forms of artificial intelligence on the one hand and the ubiquitous networking of “everything with everything” on the other characterize the fourth industrial revolution. The result is a changed understanding of human–machine interaction and new models of production, in which humans and machines, together with virtual agents, form hybrid teams. The empirical study “Socializing with robots” aims to gain insight into the conditions of development and the processes of hybrid human–machine teams. In the experiment, human–robot actions and interactions were closely observed in a virtual environment. The robot partners differed in shape and behavior (reliable or faulty). Participants were instructed to achieve an objective that could only be reached through close teamwork. This paper unites aspects of the core disciplines of social robotics and psychology that contribute to anthropomorphization with the empirical insights of the experiment. It focuses on the psychological effects (e.g., reactions of different personality types) on anthropomorphization and mechanization, taking the inter- and transdisciplinary field of social robotics as its starting point.
12.
In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller that enables a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism and, in contrast with classical hybrid vision–force–position control, enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously for human–robot collaboration.
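A minimal sketch of the weighted-combination idea, under the assumption that each sensor task proposes its own velocity command and the controller blends them with smoothly varying weights (here a sigmoid ramp triggered at contact) rather than hard-switching between modes. The specific tasks, weights, and ramp shape are illustrative, not the paper's task formalism.

```python
import numpy as np

# Assumed blending scheme: activating the force task at contact never
# produces a command jump, because its weight ramps in smoothly.

def smooth_weight(t, t_switch, ramp=0.5):
    """Sigmoid ramp from 0 to 1 around t_switch (seconds)."""
    return 1.0 / (1.0 + np.exp(-(t - t_switch) / ramp))

def blended_command(t, u_pose, u_vision, u_force, t_contact):
    """Weighted combination of per-task velocity commands (numpy vectors)."""
    w_force = smooth_weight(t, t_contact)     # force task fades in at contact
    w_vision = 1.0 - w_force                  # vision task fades out
    w_pose = 0.2                              # pose task kept as a weak prior
    u = w_pose * u_pose + w_vision * u_vision + w_force * u_force
    return u / (w_pose + w_vision + w_force)  # normalize the blend
```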
13.
14.
Yuanchao Li, Carlos Toshinori Ishi, Koji Inoue, Shizuka Nakamura, Tatsuya Kawahara, Advanced Robotics, 2013, 27(20): 1030-1041
Human–human interaction consists of various nonverbal behaviors that are often emotion-related. To establish rapport, it is essential that the listener respond with a reactive emotion that makes sense given the speaker's emotional state. However, human–robot interactions generally fail in this regard because most spoken dialogue systems play only a question-answer role. Aiming for natural conversation, we examine an emotion processing module that consists of a user emotion recognition function and a reactive emotion expression function for a spoken dialogue system, to improve human–robot interaction. For the recognition function, we propose a method that combines valence from prosody and sentiment from text by decision-level fusion, which considerably improves performance. Moreover, this method reduces fatal recognition errors, thereby improving the user experience. For the expression function, the system's emotion is divided into an emotion category and an emotion level, which are predicted using the parameters estimated by the recognition function, on the basis of distributions inferred from human–human dialogue data. As a result, the emotion processing module can recognize the user's emotion from his/her speech and express a matching reactive emotion. An evaluation with ten participants demonstrated that a system enhanced by this module is effective for conducting natural conversation.
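A minimal sketch of decision-level fusion as described: each recognizer (prosodic valence, text sentiment) produces its own soft scores, and the module combines them per class before picking a label. The fusion weight and the two-class setup are assumptions; the paper's exact rule may differ.

```python
# Assumed decision-level fusion of per-class confidences from two recognizers.

PROSODY_WEIGHT = 0.6  # trust prosody slightly more than text (assumption)

def fuse_emotion(prosody_scores, text_scores):
    """Each argument: dict emotion -> confidence in [0, 1]. Returns fused label."""
    fused = {}
    for emotion in prosody_scores:
        fused[emotion] = (PROSODY_WEIGHT * prosody_scores[emotion]
                          + (1.0 - PROSODY_WEIGHT) * text_scores.get(emotion, 0.0))
    return max(fused, key=fused.get)

# Example: prosody hears negative valence while the words read mildly positive.
label = fuse_emotion({"positive": 0.2, "negative": 0.7},
                     {"positive": 0.55, "negative": 0.3})   # -> "negative"
```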
15.
Paulo Leica, Flavio Roberti, Matías Monllor, Juan M. Toibero, Ricardo Carelli, Intelligent Service Robotics, 2017, 10(1): 31-40
This paper presents a control strategy for human–robot interaction with physical contact, recognizing the human's intention to control the movement of a non-holonomic mobile robot. The human intention is modeled by a mechanical impedance, sensing the intensity and direction of the human's desired force to guide the robot through unstructured environments. Robot dynamics are included to improve the interaction performance. Stability of the proposed control system is proven using Lyapunov theory. Real experiments of human–robot interaction show the performance of the proposed controllers.
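As a hedged illustration of the force-guidance idea, the sketch below maps the sensed force's intensity to forward speed and its direction to turn rate for a unicycle-type robot, in the spirit of a damping (impedance-like) law. Gains and saturation limits are invented; the paper's controller, which also accounts for robot dynamics and carries a Lyapunov proof, is richer.

```python
import math

# Illustrative mapping from sensed human force to unicycle velocity commands.

LINEAR_GAIN = 0.02    # m/s per newton (assumption)
ANGULAR_GAIN = 1.5    # rad/s per radian of heading error (assumption)
V_MAX, W_MAX = 0.8, 1.0

def guidance_command(fx, fy):
    """fx, fy: human force in the robot frame. Returns (v, w) for the robot."""
    intensity = math.hypot(fx, fy)
    direction = math.atan2(fy, fx)                      # desired heading change
    v = LINEAR_GAIN * intensity * math.cos(direction)   # push forward or back
    w = ANGULAR_GAIN * direction                        # turn toward the push
    v = max(-V_MAX, min(V_MAX, v))
    w = max(-W_MAX, min(W_MAX, w))
    return v, w
```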
16.
Towards an intelligent system for generating an adapted verbal and nonverbal combined behavior in human–robot interaction
In human–robot interaction scenarios, an intelligent robot should be able to synthesize behavior adapted to the human's profile (i.e., personality). Recent research has discussed the effect of personality traits on human verbal and nonverbal behaviors. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of human speech. This research maps human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion–introversion personality dimension. We explore human–robot personality matching and the similarity-attraction principle, in addition to the different effects on interaction of the adapted combined robot behavior, expressed through speech and gestures, and the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.
17.
International Journal of Industrial Ergonomics, 1999, 23(1-2): 83-94
This work examines a robot drilling system at the Aerospace Industrial Development Corporation of Taiwan. Work procedures, human errors, and robot failures are assessed. Based on those assessments, countermeasures and feasible recommendations are proposed to enhance the hybrid system's safety and performance. Some of the recommendations are applied to the system studied herein, and the implementation results are presented.
Relevance to industry: Industrial robot safety and performance can be advanced through the collaborative efforts of ergonomists and practitioners investigating the work environment and the interaction between humans and robots. Based on their observations, sound recommendations regarding human-factors issues can be made. This survey therefore provides a valuable reference for human–robot system designers and practitioners.
18.
In this article, a learning framework that enables robotic arms to replicate new skills from human demonstration is proposed. The framework uses online human motion data acquired with wearable devices as an interactive interface for conveying the intended motion to the robot in an efficient and user-friendly way. This approach lets human tutors control all joints of the robotic manipulator in real time and achieve complex manipulation. The manipulator is controlled remotely with our low-cost wearable devices, allowing easy calibration and continuous motion mapping. We believe this approach can improve the skill learning, adaptability, and sensitivity of the proposed human-robot interaction for flexible task execution, thereby enabling skill transfer and repeatability without complex coding skills.
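A minimal sketch of the calibration-plus-continuous-mapping step this suggests: record the tutor's neutral pose once, then stream offset-corrected, clamped joint targets to the arm each cycle. The per-joint linear mapping, joint limits, and names are assumptions, not the article's interface.

```python
# Assumed per-joint mapping from wearable readings to robot joint targets.

JOINT_LIMITS = [(-2.9, 2.9)] * 6   # assumed symmetric limits for a 6-DOF arm

def calibrate(neutral_human_angles):
    """Capture the tutor's neutral pose; these offsets align the two zeros."""
    return list(neutral_human_angles)

def map_motion(human_angles, offsets, scale=1.0):
    """Map wearable joint readings to clamped robot joint targets."""
    targets = []
    for angle, offset, (lo, hi) in zip(human_angles, offsets, JOINT_LIMITS):
        q = scale * (angle - offset)        # continuous, per-joint mapping
        targets.append(min(hi, max(lo, q))) # respect the robot's joint limits
    return targets
```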
19.
Jung-Ho Ahn, Cheolmin Choi, Sooyeong Kwak, Kilcheon Kim, Hyeran Byun, Pattern Analysis & Applications, 2009, 12(2): 167-177
In this study, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with a pan-tilt stereo camera, so that it can assist gesture and gait recognition in the field of human–robot interaction (HRI). The proposed system consists of three modules: detection, tracking, and silhouette extraction. These modules are robust to camera movements, and they work interactively in near real time. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean shift-based tracking method in which the tracked objects are characterized as disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then effectively incorporated into the graph-cut framework for fine segmentation. The proposed system was evaluated with respect to ground-truth data and shown to detect and track multiple people very well and to produce high-quality silhouettes.
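A small sketch of the disparity-weighted color histogram used as the tracking model, under the assumption that pixels whose stereo disparity is near the tracked person's disparity get higher weight, which keeps background colors out of the mean-shift target model; the bin count and Gaussian weighting are illustrative.

```python
import numpy as np

# Assumed construction of the disparity-weighted color histogram model.

def disparity_weighted_histogram(hue, disparity, target_disp, bins=16, sigma=2.0):
    """hue, disparity: 2-D arrays over the bounding box (OpenCV hue in [0, 180))."""
    weights = np.exp(-0.5 * ((disparity - target_disp) / sigma) ** 2)
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180), weights=weights)
    return hist / (hist.sum() + 1e-9)   # normalized model for mean-shift matching
```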
20.
DoHyung Kim, Jaeyeon Lee, Ho-Sub Yoon, Jaehong Kim, Joochan Sohn, The Journal of Supercomputing, 2013, 65(1): 336-352
This paper proposes a vision-based human arm gesture recognition method for human–robot interaction, particularly at a long distance, where speech information is not available. We define four meaningful arm gestures for long-range interaction. The proposed method is capable of recognizing the defined gestures with only 320×240-pixel low-resolution input images captured from a single camera at a long distance, approximately five meters from the camera. In addition, the system differentiates the target gestures from the users' normal daily-life actions without any constraints. For human detection at a long distance, the proposed approach combines results from mean-shift color tracking, short- and long-range face detection, and omega-shape detection. The system then detects arm blocks using a background subtraction method with a background-updating module and recognizes the target gestures based on information about the region, periodic motion, and shape of the arm blocks. In experiments on a large realistic database, a recognition rate of 97.235% was achieved, which is a sufficiently practical level for various pervasive and ubiquitous applications based on human gestures.
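A hedged sketch of the arm-block detection step: a background-subtraction model with continuous background updating flags moving regions (candidate arm blocks) in the low-resolution frames. OpenCV's MOG2 subtractor stands in for the paper's own updating scheme; the learning rate and blob-size threshold are assumptions.

```python
import cv2

# MOG2 used as a stand-in for the paper's background-updating module.

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def arm_blocks(frame_bgr, min_area=150):
    """Return bounding boxes of moving blobs large enough to be arm blocks."""
    mask = subtractor.apply(frame_bgr, learningRate=0.01)  # also updates background
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress pixel noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```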