Similar documents
A total of 20 similar documents were found (search time: 15 ms).
1.
This paper introduces a new approach to controlling a robot manipulator in a way that is safe for humans in the robot's workspace. Conceptually, the robot is viewed as a tool with limited autonomy. The limited perception capabilities of automatic systems prohibit the construction of fail-safe robots with the capabilities of people. Instead, the goal of our control paradigm is to make interaction with a robot manipulator safe by making the robot's actions predictable and understandable to the human operator. At the same time, the forces the robot applies with any part of its body to its environment have to be controllable and limited. Experimental results are presented for a human-friendly robot controller under development for a Barrett Whole Arm Manipulator robot.

2.
Advanced Robotics, 2013, 27(11-12): 1473-1491
Considering the interaction between people and machines under safety constraints without resorting to difficult and complex control strategies, this paper proposes a new actuator design, the adaptive coupled elastic actuator (ACEA), whose characteristics adapt to the applied input and output forces. This would provide future robotic systems with an intrinsic compromise between performance and safety in unstructured environments (i.e., exhibiting the desired lower or higher intrinsic output impedance depending on the operating situation). After introducing the concept and design, the paper presents modeling, control and analysis of the ACEA system, both to provide useful information for investigating the performance and basic characteristics of the designed system and to inform the design of a more advanced controller in the near future. Finally, experimental results demonstrate the desired properties of the proposed ACEA system.

3.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced into the hypothesis testing process: the decision space is partitioned into distinct correct, uncertain and incorrect regions. The first layer of decisions is made by each sensor independently based on a set of optimal decision rules. The fusion process treats the fusion center as an additional virtual sensor that makes its decision based on the decisions reached by the sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function, so the performance of the system, as well as of individual sensors, can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
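As a rough illustration of the kind of decision rule described in this abstract (a minimal sketch, not the authors' formulation), the snippet below implements a three-way rule for a single sensor: a hypothesis is accepted only when its posterior probability exceeds a threshold; otherwise the decision falls into the uncertain region and is deferred to the fusion layer. The threshold value and the function name are illustrative assumptions.

```python
import numpy as np

def three_way_decision(likelihoods, priors, accept_threshold=0.8):
    """Toy single-sensor decision rule with an explicit 'uncertain' outcome.

    likelihoods: array of p(observation | H_i) for each hypothesis H_i
    priors: prior probabilities p(H_i)
    Returns (index of accepted hypothesis or None, posterior probabilities).
    """
    posteriors = likelihoods * priors
    posteriors /= posteriors.sum()          # Bayes' rule (normalized)
    best = int(np.argmax(posteriors))
    # Accept only if the winning hypothesis is sufficiently probable;
    # otherwise declare the decision uncertain and defer to fusion.
    if posteriors[best] >= accept_threshold:
        return best, posteriors
    return None, posteriors

# Example: three hypotheses, one noisy observation
likelihoods = np.array([0.30, 0.55, 0.15])
priors = np.array([1 / 3, 1 / 3, 1 / 3])
decision, post = three_way_decision(likelihoods, priors)
print(decision, post)  # None here: no hypothesis reaches the 0.8 threshold
```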

4.
Fictional robots in literature and film have shaped our cultural image of robots. This paper studies what these images can contribute to our relationship with the real robots that are now entering our everyday lives. A review of science fiction literature and films concludes that the predominant theme of fictional robots – mostly androids and replicants – has been human identity, not robotics itself. Non-humanoid “just-robots” are presented as unproblematic sidekicks. It is argued that this focus has, as a consequence, a tendency toward humanizing even non-humanoid robots in their presentation to the public. This tendency leads to (a) breakdowns where the technology contradicts the humanoid narrative, and (b) a lack of productive narratives about emerging, more complex human–robot relationships.

5.
As a young and emerging field in social human–robot interaction (HRI), research on semantic-free utterances (SFUs) has been receiving attention over the last decade. SFUs are an auditory interaction means for machines that allows the expression of emotion and intent, composed of vocalizations and sounds without semantic content or language dependence. Currently, SFUs are most commonly encountered in animated movies (e.g., R2-D2, WALL-E, Despicable Me), cartoons (e.g., “Teletubbies,” “Morph,” “La Linea”) and computer games (e.g., The Sims), and they hold significant potential for applications in HRI. SFUs are categorized under four general types: Gibberish Speech (GS), Non-Linguistic Utterances (NLUs), Musical Utterances (MU), and Paralinguistic Utterances (PU). By introducing the concept of SFUs and bringing together multiple sets of studies in social HRI that have never been analyzed jointly before, this article addresses the need for a comprehensive review of the existing literature on SFUs. It outlines the current grand challenges and open questions, and provides guidelines for future researchers considering utilizing SFUs in social HRI.

6.
One of the known challenges in Children–Robot Interaction (cHRI) is sustaining children's engagement during long-term interactions with robots. Researchers have hypothesized that robots that can adapt to children's affective states and can also learn from the environment can sustain engagement during cHRI. Recently, researchers have conducted a range of studies in which robots portray different social capabilities and have shown that this positively influences children's engagement. However, despite an immense body of research on the implementation of different adaptive social robots, a pivotal question remains unanswered: which adaptations portrayed by a robot can maintain long-term social engagement during cHRI? In other words, what are the appropriate and effective adaptations portrayed by a robot that will sustain social engagement over an extended number of interactions? In this article, we report on a study conducted with three groups of children who played a snakes and ladders game with the NAO robot to address this question. The NAO performed (1) game-based adaptations, (2) emotion-based adaptations, and (3) memory-based adaptations. Our results show that emotion-based adaptations were the most effective, followed by memory-based adaptations; game-based adaptations did not sustain long-term social engagement.

7.
Advanced Robotics, 2013, 27(1-2): 47-67
Depending on the emotion in speech, the meaning of the speech or the intention of the speaker differs. Therefore, speech emotion recognition, as well as automatic speech recognition, is necessary for precise communication between humans and robots in human–robot interaction. In this paper, a novel feature extraction method is proposed for speech emotion recognition based on the separation of phoneme classes. In feature extraction, the signal variation caused by different sentences usually overrides the emotion variation, which lowers the performance of emotion recognition. However, as the proposed method extracts features from parts of speech that correspond to limited ranges of the spectral center of gravity (CoG) and formant frequencies, the effects of phoneme variation on the features are reduced. Based on the range of the CoG, obstruent sounds are discriminated from sonorant sounds. Moreover, the sonorant sounds are categorized into four classes by the resonance characteristics revealed by the formant frequencies. The results show that, using corpora from 30 different speakers, the proposed method improves emotion recognition accuracy compared with other methods at the 99% significance level. Furthermore, the proposed method was applied to extract several features, including prosodic and phonetic features, and was implemented on 'Mung' robots as an emotion recognizer for users.
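To make the idea of gating speech frames by the spectral center of gravity concrete, here is a minimal sketch (not the authors' implementation): it computes the spectral centroid of short frames and keeps only frames whose centroid falls inside a chosen band, as a crude stand-in for separating sonorant-like from obstruent-like segments. The frame length, hop size and band limits are illustrative assumptions.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Center of gravity (CoG) of a frame's magnitude spectrum, in Hz."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

def frames_in_cog_band(signal, sr, frame_len=400, hop=160, band=(200.0, 3000.0)):
    """Return frames whose spectral CoG lies inside `band` (illustrative values)."""
    selected = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        if band[0] <= spectral_centroid(frame, sr) <= band[1]:
            selected.append(frame)   # crude proxy for "sonorant-like" frames
    return np.array(selected)

# Example with a synthetic 300 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 300 * t)
print(frames_in_cog_band(tone, sr).shape)
```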

8.
9.
In this paper a new intelligent robot control scheme is presented that enables cooperative work between humans and robots through direct contact interaction in a partially known environment. Because of its high flexibility and adaptability, human–robot cooperation is expected to have a wide range of applications in uncertain environments, not only in future construction and manufacturing industries but also in the service sector. A multi-agent control architecture provides an appropriate framework for the flexibility of the human–robot team. Robots are considered intelligent autonomous assistants of humans that can interact with them on both a symbolic and a physical level. This interaction is achieved through the exchange of information between humans and robots, the interpretation of the transmitted information, the coordination of activities, and the cooperation between independent system components. Equipped with sensing modalities for perceiving the environment, the robot system KAMRO (Karlsruhe Autonomous Mobile Robot) is introduced to demonstrate the principles of cooperation between humans and robot agents. Experiments were conducted to demonstrate the effectiveness of our concept.

10.
Advanced Robotics, 2013, 27(9-10): 1271-1294
This study develops a method to compensate for the communication time delay in tactile transmission systems. When tactile information is transmitted from remote sites, the communication time delay degrades the validity of the feedback; however, time delay compensation methods for tactile transmission have yet to be proposed. For visual or force feedback systems, local models of remote environments have been adopted to compensate for the communication delay: the local models cancel the time delay perceived in sensory feedback signals by synchronizing them with the user's operating movements. The objectives of this study are to extend the idea of the local model to tactile feedback and to develop a system that delivers the tactile roughness of textures from remote environments to its users. The local model for tactile roughness is designed to reproduce the characteristic cutaneous deformations, including vibratory frequencies and amplitudes, similar to those that occur when a human finger scans rough textures. Physical properties in the local model are updated in real time by a tactile sensor installed on the slave-side robot. Experiments to deliver the perceived roughness of textures were performed using the developed system, and the results showed that it can deliver the perceived roughness of textures. When a communication time delay was simulated, it was confirmed that the developed system eliminated the time delay perceived by the operators. This study concludes that the developed local model is effective for remote tactile transmission.
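The sketch below illustrates the general idea of a local model for roughness display (an illustrative simplification, not the paper's system): the locally rendered vibration is driven by the operator's current scanning velocity together with roughness parameters (spatial period, amplitude) that arrive from the remote tactile sensor with some delay, so the felt signal stays synchronized with the operator's own motion. All names and numeric values are assumptions.

```python
import math

class LocalRoughnessModel:
    """Toy local model: render a texture vibration from delayed remote parameters."""

    def __init__(self, spatial_period_m=0.002, amplitude=1.0):
        self.spatial_period_m = spatial_period_m  # texture wavelength (assumed)
        self.amplitude = amplitude                # vibration amplitude (assumed)
        self.position_m = 0.0                     # distance scanned on the texture

    def update_parameters(self, spatial_period_m, amplitude):
        """Called whenever (possibly delayed) sensor data arrives from the remote side."""
        self.spatial_period_m = spatial_period_m
        self.amplitude = amplitude

    def render(self, finger_velocity_mps, dt):
        """Advance by the operator's local motion and return one vibration sample.

        The temporal frequency equals velocity / spatial period, so the vibration
        follows the operator's movement immediately, regardless of the link delay.
        """
        self.position_m += finger_velocity_mps * dt
        phase = 2.0 * math.pi * self.position_m / self.spatial_period_m
        return self.amplitude * math.sin(phase)

# Example: a 50 mm/s scan rendered at 1 kHz
model = LocalRoughnessModel()
samples = [model.render(finger_velocity_mps=0.05, dt=0.001) for _ in range(5)]
print(samples)
```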

11.
Guoray Cai, GeoInformatica, 2007, 11(2): 217-237
Human interactions with geographical information are contextualized by problem-solving activities, which endow geospatial data and processing with meaning. However, existing spatial data models have not taken this aspect of semantics into account. This paper extends spatial data semantics to include not only contents and schemas, but also the contexts of their use. We specify such a semantic model in terms of three related components: activity-centric context representation, a contextualized ontology space, and context-mediated semantic exchange. Contextualization of spatial data semantics allows the same underlying data to take multiple semantic forms and allows spatial concepts to be disambiguated based on localized contexts. We demonstrate how such a semantic model supports the contextualized interpretation of vague spatial concepts during human–GIS interactions. We employ conversational dialogue as the mechanism for performing collaborative diagnosis of context and for coordinating the sharing of meaning across agents and data sources.

12.
Robots come into physical contact with humans in both experimental and operational settings. Many potential factors motivate the detection of human contact, ranging from safe robot operation around humans to robot behaviors that depend on human guidance. This article presents a review of current research within the field of tactile human–robot interaction (Tactile HRI), where physical contact from a human is detected by a robot during the execution or development of robot behaviors. Approaches are presented from two viewpoints: the types of physical interactions that occur between the human and the robot, and the types of sensors used to detect these interactions. We contribute a structure for categorizing Tactile HRI research within each viewpoint. Tactile sensing techniques are grouped into three categories according to what covers the sensors: (i) a hard shell, (ii) a flexible substrate, or (iii) no covering. Three categories of physical HRI are likewise identified, consisting of contact that (i) interferes with robot behavior execution, (ii) contributes to behavior execution, or (iii) contributes to behavior development. We populate each category with the current literature, identify the state of the art within each category, and point out promising areas for future research.

13.
14.
Citizen science broadly describes citizen involvement in science. Citizen science has gained significant momentum in recent years, brought about by widespread availability of smartphones and other Internet and communications technologies (ICT) used for collecting and sharing data. Not only are more projects being launched and more members of the public participating, but more human–computer interaction (HCI) researchers are focusing on the design, development, and use of these tools. Together, citizen science and HCI researchers can leverage each other’s skills to speed up science, accelerate learning, and amplify society’s well-being globally as well as locally. The focus of this article is on HCI and biodiversity citizen science as seen primarily through the lens of research in the author’s laboratory. The article is framed around five topics: community, data, technology, design, and a call to save all species, including ourselves. The article ends with a research agenda that focuses on these areas and identifies productive ways for HCI specialists, science researchers, and citizens to collaborate. In a nutshell, while species are disappearing at an alarming rate, citizen scientists who document species’ distributions help to support conservation and educate the public. HCI researchers can empower citizen scientists to dramatically increase what they do and how they do it.

15.
A fundamental objective of human–computer interaction research is to make systems more usable and more useful, and to provide users with experiences that fit their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the right thing at the right time in the right way. Designers of collaborative human–computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (known only at use time). User modeling research has attempted to address these issues. In this article, I first review the objectives, progress, and unfulfilled hopes of the last ten years and illustrate them with some interesting computational environments and their underlying conceptual frameworks. Special emphasis is given to high-functionality applications and the impact of user modeling in making them more usable, useful, and learnable. Finally, an assessment of the current state of the art is given, followed by some future challenges.

16.
The studies presented in this article explore a human-centered conceptualization of agents and agency, based on the observation that people attribute agency to sufficiently complex interactive systems. Although agency attribution appears to be an unconscious human response, findings from social psychology, affective computing, and perceptual-motor studies suggest that agency attribution influences human–computer interaction (HCI). Three studies examine whether recent findings on agency attribution in physical environments also apply in the virtual environments characteristic of HCI. The results indicate that agency effects operate in desktop computing environments. Agency effects, however, appear to be influenced by learning effects that preserve a previously observed relationship between perception and action but alter how this effect is expressed. The results suggest that there are both bottom-up and top-down contributions to agency effects in HCI.

17.
Advanced Robotics, 2013, 27(1-2): 203-225
Pneumatic systems are well known for their advantages and simplicity, and have been used in a wide range of applications. This paper presents the development and experimental evaluation of an intelligent pneumatic cylinder and its control system. The cylinder is designed with an optical encoder, a pressure sensor, a valve and a Programmable System on a Chip (PSoC) as the central processing unit. The PSoC handles I2C communication, input and output data from the analogue-to-digital converter, the counter program and the pulse width modulation (PWM) duty cycle. An application tool for distributed physical human–machine interaction is proposed using the intelligent pneumatic cylinder: the system uses 36 of the actuators as links to form an Intelligent Chair Tool (ICT). The control methodology contains an inner force loop and an outer position loop, implemented as a unified control system that drives an on/off valve via PWM. Four control approaches, i.e., position control, force control, compliance control and viscosity control, were constructed and evaluated experimentally. The physical properties of various objects were also detected by the intelligent cylinder in the detection-function experiment. Finally, a mass emulation experiment was carried out, and the results clearly show the capability of the intelligent cylinder and the control approaches toward realizing the future ICT application.
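As an illustration of the cascaded structure described above (an outer position loop commanding an inner force loop that drives an on/off valve via PWM), here is a minimal sketch. The gains, saturation limits and loop structure are assumptions for illustration, not the paper's actual controller.

```python
class CascadePneumaticController:
    """Toy cascade: outer position loop -> force reference -> inner force loop -> PWM duty."""

    def __init__(self, kp_pos=50.0, kp_force=0.02, ki_force=0.005, max_force=200.0):
        self.kp_pos = kp_pos          # outer proportional gain (N per m of error), assumed
        self.kp_force = kp_force      # inner proportional gain (duty per N), assumed
        self.ki_force = ki_force      # inner integral gain, assumed
        self.max_force = max_force    # force reference saturation in N, assumed
        self._force_integral = 0.0

    def update(self, position_ref, position, force, dt):
        # Outer loop: position error -> desired force (saturated)
        force_ref = self.kp_pos * (position_ref - position)
        force_ref = max(-self.max_force, min(self.max_force, force_ref))

        # Inner loop: force error -> PWM duty cycle for the on/off valve
        force_error = force_ref - force
        self._force_integral += force_error * dt
        duty = self.kp_force * force_error + self.ki_force * self._force_integral
        return max(0.0, min(1.0, duty))   # duty cycle clamped to [0, 1]

# Example: one control step at 1 kHz
ctrl = CascadePneumaticController()
print(ctrl.update(position_ref=0.10, position=0.08, force=20.0, dt=0.001))
```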

18.
19.
20.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. Current robot-assisted surgical systems are limited by the rigidity of their tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces that enable the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to higher-level imitation of the underlying intent extracted from the demonstrations. Focusing on this last form, we study the problem of extracting an objective function that explains the demonstrations from an over-specified set of candidate reward functions, and of using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active in different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot learns the relevance of candidate objective functions with respect to the current phase of the task or the encountered situation. The robot then exploits this information for skill refinement in the policy parameter space. The proposed approach is tested in simulation on a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.
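To give an idea of what reward-weighted refinement of policy parameters looks like in general (a generic sketch, not the paper's algorithm), the snippet below updates a policy parameter vector as a weighted average of sampled perturbations, with weights derived from a reward that blends several candidate objectives according to context-dependent relevance weights. All names and numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def refine_policy(theta, candidate_rewards, relevance, n_samples=20, noise_std=0.1, beta=5.0):
    """One reward-weighted update of policy parameters `theta`.

    candidate_rewards: list of functions mapping a parameter vector to a scalar reward
    relevance: context-dependent weights over the candidate objectives (sum to 1)
    """
    samples = theta + noise_std * rng.standard_normal((n_samples, theta.size))
    # Blend candidate objectives with their context-dependent relevance
    rewards = np.array([
        sum(w * r(s) for w, r in zip(relevance, candidate_rewards)) for s in samples
    ])
    # Exponentiated rewards as importance weights (softmax-like weighting)
    weights = np.exp(beta * (rewards - rewards.max()))
    weights /= weights.sum()
    return weights @ samples  # new parameter estimate

# Example: two toy objectives (reach a target, stay small), context favoring the first
target = np.array([1.0, -0.5])
candidates = [lambda p: -np.sum((p - target) ** 2), lambda p: -np.sum(p ** 2)]
theta = np.zeros(2)
for _ in range(50):
    theta = refine_policy(theta, candidates, relevance=[0.9, 0.1])
print(theta)  # drifts toward the target under the dominant objective
```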
