Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Completely autonomous operation of a mobile robot in uncontrolled, dynamic environments is not yet possible for several reasons, including environment uncertainty, sensor/software robustness, and limited robotic abilities. In assistant applications, however, a human is always present and can compensate for the robot's limited autonomy by helping it when needed. In this paper, the authors propose human-robot integration as a mechanism to augment and improve robot autonomy in daily scenarios. Through the human-robot integration concept, the authors take a step beyond the typical human-robot relation, since they consider the human a constituent part of the human-robot system, which takes full advantage of the sum of their abilities. To materialize this human integration into the system, they present a control architecture, called the architecture for human-robot integration, which involves the human at every level, from the high decisional level (i.e., deliberating a plan) to the low physical level (i.e., opening a door). The presented control architecture has been implemented to test human-robot integration in a real robotic application. In particular, several real-world experiments have been conducted on a robotic wheelchair intended to provide mobility to elderly people.

2.
The success of social robots in achieving natural coexistence with humans depends on both their level of autonomy and their interactive abilities. Although many robotic architectures have been suggested and many researchers have focused on human–robot interaction, a robotic architecture that can effectively combine interactivity and autonomy is still unavailable. This paper contributes to the research efforts toward such an architecture in the following ways. First, a theoretical analysis is provided that leads to the notion of co-evolution between the agent, its environment, and other agents as the condition needed to combine both autonomy and interactivity. The analysis also shows that the basic competencies needed to achieve the required level of autonomy and the envisioned level of interactivity are similar but not the same. Second, nine specific requirements that the architecture should satisfy are formalized. Third, a robotic architecture that aims to achieve those requirements by utilizing two main theoretical hypotheses and several insights from social science, developmental psychology, and neuroscience is detailed. Finally, two experiments with a humanoid robot and a simulated agent are reported to show the potential of the proposed architecture.

3.
In this paper, we propose a novel trend in multiagent robotics: energy autonomy. A definition of energy autonomy is developed from an original concept, "potential energy," which is subject to the constraints of remaining energy capacity and the relative distances among robotic agents. Toward energy autonomy, we first present a simulation of a multiagent robotic system in which each robot is capable of exchanging energy cells with other robots. Our simulation shows that: (1) each robot is able not only to act as an autonomous agent, but also to interact with others to go beyond its individual capabilities; and (2) in order to adapt to changes in the environment, each robot is situated as an adaptive agent in a network of neighboring robots, which leads to a state of energy autonomy. Finally, based on the results of the simulation, we adjust the rules for our real multirobot system. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.
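The energy-cell exchange rule summarized above can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' implementation: the `Robot` class, the donation thresholds, and the interaction radius are all invented for illustration.

```python
import math

class Robot:
    """Toy robotic agent holding a number of discrete energy cells."""
    def __init__(self, name, x, y, cells):
        self.name, self.x, self.y, self.cells = name, x, y, cells

    def distance_to(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

def exchange_step(robots, radius=5.0, low=2, high=6):
    """One simulation step: each energy-rich robot donates one cell
    to the neediest depleted neighbour within the interaction radius."""
    for donor in robots:
        if donor.cells < high:
            continue  # not enough surplus to act as a donor
        neighbours = [r for r in robots
                      if r is not donor
                      and r.cells <= low
                      and donor.distance_to(r) <= radius]
        if neighbours:
            recipient = min(neighbours, key=lambda r: r.cells)
            donor.cells -= 1
            recipient.cells += 1

robots = [Robot("a", 0, 0, 8), Robot("b", 3, 0, 1), Robot("c", 20, 0, 1)]
exchange_step(robots)
# "b" is in range and depleted, so it receives a cell; "c" is too far away.
```

The per-step, neighbour-limited exchange reflects the abstract's point that adaptation emerges from local interactions in a network of neighbouring robots rather than from any central controller.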

4.
Human–Robot Collaboration (HRC) has a pivotal role in smart manufacturing, given strict requirements for human-centricity, sustainability, and resilience. However, existing HRC development mainly proceeds in either a human-dominant or robot-dominant manner, where human and robotic agents reactively perform operations by following pre-defined instructions, and is thus far from an efficient integration of robotic automation and human cognition. These rigid human–robot relations are not qualified for complex manufacturing tasks and cannot ease the physical and psychological load of human operators. In response to these realistic needs, this paper presents our arguments on the trend, concept, systematic architecture, and enabling technologies of Proactive HRC, serving as a prospective vision and research topic for future work in the human-centric smart manufacturing era. The human–robot symbiotic relation is evolving toward a 5C intelligence, from Connection, Coordination, Cyber, and Cognition to Coevolution, finally embracing mutual-cognitive, predictable, and self-organising intelligent capabilities, i.e., Proactive HRC. With proactive robot control, multiple human and robotic agents collaboratively operate on manufacturing tasks, considering each other's operation needs, desired resources, and qualified complementary capabilities. This paper also highlights current challenges and future research directions, which deserve more research effort for real-world applications of Proactive HRC. It is hoped that this work can attract more open discussion and provide useful insights to both academic and industrial practitioners in their exploration of human–robot flexible production.

5.
Human-Robot Interaction (HRI) is a growing field of research that targets the development of robots that are easy to operate, more engaging, and more entertaining. Natural, human-like behavior is considered by many researchers an important target of HRI. Research in human-human communication has revealed that gaze control is one of the major interactive behaviors used by humans in close encounters. Human-like gaze control is therefore one of the important behaviors a robot should have in order to provide natural interactions with human partners. To develop human-like natural gaze control that can integrate easily with other behaviors of the robot, a flexible robotic architecture is needed. Most available robotic architectures were developed with autonomous robots in mind. Although robots developed for HRI are usually autonomous, their autonomy is combined with interactivity, which adds more challenges to the design of the robotic architectures supporting them. This paper reports the development and evaluation of two gaze controllers using a new cross-platform robotic architecture for HRI applications called EICA (the Embodied Interactive Control Architecture), which was designed to meet those challenges, emphasizing how low-level attention focusing and action integration are implemented. Evaluation of the gaze controllers revealed human-like behavior in terms of mutual attention, gaze toward the partner, and mutual gaze. The paper also reports a novel Floating Point Genetic Algorithm (FPGA) for learning the parameters of the various processes of the gaze controller.
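A floating-point (real-coded) genetic algorithm of the general kind mentioned above can be sketched as follows. This is a minimal illustrative sketch assuming blend crossover, Gaussian mutation, and truncation selection; the abstract does not specify the FPGA's operators, and the fitness function here is a toy placeholder rather than a gaze-controller evaluation.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Maximize `fitness` over real-valued chromosomes of length `dim`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [g + rng.gauss(0.0, sigma) for g in child]   # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy usage: drive two parameters toward the target point (0.3, 0.7).
best = evolve(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2), dim=2)
```

Because the genes are floats rather than bit strings, crossover and mutation operate directly in parameter space, which is why real-coded GAs are a natural fit for tuning continuous controller parameters.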

6.
ROGUE is an architecture built on a real robot which provides algorithms for the integration of high-level planning, low-level robotic execution, and learning. ROGUE successfully addresses several of the challenges of a dynamic office gopher environment. This article presents the techniques for the integration of planning and execution. ROGUE uses and extends a classical planning algorithm to create plans for multiple interacting goals introduced by asynchronous user requests. ROGUE translates the planner's actions to robot execution actions and monitors real-world execution. ROGUE is currently implemented using the PRODIGY4.0 planner and the Xavier robot. This article describes how plans are created for multiple asynchronous goals, and how task priority and compatibility information are used to achieve appropriate, efficient execution. We describe how ROGUE communicates with the planner and the robot to interleave planning with execution, so that the planner can replan for failed actions, identify the actual outcome of an action with multiple possible outcomes, and take advantage of opportunities arising from changes in the environment. ROGUE represents a successful integration of a classical artificial intelligence planner with a real mobile robot.

7.
Controlling someone's attention can be defined as shifting his/her attention from its existing direction to another. To shift someone's attention, gaining attention and meeting gaze are the two most important prerequisites. If a robot would like to communicate with a particular person, it should turn its gaze to him/her to make eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be effective in all situations, especially when the robot and the human are not facing each other or the human is intensely attending to his/her task. Therefore, the robot should perform some actions to attract the target person and make him/her respond to the robot so that their gazes meet. In this paper, we present a robot that can attract a target person's attention by moving its head, make eye contact by showing gaze awareness through blinking its eyes, and direct his/her attention by repeatedly turning its eyes and head from the person to the target object. Experiments with 20 human participants confirm the effectiveness of the robot's actions in controlling human attention.

8.
Recent research results on human–robot interaction and collaborative robotics are leaving behind the traditional paradigm of robots living in a separate space inside safety cages, allowing humans and robots to work together to complete an increasing number of complex industrial tasks. In this context, safety of the human operator is a main concern. In this paper, we present a framework for ensuring human safety in a robotic cell that allows human–robot coexistence and dependable interaction. The framework is based on a layered control architecture that exploits an effective algorithm for online monitoring of relative human–robot distance using depth sensors. This method makes it possible to modify the robot behavior in real time depending on the user position, without limiting the operative robot workspace in an overly conservative way. To guarantee redundancy and diversity at the safety level, additional certified laser scanners monitor human–robot proximity in the cell, and safe communication protocols and logical units are used for smooth integration with industrial software for safe low-level robot control. The implemented concept includes a smart human–machine interface to support in-process collaborative activities and contactless interaction with gesture recognition of operator commands. Coexistence and interaction are illustrated and tested in an industrial cell, in which a robot moves a tool that measures the quality of a polished metallic part while the operator performs a close evaluation of the same workpiece.
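The distance-dependent behavior modification described above can be illustrated with a simple speed-scaling law. The thresholds and the linear ramp below are assumptions for illustration only; a certified safety system would use validated separation-distance formulas, not these toy values.

```python
def speed_scale(distance_m, stop_dist=0.5, full_dist=2.0):
    """Return a velocity scaling factor in [0, 1] from the
    human-robot distance reported by the depth sensors."""
    if distance_m <= stop_dist:
        return 0.0                      # protective stop zone
    if distance_m >= full_dist:
        return 1.0                      # human far away: full speed
    # linear ramp between the stop and full-speed thresholds
    return (distance_m - stop_dist) / (full_dist - stop_dist)
```

A controller would multiply the commanded joint or Cartesian velocities by this factor every cycle, so the robot slows smoothly as the operator approaches instead of being excluded from a fixed, conservative workspace.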

9.
《Advanced Robotics》2013,27(13):1583-1600
How robots can show behaviors that are considered to correspond to human ones is a formidable issue, since the body structures of robots and humans differ. As a simple instance of this correspondence problem, this paper presents a robot that learns to vocalize vowels through interaction with its caregiver. Inspired by findings in developmental psychology, we focus on the role of maternal imitation (i.e., imitation of the robot's voice by the caregiver), since it can serve to teach the correspondence of sounds. Furthermore, we suppose that it causes unconscious anchoring, in which the caregiver unconsciously imitates the robot's voice close to one of his/her own vowels, and thereby helps make the robot's utterances more vowel-like. We propose a method for vowel learning with an imitative caregiver, under the assumptions that the robot knows the desired categories of the caregiver's vowels and has a rough estimate of the mapping between the region of sounds the caregiver can generate and the region the robot can. Through experiments with a Japanese imitative caregiver, we show that a robot succeeds in acquiring more vowel-like utterances than a robot without such a caregiver, even when given different types of mapping functions.

10.
This paper describes an autonomous mobile device that was designed, developed, and implemented as a library assistant robot. A complete autonomous system incorporating human–robot interaction has been developed and implemented within a real-world environment. The robotic development is comprehensively described in terms of its localization system, which incorporates simple image processing techniques fused with odometry and sonar data and is validated through the use of an extended Kalman filter (EKF). The essential principles required for the development of a successful assistive robot are described and demonstrated through a human–robot interaction application applied to the library assistant robot.

11.
Non-Cartesian robotics, which began with the introduction of subsumption architecture by Rodney Brooks, now encompasses a wide range of robotics that does not follow traditional Cartesian principles in the running of a robot. The new field is sometimes called biorobotics, as it draws its guiding principles from biology, physiology, behavioural sciences, genetics and theories of evolution, brain sciences, ethology, psychology, and other related non-engineering disciplines. The difference in principles of operation, however, has roots deeper in the philosophical underpinnings of the way we view controlling artifacts and the concept of control itself when contrasted with the concept of autonomy. Realization of increasingly higher levels of autonomy is routinely demanded today, not only in industry, where most robotic applications occur, but also in areas closer to our daily life, where a gradual but steady increase in service applications of robotics is observed. This paper introduces the concept of non-Cartesian robotics as an antithesis to conventional (Cartesian) robotics and describes various aspects of this new way of running a robotic system. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.

12.
Shared attention is a type of communication that is very important among human beings. It is sometimes regarded as the more complex form of communication, constituted by a sequence of four steps: mutual gaze, gaze following, imperative pointing, and declarative pointing. Several approaches have been proposed in the Human–Robot Interaction area to solve part of the shared attention process; that is, most of the proposed works try to solve the first two steps. Models based on temporal difference, neural networks, probabilistic methods, and reinforcement learning are used in several works. In this article, we present a robotic architecture that provides a robot or agent the capacity to learn mutual gaze, gaze following, and declarative pointing using a robotic head interacting with a caregiver. Three learning methods have been incorporated into this architecture, and their performance has been compared to find the most adequate one for use in real experiments. The learning capabilities of this architecture have been analyzed by observing the robot interacting with a human in a controlled environment. The experimental results show that the robotic head is able to produce appropriate behavior and to learn from social interaction.

13.
We present a novel method for a robot to interactively learn, while executing, a joint human–robot task. We consider collaborative tasks realized by a team of a human operator and a robot helper that adapts to the human's task execution preferences. Different human operators can have different abilities, experiences, and personal preferences, so that a particular allocation of activities in the team is preferred over another. Our main goal is to have the robot learn the task and the preferences of the user to provide a more efficient and acceptable joint task execution. We cast concurrent multi-agent collaboration as a semi-Markov decision process and show how to model the team behavior and learn the expected robot behavior. We further propose an interactive learning framework and evaluate it both in simulation and on a real robotic setup to show that the system can effectively learn and adapt to human expectations.
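The semi-Markov formulation mentioned above can be illustrated with the standard SMDP Q-learning update, where the discount factor is raised to the (variable) duration of the executed activity. The state and action names below are hypothetical; this is a generic sketch of the technique, not the paper's model.

```python
def smdp_q_update(Q, state, action, reward, tau, next_state, actions,
                  alpha=0.1, gamma=0.9):
    """One SMDP Q-learning update: an action (e.g. a robot subtask)
    takes tau time steps, so the future value is discounted by gamma**tau."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma ** tau * best_next - old)

# Hypothetical collaborative-assembly example: fetching a part takes 3 steps.
Q = {}
smdp_q_update(Q, "idle", "fetch_part", reward=1.0, tau=3,
              next_state="holding", actions=["fetch_part", "hand_over"])
```

Discounting by the sojourn time is what distinguishes the semi-Markov case from ordinary Q-learning: slow activities are penalized relative to fast ones even when they reach the same successor state, which matters when allocating subtasks between a human and a robot.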

14.
In this paper, the HumanPT architecture for low-cost robotic applications is first presented. The HumanPT architecture differs from other architectures because it is implemented on existing robotic systems (robot and robotic controller) and exploits the minimal communication facilities for real-time control that these systems provide. It is based on well-known communication methods such as serial communication (USB, RS232, IEEE-1394) and Windows sockets (server–client model), and permits a significant number of components of different types, such as actuators, sensors, and particularly vision systems, to be connected in a robotic system. The operating system (OS) used is Microsoft Windows, the most widely used OS. Although this OS is not a real-time one, the proposed architecture exploits its features to ensure, where the robotic system provides such a facility, control and real-time communication with the robotic system controller, and to integrate, by means of sensors and actuators, a significant number of robotic tasks and procedures. As an implementation of this architecture, the HumanPT robotic application is provided, along with experimental results concerning its performance and its use in real tasks. The HumanPT robotic application, developed in Visual C++, is an integrated yet open-source software package that can be adapted to different types of robotic systems. A significant number of robotic tasks or procedures involving sensors, and particularly vision systems, can be generated and executed. By means of the proposed architecture and the open-source software, small enterprises can be automated at low cost, thereby enhancing their production.

15.
Behaviour-based models have been widely used to represent mobile robotic systems, which operate in uncertain dynamic environments and combine information from several sensory sources. In such models, complex mobile robotic applications are specified by combining deliberative goal-oriented planning with reactive sensor-driven operations. Consequently, the design of mobile robotic architectures requires the combination of time-constrained activities with deliberative, time-consuming components. Furthermore, the temporal requirements of the reactive activities are variable and dependent on the environment (e.g., recognition processes) and/or on application parameters (e.g., process frequencies depend on robot speed).

In this paper, a real-time mobile robotic architecture is proposed to cope with the functional and variable temporal characteristics of behaviour-based mobile robotic applications. Run-time flexibility is a main feature of the architecture, which supports the variability of the temporal characteristics of the workload. The system has to adapt to the environmental conditions by adjusting robot control parameters (e.g., speed) or the system load (e.g., computational time), and take actions accordingly. This flexibility centres on the ability of the system to select the appropriate activity to execute depending on the available time, and to change its behaviour depending on the environmental information. The flexibility of the system is achieved through the definition of a real-time task model and the design of adaptation mechanisms for the regulation of the reactive load.

The improvement of the robot's quality of service (QoS) is another important aspect discussed in the paper. The architecture incorporates a quality of service manager (QoSM) that dynamically monitors, analyses, and improves the robot's performance. Depending on the internal state, the environment, and the objectives, the robot's performance requirements vary (e.g., when the environment is overloaded, global map processes generating high-quality maps are required). The QoSM receives the performance requirements of the robot and, by adjusting the reactive load, enables the necessary slack time to schedule the most suitable deliberative processes, thus fulfilling the robot's QoS. Moreover, the deliberative load can be scheduled by different heuristic strategies that provide answers of varying quality.
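The slack-based idea behind the QoSM (stretch the reactive load until a deliberative process fits) can be sketched with a simple utilisation-based admission test. The C/T (cost over period) bookkeeping is standard real-time scheduling practice, but the task figures and the budget below are invented for illustration.

```python
def utilisation(tasks):
    """Total processor utilisation of (cost, period) tasks, C/T model."""
    return sum(c / t for c, t in tasks)

def can_admit(reactive_tasks, delib_cost, delib_period, budget=1.0):
    """Admission test: does enough slack remain for the deliberative task?"""
    return utilisation(reactive_tasks) + delib_cost / delib_period <= budget

reactive = [(10, 50), (5, 100)]   # e.g. obstacle avoidance, odometry (ms)
ok_before = can_admit(reactive, delib_cost=40, delib_period=50)   # False

# Stretching the obstacle-avoidance period (lower reactive quality,
# e.g. when the robot slows down) frees slack for a map-building process.
stretched = [(10, 100), (5, 100)]
ok_after = can_admit(stretched, delib_cost=40, delib_period=50)   # True
```

This mirrors the paper's trade-off: reactive frequencies are coupled to robot speed, so reducing speed lets the system legitimately lower the reactive load and spend the recovered time on higher-quality deliberative processes.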

16.
Among the challenges of building robots for everyday environments is the need to integrate diverse systems and subsystems. Here, we describe a step in this direction: the Cognitive Map robot architecture. It supports a flexible multicomponent system that can be dynamically reconfigured to handle different applications with minimal changes to the system. Runtime activation of traditional and hybrid robot architecture paradigms for any particular task is supported. Our method of isolating the communication interface within a single application programming interface (API) layer supports loose coupling between components, allowing easy integration of legacy code and expansion of existing systems. We classify the components into four main roles: perception, knowledge/state representation, decision-making, and expression. Interaction, Task Matrix, and Multimodal Communication are modules built in this system for facilitating human–robot interaction with the humanoid robot ASIMO built by Honda Motor Co., Ltd. We describe the key ideas behind the architecture and illustrate how they are implemented in a memory card game application where people interact with ASIMO. Through our experience and comparison with alternative approaches, we show that the Cognitive Map architecture significantly facilitates implementation of human–robot interactive scenarios.

17.
《Advanced Robotics》2013,27(4):277-291
This research aims to clarify the behavioral intelligence and human-cooperation intelligence of robots using emotion models based on the robot's hardware structure. In this paper, a human's mental image (internal expression) is considered as a method for the emotional expression of robots. A hypothesis model for the acquisition of the internal expression of robots and experimental results using a real autonomous robot are described.

18.
This paper describes an autonomous system for knowledge acquisition based on artificial curiosity. The proposed approach allows a humanoid robot to discover the indoor environment in which it operates and to learn new knowledge about it autonomously. The learning process is accomplished by observation and by interaction with a human tutor, based on a two-level cognitive architecture. Experimental results from deploying this system on a humanoid robot in a real office environment are provided. We show that our cognitive system allows a humanoid robot to gain increased autonomy in knowledge acquisition.

19.
We aim to achieve interaction between a robot and multiple people. For this, robots should localize people, select an interaction partner, and act appropriately for him/her. It is difficult to deal with all these problems using only the sensors installed on the robots. We exploit the fact that people maintain rough interaction distances from one another. We divide this interaction area into different spaces based on both the interaction distances and the sensor abilities of robots. Our robots localize people roughly within these divided spaces. To select an interaction partner, they map friendliness, which holds the interaction history, onto the divided space and integrate the sensor information. Furthermore, we developed a method for appropriately changing motions, sizes, and speeds based on the distance. Our robots regard the divided spaces as Q-learning states and learn the motion parameters. Our robot interacted with 27 visitors. It localized a partner with an F-value of 0.76 through sensor integration, which is higher than that of a single sensor. A factor analysis was performed on the questionnaire results. "Exciting" and "Friendly" were the representatives of the first and second factors, respectively. For both factors, a motion with friendliness produced higher impression scores than one without.
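Dividing interaction distance into discrete spaces, as the abstract describes, can be sketched with proxemics-style zone boundaries. The boundary values below follow commonly cited proxemics distances and are assumptions for illustration; the paper's actual division also depends on the robots' sensor abilities.

```python
# Zone boundaries in metres (intimate/personal/social, after Hall's proxemics).
ZONES = [(0.45, "intimate"), (1.2, "personal"), (3.6, "social")]

def distance_zone(distance_m):
    """Map a measured human-robot distance (metres) to a discrete state,
    e.g. for use as a Q-learning state when selecting motion parameters."""
    for limit, name in ZONES:
        if distance_m < limit:
            return name
    return "public"
```

Discretising distance this way keeps the Q-learning state space small, so motion size and speed can be learned per zone rather than per continuous distance reading.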

20.
The paper addresses the problem of constructing large space structures (~100 m) by using autonomous robots to assemble modular components in space. We are motivated by the problem of creating space structures at a scale greater than what is feasible with a single self-deploying design. We had two goals in this work. The first was to investigate and demonstrate the feasibility of long-order multitask autonomy. The second was to study the balance between required tolerances in hardware design and robotic autonomy. This paper reports on a payload-centric autonomy paradigm and presents results from laboratory demonstrations of automated assembly of structures using a multilimbed robotic platform. We present results with deployable 20 lb payloads (1 m trusses) that are robotically assembled to form a 3 m diameter kinematically closed-loop structure to subcentimeter accuracy. The robot uses its limbs to deploy the stowed modular structural components, manipulate them in free space, and assemble them via dual-arm force control. We report on results and lessons learned from multiple successful end-to-end in-lab demonstrations of autonomous truss assembly with JPL's RoboSimian robot, originally developed for the Defense Advanced Research Projects Agency (DARPA). Videos of these demonstrations can be seen at https://goo.gl/muNLJp (JPL, 2017). Each end-to-end run took precisely 26 min to execute with very little variance across runs. We present changes/improvements to the RoboSimian system post-DARPA Robotics Challenge (DRC) (Karumanchi et al., 2016). The new architecture has been improved with a focus on scalable autonomy, as opposed to the semiautonomy required at the DRC.
