Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Commanding a humanoid to move objects in a multimodal language   (total citations: 2; self-citations: 2; citations by others: 0)
This article describes a study on a humanoid robot that moves objects at the request of its users. The robot understands commands in a multimodal language which combines spoken messages and two types of hand gestures. All ten novice users directed the robot using gestures when they were asked to spontaneously direct the robot to move objects after learning the language for a short period of time. The success rate of multimodal commands was over 90%, and the users completed their tasks without trouble. They thought that gestures were preferable to, and as easy as, verbal phrases for informing the robot of action parameters such as direction, angle, step, width, and height. The results of the study show that the language is fairly easy for nonexperts to learn, and can be made more effective for directing humanoids to move objects by making the language more sophisticated and improving our gesture detector.

2.
This article proposes a multimodal language for communicating with life-supporting robots through a touch screen and a speech interface. The language is designed for untrained users who need support in their daily lives from cost-effective robots. In this language, users can combine spoken and pointing messages in an interactive manner to convey their intentions to the robots. Spoken messages include verb and noun phrases which describe intentions. Pointing messages are given when the user's finger touches a camera image, a picture containing a robot body, or a button on a touch screen at hand; they convey a location in the environment, a direction, a body part of the robot, a cue, a reply to a query, or other information that helps the robot. This work presents the philosophy and structure of the language.

3.
The recent increase in technological maturity has empowered robots to assist humans and provide daily services. Voice command usually appears as a popular human–machine interface for communication. Unfortunately, deaf people cannot exchange information with robots through vocal modalities. To interact with deaf people effectively and intuitively, it is desired that robots, especially humanoids, have manual communication skills, such as performing sign languages. Without ad hoc programming to generate a particular sign language motion, we present an imitation system that teaches the humanoid robot to perform sign languages by directly replicating observed demonstrations. The system symbolically encodes the information of human hand–arm motion from low-cost depth sensors as a skeleton motion time series that serves to generate the initial robot movement by means of perception-to-action mapping. To tackle the body correspondence problem, a virtual impedance control approach is adopted to smoothly follow the initial movement, while preventing potential risks due to the differences in physical properties between the human and the robot, such as joint limits and self-collision. In addition, the integration of a leg-joints stabilizer provides better balance of the whole robot. Finally, our developed humanoid robot, NINO, successfully learned by imitation from human demonstration to introduce itself using Taiwanese Sign Language.

4.
This article considers the success rates in a multimodal command language for home robot users. In the command language, the user specifies action types and action parameter values to direct robots in multiple modes such as speech, touch, and gesture. The success rates of commands in the language can be estimated by user evaluations in several ways. This article presents some user evaluation methods, as well as results from recent studies on command success rates. The results show that the language enables users without much training to command home robots at success rates as high as 88%–100%. It is also shown that multimodal commands combining speech and button-press actions included fewer words and were significantly more successful than single-modal spoken commands.

5.
This article describes a multimodal command language for home robot users, and a robot system which interprets users' messages in the language through microphones, visual and tactile sensors, and control buttons. The command language comprises a set of grammar rules, a lexicon, and nonverbal events detected in hand gestures, readings of tactile sensors attached to the robots, and buttons on the controllers in the users' hands. Prototype humanoid systems which immediately execute commands in the language are also presented, along with preliminary experiments on face-to-face interactions and teleoperations. Subjects unfamiliar with the language were able to command humanoids and complete their tasks with brief documents at hand, given a short demonstration beforehand. The command understanding system operating on PCs responded to multimodal commands without significant delay. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

6.
We describe a novel approach that allows humanoid robots to incrementally integrate motion primitives and language expressions, when there are underlying natural language and motion language modules. The natural language module represents sentence structure using word bigrams. The motion language module extracts the relations between motion primitives and the relevant words. Both the natural language module and the motion language module are expressed as probabilistic models and, therefore, they can be integrated so that the robots can both interpret observed motion in the form of sentences and generate the motion corresponding to a sentence command. Incremental learning is needed for a robot that develops these linguistic skills autonomously. The algorithm is derived from optimization of the natural language and motion language modules under constraints on their probabilistic variables, such that the association between motion primitive and sentence in incrementally added training pairs is strengthened. A test based on interpreting observed motion in the form of sentences demonstrates the validity of the incremental statistical learning algorithm.
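As a concrete illustration of the word-bigram representation mentioned in the abstract above, a minimal bigram model with add-one smoothing could be sketched as follows. This is an illustrative toy, not the authors' implementation; the class name, training sentences, and smoothing choice are all assumptions.

```python
from collections import defaultdict

class BigramModel:
    """Toy word-bigram model with add-one (Laplace) smoothing. Illustrative only."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, sentences):
        # Count adjacent word pairs, with sentence boundary markers.
        for words in sentences:
            tokens = ["<s>"] + words + ["</s>"]
            self.vocab.update(tokens)
            for prev, curr in zip(tokens, tokens[1:]):
                self.counts[prev][curr] += 1

    def prob(self, prev, curr):
        # P(curr | prev) with add-one smoothing over the vocabulary.
        total = sum(self.counts[prev].values())
        return (self.counts[prev][curr] + 1) / (total + len(self.vocab))

model = BigramModel()
model.train([["robot", "raises", "arm"], ["robot", "lowers", "arm"]])
p = model.prob("robot", "raises")   # higher than for an unseen continuation
```

Such conditional probabilities are what make the module composable with a probabilistic motion-language module: both sides expose likelihoods that can be jointly optimized.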

7.
8.
The center of mass (CoM) of a humanoid robot occupies a special place in its dynamics. As the location of its effective total mass, and consequently, the point of resultant action of gravity, the CoM is also the point where the robot's aggregate linear momentum and angular momentum are naturally defined. The overarching purpose of this paper is to refocus our attention to centroidal dynamics: the dynamics of a humanoid robot projected at its CoM. In this paper we specifically study the properties, structure and computation schemes for the centroidal momentum matrix (CMM), which projects the generalized velocities of a humanoid robot to its spatial centroidal momentum. Through a transformation diagram we graphically show the relationship between this matrix and the well-known joint-space inertia matrix. We also introduce the new concept of "average spatial velocity" of the humanoid that encompasses both linear and angular components and results in a novel decomposition of the kinetic energy. Further, we develop a very efficient $O(N)$ algorithm, expressed in a compact form using spatial notation, for computing the CMM, centroidal momentum, centroidal inertia, and average spatial velocity. Finally, as a practical use of centroidal dynamics we show that a momentum-based balance controller that directly employs the CMM can significantly reduce unnecessary trunk bending during balance maintenance against external disturbance.
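The abstract's $O(N)$ CMM algorithm is not reproduced there, but the defining property of centroidal momentum — its linear part equals the total mass times the CoM velocity (the linear component of the "average spatial velocity") — can be checked numerically in a few lines. The link masses, positions, and velocities below are made-up planar values for illustration only.

```python
import numpy as np

# Per-link data for a toy 3-link planar robot (illustrative values).
masses = np.array([5.0, 3.0, 2.0])                           # link masses [kg]
positions = np.array([[0.0, 1.0], [0.5, 1.5], [1.0, 2.0]])   # link CoM positions [m]
velocities = np.array([[0.1, 0.0], [0.2, 0.1], [0.3, 0.2]])  # link CoM velocities [m/s]

total_mass = masses.sum()

# Robot CoM: mass-weighted average of link CoM positions.
com = (masses[:, None] * positions).sum(axis=0) / total_mass

# Linear part of the centroidal momentum: sum of link momenta m_i * v_i.
linear_momentum = (masses[:, None] * velocities).sum(axis=0)

# "Average" linear velocity of the whole robot: h_lin / m_total,
# which is exactly the velocity of the CoM.
com_velocity = linear_momentum / total_mass
```

The angular part of the centroidal momentum (moments of link momenta about the CoM plus link rotational momenta) is what the CMM additionally maps from the generalized velocities; the sketch above covers only the linear component.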

9.
We present a multibody simulator being used for compliant humanoid robot modelling and report our reasoning for choosing the settings of the simulator's key features. First, we provide a study on how the numerical integration speed and accuracy depend on the coordinate representation of the multibody system. This choice is particularly critical for mechanisms with long serial chains (e.g., legs and arms). Our second contribution is a full electromechanical model of the inner dynamics of the compliant actuators embedded in the COMAN robot, since joint compliance is needed for the robot's safety and energy efficiency. Third, we discuss the different approaches for modelling contacts and selecting an appropriate contact library. The recommended solution is to couple our simulator with an open-source contact library offering both accurate and fast contact modelling. The simulator's performance is assessed by two different tasks involving contacts: a bimanual manipulation task and a squatting task. The former shows the reliability of the simulator. For the latter, we report a comparison between the robot behaviour predicted by our simulation environment and the real one.

10.
This paper illustrates through a practical example an integration of a humanoid robotic architecture with an open-platform collaborative working environment called BSCW (Be Smart-Cooperate Worldwide). BSCW is primarily designed to advocate a futuristic shared workspace system for humans. We exemplify how a complex robotic system (such as a humanoid robot) can be integrated as a proactive collaborative agent which provides services and interacts with other agents sharing the same collaborative environment workspace. Indeed, the robot is seen as a 'user' of the BSCW which is able to handle simple tasks and report on their achievement status. We emphasize the importance of using standard software such as CORBA (Common Object Request Broker Architecture) in order to easily build interfaces between several interacting complex software layers, namely from real-time constraints up to basic Internet data exchange.

11.
Current opinion suggests that language is a cognitive process in which different modalities such as perceptual entities, communicative intentions and speech are inextricably linked. As such, the process of child language acquisition is one in which the child learns to decipher this inextricability and to acquire language capabilities starting from gesturing, followed by language dominated by single word utterances, through to full-blown native language capability. In this paper I review three multimodal neural network models of early child language acquisition. Using these models, I show how computational modelling, in conjunction with the availability of empirical data, can contribute towards our understanding of child language acquisition. I conclude this paper by proposing a control theoretic approach towards modelling child language acquisition using neural networks.

12.
This paper describes a case study of user-participation focusing on the introduction of a new computer-based system in a large UK bank. We use Wall and Lischeron's (1977) characterization of participation as consisting of three interrelated elements (i.e., interaction, information, and influence) and Gowler and Legge's (1978) contextual interpretation exploring user participation as a 'dependent' rather than an 'independent' variable. The study examines the process of participation using a range of research methods. We argue that user participation in systems development can only be properly understood through consideration of the nature of the organizational context (e.g., structures and processes), the system and its users, and by analysis of the interactions between these elements.


14.
Advanced Robotics, 2013, 27(4): 415–435
This paper describes position-based impedance control for biped humanoid robot locomotion. The impedance parameters of the biped leg are adjusted in real time according to the gait phase. In order to reduce the impact/contact forces generated between the contacting foot and the ground, the damping coefficient of the impedance of the landing foot is greatly increased during the first half of the double support phase. In the last half of the double support phase, the walking pattern of the leg, changed by the impedance control, is returned to the desired walking pattern by using a polynomial. Also, a large stiffness is given to the landing leg to recover the momentum reduced by the viscosity of the landing leg during the first half of the single support phase. For the stability of the biped humanoid robot, a balance control that compensates for moments generated by the biped locomotion is employed during the whole walking cycle. To confirm the impedance and balance control, we have developed a life-sized humanoid robot, WABIAN-RIII, which has 43 mechanical d.o.f. Through dynamic walking experiments, the validity of the proposed controls is verified.
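The gait-phase-scheduled impedance described above can be sketched as a second-order impedance filter on the foot position error. This is a minimal one-axis illustration, not WABIAN-RIII's controller: the gain values, phase names, and time step are all assumed for the example.

```python
def impedance_step(e, e_dot, f_ext, M, D, K, dt):
    """One semi-implicit Euler step of M*e_ddot + D*e_dot + K*e = f_ext,
    where e is the foot position deviation and f_ext the contact force."""
    e_ddot = (f_ext - D * e_dot - K * e) / M
    e_dot_new = e_dot + e_ddot * dt
    e_new = e + e_dot_new * dt
    return e_new, e_dot_new

def gains_for_phase(phase):
    """Gait-phase scheduling as in the abstract: high damping right after
    landing to absorb impact, a stiffer leg otherwise (values illustrative)."""
    if phase == "first_half_double_support":
        return 2.0, 500.0, 1000.0   # M [kg], D (large) [Ns/m], K [N/m]
    return 2.0, 50.0, 4000.0        # large stiffness in single support

# Simulate 0.1 s of a constant 100 N contact force just after foot landing:
# the heavily damped foot yields gradually instead of bouncing.
e, e_dot = 0.0, 0.0
M, D, K = gains_for_phase("first_half_double_support")
for _ in range(100):
    e, e_dot = impedance_step(e, e_dot, f_ext=100.0, M=M, D=D, K=K, dt=0.001)
```

With the large damping, the deviation creeps toward its static value f/K instead of overshooting, which is the intended impact-absorbing behaviour; the polynomial blending back to the desired walking pattern is not modelled here.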

15.
In this research, we have developed a swimming robot with a two-legged fluttering kick, which can swim freely both on the surface of the water and underwater. We have established a control method for all the different types of motion of this robot, e.g., swimming in a straight line, turning, diving, or rising in the water. Furthermore, by optimizing the three-dimensional action of this underwater robot, we can expect an improvement in its performance for complex work.

16.
In this work we describe an integrated and automated workflow for a comprehensive and robust analysis of multimodal MR images from a cohort of more than a hundred subjects. Image examinations are done three years apart and consist of 3D high-resolution anatomical images, low-resolution tensor-valued DTI recordings, and 4D resting-state fMRI time series. The integrated analysis of the data requires robust tools for segmentation, registration, and fiber tracking, which we combine in an automated manner. Such an automated workflow is strongly desired due to the large number of subjects. In particular, we introduce the use of histogram segmentation on processed fMRI data to obtain functionally important seed and target regions for fiber tracking between them. This enables analysis of individually important resting-state networks. We also discuss various approaches for the assessment of white matter integrity parameters along tracts, and notably we introduce the use of functional data analysis (FDA) for this task.
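The abstract does not specify which histogram segmentation method the workflow uses; Otsu's threshold is one common choice for splitting a voxel-intensity histogram into background and an active region, and a self-contained sketch of it (with synthetic data standing in for an fMRI activation map) is:

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Otsu's method: pick the histogram threshold that maximizes the
    between-class variance of the two resulting intensity classes."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    total = hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    csum = np.cumsum(hist)            # cumulative counts
    cmean = np.cumsum(hist * centers) # cumulative intensity mass
    best_t, best_var = edges[0], -1.0
    for i in range(1, nbins):
        w0, w1 = csum[i - 1], total - csum[i - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cmean[i - 1] / w0              # mean of class below threshold
        m1 = (cmean[-1] - cmean[i - 1]) / w1  # mean of class above
        var = w0 * w1 * (m0 - m1) ** 2      # unnormalized between-class variance
        if var > best_var:
            best_var, best_t = var, edges[i]
    return best_t

# Synthetic bimodal "activation" values: background around 0, active voxels around 6.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 1000), rng.normal(6, 1, 1000)])
t = otsu_threshold(data)
seed_mask = data > t   # voxels selected as a seed/target region
```

In the workflow described above, a mask like `seed_mask` would then be mapped back into voxel space to define seed and target regions for fiber tracking.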

17.
This paper compares the roles of bi-articular and mono-articular actuators in human and bipedal robot legs, in particular at the hip and knee joints, to drive the design of a humanoid robot inspired by the biological system. The various constraints driving the design of both systems are also compared. Additional factors particular to robotic systems are identified and incorporated in the design process. To do this, a dynamic simulation is used to determine loading conditions and the forces and power produced by each actuator under various arrangements. It is shown that while the design principles of humans and humanoids are similar, other constraints ensure that robots are still merely inspired by humans, and not direct copies. A simple design methodology that captures the complexity and constraints of such a system is proposed in this paper. Finally, a full-size humanoid robot that demonstrates these principles is highlighted.

18.
In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications.

19.
Procedures for comparing and evaluating aspects of the user interface of statistical computer packages are described. These procedures are implemented in a study of three packages, SPSS, BMDP, and Minitab, by a class of 21 students with some statistical background. It was found that most participants exhibited consistent personal preferences among the packages. In selecting packages to solve specific problems, however, their choice was determined more by issues of good statistical practice than by personal preference for overall package features.

20.
Multimedia Tools and Applications - This paper demonstrates the capability of humanoid robot in the field of sketch drawing. Sketch drawing is a complex job which requires three basic problems to...
