Similar Literature
A total of 20 similar articles were retrieved (search time: 31 ms).
1.
A basic agent     
A basic agent has been constructed which integrates limited natural language understanding and generation, temporal planning and reasoning, plan execution, simulated symbolic perception, episodic memory, and some general world knowledge. The agent is cast as a robot submarine operating in a two-dimensional simulated "Seaworld" about which it has only partial knowledge. It can communicate with people in a vocabulary of about 800 common English words using a medium coverage grammar. The agent maintains an episodic memory of events in its life and has a limited ability to reflect on those events. A person can make statements to the agent, ask it questions, and give it commands. In response to commands, a temporal task planner is invoked to synthesize a plan, which is then executed at an appropriate future time. A large variety of temporal references in natural language are interpreted with respect to agent time. The agent can form and retain compound future plans, and replan in response to new information or new commands. Natural language verbs are represented in a state transition semantics for compatibility with the planner. The agent is able to give terse answers to questions about its past experiences, present activities and perceptions, future intentions, and general knowledge. No other artificial intelligence artifact with this range of capabilities has previously been constructed.

2.
Industrial robots are usually taught and controlled with a dedicated robot programming language, which requires operators to have a high level of specialist knowledge and skill, and the long teach-in cycle reduces working efficiency. To improve the usability and efficiency of industrial robots, a design method based on a restricted-natural-language parser is proposed. The system performs lexical, syntactic, and semantic analysis of restricted natural language to obtain the intended task, matches it against a three-dimensional semantic map generated in real time, and, combined with manipulator trajectory planning, generates a robot program capable of completing the task; parsing of the robot program and control of an actual manipulator were also implemented. Experiments show that the designed restricted-natural-language parser for the sorting robot correctly interprets natural language commands and controls the manipulator.
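The pipeline described above (lexical, syntactic, and semantic analysis of a restricted command, followed by grounding against a semantic map) can be illustrated with a minimal sketch; the vocabulary, the Intent structure, and the toy semantic map below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a restricted-natural-language command parser, assuming a
# fixed vocabulary and a pre-built semantic map; all names are illustrative.
from dataclasses import dataclass

ACTIONS = {"pick": "PICK", "place": "PLACE", "sort": "SORT"}   # lexical stage
COLORS = {"red", "green", "blue"}

@dataclass
class Intent:
    action: str
    color: str | None
    target: str

def parse(command: str) -> Intent:
    """Lexical plus (very shallow) syntactic/semantic analysis of one command."""
    tokens = command.lower().split()
    action = next(ACTIONS[t] for t in tokens if t in ACTIONS)
    color = next((t for t in tokens if t in COLORS), None)
    target = tokens[-1]                       # assume the object is named last
    return Intent(action, color, target)

# Hypothetical 3-D semantic map: object label -> workspace coordinates.
semantic_map = {"block": (0.42, 0.10, 0.05), "box": (0.60, -0.20, 0.00)}

def ground(intent: Intent) -> tuple[float, float, float]:
    """Match the parsed intent against the semantic map to get a goal pose."""
    return semantic_map[intent.target]

if __name__ == "__main__":
    intent = parse("pick up the red block")
    print(intent, "->", ground(intent))
```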

3.
In robotics, the idea of human and robot interaction is receiving a lot of attention lately. In this paper, we describe a multi-modal system for generating a map of the environment through interaction of a human and home robot. This system enables people to teach a newcomer robot different attributes of objects and places in the room through speech commands and hand gestures. The robot learns about size, position, and topological relations between objects, and produces a map of the room based on knowledge learned through communication with the human. The developed system consists of several sections including: natural language processing, posture recognition, object localization and map generation. This system combines multiple sources of information and model matching to detect and track a human hand so that the user can point toward an object of interest and guide the robot to either go near it or to locate that object's position in the room. The positions of objects in the room are located by monocular camera vision and depth from focus method.

4.
Natural language commands are generated by intelligent human beings and are therefore rich in information; if it is possible to learn from such commands and reuse that knowledge, the process becomes very efficient. In this paper, learning from such information-rich voice commands for controlling a robot is studied. First, new concepts of a fuzzy coach-player system and a sub-coach are proposed for controlling robots with natural language commands. Then, the characteristics of the subjective human decision-making process are discussed and a Probabilistic Neural Network (PNN) based learning method is proposed to learn from such commands and to reuse the acquired knowledge. Finally, the proposed concept is demonstrated and confirmed with experiments conducted using a PA-10 redundant manipulator.
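The learning step rests on a Probabilistic Neural Network, which stores training examples as pattern units and classifies new inputs with a per-class Parzen-window (Gaussian kernel) estimate. A minimal sketch follows; the feature encoding of a fuzzy voice command, the class labels, and the smoothing factor sigma are illustrative assumptions, not the authors' actual setup.

```python
# A minimal Probabilistic Neural Network (PNN) classifier sketch.
import numpy as np

class PNN:
    def __init__(self, sigma: float = 0.3):
        self.sigma = sigma
        self.patterns: dict[str, list[np.ndarray]] = {}

    def train(self, x, label: str) -> None:
        # One pattern unit per stored training example (standard PNN).
        self.patterns.setdefault(label, []).append(np.asarray(x, float))

    def classify(self, x) -> str:
        x = np.asarray(x, float)
        scores = {}
        for label, pats in self.patterns.items():
            # Parzen-window estimate: average Gaussian kernel over class patterns.
            d2 = np.array([np.sum((x - p) ** 2) for p in pats])
            scores[label] = np.mean(np.exp(-d2 / (2 * self.sigma ** 2)))
        return max(scores, key=scores.get)

# Toy usage: map a (speed, distance) reading of a fuzzy voice command
# such as "go a little slowly" to a discrete motion class.
pnn = PNN()
pnn.train([0.2, 0.5], "slow_short")
pnn.train([0.8, 0.5], "fast_short")
print(pnn.classify([0.3, 0.5]))   # -> "slow_short"
```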

5.
This article describes a novel qualitative navigation method for autonomous wheelchair robots in typical home environments. The method accepts as input a line diagram of the robot environment and converts it into an enhanced grid in which qualitative representations of variations in sensor behavior between adjacent regions in space are stored. An off-line planner uses these representations to store at each grid cell appropriate motion commands that will ideally move the wheelchair in and out of each room in a typical home environment. An online controller accepts as input this enhanced grid along with a starting and goal position for the robot. It then compares the actual behavior of the sensors with the one stored in the grid. The results of this comparison are used to estimate the current position of the robot, to retrieve the planner instructions and to combine these instructions with appropriate risk avoidance behaviors during navigation. This method has been tested both in simulation and as one of the subsystems on a prototype for an autonomous wheelchair robot. Results from both trials are provided.
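The enhanced grid can be pictured as a table in which each cell stores an expected sensor signature together with the planner's precomputed motion command; the online controller matches live readings against these signatures to estimate the current cell and retrieve the command. The sketch below illustrates this idea with invented signatures and commands; it is not the article's implementation.

```python
# Sketch of the enhanced-grid lookup: cell -> (expected sonar signature, command).
import numpy as np

grid = {
    (0, 0): (np.array([1.0, 0.4, 0.4]), "forward"),
    (0, 1): (np.array([0.4, 0.4, 1.0]), "turn_left"),
    (1, 1): (np.array([0.2, 1.0, 0.2]), "forward"),
}

def localize_and_act(sonar: np.ndarray):
    """Pick the cell whose stored signature best matches the live readings."""
    best = min(grid, key=lambda c: np.linalg.norm(grid[c][0] - sonar))
    return best, grid[best][1]

cell, command = localize_and_act(np.array([0.9, 0.5, 0.3]))
print(cell, command)   # -> (0, 0) forward
```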

6.
In this paper, a voice activated robot arm with intelligence is presented. The robot arm is controlled with natural connected speech input. The language input allows a user to interact with the robot in terms which are familiar to most people. The advantages of speech-activated robots are hands-free operation and fast data input. The proposed robot is capable of understanding the meaning of natural language commands. After interpreting the voice commands, a series of control data for performing the task is generated. Finally the robot actually performs the task. Artificial Intelligence techniques are used to make the robot understand voice commands and act in the desired mode. It is also possible to control the robot using the keyboard input mode.

7.
Wentof, R., Law, K. H., Ackroyd, M. H. 《Engineering with Computers》, 1986, 1(3): 127-147
One of the major problems in developing computer-aided design software is the establishment of effective man-machine communication. This paper describes a computer-aided plate girder design software package and its man-machine interface using natural language processing techniques. The natural language interpreter takes advantage of the user's communication and technical skills and accepts commands in the user's native language. The user can verify the correct interpretation of the commands from the responsive graphic display of the design.

8.
A central problem of many branches of artificial intelligence (AI) research is that of understanding natural language (NL). Many attempts have been made to model understanding with computer systems that demonstrate competence at such tasks as question answering, paraphrasing, and following commands. The system to be described in this paper combines some of these language functions in a single, general process based on the creation of an associative memory net as a result of experience. The author has written a large, interactive computer program that accepts unsegmented input strings of natural language from a human trainer and, after processing each string, outputs a natural language response. The processing of the string may involve transforming it to some other form in the same or another language, or answering an input question based on information previously learned by the program.

9.
A mobile-phone voice control system based on the J2ME platform is presented. Combining speech recognition and natural language processing, the system processes the phone user's speech input, extracts the semantic information, and displays it on the handset. The system adopts a client/server (C/S) architecture, with the phone as the client and a PC as the server. The client collects the speech input stream and sends it to the server, then receives and displays the semantic information returned by the server; the server receives the speech stream from the client, performs speech recognition and natural language processing, and sends the resulting semantic information back to the client. The system can handle multiple natural-language phrasings of the same phone control command, which greatly improves convenience for phone users.

10.
In this paper, a system based on several intelligent techniques, including speech recognition, natural language processing and linear planning is described. These techniques have been employed to generate a sequence of operations understandable by the control system of a robot that is to perform a semi-automatic surgical task. Thus, a system has been implemented that translates some surgeon's ‘natural’ language into robot-executable commands. A robotic simulator has then been implemented in order to test the planned sequence in a virtual environment.
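A linear planner of the kind mentioned can be sketched as a simple operator-chaining loop: each operator carries preconditions and effects, and operators are applied until the goal extracted from the spoken command holds. The operators and the example goal below are illustrative assumptions, not the paper's surgical task model.

```python
# A tiny linear-planning sketch: chain operators (preconditions -> effects)
# into a sequence the robot controller can execute.
OPERATORS = [
    ("move_to_target", set(),            {"at_target"}),
    ("open_gripper",   {"at_target"},    {"gripper_open"}),
    ("grasp_needle",   {"gripper_open"}, {"holding_needle"}),
]

def plan(goal: str, state: set) -> list:
    """Greedy linear planner: apply any operator whose preconditions hold."""
    sequence = []
    while goal not in state:
        for name, pre, eff in OPERATORS:
            if pre <= state and not eff <= state:
                sequence.append(name)
                state |= eff
                break
        else:
            raise RuntimeError("no applicable operator")
    return sequence

print(plan("holding_needle", set()))
# -> ['move_to_target', 'open_gripper', 'grasp_needle']
```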

11.
In this paper we present Caesar, an intelligent domestic service robot. In domestic settings for service robots complex tasks have to be accomplished. Those tasks benefit from deliberation, from robust action execution and from flexible methods for human–robot interaction that account for qualitative notions used in natural language as well as human fallibility. Our robot Caesar deploys AI techniques on several levels of its system architecture. On the low-level side, system modules for localization or navigation make, for instance, use of path-planning methods, heuristic search, and Bayesian filters. For face recognition and human–machine interaction, random trees and well-known methods from natural language processing are deployed. For deliberation, we use the robot programming and plan language Readylog, which was developed for the high-level control of agents and robots; it allows combining programmed behaviour with planning to find a course of action. Readylog is a variant of the robot programming language Golog. We extended Readylog to be able to cope with qualitative notions of space frequently used by humans, such as “near” and “far”. This facilitates human–robot interaction by bridging the gap between human natural language and the numerical values needed by the robot. Further, we use Readylog to increase the flexible interpretation of human commands with decision-theoretic planning. We give an overview of the different methods deployed in Caesar and show the applicability of a system equipped with these AI techniques in domestic service robotics.
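The bridging of qualitative spatial notions and numeric values can be pictured as a pair of mappings between metric distances and labels such as "near" and "far"; the breakpoints below are invented for illustration and are not the values used in Readylog.

```python
# Sketch: map metric distances to qualitative labels and back.
def qualitative_distance(d_m: float) -> str:
    """Map a metric distance (metres) to a qualitative label."""
    if d_m < 1.0:
        return "near"
    if d_m < 3.0:
        return "medium"
    return "far"

def metric_goal(label: str) -> float:
    """Map a qualitative label from a spoken command back to a target distance."""
    return {"near": 0.5, "medium": 2.0, "far": 4.0}[label]

print(qualitative_distance(0.7))   # -> near
print(metric_goal("far"))          # -> 4.0
```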

12.
In order for robots to effectively understand natural language commands, they must be able to acquire meaning representations that can be mapped to perceptual features in the external world. Previous approaches to learning these grounded meaning representations require detailed annotations at training time. In this paper, we present an approach to grounded language acquisition which is capable of jointly learning a policy for following natural language commands such as “Pick up the tire pallet,” as well as a mapping between specific phrases in the language and aspects of the external world; for example the mapping between the words “the tire pallet” and a specific object in the environment. Our approach assumes a parametric form for the policy that the robot uses to choose actions in response to a natural language command that factors based on the structure of the language. We use a gradient method to optimize model parameters. Our evaluation demonstrates the effectiveness of the model on a corpus of commands given to a robotic forklift by untrained users.
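The factored, parametric policy can be sketched as a log-linear model over actions conditioned on phrase features, with parameters fit by gradient ascent on demonstrated command-action pairs; the feature function, action set, and tiny demonstration corpus below are illustrative assumptions, not the paper's model or data.

```python
# Sketch: log-linear policy over actions given command features, trained by
# gradient ascent on the log-likelihood of demonstrated (phrase, action) pairs.
import numpy as np

def features(phrase: str, action: str) -> np.ndarray:
    """Toy joint features of (phrase, action)."""
    return np.array([
        1.0 if ("pick" in phrase and action == "lift") else 0.0,
        1.0 if ("tire" in phrase and action == "lift") else 0.0,
        1.0 if action == "drive" else 0.0,
    ])

ACTIONS = ["lift", "drive"]
theta = np.zeros(3)
demos = [("pick up the tire pallet", "lift"), ("go to the truck", "drive")]

for _ in range(100):                         # gradient ascent
    grad = np.zeros_like(theta)
    for phrase, a_star in demos:
        scores = np.array([theta @ features(phrase, a) for a in ACTIONS])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        expected = sum(p * features(phrase, a) for p, a in zip(probs, ACTIONS))
        grad += features(phrase, a_star) - expected
    theta += 0.5 * grad

phrase = "pick up the tire pallet"
scores = [theta @ features(phrase, a) for a in ACTIONS]
print(ACTIONS[int(np.argmax(scores))])       # -> "lift"
```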

13.
This article describes a user study of a life-supporting humanoid directed in a multimodal language, and discusses the results. Twenty inexperienced users commanded the humanoid in a computer-simulated remote home environment in the multimodal language by pressing keypad buttons and speaking to the robot. The results show that they comprehended the language well and were able to give commands successfully. They often chose a press-button action in place of verbal phrases to specify a direction, speed, length, angle, and/or temperature value, and preferred multimodal commands to spoken commands. However, they did not think that it was very easy to give commands in the language. This article discusses the results and points out both strong and weak points of the language and our robot.

14.
《Advanced Robotics》2013,27(17):2207-2232
In a human–robot spoken dialogue, the robot may misunderstand an ambiguous command from the user, such as 'Place the cup down (on the table)', thus running the risk of an accident. Although asking confirmation questions before the execution of any motion will decrease the risk of such failure, the user will find it more convenient if confirmation questions are not used in trivial situations. This paper proposes a method for estimating ambiguity in commands by introducing an active learning scheme with Bayesian logistic regression to human–robot spoken dialogue. We conduct physical experiments in which a user and a manipulator-based robot communicate using spoken language to manipulate objects.
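The decision of when to ask a confirmation question can be sketched with a logistic model that scores the risk of misunderstanding a command and triggers a question above a threshold; the features, weights, and the 0.3 threshold below are invented for illustration (the paper itself uses Bayesian logistic regression with active learning).

```python
# Sketch: estimate misunderstanding risk and confirm only when it is high.
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical features of a spoken command:
# [number of candidate referent objects, 1 if the target location is omitted]
w = np.array([0.9, 1.4])
b = -2.0

def risk_of_misunderstanding(x: np.ndarray) -> float:
    return sigmoid(w @ x + b)

def decide(command: str, x: np.ndarray) -> str:
    p = risk_of_misunderstanding(x)
    if p > 0.3:                         # risky or ambiguous: confirm first
        return f'Ask: "Did you mean: {command}?" (p_mis={p:.2f})'
    return f"Execute: {command} (p_mis={p:.2f})"

print(decide("place the cup down", np.array([3.0, 1.0])))   # ambiguous -> ask
print(decide("pick up the red cup", np.array([1.0, 0.0])))  # clear -> execute
```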

15.
16.
In this paper, we report on the findings of a human-robot interaction study that aims at developing a communication language for transferring grasping skills from a nontechnical user to a robot. Participants with different backgrounds and education levels were asked to command a five-degree-of-freedom human-scale robot arm to grasp five small everyday objects. They were allowed to use either commands from an existing command set or develop their own equivalent natural language instructions. The study revealed several important findings. First, individual participants were more inclined to use simple, familiar commands than more powerful ones. In most cases, once a set of instructions was found to accomplish the grasping task, few participants deviated from that set. In addition, we also found that the participant's background does appear to play a role during the interaction process. Overall, participants with less technical backgrounds require more time and more commands on average to complete a grasping task as compared to participants with more technical backgrounds.

17.
郭海燕, 刘清堂, 陈矛, 黄焕, 葛强. 《计算机科学》, 2012, 39(103): 503-506
As computer-based geometry drawing software matures, automatically generating geometric figures from propositions described in natural language has remained a difficult problem in geometry drawing software research. Based on the requirements of existing geometry drawing software and a study of geometric natural language, an effective and practical language-understanding model is established, and an interface that converts geometric natural language into geometric drawing commands is designed and implemented, so that geometric propositions described in natural language are automatically converted into drawing commands and rendered as figures. Test results show a conversion accuracy above 84.17%.
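A rule-based flavor of the natural-language-to-drawing-command conversion can be sketched with a few regular-expression patterns; the patterns and the command names (Triangle, Midpoint) are illustrative assumptions, not the interface described in the paper.

```python
# Sketch: convert a geometric statement into drawing commands with regex rules.
import re

RULES = [
    (r"triangle ([A-Z])([A-Z])([A-Z])",
     lambda m: f"Triangle({m[1]},{m[2]},{m[3]})"),
    (r"([A-Z]) is the midpoint of ([A-Z])([A-Z])",
     lambda m: f"Midpoint({m[1]},{m[2]},{m[3]})"),
]

def to_drawing_commands(proposition: str) -> list:
    commands = []
    for pattern, build in RULES:
        for match in re.finditer(pattern, proposition):
            commands.append(build(match))
    return commands

print(to_drawing_commands("In triangle ABC, D is the midpoint of BC"))
# -> ['Triangle(A,B,C)', 'Midpoint(D,B,C)']
```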

18.
A CAI development platform based on pseudo-natural-language understanding
Based on pseudo-natural-language understanding, an efficient knowledge-acquisition method is proposed, implemented, and applied to CAI development. First, a knowledge engineer uses the quasi-natural-language BL language to write a natural description of the textbook; a knowledge compilation system then processes the BL program to acquire the textbook knowledge efficiently. Next, guided by the basic semantics of the textbook knowledge base, a domain expert applies a knowledge refinement system to refine the knowledge base slightly. The domain knowledge base is then planned dynamically and globally, decomposing the domain knowledge into individual concepts, which are finally organized by a generation method into lessons and taught to students.

19.
《Advanced Robotics》2013,27(3-4):293-328
This paper presents a method of controlling robot manipulators with fuzzy voice commands. Recently, there has been some research on controlling robots using information-rich fuzzy voice commands such as 'go little slowly' and learning from such commands. However, the scope of all those works was limited to basic fuzzy voice motion commands. In this paper, we introduce a method of controlling the posture of a manipulator using complex fuzzy voice commands. A complex fuzzy voice command is composed of a set of fuzzy voice joint commands. Complex fuzzy voice commands can be used for complicated maneuvering of a manipulator, while fuzzy voice joint commands affect only a single joint. Once joint commands are learned, any complex command can be learned as a combination of some or all of them, so that, using the learned complex commands, a human user can control the manipulator in a complicated manner with natural language commands. Learning of complex commands is discussed in the framework of fuzzy coach–player model. The proposed idea is demonstrated with a PA-10 redundant manipulator.
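The composition of complex fuzzy voice commands from joint-level commands can be sketched as a simple lookup-and-expand step; the joint commands, their fuzzy amounts, and the composite name below are illustrative assumptions, not the learned commands from the PA-10 experiments.

```python
# Sketch: expand a learned "complex" voice command into joint-level steps.
# A fuzzy joint command: (joint index, fuzzy direction/amount in [-1, 1]).
joint_commands = {
    "raise shoulder slightly": (1, +0.2),
    "bend elbow a lot":        (3, +0.8),
    "rotate wrist slightly":   (5, -0.2),
}

# A learned complex command is an ordered combination of joint commands.
complex_commands = {
    "reach forward": ["raise shoulder slightly", "bend elbow a lot"],
}

def expand(utterance: str) -> list:
    """Expand a complex voice command into executable joint-level steps."""
    return [joint_commands[name] for name in complex_commands[utterance]]

print(expand("reach forward"))   # -> [(1, 0.2), (3, 0.8)]
```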

20.