Similar Documents
Found 20 similar documents.
1.
A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The network is shown to transfer grammatical knowledge to both new symbol vocabularies and new grammars. Analysis of the state space shows that the network learns generalized abstract structures of the input and is not simply memorizing the input strings. These representations are context-sensitive, hierarchical, and based on the state variable of the finite-state machines that the network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the internal representations: new symbol sets can be encoded close to already-learned symbol sets in the network's hidden-unit space. The results counter the argument that learning algorithms based on weight adaptation after each exemplar presentation (such as the long-term potentiation found in the mammalian nervous system) cannot in principle extract symbolic knowledge from positive examples, as prescribed by prevailing human linguistic theory and evolutionary psychology.

2.
We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach exploits human sensorimotor learning ability: a human tutor controls the robot through a multi-modal interface to make it perform the desired task. During tutoring, the robot simultaneously learns the tutor's action policy and over time gains full autonomy. We demonstrate the approach with an experiment in which we taught a robot to perform a wood-sawing task with a human partner using a two-person crosscut saw. The challenge of this experiment is that it requires precise coordination of the robot's motion and compliance according to the partner's actions. To transfer the sawing skill from the tutor to the robot, we used Locally Weighted Regression for trajectory generalisation and adaptive oscillators for adapting the robot to the partner's motion.
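The trajectory generalisation step named in this abstract can be illustrated with a minimal Locally Weighted Regression sketch. The 1-D formulation below is a hypothetical stand-in (the paper's actual multi-dimensional setup is not given here): it fits a Gaussian-weighted local line around each query point of a demonstrated trajectory.

```python
import math

def lwr_predict(xs, ys, x_query, bandwidth=0.2):
    """Locally Weighted Regression (1-D sketch): fit a weighted line
    around x_query using a Gaussian kernel over the training inputs."""
    w = [math.exp(-((x - x_query) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw   # weighted mean of inputs
    my = sum(wi * y for wi, y in zip(w, ys)) / sw   # weighted mean of outputs
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)) or 1e-12
    slope = num / den
    return my + slope * (x_query - mx)
```

On exactly linear demonstrations the local fit recovers the line exactly; on curved data it gives a smooth local approximation.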

3.
This paper describes an evolutionary approach to the problem of inferring stochastic context-free grammars from finite language samples. The approach employs a distributed, steady-state genetic algorithm with a fitness function incorporating a prior over the space of possible grammars. Our choice of prior is designed to bias learning towards structurally simpler grammars. Solutions to the inference problem are evolved by optimizing the parameters of a covering grammar for a given language sample. Full details are given of our genetic algorithm (GA) and of our fitness function for grammars. We present the results of a number of experiments in learning grammars for a range of formal languages. Finally, we compare the grammars induced using the GA-based approach with those found using the inside-outside algorithm, and find that our approach learns grammars that are both compact and fit the corpus data well.
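The abstract does not spell out the fitness function, so the sketch below is only a guessed shape, not the paper's definition: a posterior-style score that adds a simplicity prior, geometric in the rule count, to the data log-likelihood, so that of two grammars fitting equally well the smaller one scores higher.

```python
import math

def grammar_log_prior(num_rules, alpha=0.5):
    # Hypothetical prior favouring structurally simpler grammars:
    # geometric in the number of rules (log alpha < 0, so more rules
    # means a lower prior).
    return num_rules * math.log(alpha)

def fitness(log_likelihood, num_rules, alpha=0.5):
    # Posterior-style score: data fit plus simplicity prior.
    return log_likelihood + grammar_log_prior(num_rules, alpha)
```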

4.
Context-free grammars are widely used because of the simple form of their rules. A derivation step consists of choosing a nonterminal of the sentential form and applying a rule that rewrites it. Several regulations of the derivation process have been studied to increase the power of context-free grammars; in the resulting grammars, however, not only the symbols to be rewritten but also the rules to be applied are restricted. In this paper, we study context-free grammars with a simpler restriction in which only the symbols to be rewritten are restricted, not the rules, in the sense that any rule rewriting the chosen nonterminal can be applied. We prove that these grammars have the same power as random context, matrix, or programmed grammars. We also present two improved normal forms and discuss the characterization of context-sensitive languages by a variant that restricts strings of length at most two instead of single symbols.
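The derivation regime discussed here (the nonterminal is chosen, but any of its rules may fire) can be illustrated with a toy leftmost-derivation sketch; the example grammar and the uniform rule choice are illustrative assumptions, not taken from the paper.

```python
import random

def derive(rules, start, rng, max_steps=50):
    """One leftmost derivation in a context-free grammar: find the
    leftmost nonterminal and apply a uniformly chosen rule for it.
    `rules` maps a nonterminal to a list of right-hand sides."""
    form = [start]
    for _ in range(max_steps):
        idx = next((i for i, s in enumerate(form) if s in rules), None)
        if idx is None:          # no nonterminal left: terminal string
            return form
        form[idx:idx + 1] = rng.choice(rules[form[idx]])
    return form                  # derivation cut off at max_steps
```

With `{"S": [["a", "S", "b"], ["a", "b"]]}` every (complete or partial) sentential form has equally many a's and b's, as the a^n b^n language requires.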

5.
Much research has been conducted on applying reinforcement learning to robots, where learning time is a major concern. In reinforcement learning, sensor information is projected onto a state space, and the robot learns the best correspondence between each state and an action. As the state space expands with the number of sensors, the number of correspondences the robot must learn grows, and learning becomes time consuming. In this study, we focus on the importance of individual sensors for a given task: the sensors relevant to a task differ between tasks, and a robot does not need all installed sensors to perform one. The state space should therefore consist only of the sensors essential to the task; with such a reduced state space, a robot can learn correspondences faster than with a state space built from all installed sensors. We propose a relatively fast learning system in which a robot autonomously selects the sensors essential to a task and constructs a state space from only those sensors. As the measure of a sensor's importance for a task, we use the correlation coefficient between the sensor's value and the reward in reinforcement learning. The robot determines sensor importance from this correlation, and the state space is reduced accordingly, allowing efficient learning. We confirm the effectiveness of the proposed system through a simulation.
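The correlation-based importance measure can be sketched directly. The sensor names and the cut-off threshold below are hypothetical; the Pearson coefficient stands in for the correlation the abstract describes.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_sensors(sensor_logs, rewards, threshold=0.3):
    # Keep sensors whose |correlation| with the reward exceeds the
    # (hypothetical) threshold; only these span the reduced state space.
    return [name for name, values in sensor_logs.items()
            if abs(pearson(values, rewards)) > threshold]
```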

6.
7.
The importance of the parsing task for NLP applications is well understood. However, developing parsers remains difficult because of the complexity of the Arabic language. Most parsers are based on syntactic grammars that describe the syntactic structures of a language, and developing these grammars is laborious and time consuming. In this paper we present our method for building an Arabic parser based on an induced PCFG grammar. We first induce the PCFG from an Arabic treebank; then we implement the parser, which assigns a syntactic structure to each input sentence. The parser is tested on 1650 sentences extracted from the treebank, and we calculate precision, recall and F-measure. Our experimental results show the efficiency of the proposed parser for parsing Modern Standard Arabic sentences (precision: 83.59%, recall: 82.98%, F-measure: 83.23%).
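The reported scores follow the standard definitions; a minimal sketch over constituent counts (the counts in the usage test are hypothetical, not the paper's):

```python
def parseval(correct, predicted, gold):
    """Precision/recall/F-measure from constituent counts:
    `correct`   - constituents the parser got right,
    `predicted` - constituents the parser proposed,
    `gold`      - constituents in the reference treebank parse."""
    precision = correct / predicted
    recall = correct / gold
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```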

8.
We present a new approach to motion rearrangement that preserves the syntactic structures of an input motion automatically by learning a context-free grammar from the motion data. For grammatical analysis, we reduce an input motion to a string of terminal symbols by segmenting the motion into a series of subsequences and associating each group of similar subsequences with the same symbol. To obtain the most repetitive and precise set of terminals, we search for an optimal segmentation such that a large number of subsequences can be clustered into groups with little error. Once the input motion has been encoded as a string, a grammar induction algorithm builds a context-free grammar that can reconstruct the original string accurately as well as generate novel strings sharing its syntactic structure. Given any new string from the learned grammar, it is straightforward to synthesize a motion sequence by replacing each terminal symbol with its associated motion segment and stitching the segments together sequentially. We demonstrate the usefulness and flexibility of our approach by learning grammars from a large diversity of human motions and reproducing their syntactic structures in new motion sequences.
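The abstract does not name its grammar induction algorithm; as a stand-in, the sketch below uses greedy digram substitution (Re-Pair style), which likewise builds context-free rules whose expansion reconstructs the original symbol string exactly.

```python
from collections import Counter

def induce_rules(symbols):
    """Greedy digram substitution: repeatedly replace the most frequent
    adjacent pair of symbols with a fresh nonterminal R0, R1, ..."""
    rules, next_id, seq = {}, 0, list(symbols)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:            # nothing repeats any more: stop
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):      # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Expand a symbol back into the terminal string it derives."""
    if sym not in rules:
        return [sym]
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)
```

In the motion setting, each terminal would map back to a motion segment, so expanding a novel string from the grammar yields a stitched motion sequence.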

9.
郝颖明 (Hao Yingming), 董再励 (Dong Zaili), 王建刚 (Wang Jiangang). 《机器人》 (Robot), 2000, 22(4): 241-246.
Parts mating is a basic operation for assembly robots. This paper presents a vision-guidance method that accomplishes this operation using binocular stereo vision. By adopting human-computer interaction and drawing on human judgment, the method improves the accuracy and reliability of image feature extraction and matching, and can directly and accurately provide the motion parameters for the parts-mating operation, overcoming the computational complexity and poor robustness of fully automatic vision. It is well suited to robot teleoperation. Experiments show that robot parts mating guided by stereo vision with human-computer interaction is entirely feasible.

10.
In this paper, we propose a novel method for human-robot collaboration in which the robot's physical behaviour is adapted online to the human's motor fatigue. The robot starts as a follower and imitates the human. As the collaborative task is performed under human lead, the robot gradually learns the parameters and trajectories related to task execution, while monitoring the human's fatigue. When a predefined fatigue level is reached, the robot uses the learnt skill to take over the physically demanding aspects of the task and lets the human recover some strength. The human remains present to perform the aspects of the collaborative task that the robot cannot fully take over, and maintains overall supervision. The robot adaptation system is based on Dynamical Movement Primitives, Locally Weighted Regression and Adaptive Frequency Oscillators. Human motor fatigue is estimated with a proposed online model based on muscle activity measured by electromyography. We demonstrate the proposed approach with experiments on real-world co-manipulation tasks: material sawing and surface polishing.
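The EMG-based fatigue model is only named in the abstract; the snippet below is a deliberately simple first-order stand-in, not the paper's model: normalised muscle effort accumulates into a fatigue state and decays during rest, and a threshold triggers the robot's take-over.

```python
def update_fatigue(fatigue, emg, mvc, dt, tau=10.0):
    """Toy online fatigue estimate: effort above the current fatigue
    level pushes it up, rest lets it decay (first-order dynamics).
    `mvc` is the maximum voluntary contraction used for normalisation."""
    effort = min(emg / mvc, 1.0)
    return fatigue + dt * (effort - fatigue) / tau

def robot_should_take_over(fatigue, threshold=0.8):
    # Hypothetical predefined fatigue level at which the robot takes
    # over the physically demanding aspects of the task.
    return fatigue >= threshold
```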

11.
We investigate the learning of flexible robot locomotion controllers, i.e., controllers applicable to multiple contexts such as different walking speeds, various terrain slopes or other physical properties of the robot. In our experiments, the context is the desired linear walking speed of the gait. Current approaches for learning the control parameters of biped locomotion controllers are typically applicable only to a single context: they can be used, for example, to learn a gait with the highest speed, the lowest energy consumption or a combination of both. The question of our research is: how can we obtain a flexible walking controller that controls the robot (near) optimally across many different contexts? We achieve this flexibility by applying the recently developed contextual relative entropy policy search (REPS) method, which generalizes the walking controller over contexts described by real-valued vectors. We also extend contextual REPS to learn a policy that is non-linear, rather than linear, in the context, which we call RBF-REPS because it uses Radial Basis Functions. To validate our method, we perform three simulation experiments, including a walking experiment with a simulated NAO humanoid robot. The robot learns a policy that chooses controller parameters for a continuous range of forward walking speeds.
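The RBF parameterisation can be sketched as follows; the centres, bandwidth and scalar context are hypothetical. Normalised Gaussian features make the resulting policy parameters vary smoothly but non-linearly with the context (here, walking speed).

```python
import math

def rbf_features(context, centres, sigma=0.5):
    """Normalised Gaussian RBF activations for a scalar context value."""
    acts = [math.exp(-((context - c) ** 2) / (2 * sigma ** 2))
            for c in centres]
    s = sum(acts)
    return [a / s for a in acts]

def policy_parameters(context, weights, centres, sigma=0.5):
    """Non-linear (in the context) policy: each controller parameter is a
    linear combination of the RBF features, one weight row per parameter."""
    phi = rbf_features(context, centres, sigma)
    return [sum(w * p for w, p in zip(row, phi)) for row in weights]
```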

12.
In this paper we provide an implementation strategy for mapping a functional specification of an utterance into a syntactically well-formed sentence. We do this by integrating the functional and syntactic perspectives on language, which we take to be exemplified by systemic grammars and tree adjoining grammars (TAGs) respectively. From systemic grammars we borrow the use of networks of choices to classify the set of possible constructions; the choices expressed in an input are mapped by our generator to a syntactic structure as defined by a TAG. We argue that TAG structures can be appropriate structural units of realization in a generator based on systemic grammar, and that a systemic grammar provides an effective means of deciding between the various syntactic possibilities expressed in a TAG. We have developed a generation strategy that takes advantage of what both paradigms offer to generation, without compromising either.

13.
Advanced Robotics, 2013, 27(10): 1165-1181.
Cognitive scientists and developmental psychologists have suggested that the development of perceptual, motor and memory functions in human infants, together with adaptive evaluation by caregivers, facilitates infants' learning of cognitive tasks. This article presents a robotic approach to understanding how learning for joint attention can be helped by such functional development. A robot learns the visuomotor mapping needed to achieve joint attention based on evaluations from a caregiver. The caregiver adjusts the criterion for evaluating the robot's performance from easy to difficult as the performance improves; at the same time, the robot gradually develops its visual function by sharpening its input images. Experiments reveal that the caregiver's adaptive evaluation accelerates the robot's learning, and that the robot's visual development improves the accuracy of joint attention thanks to a well-structured visuomotor mapping. These results constructively explain the roles that synchronized functional development in infants and caregivers plays in infants' task learning.

14.
In this article, we present a novel approach to learning efficient navigation policies for mobile robots that use visual features for localization. Since fast movements of a mobile robot typically introduce motion blur in the acquired images, the robot's uncertainty about its pose increases in such situations; it can then no longer be ensured that a navigation task is executed efficiently, since the robot's pose estimate might not correspond to its true location. We present a reinforcement learning approach to determine a navigation policy that reaches the destination reliably and, at the same time, as fast as possible. Using our technique, the robot learns to trade off velocity against localization accuracy, implicitly taking the impact of motion blur on observations into account. We furthermore developed a method to compress the learned policy via clustering, significantly reducing the size of the policy representation, which is especially desirable for memory-constrained systems. Extensive simulated and real-world experiments with two different robots demonstrate that our learned policy significantly outperforms policies using a constant velocity as well as more advanced heuristics. We furthermore show that the policy is generally applicable to different indoor and outdoor scenarios with varying landmark densities, and to navigation tasks of different complexity.
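The clustering-based compression can be illustrated with a toy sketch: learned per-state velocities are replaced by the nearest of k cluster centres (plain 1-D k-means here), shrinking the number of distinct actions that must be stored. All state names, values and the choice of k are hypothetical, not the paper's.

```python
def kmeans_1d(values, k, iters=50):
    """Simple 1-D k-means (k >= 2); centres start spread over the range."""
    lo, hi = min(values), max(values)
    centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[j].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres

def compress_policy(policy, k=2):
    """Replace each state's learned velocity by the nearest cluster
    centre, so only k distinct actions need to be represented."""
    centres = kmeans_1d(list(policy.values()), k)
    return {s: min(centres, key=lambda c: abs(v - c))
            for s, v in policy.items()}
```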

15.
Conventional humanoid robot behaviors are directly programmed based on the programmer's personal experience, and the resulting behaviors usually appear unnatural. It is believed that a humanoid robot can acquire new adaptive behaviors from a human if the robot has the criteria underlying such behaviors. The aim of this paper is to establish a method for acquiring human behavioral criteria. The advantage of acquiring behavioral criteria is that humanoid robots can then autonomously produce behaviors for similar tasks using the same criteria, without having to transform data obtained from morphologically different humans for every task. In this paper, a manipulator robot learns a model behavior, and another robot is created to perform the model behavior in place of a person. The model robot is given behavioral criteria that the learning manipulator robot does not know and tries to infer. In addition, to reflect the difference between human and robot bodies, the body sizes of the learning robot and the model robot are made different. Behavioral criteria are obtained by comparing the efficiencies with which the learning robot learns the model behaviors. Simulation results demonstrate that the proposed method is effective for obtaining behavioral criteria. The method, the details of the simulation, and the results are presented in this paper.

16.
17.
Recently, robots have been introduced into warehouses and factories for automation, and are expected to execute dual-arm manipulation as humans do and to manipulate large, heavy and unbalanced objects. We focus on the target-picking task in cluttered environments and aim to realize a robot picking system in which the robot selects and executes the proper grasping motion from single-arm and dual-arm options. In this paper, we propose a few-experiential learning-based target picking system with selective dual-arm grasping. In our system, a robot first learns grasping points and object semantic and instance labels from an automatically synthesized dataset. The robot then collects grasp-trial experiences in the real world and retrains the grasping-point prediction model with these experiences. Finally, the robot evaluates candidate combinations of grasped object instance, strategy and grasping points, and selects and executes the optimal grasping motion. In the experiments, we evaluated our system through target-picking experiments with the dual-arm humanoid robot Baxter in a cluttered, warehouse-like environment.

18.
We propose a method for learning novel objects from audio-visual input. The proposed method is based on two techniques: out-of-vocabulary (OOV) word segmentation and foreground object detection in complex environments. A voice conversion technique is also incorporated so that the robot can pronounce the acquired OOV word intelligibly. We also implemented a robotic system that carries out interactive mobile manipulation tasks, which we call "extended mobile manipulation", using the proposed method. To evaluate the robot as a whole, we conducted the "Supermarket" task adopted from the RoboCup@Home league as a standard task for real-world applications. The results reveal that our integrated system works well in real-world applications.

19.
Context: Generating test cases based on the software input interface is a black-box testing technique that can be made more effective by using structured input models such as input grammars. Automatically generating grammar-based test inputs may lead to structurally valid but semantically invalid inputs that are rejected in the early semantic error-checking phases of a system under test.
Objective: This paper aims to introduce a method for specifying a grammar-based input model together with the model's semantic constraints, to be used in the generation of positive test inputs. It is also important that the method can generate effective test suites based on appropriate grammar-based coverage criteria.
Method: Formal specification of both input structure and input semantics provides the opportunity to use model instantiation techniques to create model instances that satisfy all specified constraints. The input interface of a subject system can be specified using a high-level specification scheme such as attribute grammars, and a transformation function from this scheme to an instantiable formal modeling language can generate the desired model instances.
Results: We propose a declarative grammar-based input specification method that is based on a variation of attribute grammars and allows the user to specify input constraints in addition to input structure. The model can be instantiated automatically to generate structurally and semantically valid test inputs. The proposed method can also specify test requirements and coverage criteria and use them to generate valid test suites satisfying those criteria.
Conclusion: The work presented in this paper provides a black-box test generation method for grammar-based software inputs that can automatically generate criteria-covering test suites.
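As a toy illustration of producing inputs that are semantically as well as structurally valid, the sketch below enforces a declare-before-use constraint during generation instead of rejecting invalid strings afterwards. The grammar, names and constraint are hypothetical and far simpler than the paper's attribute-grammar scheme.

```python
import random

def generate_input(rng, max_vars=3):
    """Generate a structurally and semantically valid toy input:
    declarations followed by uses of *declared* names only, so the
    semantic constraint is satisfied by construction."""
    names = [f"v{i}" for i in range(rng.randint(1, max_vars))]
    decls = [f"int {n};" for n in names]
    uses = [f"print({rng.choice(names)});" for _ in range(rng.randint(1, 4))]
    return "\n".join(decls + uses)

def semantically_valid(program):
    """Check the constraint a generator must respect: no use before
    declaration (the early semantic check a target system would run)."""
    declared = set()
    for line in program.splitlines():
        if line.startswith("int "):
            declared.add(line[4:-1])
        else:
            name = line[line.index("(") + 1:line.index(")")]
            if name not in declared:
                return False
    return True
```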

20.
Eye movements are studied clinically and in research in neurophysiology, neurology, ophthalmology, and otology. In this article, a syntactic method for recognizing horizontal nystagmus and smooth-pursuit eye movements is presented. Eye movement signals, recorded for example electro-oculographically, are transformed into symbol strings of context-free grammars. These symbol strings are fed to an LR(k) parser, which detects eye movements as sentences of the formal languages produced by the LR(k) grammars. Because LR(k) grammars are used, the time required by the whole recognition method is directly proportional to the number of symbols in an input string.
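The signal-to-symbol transformation can be illustrated with a toy sketch. The velocity thresholds are hypothetical, and a regular expression stands in for the paper's LR(k) grammar: a nystagmus beat is approximated as one or more slow-phase samples followed by a fast return, repeated at least twice.

```python
import re

def symbolise(velocities, fast_threshold=5.0):
    """Map eye-velocity samples to terminal symbols:
    'f' fast (saccadic) phase, 's' slow phase, 'o' otherwise."""
    out = []
    for v in velocities:
        if abs(v) >= fast_threshold:
            out.append('f')
        elif abs(v) > 0.5:
            out.append('s')
        else:
            out.append('o')
    return ''.join(out)

def is_nystagmus(string):
    # Regular-language approximation of the beat grammar: slow phase(s)
    # then a fast return, at least two beats in a row.
    return re.fullmatch(r"(s+f+){2,}", string) is not None
```

Like an LR(k) parse, matching this pattern runs in time linear in the number of symbols.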
