Similar Documents
20 similar documents retrieved (search time: 187 ms)
1.
Eye movements are studied in neurophysiology, neurology, ophthalmology, and otology, both clinically and in research. In this article, a syntactic method for recognizing horizontal nystagmus and smooth pursuit eye movements is presented. Eye movement signals, recorded for example electro-oculographically, are transformed into symbol strings of context-free grammars. These symbol strings are fed to an LR(k) parser, which detects eye movements as sentences of the formal languages produced by these LR(k) grammars. Because LR(k) grammars are used, the time required by the whole recognition method is directly proportional to the number of symbols in the input string.
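As an illustration of the pipeline this abstract outlines, the sketch below thresholds an eye-position trace into a symbol string and uses a regular expression as a crude stand-in for the paper's LR(k) parser; the thresholds, symbol alphabet, and nystagmus pattern are illustrative assumptions, not values taken from the article.

```python
import re

def to_symbols(position, dt=0.01, slow=20.0, fast=100.0):
    """Map per-sample eye velocity (deg/s) to symbols:
    'f' fixation, 's' slow phase, 'q' quick (saccadic) phase."""
    symbols = []
    for a, b in zip(position, position[1:]):
        v = abs(b - a) / dt
        symbols.append("f" if v < slow else "s" if v < fast else "q")
    return "".join(symbols)

# Nystagmus appears as repeated slow drifts terminated by quick return phases;
# a regular expression stands in here for the grammar-based recognizer.
NYSTAGMUS = re.compile(r"(s{3,}q+){2,}")

trace = [0.0, 0.3, 0.6, 0.9, 1.2, -0.8, -0.5, -0.2, 0.1, 0.4, -1.6]
symbols = to_symbols(trace)
print(symbols, bool(NYSTAGMUS.search(symbols)))   # ssssqssssq True
```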

2.
A subsequence is obtained from a string by deleting any number of characters; thus in contrast to a substring, a subsequence is not necessarily a contiguous part of the string. Counting subsequences under various constraints has become relevant to biological sequence analysis, to machine learning, to coding theory, to the analysis of categorical time series in the social sciences, and to the theory of word complexity. We present theorems that lead to efficient dynamic programming algorithms to count (1) distinct subsequences in a string, (2) distinct common subsequences of two strings, (3) matching joint embeddings in two strings, (4) distinct subsequences with a given minimum span, and (5) sequences generated by a string allowing characters to come in runs of a length that is bounded from above.
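Problem (1) admits a classic linear-time dynamic programming recurrence; the sketch below shows that standard recurrence as a plausible illustration of the kind of algorithm the paper develops, not its actual formulation.

```python
def count_distinct_subsequences(s):
    """Count distinct non-empty subsequences of s in O(n) time.

    `total` holds the number of distinct subsequences (including the empty one)
    of the prefix seen so far; appending a character doubles it, minus the
    subsequences already counted the last time the same character appeared."""
    total = 1            # the empty subsequence
    last = {}            # character -> value of `total` before its previous occurrence
    for ch in s:
        prev = total
        total = 2 * total - last.get(ch, 0)
        last[ch] = prev
    return total - 1     # exclude the empty subsequence

print(count_distinct_subsequences("aba"))   # 6: a, b, aa, ab, ba, aba
```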

3.
A simple associationist neural network learns to factor abstract rules (i.e., grammars) from sequences of arbitrary input symbols by inventing abstract representations that accommodate unseen symbol sets as well as unseen but similar grammars. The neural network is shown to have the ability to transfer grammatical knowledge to both new symbol vocabularies and new grammars. Analysis of the state-space shows that the network learns generalized abstract structures of the input and is not simply memorizing the input strings. These representations are context sensitive, hierarchical, and based on the state variable of the finite-state machines that the neural network has learned. Generalization to new symbol sets or grammars arises from the spatial nature of the internal representations used by the network, allowing new symbol sets to be encoded close to symbol sets that have already been learned in the hidden unit space of the network. The results are counter to the arguments that learning algorithms based on weight adaptation after each exemplar presentation (such as the long term potentiation found in the mammalian nervous system) cannot in principle extract symbolic knowledge from positive examples as prescribed by prevailing human linguistic theory and evolutionary psychology.

4.
Motion strings: a motion-capture data representation method for behaviour segmentation
Behaviour segmentation of motion data is a crucial step in the motion-capture pipeline. To address the shortcomings of existing segmentation methods, a motion-data representation suitable for behaviour segmentation is proposed, and behaviour segmentation is carried out on top of this representation. The motion data are processed by spectral clustering, temporal-order restoration, and max filtering to produce a character string, called the motion string. A suffix tree is then used to analyse the motion string and extract all static substrings and periodic substrings; these substrings are labelled with behaviours, thereby achieving behaviour segmentation of the motion data. Experiments show that segmentation based on motion strings is robust and yields good segmentation results.
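A much-simplified stand-in for the suffix-tree analysis, assuming a motion string is already available: static segments are found as runs of one symbol and periodic segments by a naive repeated-block scan; the example string and thresholds are illustrative assumptions.

```python
import re

def static_substrings(motion, min_len=4):
    """Runs of a single behaviour symbol (e.g. a held pose)."""
    pattern = r"(.)\1{%d,}" % (min_len - 1)
    return [(m.start(), m.group()) for m in re.finditer(pattern, motion)]

def periodic_substrings(motion, min_period=2, min_repeats=2):
    """Blocks that repeat back-to-back (e.g. a walk cycle); runs of one
    symbol are skipped because they are already reported as static."""
    found = []
    pattern = r"(.{%d,}?)\1{%d,}" % (min_period, min_repeats - 1)
    for m in re.finditer(pattern, motion):
        block = m.group(1)
        if len(set(block)) > 1:
            found.append((m.start(), block, len(m.group()) // len(block)))
    return found

motion = "aaaaaabcbcbcbcddddd"
print(static_substrings(motion))     # [(0, 'aaaaaa'), (14, 'ddddd')]
print(periodic_substrings(motion))   # [(6, 'bc', 4)]
```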

5.
Lexical states in JavaCC provide a powerful mechanism for scanning regular expressions in a context-sensitive manner, but they also make it hard to reason about the correctness of the grammar. We first categorize the related correctness issues into two classes: errors and warnings. We then extend traditional context-sensitive and context-insensitive analyses to identify errors and warnings in context-free grammars. We have implemented these analyses as a standalone tool (LSA), the first of its kind, to identify errors and warnings in JavaCC grammars. The LSA tool outputs a graph that depicts the grammar and the error transitions. Importantly, it can also generate counterexample strings that can be used to establish the errors. We have used LSA to analyze a host of open-source JavaCC grammar files to good effect. Copyright © 2015 John Wiley & Sons, Ltd.

6.
We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion‐retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose‐to‐pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. skeleton), our method dispenses with the need for such a structure by means of a built‐in surface‐based deformation system. As deformation for animation purposes may require non‐rigid behaviour, we augment existing rigid deformation approaches to provide volume‐preserving and squash‐and‐stretch deformations. We demonstrate our approach on well‐known mesh models along with several publicly available motion‐capture sequences.

7.
We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols' position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and the analysis then attempts to code groups of elements (hierarchies) in the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.
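The core idea, detecting a repeated transformation among atomic elements and coding it as a rule, can be sketched as follows; clustering in full 4D transformation spaces and the hierarchical grouping are omitted, and the element positions and tolerance are illustrative assumptions.

```python
import numpy as np

def detect_translation_rule(positions, tol=1e-3):
    """If consecutive elements are related by one repeated translation,
    code them as a generated sequence (an L-system-style rule)."""
    positions = np.asarray(positions, dtype=float)
    diffs = positions[1:] - positions[:-1]          # consecutive transformations
    step = diffs.mean(axis=0)
    if np.allclose(diffs, step, atol=tol):          # a regular structure was found
        return {"axiom": positions[0].tolist(),
                "rule": "A(p) -> element(p) A(p + {})".format(step.tolist()),
                "count": len(positions)}
    return None                                     # no single-translation rule fits

print(detect_translation_rule([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0], [4.5, 0.0]]))
```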

8.
This paper presents the modules that comprise a knowledge-based sign synthesis architecture for Greek sign language (GSL). Such systems combine natural language (NL) knowledge, machine translation (MT) techniques and avatar technology in order to allow for dynamic generation of sign utterances. The NL knowledge of the system consists of a sign lexicon and a set of GSL structure rules, and is exploited in the context of typical natural language processing (NLP) procedures, which involve syntactic parsing of linguistic input as well as structure and lexicon mapping according to standard MT practices. The coding of linguistic strings relevant to GSL provides instructions for the motion of a virtual signer that performs the corresponding signing sequences. Dynamic synthesis of GSL linguistic units is achieved by mapping written Greek structures to GSL, based on a computational grammar of GSL and a lexicon that contains lemmas coded as features of GSL phonology. This approach allows for robust conversion of written Greek to GSL, which is an essential prerequisite for access to e-content by the community of native GSL signers. The developed system is sublanguage oriented and performs satisfactorily as regards its linguistic coverage, allowing for easy extensibility to other language domains. However, its overall performance is subject to current well-known MT limitations.

9.
The study of hairpin-free words has been initiated in the context of DNA computing. DNA strands that, theoretically speaking, are finite strings over the alphabet {A, G, C, T} are used in DNA computing to encode information. Because A is complementary to T and G to C, DNA single strands that are complementary can bind to each other or to themselves in either intended or unintended ways. One of the structures that is usually undesirable for biocomputation, since it makes the affected DNA string unavailable for future interactions, is the hairpin: if some subsequences of a DNA single string are complementary to each other, the string will bind to itself forming a hairpin-like structure. This paper continues the theoretical study of hairpin-free languages. We study algebraic properties of hairpin-free words and hairpins. We also give a complete characterization of the syntactic monoid of the language consisting of all hairpin-free words over a given alphabet and illustrate it with an example using the DNA alphabet.
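A minimal sketch of the underlying notion, under the usual definition that a hairpin is a factor whose reversed Watson-Crick complement also occurs later in the word; the length threshold and the brute-force search are illustrative choices, not the paper's algebraic constructions.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def theta(v):
    """Reversed Watson-Crick complement (the antimorphic involution)."""
    return "".join(COMPLEMENT[c] for c in reversed(v))

def is_hairpin_free(w, k=4):
    """True if no factor v of length k has theta(v) occurring later in w,
    i.e. the strand cannot fold back and bind to itself over k or more bases."""
    for i in range(len(w) - k + 1):
        v = w[i:i + k]
        if theta(v) in w[i + k:]:
            return False
    return True

print(is_hairpin_free("AAAAGGGGTTTT"))   # False: AAAA ... TTTT can bind
print(is_hairpin_free("AAAACCCCAAAA"))   # True
```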

10.
Motion capture cannot generate cartoon‐style animation directly. We emulate the rubber‐like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon‐like movement. We achieve this using trajectory‐based motion exaggeration while allowing the violation of link‐length constraints. We extend this technique to obtain smooth, rubber‐like motion by dividing the original links into shorter sub‐links and computing the positions of joints using Bézier curve interpolation and a mass‐spring simulation. This method is fast enough to be used in real time.
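A minimal sketch, assuming each link is subdivided and its sub-joint positions are placed on a cubic Bézier curve to bow the link instead of keeping it rigid; the control-point choice and the bend amount are illustrative assumptions, and the paper's mass-spring pass is omitted.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def rubber_link(joint_a, joint_b, bend, subdivisions=4):
    """Positions of sub-joints along a link, bowed sideways by `bend`
    to give a rubber-like bend instead of a rigid straight segment."""
    ctrl1 = (joint_a[0] + bend, (2 * joint_a[1] + joint_b[1]) / 3)
    ctrl2 = (joint_b[0] + bend, (joint_a[1] + 2 * joint_b[1]) / 3)
    return [cubic_bezier(joint_a, ctrl1, ctrl2, joint_b, i / subdivisions)
            for i in range(subdivisions + 1)]

print(rubber_link((0.0, 0.0), (0.0, 1.0), bend=0.3))
```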

11.
An information source that generates messages with formal syntactic structures is proposed. It is modeled as a context-free grammar in Chomsky Normal Form. The messages are binary coded with or without error-correction capabilities, and are transmitted over a memoryless symmetric noisy channel. At the receiver, an algorithmic procedure is proposed for the correction of syntactic errors in the incoming messages. This is similar to the Cocke-Younger-Kasami parser with the additional capability of detecting and correcting syntactic errors in the input strings. Simulation results are presented which show that the present syntactic decoding scheme has a lower probability of decoding errors than conventional algebraic decoding schemes. Its performance is also compared with that of an alternate syntactic decoder studied recently. This work was supported by the National Science Foundation under grant ENG 74-17586.
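For reference, the recognition core of a Cocke-Younger-Kasami parser for a grammar in Chomsky Normal Form, the setting this decoder extends, looks as follows; the toy grammar is an illustrative assumption and no error detection or correction is included.

```python
from itertools import product

def cyk_recognize(tokens, start, unary, binary):
    """unary: terminal -> set of nonterminals (A -> 'a'); binary: (B, C) -> set of
    nonterminals (A -> B C).  Returns True if `start` derives the token string."""
    n = len(tokens)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]   # table[i][span]
    for i, tok in enumerate(tokens):
        table[i][1] = set(unary.get(tok, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for B, C in product(table[i][split], table[i + split][span - split]):
                    table[i][span] |= binary.get((B, C), set())
    return start in table[0][n]

# Toy CNF grammar: S -> A B, A -> 'a', B -> 'b'
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "B"): {"S"}}
print(cyk_recognize(list("ab"), "S", unary, binary))   # True
print(cyk_recognize(list("aa"), "S", unary, binary))   # False
```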

12.
Automatic Conversion of Mesh Animations into Skeleton-based Animations
Recently, it has become increasingly popular to represent animations not by means of a classical skeleton-based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods as well as new surface-based scene capture techniques offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones and there is not yet a rich toolbox available which enables easy post-processing and modification of mesh animations. To bridge this gap between the mesh-based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, as well as surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools.

13.
Type classification of fingerprints: a syntactic approach
A fingerprint classification procedure using a computer is described. It classifies the prints into one of ten defined types. The procedure is implemented using PICAP (picture array processor). The picture processing system includes a TV camera input and a special picture processor. The first part of the procedure is a transformation of the original print to a sampling matrix, where the dominant direction of the ridges for each subpicture is indicated. After smoothing, the lines in this pattern are traced out and converted to strings of symbols. Finally, a syntactic approach is adopted to make the type classification based on this string of symbols.

14.
Hai-Feng Guo, Zongyan Qiu. Software, 2015, 45(11): 1519-1547
Grammar-based test generation provides a systematic approach to producing test cases from a given context-free grammar. Unfortunately, naive grammar-based test generation is problematic because exhaustive random test-case production is often explosive, and grammar-based test generation with explicit annotation controls often causes unbalanced testing coverage. In this paper, we present an automatic grammar-based test generation approach, which takes a symbolic grammar as input, requires zero control input from users, and produces well-distributed test cases. Our approach utilizes a novel dynamic stochastic model where each variable is associated with a tuple of probability distributions, which are dynamically adjusted along the derivation. We further present a coverage tree illustrating the distribution of generated test cases and their detailed derivations. More importantly, the coverage tree supports various implicit derivation control mechanisms. We implemented this approach in a Java-based system, named Gena. Each test case generated by Gena automatically comes with a set of structural features, which can play an important and effective role in automated failure-cause localization. Experimental results demonstrate the effectiveness of our approach, the well-balanced distribution of generated test cases over grammatical structures, and a case study on grammar-based failure-cause localization. Copyright © 2014 John Wiley & Sons, Ltd.
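A minimal sketch in the spirit of the dynamic stochastic model described above: each nonterminal carries one weight per alternative, and the weight of a chosen alternative is decayed so that later choices favour less-exercised productions; the toy grammar, weighting scheme, and decay factor are illustrative assumptions, not Gena's actual model.

```python
import random

GRAMMAR = {  # toy CFG: nonterminal -> list of alternative right-hand sides
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["num"], ["(", "expr", ")"]],
    "num":  [["0"], ["1"]],
}

# one weight per alternative, adjusted as derivations are generated
weights = {nt: [1.0] * len(alts) for nt, alts in GRAMMAR.items()}

def generate(symbol):
    if symbol not in GRAMMAR:                        # terminal symbol
        return [symbol]
    alts, w = GRAMMAR[symbol], weights[symbol]
    idx = random.choices(range(len(alts)), weights=w)[0]
    w[idx] *= 0.5                                    # discourage reusing this alternative
    out = []
    for s in alts[idx]:
        out.extend(generate(s))
    return out

for _ in range(5):
    print("".join(generate("expr")))
```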

15.
We present a new approach to online incremental word acquisition and grammar learning by humanoid robots. Using no data set provided in advance, the proposed system grounds language in a physical context, as mediated by its perceptual capacities. Learning is carried out through show-and-tell procedures in interaction with a human partner, and the procedure is open-ended with respect to new words and multiword utterances. These facilities are supported by a self-organizing incremental neural network, which can perform online unsupervised classification and topology learning. Endowed with mental imagery, the system also learns, through both top-down and bottom-up processes, the syntactic structures contained in utterances, thereby performing simple grammar learning. Under this multimodal scheme, the robot is able to describe a given physical context (both static and dynamic) online through natural language expressions, and it can perform actions through verbal interaction with its human partner.

16.
We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non-static backgrounds, and with unsynchronized cameras of varying image quality and frame rate. We rely entirely on optical information and do not make use of additional sensor information (e.g. depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace complex and expensive motion capture studios with a few consumer-grade cameras, even in uncontrolled outdoor scenes. We demonstrate this on challenging multi-view video sequences captured with unsynchronized and moving (e.g. mobile-phone or GoPro) cameras.

17.
Facial animation is a time-consuming and cumbersome task that requires years of experience and/or a complex and expensive set-up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video-games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean-based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; however, we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video-game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
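The pose-merging and traversal steps can be sketched as follows: near-identical poses are merged into one node using a Euclidean threshold, and the resulting motion graph is traversed with Dijkstra's algorithm; the pose vectors, threshold, and edge costs are illustrative assumptions.

```python
import heapq, math

def merge_poses(poses, threshold=0.1):
    """Map each pose to a representative node: the first earlier pose that is
    within `threshold` Euclidean distance, otherwise a new node."""
    reps, node_of = [], []
    for p in poses:
        for i, r in enumerate(reps):
            if math.dist(p, r) < threshold:
                node_of.append(i)
                break
        else:
            reps.append(p)
            node_of.append(len(reps) - 1)
    return node_of

def dijkstra(graph, source, target):
    """graph: {node: [(neighbour, cost), ...]}; returns (cost, cheapest node path)."""
    queue, best = [(0.0, source, [source])], {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nxt, c in graph.get(node, []):
            heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return math.inf, []

nodes = merge_poses([(0.0, 0.0), (0.05, 0.0), (1.0, 0.0), (1.0, 1.0)])
graph = {0: [(1, 1.0)], 1: [(2, 1.0)], 2: []}   # node ids produced by merge_poses
print(nodes, dijkstra(graph, 0, 2))
```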

18.
Attribute grammars (AGs) add context-sensitive properties to context-free grammars, augmenting their expressive power with combined syntactic and semantic notation and making them a useful tool for a wide range of applications. AGs have been used extensively in artificial intelligence, structural pattern recognition, compiler construction, and even text editing. The performance of an attribute evaluation system depends on the efficiency of its syntactic and semantic subsystems. In this paper, a hardware architecture for an attribute evaluation system is presented, based on an efficient combinatorial implementation of Earley's parallel parsing algorithm for the syntactic part of the attribute grammar. The semantic part is handled by a special-purpose module that traverses the parse tree and evaluates the attributes using a proposed stack-based approach. The entire system is described in Verilog HDL (hardware description language) in template form: given the specification of an arbitrary attribute grammar, the synthesizable HDL source code of the system is produced on the fly by a proposed automated tool. The generated code has been simulated for validation, synthesized, and tested on a Xilinx FPGA (field-programmable gate array) board for various AGs. Our method increases performance by up to three orders of magnitude compared with previous approaches, depending on the implementation, the size of the grammar, and the input string length. This makes it particularly appealing for applications where attribute evaluation is a crucial aspect, such as real-time and embedded systems. Specifically, a natural language interface is presented, based on a question-answering application from the area of airline flights.
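As a software-level illustration of stack-based semantic evaluation of the kind the architecture implements in hardware, the sketch below evaluates synthesized attributes over a parse tree with an explicit stack; the toy arithmetic grammar and its semantic rules are illustrative assumptions.

```python
class Node:
    def __init__(self, label, children=(), value=None):
        self.label, self.children, self.value = label, list(children), value

SEMANTIC_RULES = {             # a node's attribute from its children's attributes
    "num":   lambda node, kids: node.value,
    "plus":  lambda node, kids: kids[0] + kids[1],
    "times": lambda node, kids: kids[0] * kids[1],
}

def evaluate(root):
    """Post-order traversal with an explicit stack: children's synthesized
    attributes are computed first, then the parent's attribute."""
    attrs, stack = {}, [(root, False)]
    while stack:
        node, expanded = stack.pop()
        if not expanded:
            stack.append((node, True))
            stack.extend((c, False) for c in node.children)
        else:
            kids = [attrs[id(c)] for c in node.children]
            attrs[id(node)] = SEMANTIC_RULES[node.label](node, kids)
    return attrs[id(root)]

# (2 + 3) * 4
tree = Node("times", [Node("plus", [Node("num", value=2), Node("num", value=3)]),
                      Node("num", value=4)])
print(evaluate(tree))   # 20
```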

19.
In many applications, it is useful to extract structured data from sections of unstructured text. A common approach is to use pattern matching (e.g., regular expressions) or more general grammar-based techniques. In cases where exact templates or grammar fragments are not known, it is possible to use machine learning approaches, based on words or n-grams, to identify the structured data. This is generally a two-stage (train/use) process that cannot easily cope with incremental extensions of the training set. In this paper, we combine a fuzzy grammar-based approach with incremental learning. This enables a set of grammar fragments to evolve incrementally, each time a new example is given, while guaranteeing that it can parse previously seen examples. We propose a novel measure of overlap between fuzzy grammar fragments that can also be used to determine the degree to which a string is parsed by a grammar fragment. This measure of overlap allows us to compare the range of two fuzzy grammar fragments (i.e., to estimate and compare the sets of strings that fuzzily conform to each grammar) without explicitly parsing any strings. A simple application shows the method's validity.

20.
Computer recognition of machine-printed letters of the Tamil alphabet is described. Each character is represented as a binary matrix and encoded into a string using two different methods. The encoded strings form a dictionary. A given text is presented symbol by symbol, and information from each symbol is extracted in the form of a string and compared with the strings in the dictionary. When there is agreement, the letters are recognized and printed out in Roman letters following a special method of transliteration. The lengthening of vowels and hardening of consonants are indicated by numerals printed above each letter.
