Found 20 similar documents (search time: 0 ms)
1.
Computational methods for Traditional Chinese Medicine: a survey  Cited: 1 (self-citations: 0, other citations: 1)
Traditional Chinese Medicine (TCM) has been actively researched through various approaches, including computational techniques. A review of the basic elements of TCM is provided to illuminate the various challenges and progress in its study using computational methods. Information on TCM formulations, in particular resources on databases of TCM formulations and their integration with Western medicine, is analyzed in several facets, such as TCM classifications, types of databases, and mining tools. Aspects of computational TCM diagnosis, namely inspection, auscultation, and pulse analysis, as well as TCM expert systems, are reviewed in terms of their benefits and drawbacks. Various approaches to exploring relationships among TCM components and finding genes/proteins related to TCM symptom complexes are also studied. This survey summarizes the advances in computational approaches for TCM and will be useful for future knowledge discovery in this area.
2.
《Computer methods and programs in biomedicine》2008,89(3):283-294
3.
Narayanan Ramanathan, Rama Chellappa, Soma Biswas 《Journal of Visual Languages and Computing》2009,20(3):131-144
Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. The problem, which originally generated interest in the psychophysics and human perception community, has recently found enhanced interest in the computer vision community. How do humans perceive age? What constitutes an age-invariant signature that can be derived from faces? How compactly can the facial growth event be described? How does facial aging impact recognition performance? In this paper, we give a thorough analysis of the problem of facial aging and provide a complete account of the many interesting studies that have been performed on this topic in different fields. We offer a comparative analysis of the various approaches that have been proposed for problems such as age estimation, appearance prediction, and face verification, and offer insights into future research on this topic.
4.
Computational methods for parametric LQ problems--A survey  Cited: 1 (self-citations: 0, other citations: 1)
Iterative methods for finding the optimal constant feedback gains for parametric LQ problems, notably for optimal constant output feedback problems, are surveyed. The connections of several methods to loss function expansions are discussed, with important implications for understanding their convergence properties. In particular, the descent Anderson-Moore method, Levine-Athans-like methods, and the Newton method are considered. Convergence results are also included. The initialization problem and the output feedback stabilization problem are also discussed. Furthermore, it is shown that the concepts and methods surveyed in this paper are useful in solving many realistic generalized parametric LQ problems as well, notably so-called robust parametric LQ problems.
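The descent iteration idea the survey covers can be illustrated on the simplest possible case: a scalar system x' = a*x + b*u with constant feedback u = -k*x and quadratic cost. The closed-form cost below and all numbers are our own toy construction, not taken from the paper.

```python
# Toy parametric LQ problem: for x' = a*x + b*u, u = -k*x, x(0) = x0,
# the cost J(k) = integral of (q*x^2 + r*u^2) dt has the closed form
# J(k) = (q + r*k^2) * x0^2 / (2*(b*k - a)), valid when b*k > a.

def cost(k, a=1.0, b=1.0, q=1.0, r=1.0, x0=1.0):
    assert b * k > a, "feedback gain must stabilize the closed loop"
    return (q + r * k * k) * x0 * x0 / (2.0 * (b * k - a))

def descent(k=3.0, step=0.1, tol=1e-8, h=1e-6):
    """Minimize J(k) by descent with a finite-difference gradient."""
    for _ in range(100000):
        g = (cost(k + h) - cost(k - h)) / (2.0 * h)
        if abs(g) < tol:
            break
        k -= step * g
    return k

k_opt = descent()
print(round(k_opt, 4))  # 2.4142, i.e. the analytic optimum 1 + sqrt(2)
```

For a = b = q = r = 1 the optimum can be checked by hand (set dJ/dk = 0, giving k^2 - 2k - 1 = 0), which is what makes this a convenient sanity test for any descent scheme.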
5.
Qu Yanwen 《Journal of Computer Science and Technology》1986,1(3):80-91
AGDL is a definition language for attribute grammars. It is a specification language used to generate a compiler automatically. AGDL, whose rules are easy to read, is an applicative language with abstract data types, and it will be used as an important tool language to develop compiler generators by NCI.
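The abstract does not show AGDL notation itself. As a generic illustration of what an attribute grammar specifies, here is a synthesized `val` attribute computed bottom-up over a toy expression grammar, with the attribute equations given as comments (Python stands in for AGDL here):

```python
# Synthesized-attribute evaluation over parse trees encoded as nested
# tuples ("kind", children...).  Each branch implements one attribute
# equation of the (invented) grammar.

def val(node):
    kind = node[0]
    if kind == "num":      # Num.val  = token value
        return node[1]
    if kind == "plus":     # Add.val  = left.val + right.val
        return val(node[1]) + val(node[2])
    if kind == "times":    # Mul.val  = left.val * right.val
        return val(node[1]) * val(node[2])
    raise ValueError(f"unknown node kind: {kind}")

tree = ("plus", ("num", 2), ("times", ("num", 3), ("num", 4)))
print(val(tree))  # 14
```

A compiler generator consumes declarative equations like the comments above and produces the evaluator automatically; the hand-written recursion here is only what such a tool would emit.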
6.
7.
Extensive work has been done on the various activities of natural language processing for Western languages compared with their Eastern counterparts, particularly South Asian languages. Western languages are termed resource-rich languages: core linguistic resources such as corpora, WordNets, dictionaries, gazetteers, and associated tools developed for them are customarily available. Most South Asian languages are low-resource languages. Urdu, for example, is a South Asian language that is among the most widely spoken languages of the subcontinent, yet due to resource scarcity not enough work has been conducted for it. The core objective of this paper is to survey the linguistic resources that exist for Urdu language processing (ULP), to highlight the different tasks in ULP, and to discuss the available state-of-the-art techniques. Conclusively, this paper attempts to describe in detail the recent increase in interest and the progress made in ULP research. Initially, the available datasets for the Urdu language are discussed. The characteristics, resource sharing between Hindi and Urdu, orthography, and morphology of the Urdu language are described. Pre-processing activities such as stop-word removal, diacritics removal, normalization, and stemming are illustrated. A review of state-of-the-art research on tasks such as tokenization, sentence boundary detection, part-of-speech tagging, named entity recognition, parsing, and the development of WordNet is provided. In addition, the impact of ULP on application areas such as information retrieval, classification, and plagiarism detection is investigated. Finally, open issues and future directions for this new and dynamic area of research are given. The goal of this paper is to organize ULP work in a way that provides a platform for future ULP research activities.
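The pre-processing steps the survey lists (diacritic removal, normalization, stop-word removal) can be sketched in a few lines. The diacritic code points and the Arabic-to-Urdu letter mappings below are standard Unicode values; the stop-word list is a tiny invented stand-in, not a real ULP resource.

```python
# Minimal Urdu pre-processing sketch (illustrative only).
DIACRITICS = "\u064B\u064C\u064D\u064E\u064F\u0650\u0651\u0652"  # harakat
NORMALIZE = {
    "\u064A": "\u06CC",  # Arabic yeh  -> Farsi yeh (preferred in Urdu)
    "\u0643": "\u06A9",  # Arabic kaf  -> keheh
}
STOP_WORDS = {"\u0627\u0648\u0631"}  # "aur" (and) -- hypothetical list

def preprocess(text):
    text = "".join(ch for ch in text if ch not in DIACRITICS)
    text = "".join(NORMALIZE.get(ch, ch) for ch in text)
    return [tok for tok in text.split() if tok not in STOP_WORDS]
```

Running `preprocess` on a word written with an Arabic kaf and a fatha yields the same token as its plain Urdu spelling, which is exactly why normalization matters before retrieval or stemming.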
8.
Michael C. McCord 《Artificial Intelligence》1982,18(3):327-367
In this paper, ideas are presented for the expression of natural language grammars in clausal logic, following the work of Colmerauer, Kowalski, Dahl, Warren, and F. Pereira. A uniform format for syntactic structures is proposed, in which every syntactic item consists of a central predication, a cluster of modifiers, a list of features, and a determiner. The modifiers of a syntactic item are again syntactic items (of the same format), and a modifier's determiner shows its function in the semantic structure. Rules for semantic interpretation are given which include the determination of scoping of modifiers (with quantifier scoping as a special case). In the rules for syntax, the notions of slots and slot-filling play an important role, based on previous work by the author. The ideas have been tested in an English data base query system, implemented in Prolog.
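The uniform format for syntactic items can be mirrored as a simple record: a central predication, a cluster of modifiers (which are again items), a feature list, and a determiner. The field names and the example below are ours, not McCord's clausal-logic notation.

```python
# Sketch of the proposed uniform syntactic-item format as a record.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Item:
    predication: Tuple[str, ...]          # central predication, e.g. ("dog", "X")
    determiner: str = "indef"             # semantic function of the item
    features: List[str] = field(default_factory=list)
    modifiers: List["Item"] = field(default_factory=list)  # same format

# "every big dog": the adjective is itself an Item modifying the noun.
dog = Item(("dog", "X"), "every", ["noun", "sing"],
           [Item(("big", "X"), "mod")])
```

The recursive `modifiers` field is the key property: scoping rules can walk this structure uniformly, treating quantifier scoping as one case of modifier scoping.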
9.
Tomek Strzalkowski 《Computational Intelligence》1990,6(3):145-171
The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons, including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. These characteristics have been noted before by several researchers, but it was only recently that more serious attention started to be paid to the problem of creating a bidirectional system for natural language processing. In this paper we discuss a somewhat more radical version of the problem: given a parser for a language, can we reverse it so that it becomes an efficient generator for the same language? Furthermore, since both the parser and the generator are based upon the same grammar, are there any normalization conditions upon the form of the grammar that must be met in order to assure the maximum efficiency of the reversed program? Can other grammars be transformed into the normal form? We describe the results of an experiment with a PROLOG-based logic grammar which has been derived from a substantial-coverage string grammar for English. We present an algorithm for automated inversion of a unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments for predicates. We discuss the scope of the present version of the algorithm and then point out several possible avenues for extension. We also outline a preliminary solution to the question of the grammar's “normal form” and suggest a handful of normalizing transformations that can be used to enhance the efficiency of the generator. This research interacts closely with a Japanese-English machine translation project at New York University, for which the first implementation of the inversion algorithm has been prepared.
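The bidirectionality theme, one declarative resource driving both directions, can be hinted at with a deliberately tiny example: a single lexicon table used forwards for generation and inverted for parsing. This is our own toy, far simpler than the unification grammars the paper inverts.

```python
# One shared table; generation reads it forwards, parsing inverts it.
LEXICON = {"john": "John", "run": "runs", "sing": "sings"}

def generate(pred):
    """('run', 'john') -> 'John runs' (toy subject-verb clause)."""
    verb, arg = pred
    return f"{LEXICON[arg]} {LEXICON[verb]}"

def parse(sentence):
    """Invert the same lexicon to map a clause back to its predicate."""
    inv = {surface: sem for sem, surface in LEXICON.items()}
    subject, verb = sentence.split()
    return (inv[verb], inv[subject])

print(generate(("run", "john")))  # John runs
```

The round trip `parse(generate(p)) == p` holds by construction here; the paper's contribution is making an analogous inversion efficient for realistic unification grammars, where argument instantiation order matters.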
10.
Initially developed for geometric representation, feature modeling has been applied in product design and manufacturing with great success. With the growth of computer-aided engineering (CAE), computer-aided process planning (CAPP), computer-aided manufacturing (CAM), and other applications for product engineering, the definitions of features have been mostly application-driven. This survey first briefly reviews the historical evolution of feature modeling. Subsequently, various approaches to resolving interoperability issues during product lifecycle management are reviewed. In view of the recent progress of emerging technologies such as the Internet of Things (IoT), big data, social manufacturing, and additive manufacturing (AM), the focus of this survey is on state-of-the-art applications of features in these emerging research fields. The interactions among these trending techniques constitute socio-cyber-physical-system (SCPS)-based manufacturing, which demands feature interoperability across heterogeneous domains. Future efforts required to extend feature capability in SCPS-based manufacturing system modeling are discussed at the end of this survey.
11.
Natural language processing (NLP), or the pragmatic research perspective of computational linguistics, has become increasingly powerful due to data availability and various techniques developed in the past decade. This increasing capability makes it possible to capture sentiments more accurately and semantics in a more nuanced way. Naturally, many applications are starting to seek improvements by adopting cutting-edge NLP techniques. Financial forecasting is no exception. As a result, articles that leverage NLP techniques to predict financial markets are fast accumulating, gradually establishing the research field of natural language based financial forecasting (NLFF), or from the application perspective, stock market prediction. This review article clarifies the scope of NLFF research by ordering and structuring techniques and applications from related work. The survey also aims to increase the understanding of progress and hotspots in NLFF, and bring about discussions across many different disciplines.
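The simplest NLFF pipeline ingredient, scoring news text with a polarity lexicon and mapping the score to a trading signal, can be sketched as follows. The lexicon, threshold, and signal names are invented for illustration; real NLFF systems use far richer models.

```python
# Toy lexicon-based sentiment scoring for financial text.
LEXICON = {"beats": 1, "surge": 1, "growth": 1,
           "miss": -1, "plunge": -1, "lawsuit": -1}

def sentiment(text):
    """Sum of polarity weights over lower-cased whitespace tokens."""
    return sum(LEXICON.get(tok, 0) for tok in text.lower().split())

def signal(text, threshold=1):
    """Map a sentiment score to a (hypothetical) trading signal."""
    s = sentiment(text)
    return "buy" if s >= threshold else "sell" if s <= -threshold else "hold"

print(signal("profit beats forecast as sales surge"))  # buy
```

Even this crude baseline illustrates why the review emphasizes nuance: "growth miss" scores neutral here although it is clearly bad news, which is exactly the kind of compositional effect modern NLP models capture and bag-of-words lexicons do not.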
12.
Visual languages (VLs) facilitate software development by not only supporting communication and abstraction, but also by generating various artifacts such as code and reports from the same high-level specification. VLs are thus often translated to other formalisms, in most cases with bidirectionality as a crucial requirement to, e.g., support re-engineering of software systems. Triple Graph Grammars (TGGs) are a rule-based language to specify consistency relations between two (visual) languages from which bidirectional translators are automatically derived. TGGs are formally founded but are also limited in expressiveness, i.e., not all types of consistency can be specified with TGGs. In particular, 1-to-n correspondence between elements depending on concrete input models cannot be described. In other words, a universal quantifier over certain parts of a TGG rule is missing to generalize consistency to arbitrary size. To overcome this, we transfer the well-known multi-amalgamation concept from algebraic graph transformation to TGGs, allowing us to mark certain parts of rules as repeated depending on the translation context. Our main contribution is to derive TGG-based translators that comply with this extension. Furthermore, we identify bad smells on the usage of multi-amalgamation in TGGs, prove that multi-amalgamation increases the expressiveness of TGGs, and evaluate our tool support.
13.
The high complexity of natural language and the huge amount of human and temporal resources necessary for producing grammars have led several researchers in the area of Natural Language Processing to investigate various solutions for automating grammar generation and updating processes. Many algorithms for Context-Free Grammar inference have been developed in the literature. This paper provides a survey of the methodologies for inferring context-free grammars from examples developed by researchers in the last decade. After introducing some preliminary definitions and notations concerning learning and inductive inference, some of the most relevant existing grammatical inference methods for natural language are described and classified according to the kind of presentation (text or informant) and the type of information (supervised, unsupervised, or semi-supervised). Moreover, the state of the art of the strategies for evaluation and comparison of different grammar inference methods is presented. The goal of the paper is to provide the reader with an introduction to major concepts and current approaches in Natural Language Learning research.
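The supervised (informant-style) end of the spectrum is easy to illustrate: when parsed examples are available, CFG productions can be read off the trees directly. The tree encoding and example sentence below are our own minimal choices.

```python
# Read CFG productions off treebank-style parses.
# A tree is a nested tuple: (label, child, ...); leaves are strings.

def extract_rules(tree, rules=None):
    if rules is None:
        rules = set()
    label, children = tree[0], tree[1:]
    # RHS: child labels for subtrees, the terminal itself for leaves.
    rhs = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    rules.add((label, rhs))
    for c in children:
        if isinstance(c, tuple):
            extract_rules(c, rules)
    return rules

tree = ("S", ("NP", "she"), ("VP", ("V", "runs")))
rules = extract_rules(tree)
# Yields: S -> NP VP,  NP -> 'she',  VP -> V,  V -> 'runs'
```

Unsupervised inference, where only raw strings are given, is the hard case the survey focuses on; this read-off step shows what the target of that inference looks like.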
14.
Uwe Zdun 《Software》2007,37(9):983-1016
Software patterns provide reusable solutions to recurring design problems in a particular context. The software architect or designer must find the relevant patterns and pattern languages that need to be considered, and select the appropriate patterns, as well as the best order in which to apply them. If the patterns and pattern languages are written by multiple pattern authors, it might be necessary to identify interdependencies and overlaps between these patterns and pattern languages first. Out of the possible multitude of patterns and pattern combinations that might provide a solution to a particular design problem, one fitting solution must be selected. This can only be mastered with sufficient expertise in both the relevant patterns and the domain in which they are applied. To remedy these issues, we provide an approach to support the selection of patterns based on desired quality attributes and systematic design decisions based on patterns. We propose to formalize the pattern relationships in a pattern language grammar and to annotate the grammar with effects on quality goals. In a second step, complex design decisions are analyzed further using the design spaces covered by a set of related software patterns. This approach helps to systematically find and categorize the appropriate software patterns—possibly even from different sources. As a case study of our approach, we analyze patterns from a pattern language for distributed object middleware. Copyright © 2006 John Wiley & Sons, Ltd.
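The core idea, annotating patterns with their effects on quality goals and selecting by weighted desired attributes, can be sketched as a scoring table. The patterns, effect values, and goal names below are invented for illustration and are not the paper's grammar formalization.

```python
# Hypothetical quality-effect annotations: +1 promotes, -1 hinders.
EFFECTS = {
    "Broker":    {"scalability": 1, "simplicity": -1},
    "Singleton": {"simplicity": 1, "testability": -1},
    "Proxy":     {"security": 1},
}

def select(goals):
    """Pick the pattern whose effects best match weighted quality goals."""
    scored = {p: sum(w * eff.get(g, 0) for g, w in goals.items())
              for p, eff in EFFECTS.items()}
    return max(scored, key=scored.get)

print(select({"scalability": 2}))  # Broker
```

The paper goes further by encoding the legal *order* of pattern application as a grammar; the table above only captures the per-pattern quality annotations that drive selection.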
15.
《Computer Speech and Language》2005,19(3):249-274
This paper is devoted to the estimation of stochastic context-free grammars (SCFGs) and their use as language models. Classical estimation algorithms, together with new ones that consider a certain subset of derivations in the estimation process, are presented in a unified framework. This set of derivations is chosen according to both structural and statistical criteria. The estimated SCFGs have been used in a new hybrid language model to combine both a word-based n-gram, which is used to capture the local relations between words, and a category-based SCFG together with a word distribution into categories, which is defined to represent the long-term relations between these categories. We describe methods for learning these stochastic models for complex tasks, and we present an algorithm for computing the word transition probability using this hybrid language model. Finally, experiments on the UPenn Treebank corpus show significant improvements in the test set perplexity with regard to the classical word trigram models.
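The hybrid combination described, a word n-gram interpolated with a category model bridged by a word-into-categories distribution, reduces to a simple formula. The function below is our sketch of that interpolation; all probability tables are made-up toy numbers, not the paper's estimates.

```python
def hybrid_prob(word, history, lam, p_ngram, p_cat_given_hist, p_word_given_cat):
    """P(w|h) = lam * P_ngram(w|h) + (1-lam) * sum_c P(c|h) * P(w|c)."""
    p_syn = sum(p_cat_given_hist[c] * p_word_given_cat[c].get(word, 0.0)
                for c in p_cat_given_hist)
    return lam * p_ngram.get((history, word), 0.0) + (1.0 - lam) * p_syn

# Toy tables: one bigram entry, two categories N and V.
p = hybrid_prob("cat", "the", 0.5,
                {("the", "cat"): 0.2},        # P_ngram(cat | the)
                {"N": 0.5, "V": 0.5},         # P(category | history)
                {"N": {"cat": 0.4}, "V": {}}) # P(word | category)
print(p)  # 0.2
```

In the paper the category distribution comes from the SCFG's long-range structure rather than a lookup table; the interpolation weight `lam` is what trades local n-gram evidence against that syntactic prediction.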
16.
P. Hr. Petkov, M. M. Konstantinov, N. D. Christov 《International journal of systems science》2013,44(4):465-477
This paper presents a brief survey of computational algorithms for the analysis and synthesis of linear control systems described in the state space. An attempt is made to select the most efficient methods for analysis of the stability, controllability and observability, the reduction into canonical forms, the pole assignment synthesis and the synthesis of optimal systems with quadratic cost. Some aspects of the development of mathematical software for solving these problems are also discussed.
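One of the analysis tasks mentioned, controllability, has a textbook computational form: the Kalman rank test on [B, AB, ..., A^(n-1)B]. The sketch below implements that standard test (not the paper's specific, numerically refined algorithms) in plain Python for small systems.

```python
# Kalman rank test for controllability of (A, B).
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def rank(rows, eps=1e-9):
    """Rank by Gaussian elimination (adequate for tiny examples only)."""
    rows, r = [row[:] for row in rows], 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > eps), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > eps:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def controllable(A, B_cols):
    """B_cols: columns of B.  Tests rank [B, AB, ..., A^(n-1)B] == n."""
    n, cols, cur = len(A), [c[:] for c in B_cols], [c[:] for c in B_cols]
    for _ in range(n - 1):
        cur = [mat_vec(A, v) for v in cur]
        cols.extend(c[:] for c in cur)
    return rank(cols) == n   # rank of the transpose equals the rank
```

For production use the survey's point stands: forming powers of A is numerically poor, which is why staircase-form algorithms are preferred in mathematical software libraries.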
17.
18.
Ethel Schuster 《Computational Intelligence》1986,2(1):93-98
This paper describes VP2, a system that has been implemented to tutor nonnative speakers in English. This system differs from many tutoring systems by employing an explicit grammar of its user's native language. This grammar enables VP2 to customize its responses by addressing problems due to interference of the native language. The system focuses on the acquisition of English verb-particle and verb-prepositional phrase constructions. Its correction strategy is based upon comparison of the native language grammar with an English grammar. VP2 is a modular system: its grammar of a user's native language can easily be replaced by a grammar of another language. The problems and solutions presented in this paper are related to the more general question of how modelling previous knowledge facilitates instruction in a new skill.
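The contrastive correction strategy, comparing a native-language grammar against an English grammar to explain an error as interference, can be sketched on the verb-preposition constructions the system targets. Both rule tables below are invented single-entry stand-ins, not VP2's grammars.

```python
# Hypothetical L1-transfer mapping vs. the English construction.
NATIVE_VP  = {"depend": "of"}   # pattern the learner's L1 suggests
ENGLISH_VP = {"depend": "on"}   # correct English verb-preposition pair

def correct(verb, prep):
    """None if correct; otherwise a response, tailored when the error
    matches the native-language pattern (interference)."""
    if ENGLISH_VP.get(verb) == prep:
        return None
    if NATIVE_VP.get(verb) == prep:
        return f"use '{ENGLISH_VP[verb]}' after '{verb}', not '{prep}'"
    return f"expected '{ENGLISH_VP.get(verb)}' after '{verb}'"

print(correct("depend", "of"))  # use 'on' after 'depend', not 'of'
```

Swapping `NATIVE_VP` for another language's table is the modularity the paper emphasizes: the correction logic stays fixed while the interference model changes.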
19.
20.
Data dimensionality estimation methods: a survey  Cited: 9 (self-citations: 0, other citations: 9)
Francesco 《Pattern recognition》2003,36(12):2945-2954
In this paper, data dimensionality estimation methods are reviewed. The estimation of the dimensionality of a data set is a classical problem of pattern recognition. There are some good reviews (Algorithms for Clustering Data, Prentice-Hall, Englewood Cliffs, NJ, 1988) in the literature, but they do not include more recent developments based on fractal techniques and neural autoassociators. The aim of this paper is to provide an up-to-date survey of the dimensionality estimation methods for a data set, paying special attention to the fractal-based methods.
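A representative fractal-based estimator is the correlation dimension (Grassberger-Procaccia): the fraction of point pairs within distance r scales like r^d, so the slope of log C(r) versus log r estimates the intrinsic dimension d. The two-radius slope and the synthetic data set below are our simplifications of the full log-log fit.

```python
# Correlation-dimension sketch on data lying on a 1-D curve in 3-D.
import math
import random

def correlation_sum(points, r):
    """C(r): fraction of point pairs closer than r."""
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(points, r1, r2):
    """Slope of log C(r) between two radii estimates the dimension."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

random.seed(0)
pts = [(t, 2 * t, 3 * t) for t in (random.random() for _ in range(300))]
print(round(correlation_dimension(pts, 0.05, 0.4), 2))  # close to 1
```

Although the points live in 3-D, the estimate comes out near 1 because they lie on a line; this insensitivity to the embedding dimension is what makes fractal estimators attractive in the survey's comparison.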