Similar Documents
20 similar documents found.
1.
In this paper we concentrate on spatial prepositions; more specifically, we are interested in projective prepositions (e.g. in front of, to the left of), which have in the past been treated as semantically uninteresting. We demonstrate that projective prepositions are in fact problematic and demand more attention than they have so far been afforded; after summarising the important components of their meaning, we review the deficiencies of past and current approaches to the decoding problem; that is, predicting what a locative expression used in a particular situation conveys. Finally we present our own approach. Motivated by the shortcomings of contemporary work, we integrate elements of Lang's conceptual representation of objects' perceptual and dimensional characteristics, and the potential field model of object proximity that originated in manipulator and mobile robot path-finding.
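To make the potential-field idea concrete, here is a minimal Python sketch that scores how well a located object fits a projective preposition relative to a reference object; the function name, the fixed canonical direction, and the exponential decay constant are illustrative assumptions, not the model described in the paper.

```python
import math

def projective_score(located, reference, direction=(0.0, 1.0), decay=0.5):
    """Toy applicability score for a projective preposition such as
    'in front of': high when the located object lies along `direction`
    from the reference object and close to it, decaying with distance
    and angular deviation (a potential-field-style falloff)."""
    dx, dy = located[0] - reference[0], located[1] - reference[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0
    # Cosine of the angle between the offset and the canonical direction.
    alignment = (dx * direction[0] + dy * direction[1]) / dist
    if alignment <= 0.0:
        return 0.0                       # behind the reference object
    return alignment * math.exp(-decay * dist)

# The chair at (0, 2) is a better fit for 'in front of the desk at (0, 0)'
# than the lamp at (3, 1).
print(projective_score((0, 2), (0, 0)))   # ~0.368
print(projective_score((3, 1), (0, 0)))   # ~0.065
```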

2.
This paper reports on a ZnO piezoelectric micro cantilever with a high-aspect-ratio (HAR) nano tip, which is proposed for a ferroelectric-material-based nano storage system. The system uses the interaction between the nano tip and a storage medium, and the HAR nano tip is needed to suppress undesirable effects caused by the small gap between the cantilever and the storage medium. The fabrication process for the cantilever with the HAR nano tip consists of three parts: the HAR nano tip formation, the cantilever fabrication, and the bonding/releasing process. The HAR nano tip is formed by Si deep reactive ion etching for the long shaft and anisotropic wet etching for the nano tip end. The cantilever is made up of a 1 μm-thick LPCVD poly-Si layer and a 0.2 μm-thick Si nitride layer, and has a 0.5 μm-thick ZnO actuation layer. An anodic bonding process is followed by a final releasing process. The fabricated HAR nano tip has a 6 μm side length, over 18 μm height, and less than 15 nm tip radius, and is built on the 85 μm-wide, 300 μm-long, and 1.2 μm-thick cantilever. The experimental results show a linear behavior with respect to input voltages of 1 to 5 V and a first resonance frequency at 17.9 kHz.

3.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding s values is available, and that s lies between n, the number of nodes of the graph, and e, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa. In the dense case, where e is close to n², we show that I/O equal to O(n³/√s) is sufficient to compute the transitive closure of an n-node graph, using main memory of size s. Moreover, it is necessary for any algorithm that is standard, in a sense to be defined precisely in the paper. Roughly, standard means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to O(n²√e/√s) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that Ω(n²√e/√s) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case, for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case, by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms. We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of n I/O is necessary. That is, there is an algorithm in this class using I/O O(n³√e/√s) for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use Ω(n³√e/(√s log³ n)) I/O, on some cyclic graphs. The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266 and a Guggenheim fellowship.
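The "standard" construction — building new paths only by concatenating arcs with previously discovered paths — can be illustrated by a small in-memory semi-naive closure; this sketch ignores the paper's real concern, the I/O cost when arcs and paths reside in secondary memory, and the function name is illustrative.

```python
def transitive_closure(arcs):
    """Semi-naive closure: new paths are formed only by extending
    previously discovered paths with single arcs, the 'standard'
    construction discussed above (in-memory toy; the paper studies
    the I/O cost when arcs and paths live in secondary memory)."""
    closure = set(arcs)
    frontier = set(arcs)
    succ = {}
    for u, v in arcs:
        succ.setdefault(u, set()).add(v)
    while frontier:
        new = set()
        for u, v in frontier:
            for w in succ.get(v, ()):      # path (u,v) + arc (v,w)
                if (u, w) not in closure:
                    new.add((u, w))
        closure |= new
        frontier = new
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```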

4.
We show that the simple universal adaptive control law u(t) = N(k(t))y(t), k̇(t) = |y(t)|², where the switching gain N(k) is built from powers of log k and the cosine of powers of log k, with a smallness condition on the exponents, stabilizes all detectable and stabilizable infinite-dimensional systems of Pritchard-Salamon type which are externally stabilized by some scalar output feedback. The same controller is also shown to stabilize time-varying systems satisfying the same type of output feedback stabilizability.
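The adaptation mechanism u = N(k)y, k̇ = |y|² can be illustrated on a toy scalar plant; this sketch uses the textbook Nussbaum gain N(k) = k² cos(k) and a finite-dimensional ODE, not the paper's logarithmic gain or Pritchard-Salamon system class, so it only shows how the switching gain finds a stabilizing feedback sign.

```python
import math

def nussbaum_demo(a=1.0, b=1.0, y0=1.0, k0=1.0, dt=1e-4, T=30.0):
    """Scalar plant y' = a*y + b*u with u = N(k)*y, k' = y^2.  The sign of b
    (the high-frequency gain) is treated as unknown; the switching gain N(k)
    probes both feedback signs until k settles in a stabilizing range and
    y(t) -> 0.  Illustrative only, not the paper's controller or system."""
    y, k = y0, k0
    for _ in range(int(T / dt)):
        N = k * k * math.cos(k)          # textbook Nussbaum gain, not the paper's
        u = N * y
        y += dt * (a * y + b * u)        # explicit Euler step for the plant
        k += dt * y * y                  # adaptation law quoted in the abstract
    return y, k

y_T, k_T = nussbaum_demo()
print(f"y(T) = {y_T:.3e}  (driven to ~0),  k(T) = {k_T:.3f}  (settles at a finite value)")
```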

5.
Conclusions. The memory gain given by the present algorithms, in comparison with the algorithms of [2], depends on the class of problems to be solved and can be quite substantial in the solution of problems in which the unknowns have long forms as their values. The cost in time connected with the more complex structures of the lists must, in general, be compensated by the less frequent calls to the procedures described in [2] for the so-called garbage-collector-optimizer and by the replacement of the copying of values by the copying of references (the marks). It remains to be noted that if the values of the unknowns or the function identifiers are constants, then the algorithms reduce to those described in [2]. Translated from Kibernetika, No. 1, pp. 69–74, January–February, 1977.

6.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs – the C,A,T and Relaxed Compositionality translator notations – with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are "wild", meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect. In general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule even where it appears in the same sentence as other complex cases.

7.
The initial five papers of this series on pattern perception treat first, the perception of pitch in musical contexts and then, the perception of timbre and speech. Each sound is considered to be embedded in a context consisting of those sounds which surround it or coincide with it. The apprehension of a musical pattern depends upon the perceptibility of certain relations between, and properties of, its parts (e.g. motif A is similar to motif B, or G is the tonic). It is hypothesized that, because of the limitations of short-term memory, the perception of specific relations and properties requires that certain mental reference frames be extracted from the various contexts. However, a reference frame which supports the perception of any specified relation may be extracted from only very few of all possible contexts. The choices of musical materials in both Western and non-Western music are shown to avoid precisely such difficulties. When they are not avoided, distortions of perception are predicted and methods for experimental verification are suggested. This theory is then applied to suggest new materials for the composition of both microtonal and tone-color music. This is done in a manner which exposes the correspondence between each choice of musical materials and those musical properties and relations whose perception is (or is not) thereby supported. This first paper discusses the relation between the ability to perceive relative sizes of musical intervals and the choice of reference frame from a given musical context.

8.
In this paper we deepen Mundici's analysis on reducibility of the decision problem from infinite-valued Łukasiewicz logic Ł∞ to a suitable m-valued Łukasiewicz logic Łm, where m only depends on the length of the formulas to be proved. Using geometrical arguments we find a better upper bound for the least integer m such that a formula is valid in Ł∞ if and only if it is also valid in Łm. We also reduce the notion of logical consequence in Ł∞ to the same notion in a suitable finite set of finite-valued Łukasiewicz logics. Finally, we define an analytic and internal sequent calculus for infinite-valued Łukasiewicz logic.
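A minimal sketch of the finite-valued side of the reduction: the standard Łukasiewicz connectives (¬x = 1 − x, x → y = min(1, 1 − x + y)) evaluated over the m equally spaced truth values of Łm, with a brute-force validity check. The helper names are illustrative.

```python
from fractions import Fraction
from itertools import product

# Standard Łukasiewicz connectives (same definitions in the finite- and infinite-valued case).
def neg(x):
    return 1 - x

def implies(x, y):
    return min(Fraction(1), 1 - x + y)

def valid_in_Lm(formula, num_vars, m):
    """Brute-force validity in Łm: the formula (a callable over truth values)
    must evaluate to 1 under every assignment drawn from {0, 1/(m-1), ..., 1}."""
    values = [Fraction(i, m - 1) for i in range(m)]
    return all(formula(*vs) == 1 for vs in product(values, repeat=num_vars))

# (p -> q) -> ((q -> r) -> (p -> r)) is an axiom of Łukasiewicz logic: valid in every Łm.
chain = lambda p, q, r: implies(implies(p, q), implies(implies(q, r), implies(p, r)))
print(valid_in_Lm(chain, 3, m=5))   # True

# By contrast, the classically valid (p -> ~p) -> ~p fails already in Ł5 (take p = 1/2).
weak = lambda p: implies(implies(p, neg(p)), neg(p))
print(valid_in_Lm(weak, 1, m=5))    # False
```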

9.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the confusion that arises directly affects other debates where "model" is a key concept. We would also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called Philosophy of Engineering.

10.
The degree to which information sources are pre-processed by Web-based information systems varies greatly. In search engines like Altavista, little pre-processing is done, while in knowledge integration systems, complex site-specific wrappers are used to integrate different information sources into a common database representation. In this paper we describe an intermediate point between these two models. In our system, information sources are converted into a highly structured collection of small fragments of text. Database-like queries to this structured collection of text fragments are approximated using a novel logic called WHIRL, which combines inference in the style of deductive databases with ranked retrieval methods from information retrieval (IR). WHIRL allows queries that integrate information from multiple Web sites, without requiring the extraction and normalization of object identifiers that can be used as keys; instead, operations that in conventional databases require equality tests on keys are approximated using IR similarity metrics for text. This leads to a reduction in the amount of human engineering required to field a knowledge integration system. Experimental evidence is given showing that many information sources can be easily modeled with WHIRL, and that inferences in the logic are both accurate and efficient.
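The flavour of WHIRL's similarity joins can be sketched with a toy TF-IDF/cosine ranking over two tiny "relations"; this illustrates the idea of replacing key equality with text similarity, not WHIRL's actual logic or implementation, and all names and data here are made up.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Very small TF-IDF model over whitespace-tokenised text fragments."""
    docs = [Counter(t.lower().split()) for t in texts]
    df = Counter(w for d in docs for w in d)
    n = len(docs)
    vecs = []
    for d in docs:
        v = {w: (1 + math.log(c)) * math.log(n / df[w]) for w, c in d.items()}
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vecs.append({w: x / norm for w, x in v.items()})
    return vecs

def soft_join(rel_a, rel_b, top_k=2):
    """Approximate a join on text fields by ranking pairs by cosine similarity
    instead of requiring key equality (toy version of the 'soft join' idea)."""
    vecs = tfidf_vectors(rel_a + rel_b)
    va, vb = vecs[:len(rel_a)], vecs[len(rel_a):]
    pairs = []
    for i, x in enumerate(va):
        for j, y in enumerate(vb):
            sim = sum(w * y.get(t, 0.0) for t, w in x.items())
            pairs.append((sim, rel_a[i], rel_b[j]))
    return sorted(pairs, reverse=True)[:top_k]

movies = ["The Matrix 1999", "Blade Runner 1982"]
reviews = ["review of the matrix (1999)", "blade runner review"]
for sim, a, b in soft_join(movies, reviews):
    print(f"{sim:.2f}  {a!r} ~ {b!r}")
```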

11.
We analyze four İnce Memed novels of Yaşar Kemal using six style markers: most frequent words, syllable counts, word type – or part of speech – information, sentence length in terms of words, word length in text, and word length in vocabulary. For analysis we divide each novel into five-thousand-word text blocks and count the frequencies of each style marker in these blocks. The style markers showing the best separation are most frequent words and sentence lengths. We use stepwise discriminant analysis to determine the best discriminators of each style marker. We then use these markers in cross-validation-based discriminant analysis. Further investigation based on multiple analysis of variance (MANOVA) reveals how the attributes of each style marker group distinguish among the volumes.
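A toy extractor for a few of the style markers listed above; the function name, tokenisation, and block handling are simplifications for illustration, not the paper's pipeline.

```python
import re
from collections import Counter

def style_markers(text, block_size=5000):
    """Toy extractor: average sentence length (in words) over the whole text,
    plus the most frequent words and average word length for each consecutive
    block of `block_size` words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_sentence_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    words = text.lower().split()
    blocks = [words[i:i + block_size] for i in range(0, len(words), block_size)]
    block_features = [{
        "top_words": [w for w, _ in Counter(block).most_common(5)],
        "avg_word_len": sum(map(len, block)) / len(block),
    } for block in blocks]
    return {"avg_sentence_len": avg_sentence_len, "blocks": block_features}

print(style_markers("Memed rode out at dawn. The thistles tore at his legs. He did not stop."))
```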

12.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what is the pertinent legal rule. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe in which, for any region A, the difference between its upper and lower approximation may be non-empty; there the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^h(A) log[1 + μ(αA)] dα, where h(A) is the largest membership value obtained by any element of A and μ(αA) is the measure of the α-cut of A defined by the Lebesgue integral of its characteristic function.
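The U-uncertainty formula quoted above can be evaluated numerically; in this sketch the triangular fuzzy number, the base-2 logarithm, and the helper names are assumptions made for illustration.

```python
import math

def u_uncertainty(alpha_cut_measure, height=1.0, steps=1000):
    """Numerically evaluate U(A) = (1/h(A)) * integral_0^h(A) log2(1 + mu(alpha-cut)) d(alpha),
    the nonspecificity formula quoted above (base-2 logarithm assumed)."""
    d = height / steps
    total = 0.0
    for i in range(steps):
        alpha = (i + 0.5) * d                      # midpoint rule
        total += math.log2(1.0 + alpha_cut_measure(alpha)) * d
    return total / height

# Triangular fuzzy number "about 5" with support [3, 8] and peak at 5:
# its alpha-cut is [3 + 2a, 8 - 3a], whose Lebesgue measure is 5(1 - a).
about_five = lambda a: 5.0 * (1.0 - a)
print(f"U('about 5' on [3, 8]) = {u_uncertainty(about_five):.3f}")   # ~1.66

# A sharper measurement of the same quantity is less nonspecific:
sharper = lambda a: 1.0 * (1.0 - a)
print(f"U(sharper reading)     = {u_uncertainty(sharper):.3f}")      # ~0.56
```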

13.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerance solution set Σ([A], [b]) = {x ∈ Rⁿ | ∀A ∈ [A], ∃b ∈ [b], Ax = b}. We describe two different polyhedrons that represent subsets of all possible interval vectors in Σ([A], [b]), and we provide a new definition of the optimality of an interval vector included in Σ([A], [b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ([A], [b]).
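A small numeric sketch of the tolerance set: a point x belongs to it exactly when |A_c x − b_c| ≤ b_Δ − A_Δ|x| componentwise (midpoint/radius notation), and a whole box is included when all of its corners are, since the set is convex. The function names and the 1×1 example are illustrative; the paper's polyhedral constructions and Simplex-based optimization are not reproduced here.

```python
import itertools
import numpy as np

def in_tolerance_set(x, A_c, A_d, b_c, b_d):
    """True iff A x lies in [b] for every A in [A].
    Over A in [A_c - A_d, A_c + A_d], the i-th component of A x ranges over
    A_c x +/- A_d |x|; containment in [b_c - b_d, b_c + b_d] is therefore
    equivalent to |A_c x - b_c| <= b_d - A_d |x| componentwise."""
    x = np.asarray(x, dtype=float)
    return np.all(np.abs(A_c @ x - b_c) <= b_d - A_d @ np.abs(x))

def box_in_tolerance_set(lo, hi, A_c, A_d, b_c, b_d):
    """A whole interval vector [lo, hi] lies in the (convex) tolerance set
    iff all of its corner points do."""
    corners = itertools.product(*zip(lo, hi))
    return all(in_tolerance_set(c, A_c, A_d, b_c, b_d) for c in corners)

# [A] = [3.5, 4.5] (1x1), [b] = [8, 12]: x must satisfy A*x in [8, 12] for all A.
A_c, A_d = np.array([[4.0]]), np.array([[0.5]])
b_c, b_d = np.array([10.0]), np.array([2.0])
print(box_in_tolerance_set([2.3], [2.6], A_c, A_d, b_c, b_d))   # True
print(box_in_tolerance_set([2.0], [3.0], A_c, A_d, b_c, b_d))   # False
```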

14.
In this paper, we present a method for high-level control of robots whose low-level software is based on dynamically reconfigurable, reusable real-time software modules. Our approach is to use an embedded interpreter for a general-purpose programming language to direct the operation of the low-level modules toward meeting the task-level goals of the robot. To this end, we present RSK, a virtual-machine kernel implementing a Scheme interpreter capable of hard real-time operation and employing a method of code execution we call message-based evaluation (MBE). MBE is a novel combination of a traditional code execution model and a message-passing architecture, which simplifies the process of writing code for managing the robot's reconfigurable subsystem.

15.
The paper introduces the concept of Computer-based Informated Environments (CBIEs) to indicate an emergent form of work organisation facilitated by information technology. It first addresses the problem of inconsistent meanings of the "informate" concept in the literature, and it then focuses on those cases which, it is believed, show conditions of plausible informated environments. Finally, the paper looks at those factors that, when found together, contribute to building a CBIE. It makes reference to CBIEs as workplaces that comprise a non-technocentric perspective and questions whether CBIEs truly represent an anthropocentric route of information technology.

16.
Learning to Match the Schemas of Data Sources: A Multistrategy Approach
Doan, AnHai; Domingos, Pedro; Halevy, Alon. Machine Learning, 2003, 50(3): 279–301.
The problem of integrating data from multiple data sources—either on the Internet or within enterprises—has received much attention in the database and AI communities. The focus has been on building data integration systems that provide a uniform query interface to the sources. A key bottleneck in building such systems has been the laborious manual construction of semantic mappings between the query interface and the source schemas. Examples of mappings are "element location maps to address" and "price maps to listed-price". We propose a multistrategy learning approach to automatically find such mappings. The approach applies multiple learner modules, where each module exploits a different type of information, either in the schemas of the sources or in their data, then combines the predictions of the modules using a meta-learner. Learner modules employ a variety of techniques, ranging from Naive Bayes and nearest-neighbor classification to entity recognition and information retrieval. We describe the LSD system, which employs this approach to find semantic mappings. To further improve matching accuracy, LSD exploits domain integrity constraints, user feedback, and nested structures in XML data. We test LSD experimentally on several real-world domains. The experiments validate the utility of multistrategy learning for data integration and show that LSD proposes semantic mappings with a high degree of accuracy.
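A toy sketch of the multistrategy idea: two base "learners" (a name matcher and a value-based matcher) whose scores are combined by a stand-in for the meta-learner. The hard-coded weights, helper names, and data are illustrative; LSD learns its combination weights from training data.

```python
from difflib import SequenceMatcher

def name_matcher(source_field, target_field):
    """Base learner 1: string similarity between schema element names."""
    return SequenceMatcher(None, source_field.lower(), target_field.lower()).ratio()

def value_matcher(source_values, target_profile):
    """Base learner 2: crude content score -- fraction of source values that
    look like the target's expected type (here just numeric vs. text)."""
    numeric = sum(str(v).replace('.', '', 1).isdigit() for v in source_values)
    share = numeric / len(source_values)
    return share if target_profile == "numeric" else 1.0 - share

def combined_score(src_name, src_values, tgt_name, tgt_profile, weights=(0.5, 0.5)):
    """Meta-learner stand-in: a fixed weighted vote over the base predictions."""
    w_name, w_value = weights
    return (w_name * name_matcher(src_name, tgt_name)
            + w_value * value_matcher(src_values, tgt_profile))

source = ("listed-price", ["249000", "310000", "189500"])
targets = [("price", "numeric"), ("address", "text")]
for tgt_name, profile in targets:
    print(tgt_name, round(combined_score(source[0], source[1], tgt_name, profile), 3))
# 'price' should outscore 'address' for the 'listed-price' column.
```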

17.
A technique to model and to verify distributed algorithms is suggested. This technique (based on Petri nets) reduces the modelling and analysis effort to a reasonable level. The paper outlines the technique using the example of a typical network algorithm, the echo algorithm. Supported by the DFG projects "Verteilte Algorithmen" and "Konsensalgorithmen".
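For reference, a sequential Python simulation of the echo algorithm itself (the standard wave algorithm the paper uses as its running example); the Petri-net model and the verification technique of the paper are not reproduced, and the function and message names are illustrative.

```python
from collections import deque

def echo(graph, initiator):
    """Sequential simulation of the echo (wave) algorithm: the initiator floods
    'explore' messages; every other node records its first sender as parent,
    re-floods, and sends an 'echo' to its parent once it has heard from all of
    its neighbours.  Returns the parent pointers of the spanning tree built."""
    parent = {initiator: None}
    pending = {v: set(graph[v]) for v in graph}   # neighbours still to hear from
    reported = set()
    queue = deque(("explore", initiator, w) for w in graph[initiator])
    while queue:
        kind, sender, node = queue.popleft()
        pending[node].discard(sender)
        if kind == "explore" and node not in parent:
            parent[node] = sender
            for w in graph[node]:
                if w != sender:
                    queue.append(("explore", node, w))
        if not pending[node] and node != initiator and node not in reported:
            reported.add(node)
            queue.append(("echo", node, parent[node]))
    return parent

g = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
print(echo(g, 1))   # e.g. {1: None, 2: 1, 3: 1, 4: 2}
```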

18.
The language of standard propositional modal logic has one operator (□ or ◇), that can be thought of as being determined by the quantifiers ∀ or ∃, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
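A minimal sketch of the two readings of a modal operator over successors: the standard ∀-style box and, for contrast, an operator determined by the generalized quantifier "most"; the function names and the tiny frame are illustrative.

```python
def box(model, world, prop):
    """Standard box: prop holds at ALL immediate successors of `world`
    (the 'forall' reading mentioned above)."""
    return all(prop(v) for v in model[world])

def most(model, world, prop):
    """An operator determined by a generalized quantifier: prop holds at more
    than half of the successors (any generalized quantifier could be plugged in)."""
    succ = model[world]
    return sum(map(prop, succ)) > len(succ) / 2 if succ else False

# A tiny frame: world 0 sees 1, 2, 3; p holds at 1 and 2 only.
frame = {0: [1, 2, 3], 1: [], 2: [], 3: []}
p = lambda w: w in {1, 2}
print(box(frame, 0, p))    # False: not every successor satisfies p
print(most(frame, 0, p))   # True: 2 of 3 successors satisfy p
```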

19.
On Bounding Solutions of Underdetermined Systems
Sufficient conditions for the existence and uniqueness of a solution x* ∈ D (⊂ Rⁿ) of Yf(x) = 0, where f: Rⁿ → Rᵐ (m ≤ n) with f ∈ C²(D), D ⊂ Rⁿ is an open convex set, and Y = f′(x)⁺, are given, and are compared with similar results due to Zhang, Li and Shen (Reliable Computing 5(1) (1999)). An algorithm for bounding zeros of f(·) is described, and numerical results for several examples are given.
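To make the setting concrete, here is a standard pseudo-inverse (normal-flow) Newton iteration for an underdetermined system f: Rⁿ → Rᵐ with m ≤ n; this floating-point sketch only illustrates the problem class and does not produce the verified bounds the paper is about.

```python
import numpy as np

def pseudo_newton(f, jac, x0, tol=1e-12, max_iter=50):
    """Normal-flow iteration x <- x - J(x)^+ f(x) for an underdetermined
    system (fewer equations than unknowns).  Illustrative only: the paper's
    algorithm computes rigorous enclosures, which this sketch does not."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.pinv(jac(x)) @ fx
    return x

# One equation, two unknowns: x^2 + y^2 - 1 = 0 (a whole circle of zeros).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]]])
z = pseudo_newton(f, jac, [2.0, 1.0])
print(z, f(z))   # a point on the unit circle, residual ~ 0
```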

20.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for "good starts" or exact solution is brought forth. It is concluded from all this that the imposed "problem ignorance" of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
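For reference, one common formulation of the Klee-Minty cube (the paper may analyze a different but equivalent variant):

```latex
% Klee--Minty cube in n variables, with a fixed 0 < \varepsilon < 1/2;
% the simplex method with Dantzig's rule visits all 2^n vertices.
\begin{align*}
\max\quad & x_n\\
\text{s.t.}\quad & 0 \le x_1 \le 1,\\
& \varepsilon\, x_{j-1} \;\le\; x_j \;\le\; 1 - \varepsilon\, x_{j-1}, \qquad j = 2,\dots,n.
\end{align*}
```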
