Similar Literature
A total of 20 similar documents were found (search time: 375 ms).
1.
This paper indicates the possible utility of isotonic spaces as a background language for discussing systems. In isotonic spaces the basic duality between neighborhood and convergent first achieves a proper background, permitting applications beyond the scope of topological spaces. A generalization of continuity of mappings based on ancestral relations is presented, and this definition is applied to establish a necessary and sufficient condition for mappings to preserve connectedness. Fortunately for systems theory, neither infinite sets nor infinitary operators are necessary to apply the definitions of neighborhood, convergents, continuity and connectedness. This work was supported in part by a grant from the National Science Foundation.

2.
In his Response to Elliott and Valenza, 'And Then There Were None' (1996), Donald Foster has taken strenuous issue with our Shakespeare Clinic's final report, which concluded that none of the testable Shakespeare claimants, and none of the Shakespeare Apocrypha poems and plays – including Funeral Elegy by W.S. – match Shakespeare. Though he seems to accept most of our exclusions – notably excepting those of the Elegy and A Lover's Complaint – he believes that our methodology is nonetheless fatally flawed by "worthless figures ... wrong more often than right", "rigorous cherry-picking", "playing with a stacked deck", and "conveniently exil[ing] ... inconvenient data". He describes our tests as "foul vapor" and "methodological madness". We believe that this criticism is seriously overdrawn, and that our tests and conclusions have emerged essentially intact. By our count, he claims to have found 21 errors of consequence in our report. Only five of these claims, all trivial, have any validity at all. If fully proved, they might call for some cautions and slight refinements for five of our 54 tests, but in no case would they come close to invalidating the questioned test. The remaining 49 tests are wholly intact. Total erosion of our findings from the Foster critique could amount, at most, to half of one percent. None of his accusations of cherry-picking, deck-stacking, and evidence-ignoring are substantiated.

3.
We investigate three-dimensional visibility problems for scenes that consist of n non-intersecting spheres. The viewing point moves on a flightpath that is part of a circle at infinity given by a plane P and a range of angles {α(t) | t ∈ [0,1]} ⊆ [0,2π]. At time t, the lines of sight are parallel to the ray in P, which starts in the origin of P and represents the angle α(t) (orthographic views of the scene). We give an algorithm that computes the visibility graph at the start of the flight, all time parameters at which the topology of the scene changes, and the corresponding topology changes. The algorithm has running time O((n + k + p) log n), where n is the number of spheres in the scene; p is the number of transparent topology changes (the number of different scene topologies visible along the flight path, assuming that all spheres are transparent); and k denotes the number of vertices (conflicts) which are in the (transparent) visibility graph at the start and do not disappear during the flight. The second author was supported by the ESPRIT II Basic Research Actions Program, under Contract No. 3075 (project ALCOM).

4.
The adaptiveness of agents is one of the basic conditions for their autonomy. This paper describes an approach to adaptiveness for Monitoring Cognitive Agents based on the notion of generic spaces. This notion allows the definition of virtual generic processes, so that any particular actual process is then a simple configuration of the generic process, that is to say a set of values of parameters. Consequently, a generic domain ontology containing the generic knowledge for solving problems concerning the generic process can be developed. This leads to the design of the Generic Monitoring Cognitive Agent, a class of agent in which the whole knowledge corpus is generic. In other words, modeling a process within a generic space becomes configuring a generic process, and adaptiveness becomes genericity, that is to say independence regarding technology. In this paper, we present an application of this approach to Sachem, a Generic Monitoring Cognitive Agent designed to help operators in operating a blast furnace. Specifically, the NeuroGaz module of Sachem will be used to present the notion of a generic blast furnace. The adaptiveness of Sachem can then be noted through the low cost of deploying a Sachem instance on different blast furnaces and the ability of NeuroGaz to solve problems and learn from various top gas instrumentation.
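The "actual process as a configuration of a generic process" idea can be pictured with a short Python sketch. It is purely illustrative and is not taken from Sachem; the class name, parameter names and values are all hypothetical, and it only shows that a concrete process is a set of parameter values supplied to a generic schema.

from dataclasses import dataclass

@dataclass
class GenericProcess:
    name: str
    parameters: dict          # parameter name -> default value

    def configure(self, **values):
        """Return a concrete process: the generic schema plus actual values."""
        unknown = set(values) - set(self.parameters)
        if unknown:
            raise ValueError(f"not parameters of {self.name}: {unknown}")
        return {**self.parameters, **values}

# A generic blast furnace with two made-up parameters; deploying on a new
# furnace is only a matter of supplying that furnace's values.
blast_furnace = GenericProcess("blast_furnace",
                               {"hearth_diameter_m": 10.0, "n_tuyeres": 28})
furnace_A = blast_furnace.configure(hearth_diameter_m=11.6, n_tuyeres=32)
print(furnace_A)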

5.
In this paper the problem of routing messages along shortest paths in a distributed network without using complete routing tables is considered. In particular, the complexity of deriving minimum (in terms of number of intervals) interval routing schemes is analyzed under different requirements. For all the cases considered, NP-hardness proofs are given, while some approximability results are provided. Moreover, relations among the different cases considered are studied. This work was supported by the EEC ESPRIT II Basic Research Action Program under Contract No. 7141 "Algorithms and Complexity II", by the EEC Human Capital and Mobility MAP project, and by the Italian MURST 40% project "Algoritmi, Modelli di Calcolo e Strutture Informative".
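For readers unfamiliar with interval routing, the Python sketch below shows the kind of object whose minimization the paper studies: a 1-interval routing scheme on a simple path graph, where each node stores one destination interval per incident edge instead of a full routing table. The example graph and labelling are illustrative only and are not taken from the paper.

# On the path 0-1-...-(n-1), node i labels its edge toward higher-numbered
# nodes with the interval [i+1, n-1] and its edge toward lower-numbered nodes
# with [0, i-1]; forwarding along the edge whose interval contains the
# destination follows shortest paths.

def route(n, source, dest):
    """Shortest path from source to dest on the n-node path graph,
    computed purely by interval lookups at each intermediate node."""
    path = [source]
    current = source
    while current != dest:
        table = {}                                   # interval table of `current`
        if current + 1 < n:
            table[current + 1] = (current + 1, n - 1)   # edge to the right
        if current - 1 >= 0:
            table[current - 1] = (0, current - 1)       # edge to the left
        # Forward over the edge whose interval contains `dest`.
        current = next(nbr for nbr, (lo, hi) in table.items() if lo <= dest <= hi)
        path.append(current)
    return path

print(route(6, 4, 1))   # [4, 3, 2, 1]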

6.
Product quality is directly related to how well that product meets the customer's needs and intents. Therefore, the ability to capture customer requirements correctly and succinctly is paramount. Unfortunately, within most software development frameworks requirements elicitation, recording and evaluation are some of the more ill-defined and least structured activities. To help address such inadequacies, we propose a requirements generation model (RGM) that (a) decomposes the conventional requirements analysis phase into sub-phases which focus and refine requirement generation activities, (b) bounds and structures those activities to promote a more effective generation process, and (c) implements a monitoring methodology to assist in detecting deviations from well-defined procedures intended to support the generation of requirements that meet the customer's intent. The RGM incorporates lessons learned from a preliminary study that concentrated on identifying where and how miscommunication and requirements omission occur. An industry study (also reported in this paper) attests to the effectiveness of the RGM. The results of that study indicate that the RGM helps (a) reduce the late discovery of requirements, (b) reduce the slippage in milestone completion dates, and (c) increase customer and management satisfaction levels.

7.
A key managerial challenge, of interest to academics and practitioners alike, is the assessment and management of customer satisfaction. In this paper, we examine the underlying processes involving consumer satisfaction and switching patterns among ISPs using different satisfaction models, including the expectations-disconfirmation model, the attribution model, and an affective model. Our results indicate that the satisfaction levels of ISP consumers are generally relatively low, despite the fact that consumer expectations of ISPs are also low, reflecting mediocrity in the marketplace. In addition, consumers attribute their dissatisfaction to ISP indifference and believe that managing dissatisfaction is within the control of the ISP. Moreover, affective factors play an important role in satisfaction processes and switching behavior. Customer service, including technical support and responsiveness of service staff, is an important determining factor in ISP selection. We suggest that as the ISP market matures, service providers that pay attention to affective factors and to building relationships with their customers will have a competitive advantage in the marketplace of the future.

8.
Abe, Naoki; Mamitsuka, Hiroshi. Machine Learning, 1997, 29(2-3): 275-301.
We propose a new method for predicting the protein secondary structure of a given amino acid sequence, based on a training algorithm for the probability parameters of a stochastic tree grammar. In particular, we concentrate on the problem of predicting β-sheet regions, which has previously been considered difficult because of the unbounded dependencies exhibited by sequences corresponding to β-sheets. To cope with this difficulty, we use a new family of stochastic tree grammars, which we call Stochastic Ranked Node Rewriting Grammars, which are powerful enough to capture the type of dependencies exhibited by the sequences of β-sheet regions, such as the parallel and anti-parallel dependencies and their combinations. The training algorithm we use is an extension of the inside-outside algorithm for stochastic context-free grammars, but with a number of significant modifications. We applied our method to real data obtained from the HSSP database (Homology-derived Secondary Structure of Proteins Ver 1.0) and the results were encouraging: our method was able to predict roughly 75 percent of the β-strands correctly in a systematic evaluation experiment, in which the test sequences not only have less than 25 percent identity to the training sequences, but are totally unrelated to them. This figure compares favorably to the predictive accuracy of the state-of-the-art prediction methods in the field, even though our experiment was on a restricted type of β-sheet structures and the test was done on a relatively small data size. We also stress that our method can predict the structure as well as the location of β-sheet regions, which was not possible by conventional methods for secondary structure prediction. Extended abstracts of parts of the work presented in this paper have appeared in (Abe & Mamitsuka, 1994) and (Mamitsuka & Abe, 1994).

9.
Most of the results to date in discrete event supervisory control assume a zero-or-infinity structure for the cost of controlling a discrete event system, in the sense that it costs nothing to disable controllable events while uncontrollable events cannot be disabled (i.e., their disablement entails infinite cost). In several applications, however, a more refined structure of the control cost becomes necessary in order to quantify the tradeoffs between candidate supervisors. In this paper, we formulate and solve a new optimal control problem for a class of discrete event systems. We assume that the system can be modeled as a finite acyclic directed graph, i.e., the system process has a finite set of event trajectories and thus is terminating. The optimal control problem explicitly considers the cost of control in the objective function. In general terms, this problem involves a tradeoff between the cost of system evolution, which is quantified in terms of a path cost on the event trajectories generated by the system, and the cost of impacting on the external environment, which is quantified as a dynamic cost on control. We also seek a least restrictive solution. An algorithm based on dynamic programming is developed for the solution of this problem. This algorithm is based on a graph-theoretic formulation of the problem. The use of dynamic programming allows for the efficient construction of an optimal subgraph (i.e., optimal supervisor) of the given graph (i.e., discrete event system) with respect to the cost structure imposed. We show that this algorithm is of polynomial complexity in the number of vertices of the graph of the system. Research supported in part by the National Science Foundation under grant ECS-9057967 with additional support from GE and DEC.
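A minimal Python sketch of the dynamic-programming idea follows. It assumes a simplified cost structure (worst-case cost-to-go over the events left enabled, plus a fixed disabling cost per disabled controllable event, with at least one event kept enabled at every non-terminal vertex); the example graph and all costs are made up, and the paper's actual problem formulation and algorithm are richer than this.

from itertools import combinations

# edges[v] = list of (event, target, event_cost, controllable, disable_cost);
# the system is a DAG, so vertices can be processed in reverse topological order.
edges = {
    "s": [("a", "m1", 3, True, 1), ("b", "m2", 1, False, 0)],
    "m1": [("c", "t", 2, True, 4), ("d", "t", 6, False, 0)],
    "m2": [("e", "t", 5, True, 2)],
    "t": [],
}
topo_reversed = ["t", "m1", "m2", "s"]

cost_to_go, policy = {}, {}
for v in topo_reversed:
    out = edges[v]
    if not out:
        cost_to_go[v], policy[v] = 0, set()
        continue
    controllable = [e for e in out if e[3]]
    best = None
    for k in range(len(controllable) + 1):
        for disabled in combinations(controllable, k):
            enabled = [e for e in out if e not in disabled]
            if not enabled:                  # never block the process entirely
                continue
            worst = max(c + cost_to_go[tgt] for _, tgt, c, _, _ in enabled)
            total = worst + sum(dc for *_, dc in disabled)
            if best is None or total < best[0]:
                best = (total, {ev for ev, *_ in disabled})
    cost_to_go[v], policy[v] = best

# Optimal cost from the start vertex and the events disabled at each vertex.
print(cost_to_go["s"], policy)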

10.
Summary. Very much space is needed to store the values of all attribute instances in an attributed tree at the corresponding nodes; for that reason global cells are often used to store values of attribute instances. But these global cells must contain the right value at the right time, and, therefore, not all evaluation sequences of attribute instances are admissible if one uses global cells. In this paper we first study the problems arising during the construction of such admissible evaluation sequences for attributed trees if no special property of an underlying attribute grammar is presumed. This leads to a number of restrictions on the practically allowed use of global cells. After that we provide a method for the construction of admissible evaluation sequences for arbitrary attributed trees of given attribute grammars, if global cells are used in the restricted sense. The proposed method is independent of special classes of attribute grammars and can be used with arbitrary evaluator generators.

11.
The apex graph grammars generate precisely the context-free graph languages of bounded degree, independently of whether one considers hyperedge replacement systems or (boundary or confluent) NLC or edNCE graph grammars. The main feature of apex graph grammars is that nodes cannot be passed from nonterminal to nonterminal. The proof is based on a normal form result for arbitrary hyperedge replacement systems that forbids passing chains. This generalizes Greibach Normal Form.

12.
Attention is drawn to the need for controlling (during encoding) and checking (after encoding) the quality or accuracy of musical data. Some large databases of melodies are now becoming available, and methods of control and checking are presented which are specially suited to these. Two applications are discussed in detail: to Gregorian Chant and to German folksong. An effective method in tonal and modal music is found to be the investigation of melodic progressions which remain unusual even after amalgamation by transposition to a central register. Dr. Nigel Nettheim is Senior Research Associate in the Centre for Liberal and General Studies, University of New South Wales, Australia. His research combines Mathematical Statistics and Analytical Musicology. Publications include "On the Spectral Analysis of Melody", Interface, 21 (1992), and "The Pulse in German Folksong: A Statistical Investigation", Musikometrika, 5 (to appear). The author thanks the anonymous reviewers for helpful remarks.

13.
We present an O(n³) time type inference algorithm for a type system with a largest type ⊤, a smallest type ⊥, and the usual ordering between function types. The algorithm infers type annotations of least shape, and it works equally well for recursive types. For the problem of typability, our algorithm is simpler than the one of Kozen, Palsberg, and Schwartzbach for type inference without ⊥. This may be surprising, especially because the system with ⊥ is strictly more powerful.
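The subtype ordering referred to above can be illustrated with a few lines of Python. The sketch below handles only finite (non-recursive) types and performs subtype checking rather than the paper's O(n³) inference of least-shape annotations; the encoding of types as tuples is an assumption made for the example.

# Ordering: Bot <= t <= Top for every type t, and
# s1 -> s2 <= t1 -> t2  iff  t1 <= s1 (argument, contravariant)
#                       and  s2 <= t2 (result, covariant).

TOP, BOT = "Top", "Bot"

def subtype(s, t):
    """Return True iff s is a subtype of t; a type is TOP, BOT, or ("fun", arg, res)."""
    if s == BOT or t == TOP:
        return True
    if s == TOP or t == BOT:
        return False              # Top is only below Top, only Bot is below Bot (handled above)
    _, s_arg, s_res = s           # both remaining cases are function types
    _, t_arg, t_res = t
    return subtype(t_arg, s_arg) and subtype(s_res, t_res)

f = ("fun", TOP, BOT)             # Top -> Bot: below every function type
g = ("fun", BOT, TOP)             # Bot -> Top: above every function type
print(subtype(f, g), subtype(g, f))   # True False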

14.
In this paper we concentrate on spatial prepositions; more specifically we are interested here in projective prepositions (e.g. 'in front of', 'to the left of'), which have in the past been treated as semantically uninteresting. We demonstrate that projective prepositions are in fact problematic and demand more attention than they have so far been afforded; after summarising the important components of their meaning, we review the deficiencies of past and current approaches to the decoding problem, that is, predicting what a locative expression used in a particular situation conveys. Finally we present our own approach. Motivated by the shortcomings of contemporary work, we integrate elements of Lang's conceptual representation of objects' perceptual and dimensional characteristics, and the potential field model of object proximity that originated in manipulator and mobile robot path-finding.
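As a rough illustration of what a potential-field treatment of a projective preposition can look like, the Python sketch below scores points for 'to the left of' a landmark by how far they deviate from the landmark's leftward axis and how distant they are. It is a generic toy model, not Lang's representation and not the authors' actual approach; the decay constants and axis convention are invented.

import math

def left_of_applicability(point, landmark, left_axis_deg=180.0,
                          angle_scale_deg=45.0, dist_scale=5.0):
    """Score in [0, 1]: higher means 'to the left of landmark' fits better."""
    dx, dy = point[0] - landmark[0], point[1] - landmark[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    bearing = math.degrees(math.atan2(dy, dx))
    # Angular deviation from the leftward axis, wrapped into [0, 180].
    dev = abs((bearing - left_axis_deg + 180) % 360 - 180)
    return math.exp(-(dev / angle_scale_deg) ** 2) * math.exp(-dist / dist_scale)

landmark = (0.0, 0.0)
for p in [(-2.0, 0.0), (-2.0, 2.0), (2.0, 0.0)]:
    print(p, round(left_of_applicability(p, landmark), 3))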

15.
Summary. We examine long unavoidable patterns, unavoidable in the sense of Bean, Ehrenfeucht and McNulty. Zimin and, independently, Schmidt have shown that there is only one unavoidable pattern of length 2^n - 1 on an alphabet with n letters; this pattern is a quasi-power in the sense of Schützenberger. We characterize the unavoidable words of length 2^n - 2 and 2^n - 3. Finally we show that every sufficiently long unavoidable word has a certain quasi-power as a subword. This work was done while the author stayed at LITP, Université Paris 6, France.
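The unique unavoidable pattern of length 2^n - 1 on n letters referred to above is the Zimin word Z_n, defined by Z_1 = x_1 and Z_k = Z_{k-1} x_k Z_{k-1}. The short Python sketch below merely generates it and checks its length; it proves nothing about unavoidability.

def zimin(n):
    """Zimin word over the letters 1..n, as a list of letters."""
    word = []
    for k in range(1, n + 1):
        word = word + [k] + word        # Z_k = Z_{k-1} x_k Z_{k-1}
    return word

for n in range(1, 5):
    z = zimin(n)
    print(n, len(z), z)                 # length is 2**n - 1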

16.
Postgraduate degree programs in software engineering have been in existence for some time and now undergraduate degree programs with this title are beginning to appear. A number of questions and issues of only moderate importance with respect to the postgraduate programs now become critical and of overriding importance. While one could (and did) by and large gloss over these issues earlier, this will be much more difficult, and probably impossible, in the future. These issues must be resolved in a wider context than that in which they have been dealt with before, involving engineering in the universities and professional engineering societies in the larger social context. Much of the disagreement regarding these issues can, ultimately, be traced back to differing fundamental views and definitions of the term "engineering" and whether software engineering should be treated as just another engineering discipline or in a significantly and fundamentally different way. After examining two different definitions and views of engineering, this paper states and discusses a number of the questions and issues which planners of new undergraduate software engineering degree programs must deal with. Some pros and cons of various alternative answers are presented and a few answers suggested. The issues discussed here begin with questions of general policy and concept (e.g., which culture – that of the traditional engineering disciplines or that of computer science – should be instilled in the students in these new degree programs), include organizational matters (e.g., the position of the new degree program in the university hierarchy) and end with selected, more detailed questions regarding the curriculum (e.g., how to deal with programming, design and management topics). It is conjectured that, although the scientific foundation for the undergraduate software engineering degree programs must and will come from computer science, their culture and orientation must come from engineering if they and their graduates are to be successful in satisfying society's needs in the long term.

17.
Concept learning in robotics is an extremely challenging problem: sensory data is often high-dimensional, and noisy due to specularities and other irregularities. In this paper, we investigate two general strategies to speed up learning, based on spatial decomposition of the sensory representation, and simultaneous learning of multiple classes using a shared structure. We study two concept learning scenarios: a hallway navigation problem, where the robot has to induce features such as 'opening' or 'wall'. The second task is recycling, where the robot has to learn to recognize objects, such as a trash can. We use a common underlying function approximator in both studies in the form of a feedforward neural network, with several hundred input units and multiple output units. Despite the high degree of freedom afforded by such an approximator, we show that the two strategies provide sufficient bias to achieve rapid learning. We provide detailed experimental studies on an actual mobile robot called PAVLOV to illustrate the effectiveness of this approach.
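The "shared structure" strategy can be pictured with a small numpy sketch: one hidden layer feeds several output units, one per concept, and all concepts are trained together through the shared weights. The network sizes, data and training loop below are invented for illustration and do not reproduce the PAVLOV experiments.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 300, 20, 3            # e.g. hundreds of sensor inputs, one output per concept
W1 = rng.normal(0, 0.1, (n_in, n_hidden))     # shared hidden layer
W2 = rng.normal(0, 0.1, (n_hidden, n_out))    # one sigmoid output unit per concept

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.normal(size=(64, n_in))                       # fake sensor batch
Y = (rng.random((64, n_out)) > 0.5).astype(float)     # fake multi-label targets

lr = 0.1
for step in range(200):
    H = np.tanh(X @ W1)                       # shared hidden representation
    P = sigmoid(H @ W2)                       # one probability per concept
    dZ2 = (P - Y) / len(X)                    # gradient of summed cross-entropy
    dW2 = H.T @ dZ2
    dH = dZ2 @ W2.T
    dW1 = X.T @ (dH * (1 - H ** 2))           # backprop through tanh
    W1 -= lr * dW1
    W2 -= lr * dW2

P = sigmoid(np.tanh(X @ W1) @ W2)
print("final mean cross-entropy:",
      float(-np.mean(Y * np.log(P) + (1 - Y) * np.log(1 - P))))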

18.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log space complete for P, and apply it to both elementary single rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant of IBM Corporation, and ONR contract N00014-85-C-0731.
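For concreteness, the Python sketch below computes the minimum model of a small linear logical query program (transitive closure) by naive sequential bottom-up iteration; the paper's contribution is the parallel PRAM algorithm and the NC bounds, which this sketch does not attempt to reproduce.

# Program:  path(X, Y) :- edge(X, Y).
#           path(X, Y) :- path(X, Z), edge(Z, Y).    (linear: one recursive atom per rule)
edge = {("a", "b"), ("b", "c"), ("c", "d")}

path = set()
while True:
    new = set(edge)                                                    # first rule
    new |= {(x, y) for (x, z) in path for (z2, y) in edge if z == z2}  # second rule
    if new <= path:                                                    # fixpoint reached
        break
    path |= new

print(sorted(path))   # the path-facts of the minimum model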

19.
Reliable and probably useful learning, proposed by Rivest and Sloan, is a variant of probably approximately correct learning. In this model the hypothesis must never misclassify an instance but is allowed to answer "I don't know" with a low probability. We derive upper and lower bounds for the sample complexity of reliable and probably useful learning in terms of the combinatorial characteristics of the concept class to be learned. This is done by reducing reliable and probably useful learning to learning with one-sided error. The bounds also hold for a slightly weaker model that allows the learner to output with a low probability a hypothesis that makes misclassifications. We see that in these models learning with one oracle is more difficult than learning with two oracles. Our results imply that monotone Boolean conjunctions or disjunctions cannot be learned reliably and probably usefully from a polynomial number of examples. Rectangles in R^n for n ≥ 2 cannot be learned from any finite number of examples. A preliminary version of this paper appeared under the title "Reliable and useful learning" in Proceedings of the 2nd Annual Workshop on Computational Learning Theory, Morgan Kaufmann, San Mateo, CA, 1989, pp. 365–380. This work was supported by the Academy of Finland.

20.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced in the hypothesis testing process. The entire decision space is partitioned into distinct correct, uncertain and incorrect regions. The first layer of decisions is made by each sensor independently based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor to the system. This virtual sensor makes its decision based on the decisions reached by the set of sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system as well as of the individual sensors can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
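A simplified Monte-Carlo sketch of the three-way (correct / incorrect / uncertain) decision structure follows. Each sensor thresholds its posterior and may answer 'uncertain', and the fusion center acts as a virtual sensor over the local decisions, here fused by a plain majority vote; the thresholds, Gaussian sensor model and vote are arbitrary stand-ins for the paper's Bayes-risk-optimal rules.

import math
import random
from collections import Counter

random.seed(0)
HYPOTHESES = [0, 1, 2]
MEANS = [0.0, 2.0, 4.0]          # sensor reading ~ Normal(MEANS[h], 1) under hypothesis h

def local_decision(reading, threshold=0.6):
    """Pick the maximum-posterior hypothesis, or 'uncertain' if it is not confident enough."""
    likes = [math.exp(-0.5 * (reading - m) ** 2) for m in MEANS]   # equal priors, unit variance
    post = [l / sum(likes) for l in likes]
    best = max(HYPOTHESES, key=lambda h: post[h])
    return best if post[best] >= threshold else "uncertain"

def fuse(decisions):
    """The fusion center as a virtual sensor: majority vote over non-uncertain decisions."""
    votes = Counter(d for d in decisions if d != "uncertain")
    return votes.most_common(1)[0][0] if votes else "uncertain"

outcomes = Counter()
for _ in range(10000):
    h = random.choice(HYPOTHESES)
    local = [local_decision(random.gauss(MEANS[h], 1.0)) for _ in range(3)]
    fused = fuse(local)
    outcomes["uncertain" if fused == "uncertain"
             else "correct" if fused == h else "incorrect"] += 1

# Empirical probabilities of correct, incorrect and uncertain fused decisions.
print({k: round(v / 10000, 3) for k, v in outcomes.items()})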

