Similar Documents
20 similar documents found (search time: 31 ms).
1.
We prove that in any N-node communication network with maximum degree d, any deterministic oblivious algorithm for routing an arbitrary permutation requires Ω(√N/d) parallel communication steps in the worst case. This is an improvement upon the Ω(√N/d^{3/2}) bound obtained by Borodin and Hopcroft. For the N-node hypercube, in particular, we show a matching upper bound by exhibiting a deterministic oblivious algorithm that routes any permutation in O(√N/log N) steps. The best previously known upper bound was O(√N). Our algorithm may be practical for small N (up to about 2^14 nodes). C. Kaklamanis was supported in part by NSF Grant NSF-CCR-87-04513. T. Tsantilas was supported in part by NSF Grants NSF-DCR-86-00379 and NSF-CCR-89-02500.
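The abstract does not reproduce the routing algorithm itself. As a hedged illustration of what a deterministic oblivious route on a hypercube looks like, the classic bit-fixing scheme below (not the paper's algorithm) computes each packet's path from the source and destination addresses alone, independently of all other traffic:

```python
def bit_fixing_path(src: int, dst: int, dim: int) -> list[int]:
    """Oblivious route on a dim-dimensional hypercube: correct the address
    bits from lowest to highest. The path depends only on (src, dst),
    never on other packets -- that is what makes the routing oblivious."""
    path, node = [src], src
    for i in range(dim):
        if (node ^ dst) & (1 << i):   # bit i still differs from dst
            node ^= 1 << i            # traverse the edge that flips bit i
            path.append(node)
    return path

# Route from node 0011 to node 1010 in a 4-dimensional hypercube.
print(bit_fixing_path(0b0011, 0b1010, 4))   # [3, 2, 10]
```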

2.
The two-terminal shortest-path problem asks for the shortest directed path from a specified node s to a specified node d in a complete directed graph G on n nodes, where each edge has a nonnegative length. We show that if the length of each edge is chosen independently from the exponential distribution, and adjacency lists at each node are sorted by length, then a priority-queue implementation of Dijkstra's unidirectional search algorithm has expected running time Θ(n log n). We present a bidirectional search algorithm that has expected running time Θ(√n log n). These results are generalized to apply to a wide class of edge-length distributions, and to sparse graphs. If adjacency lists are not sorted, bidirectional search has expected running time Θ(a√n) on graphs of average degree a, as compared with Θ(an) for unidirectional search.
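As a hedged sketch of the bidirectional idea (a standard meeting-in-the-middle Dijkstra; the paper's own algorithm and its average-case analysis are not reproduced here, and the function names are invented):

```python
import heapq

def bidirectional_dijkstra(adj, radj, s, d):
    """Sketch of bidirectional shortest-path search. adj maps
    u -> [(v, w), ...]; radj is the reversed graph. Alternates a forward
    search from s with a backward search from d, and stops once no
    undiscovered s-d path can beat the best one already found."""
    INF = float('inf')
    dist = [{s: 0.0}, {d: 0.0}]           # forward / backward labels
    pq = [[(0.0, s)], [(0.0, d)]]
    done = [set(), set()]
    graphs = [adj, radj]
    best, side = INF, 0
    while pq[0] and pq[1]:
        if pq[0][0][0] + pq[1][0][0] >= best:
            break                          # standard stopping rule
        du, u = heapq.heappop(pq[side])
        if u not in done[side]:
            done[side].add(u)
            # u reached from both ends: candidate s-d path through u.
            best = min(best, du + dist[1 - side].get(u, INF))
            for v, w in graphs[side].get(u, ()):
                nd = du + w
                if nd < dist[side].get(v, INF):
                    dist[side][v] = nd
                    heapq.heappush(pq[side], (nd, v))
        side = 1 - side                    # alternate directions
    return best

adj  = {'s': [('a', 1), ('b', 4)], 'a': [('d', 2)], 'b': [('d', 1)]}
radj = {'a': [('s', 1)], 'b': [('s', 4)], 'd': [('a', 2), ('b', 1)]}
print(bidirectional_dijkstra(adj, radj, 's', 'd'))  # 3.0
```

The rough intuition for the √n factor is that each of the two searches only has to settle on the order of √n of its closest nodes before the frontiers meet, instead of one search settling close to n nodes.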

3.
Product quality is directly related to how well that product meets the customer's needs and intents. Therefore, the ability to capture customer requirements correctly and succinctly is paramount. Unfortunately, within most software development frameworks, requirements elicitation, recording and evaluation are among the more ill-defined and least structured activities. To help address such inadequacies, we propose a Requirements Generation Model (RGM) that (a) decomposes the conventional requirements analysis phase into sub-phases which focus and refine requirement generation activities, (b) bounds and structures those activities to promote a more effective generation process, and (c) implements a monitoring methodology to assist in detecting deviations from well-defined procedures intended to support the generation of requirements that meet the customer's intent. The RGM incorporates lessons learned from a preliminary study that concentrated on identifying where and how miscommunication and requirements omission occur. An industry study (also reported in this paper) attests to the effectiveness of the RGM. The results of that study indicate that the RGM helps (a) reduce the late discovery of requirements, (b) reduce the slippage in milestone completion dates, and (c) increase customer and management satisfaction levels.

4.
We introduce a methodology whereby an arbitrary logic system L can be enriched with temporal features to create a new system T(L). The new system is constructed by combining L with a pure propositional temporal logic T (such as linear temporal logic with Since and Until) in a special way. We refer to this method as "adding a temporal dimension to L", or simply "temporalising" L. We show that the logic system T(L) preserves several properties of the original temporal logic, such as soundness, completeness, decidability, conservativeness and separation over linear flows of time. We then focus on the temporalisation of first-order logic, and a comparison is made with other first-order approaches to the handling of time.
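A hedged sketch of the construction follows; the layering is from the abstract, but the concrete notation is mine. The temporalised language can be presented as a two-level grammar in which full L-formulas act as atoms of the temporal layer, so temporal operators never occur inside an L-formula:

```latex
% Hedged sketch of the temporalised language T(L) (notation mine).
% L-formulas phi are the atoms of the temporal layer:
\[
\alpha \;::=\; \varphi \mid \neg\alpha \mid \alpha\wedge\alpha
        \mid S(\alpha,\alpha) \mid U(\alpha,\alpha),
\qquad \varphi \text{ an L-formula}
\]
% A T(L)-model is a flow of time (T,<) plus a map g assigning an L-model
% to each instant; satisfaction treats each L-formula monolithically:
\[
(T,<,g),\,t \models \varphi \iff g(t) \models_{L} \varphi .
\]
```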

5.
This paper proposes the use of accessible information (data/knowledge) to infer inaccessible data in a distributed database system. Inference rules are extracted from databases by means of knowledge discovery techniques. These rules can derive inaccessible data in the event of a site failure or network partition in a distributed system. Such query answering requires combining incomplete and partial information from multiple sources. The derived answer may be exact or approximate. Our inference process involves two phases to reason with the reconstructed information. One phase uses local rules to infer inaccessible data. A second phase merges information from different sites. We call such reasoning processes cooperative data inference. Since the derived answer may be incomplete, new algebraic tools are developed for supporting operations on incomplete information. A weak criterion called toleration is introduced for evaluating the inferred results. The conditions that assure the correctness of combining partial results, known as sound inference paths, are developed. A solution is presented for terminating an iterative reasoning process on derived data from multiple knowledge sources. The proposed approach has been implemented on a cooperative distributed database testbed, CoBase, at UCLA. The experimental results validate the feasibility of this proposed concept and show that it can significantly improve the availability of distributed knowledge-base/database systems.

List of notation (several symbols did not survive transcription and are left blank):
- mapping
- --< : logical implication
- = : symbolic equality
- ==< : inference path
- satisfaction
- toleration
- undefined (does not exist)
- variable-null (may or may not exist)
- * : subtuple relationship
- * : s-membership
- s-containment
- open subtuple
- open s-membership
- open s-containment
- P : open base
- P : program
- I : interpretation
- DIP : data inference program
- t : tuples
- R : relations
- Ø : empty interpretation
- open s-union
- open s-interpretation
- set of mappings from the set of objects to the set of closed objects
- W : set of attributes
- W : set of sound inference paths on the set of attributes W
- set of relational schemas in a DB that satisfy MVD
- + : range closure of W wrt
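As a hedged toy of the first inference phase (the rule format, attribute names, and values are all invented for illustration; this is not CoBase's actual interface):

```python
# Toy sketch: derive values for an attribute stored at a failed site
# from inference rules discovered at accessible sites.
rules = [
    # (premise attribute, premise value, derived attribute, derived value)
    ("airport_type", "military", "runway_surface", "concrete"),
    ("airport_type", "civil",    "runway_surface", "asphalt"),
]

accessible = {"airport": "KXYZ", "airport_type": "military"}
unreachable_attrs = {"runway_surface"}   # lost to a site failure

def infer(tuple_, rules, missing):
    """Phase 1: apply local rules to fill in attributes whose home site
    is down. Derived values are flagged approximate, since mined rules
    need not be exact."""
    derived = dict(tuple_)
    for prem_attr, prem_val, new_attr, new_val in rules:
        if new_attr in missing and derived.get(prem_attr) == prem_val:
            derived[new_attr] = new_val
            derived.setdefault("_approximate", set()).add(new_attr)
    return derived

print(infer(accessible, rules, unreachable_attrs))
```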

6.
A set S of spanning line segments in a polyhedron P preserves the property of intersection; that is, a plane intersects P if and only if it also intersects S. This paper gives a linear-time algorithm for constructing S for a polyhedron with N extreme vertices. If N is odd, the algorithm is optimal in that it yields ⌊N/2⌋ + 1 spanning line segments. If N is even, it gives (N/2) + 1, which is optimal in some cases and nearly optimal in others.
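As a small hedged illustration of the preserved property (this is only a plane-segment intersection test, not the paper's construction of the segment set):

```python
def segment_meets_plane(p, q, normal, offset):
    """True iff segment pq intersects the plane {x : normal . x = offset}.
    Endpoints on opposite sides of the plane (or touching it) imply
    intersection, by continuity of the signed distance."""
    side = lambda x: sum(n * c for n, c in zip(normal, x)) - offset
    return side(p) * side(q) <= 0

# The plane z = 1 separates the endpoints, so the segment crosses it.
print(segment_meets_plane((0, 0, 0), (0, 0, 2), (0, 0, 1), 1))  # True
```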

7.
We analyze the Ohya-Masuda quantum algorithm that solves the so-called satisfiability problem, an NP-complete problem of complexity theory. We distinguish three steps in the algorithm, and analyze the second step, in which a coherent superposition of states (a pure state) transforms into an incoherent mixture represented by a density matrix. We show that, if nonideal (in analogy with nonideal quantum measurement), this transformation can make the algorithm fail in some cases. On this basis we give some general remarks on the physical implementation of the Ohya-Masuda algorithm.
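A hedged numerical illustration (not the Ohya-Masuda construction itself) of the step analyzed: written as a density matrix, a coherent superposition loses its off-diagonal entries under ideal dephasing, while a nonideal transformation leaves residual coherence of the kind the abstract says can break the algorithm:

```python
import numpy as np

# Pure state (|0> + |1>)/sqrt(2) as a density matrix rho = |psi><psi|.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Ideal dephasing keeps only the diagonal: an incoherent mixture.
rho_mixed = np.diag(np.diag(rho_pure))

# Nonideal dephasing leaves a residual fraction eps of the coherences.
eps = 0.1
rho_nonideal = rho_mixed + eps * (rho_pure - rho_mixed)

# Purity Tr(rho^2): 1 for a pure state, 0.5 for the maximal 2-d mixture.
print(np.trace(rho_pure @ rho_pure))                 # ~1.0  (pure)
print(np.trace(rho_mixed @ rho_mixed))               # 0.5   (mixed)
print(np.trace(rho_nonideal @ rho_nonideal).round(3))  # 0.505 (nonideal)
```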

8.
Part II, "Formation of Elementary Sentences" (Bildung von Elementarsätzen), prepares the ground for the part on "Naming and Designation" (Namengebung und Kennzeichnung). Its heading could just as well read: what are nominators, and how do they come about? Part V, like its predecessors Part III, "Equality and Abstraction" (Gleichheit und Abstraktion), and Part IV, "Object Language/Metalanguage" (Objektsprache/Metasprache), is to be understood as an extension of a rational grammar. The forthcoming Part VI, "Logic and the Validation of Assertions" (Logik und Geltungssicherung von Behauptungen), concludes the series "Informatics as Basic Education" (Informatik als Grundbildung).

9.
Summary: A framework is proposed for the structured specification and verification of database dynamics. In this framework, the conceptual model of a database is a many-sorted first-order linear tense theory whose proper axioms specify the update and triggering behaviour of the database. The use of conceptual modelling approaches for structuring such a theory is analysed. Semantic primitives based on the notions of event and process are adopted for modelling the dynamic aspects. Events are used to model both atomic database operations and communication actions (input/output). Nonatomic operations to be performed on the database (transactions) are modelled by processes in terms of trigger/reaction patterns of behaviour. The correctness of the specification is verified by proving that the desired requirements on the evolution of the database are theorems of the conceptual model. Besides the traditional data integrity constraints, requirements of the form "Under condition W, it is guaranteed that the database operation Z will be successfully performed" are also considered. Such liveness requirements have been ignored in the database literature, although they are essential to a complete definition of database dynamics. (A small worked example in this notation is sketched after the list below.)

Notation

Classical logic symbols (Appendix 1):
- ∀ : for all (universal quantifier)
- ∃ : exists at least once (existential quantifier)
- ¬ : not (negation)
- → : implies (implication)
- ↔ : is equivalent to (equivalence)
- ∧ : and (conjunction)
- ∨ : or (disjunction)

Tense logic symbols (Appendix 1):
- G : always in the future
- G0 : always in the future and now
- F : sometime in the future
- F0 : sometime in the future or now
- H : always in the past
- H0 : always in the past and now
- P : sometime in the past
- P0 : sometime in the past or now
- X : in the next moment
- Y : in the previous moment
- L : always
- M : sometime

Event specification symbols (Sects. 3 and 4.1; some operator symbols did not survive transcription):
- (x) : immediately after the occurrence of x
- (x) : immediately before the occurrence of x
- (x) : x is enabled, i.e., x may occur next
- {w1} x {w2} : if w1 holds before the occurrence of x, then w2 will hold after the occurrence of x (change rule)
- [oa1, ..., oan] x : only the object attributes oa1, ..., oan are modifiable by x (scope rule)
- {{w}} x : if x may occur next, then w holds (enabling rule)

Process specification symbols (Sects. 5.3 and 5.4):
- :: : for causal rules
- for behavioural rules

Transition-pattern composition symbols (Sects. 5.2 and 5.3):
- ; : sequential composition
- ¦ : choice composition
- parallel composition
- :| : guarded alternative composition

Location predicates (Sect. 5.2):
- (z) : immediately after the occurrence of the last event of z (after)
- (z) : immediately before the occurrence of the first event of z (before)
- (z) : after the beginning of z and before the end of z (during)
- (z) : before the occurrence of an event of z (at)
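As a hedged worked example in the notation above (the concrete rule is invented for illustration and does not come from the paper):

```latex
% Invented illustration: a change rule in the {w1} x {w2} format, stating
% the effect of a withdraw(a) event on a balance attribute:
\[
\{\, \mathit{bal} = b \,\wedge\, b \ge a \,\}\;
  \mathit{withdraw}(a)\;
  \{\, \mathit{bal} = b - a \,\}
\]
% A liveness requirement "under condition W the operation Z will be
% performed", using the tense operators G0 (always, incl. now) and F
% (sometime in the future); occurs(Z) is an invented predicate:
\[
G_{0}\,\bigl( W \,\rightarrow\, F\, \mathit{occurs}(Z) \bigr)
\]
```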

10.
Given a finite set E ⊂ R^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
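The paper's procedure is only summarized above; the sketch below is a hedged, k-means-like toy in which each cluster is represented by a small core ("sampling") of its most typical points rather than a single centroid. All names and the update rule are mine, not the authors':

```python
import random

def core_clustering(points, k, core_size, iters=20, seed=0):
    """Toy clustering with multi-point cores instead of single centroids.
    Each cluster is represented by its core_size most typical members."""
    rng = random.Random(seed)
    cores = [rng.sample(points, core_size) for _ in range(k)]
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    d_core = lambda p, core: min(dist(p, q) for q in core)  # point-to-core
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assign to the nearest core
            clusters[min(range(k), key=lambda i: d_core(p, cores[i]))].append(p)
        for i, cl in enumerate(clusters):     # keep the most typical points
            if len(cl) >= core_size:
                cores[i] = sorted(cl, key=lambda p: sum(dist(p, q) for q in cl))[:core_size]
    return cores, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (0.2, 0.1), (4.8, 5.0)]
cores, clusters = core_clustering(pts, k=2, core_size=2)
print(cores)
```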

11.
Adam Drozdek. AI & Society, 1995, 9(4): 389-395
The question whether or not computers can think was first asked in print by Alan Turing in his seminal 1950 article. In order to avoid defining what a computer is or what thinking is, Turing resorts to the imitation game, a test that allows us to determine whether or not a machine can think: if an interrogator is unable to tell whether responses to his questions come from a human being or from a machine, the machine is imitating a human being so well that it has to be acknowledged that these responses result from its thinking. However, then as now, it is not an indisputable claim that machines could think, and an unceasing stream of papers discussing the validity of the test proves this point. There are many arguments in favour of, as well as against, the claims borne by the test, and Turing himself discusses some of them. In his view, there are nine possible objections to the concept of a thinking machine, which he eventually dismisses as weak, irrelevant, or plain false. However, as he admits, he can present "no very convincing arguments of a positive nature to support my views. If I had I should not have taken such pains to point out the fallacies in contrary views."

12.
The proofs of the Church–Rosser theorems for β, η, and βη reduction in untyped λ-calculus are formalized in Isabelle/HOL, an implementation of Higher Order Logic in the generic theorem prover Isabelle. For β-reduction, both the standard proof and Takahashi's are given and compared. All proofs are based on a general theory of commuting relations that supports an almost geometric style of reasoning about confluence diagrams.
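A hedged restatement of the kind of commutation property such a theory rests on (standard definitions, not quoted from the paper):

```latex
% Two reductions R and S on lambda-terms "commute" when diverging
% multi-step reductions can be rejoined:
\[
a \mathrel{R^{*}} b \;\wedge\; a \mathrel{S^{*}} c
  \;\Longrightarrow\; \exists d.\; b \mathrel{S^{*}} d \;\wedge\; c \mathrel{R^{*}} d .
\]
% Church-Rosser (confluence) of a single reduction is the case R = S;
% e.g., for beta-reduction:
\[
a \twoheadrightarrow_{\beta} b \;\wedge\; a \twoheadrightarrow_{\beta} c
  \;\Longrightarrow\; \exists d.\; b \twoheadrightarrow_{\beta} d \;\wedge\; c \twoheadrightarrow_{\beta} d .
\]
```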

13.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in the formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from the empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema.

The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the confusion involved directly affects other debates in which "model" is a key concept. We also suggest that the need for a philosophical discussion of the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

14.
The ongoing integration of LANs and WANs to support global communications and businesses, and the emergence of integrated broadband communication services, have created an increased demand for cooperation between customers, network providers and service providers to achieve end-to-end service management. Such cooperation between autonomous authorities, each defining their own administrative management domains, requires the application of an open, standardized framework to facilitate and regulate interworking. Such a framework is given by the ITU-T recommendations on TMN, where the so-called X interface is of particular importance for inter-domain management. In this paper, we explain the role of the TMN X interface within an inter-domain TMN architecture supporting end-to-end communications management. We identify the important issues that need to be addressed for the definition and realization of TMN X interfaces and report on our practical experience with the implementation of TMN X interfaces in the PREPARE project.

15.
In this paper, we propose a two-layer sensor fusion scheme for multiple-hypothesis multisensor systems. To reflect reality in decision making, uncertain decision regions are introduced into the hypothesis-testing process. The entire decision space is partitioned into distinct regions of correct, uncertain and incorrect decisions. The first layer of decisions is made by each sensor independently, based on a set of optimal decision rules. The fusion process is performed by treating the fusion center as an additional virtual sensor of the system. This virtual sensor makes its decision based on the decisions reached by the set of sensors in the system. The optimal decision rules are derived by minimizing the Bayes risk function. As a consequence, the performance of the system, as well as of individual sensors, can be quantified by the probabilities of correct, incorrect and uncertain decisions. Numerical examples of three-hypothesis, two- and four-sensor systems are presented to illustrate the proposed scheme.
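A hedged toy sketch of the two-layer idea follows; the thresholds, likelihood numbers, and the majority-vote fusion rule are invented stand-ins for the paper's Bayes-risk-optimal rules. Each sensor decides among the hypotheses but may declare itself uncertain, and the fusion center decides from the sensors' decisions alone:

```python
# Toy two-layer fusion sketch (invented numbers, not the paper's rules).
# A sensor reports a hypothesis index, or None when its posterior is not
# decisive enough -- the "uncertain decision region".
def sensor_decide(posteriors, threshold=0.6):
    best = max(range(len(posteriors)), key=posteriors.__getitem__)
    return best if posteriors[best] >= threshold else None  # None = uncertain

def fusion_center(decisions, n_hyp):
    """The virtual sensor: majority vote over the non-uncertain local
    decisions; itself uncertain when there is no clear winner."""
    votes = [sum(1 for d in decisions if d == h) for h in range(n_hyp)]
    top = max(votes)
    return votes.index(top) if votes.count(top) == 1 and top > 0 else None

# Three hypotheses, four sensors; the fourth sensor abstains as uncertain.
local = [sensor_decide(p) for p in (
    [0.7, 0.2, 0.1], [0.65, 0.25, 0.10], [0.2, 0.7, 0.1], [0.4, 0.35, 0.25])]
print(local, "->", fusion_center(local, 3))   # [0, 0, 1, None] -> 0
```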

16.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term.

The classical approach to regularization is: select optimal weights (also called hyperparameters) that should be assigned to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights.

The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_{uu}² du) (for the case of a function of one variable). For each different λ there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown.

Three basic problems are addressed in this work:
- Computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given. This problem is reduced to a one-dimensional optimization problem.
- Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df. This problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate.
- Computing the pointwise uncertainty associated with the MSE solution. This problem is reduced to computing three one-dimensional integrals.
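In symbols, a hedged reconstruction of the averaging argument (the notation is mine; λ is the smoothness weight):

```latex
% Prior induced by a fixed weight lambda (one-dimensional case):
\[
\Pr(f \mid \lambda) \;\propto\; \exp\!\Bigl(-\lambda \int f_{uu}^{2}\, du\Bigr)
\]
% MSE estimate: average over functions AND over the unknown weight,
% weighting each lambda by its posterior probability given the data D:
\[
\hat f_{\mathrm{MSE}}(x) \;=\; \int f(x)\, \Pr(f \mid D)\, \mathcal{D}f
  \;=\; \int \Pr(\lambda \mid D)
        \Bigl[ \int f(x)\, \Pr(f \mid D, \lambda)\, \mathcal{D}f \Bigr] d\lambda
\]
```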

17.
In this paper, we consider the class of Boolean μ-functions, which are the Boolean functions definable by μ-expressions (Boolean expressions in which no variable occurs more than once). We present an algorithm which transforms a Boolean formula E into an equivalent μ-expression, if one exists, in time linear in ‖E‖ times a function of n_m, where ‖E‖ is the size of E and n_m is the number of variables that occur more than once in E. As an application, we obtain a polynomial-time algorithm for Mundici's problem of recognizing μ-functions from k-formulas [17]. Furthermore, we show that recognizing Boolean μ-functions is co-NP-complete for functions essentially dependent on all their variables, and we give a bound close to co-NP for the general case.
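A hedged illustrative pair (my example, not from the paper): the same Boolean function written once as a repetition-free expression and once with a repeated variable. Recognition asks whether some equivalent repetition-free form exists:

```python
from itertools import product

# x & (y | z) is repetition-free: each variable occurs exactly once.
f = lambda x, y, z: x and (y or z)
# (x & y) | (x & z) computes the same function but repeats x, so the
# recognition problem must decide that an equivalent repetition-free
# form (here, f) is available.
g = lambda x, y, z: (x and y) or (x and z)

print(all(f(*v) == g(*v) for v in product([False, True], repeat=3)))  # True
```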

18.
Summary: Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate-logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate-logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions, and that in the case of positive definability two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence, and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

19.
The adaptiveness of agents is one of the basic conditions for their autonomy. This paper describes an approach to adaptiveness for Monitoring Cognitive Agents based on the notion of generic spaces. This notion allows the definition of virtual generic processes, so that any particular actual process is then a simple configuration of the generic process, that is to say, a set of parameter values. Consequently, a generic domain ontology containing the generic knowledge for solving problems concerning the generic process can be developed. This leads to the design of the Generic Monitoring Cognitive Agent, a class of agents in which the whole knowledge corpus is generic. In other words, modeling a process within a generic space becomes configuring a generic process, and adaptiveness becomes genericity, that is to say, independence with regard to technology. In this paper, we present an application of this approach to Sachem, a Generic Monitoring Cognitive Agent designed to help operators in running a blast furnace. Specifically, the NeuroGaz module of Sachem is used to present the notion of a generic blast furnace. The adaptiveness of Sachem can then be seen in the low cost of deploying a Sachem instance on different blast furnaces, and in the ability of NeuroGaz to solve problems and learn from various top-gas instrumentations.

20.
Kumiko Ikuta. AI & Society, 1990, 4(2): 137-146
The role of "craft language" in the process of teaching (learning) Waza (skill) will be discussed from the perspective of human intelligence. It may be said that the ultimate goal of learning Waza in any Japanese traditional performance is not the perfect reproduction of the teaching (learning) process of Waza. In fact, a special metaphorical language (craft language) is used, which has the effect of encouraging the learner to activate his creative imagination. It is through this activity that he learns his own habitus (Kata). It is suggested that, in considering the difference in function between natural human intelligence and artificial intelligence, attention should be paid to the imaginative activity of the learner as an essential factor for mastering Kata. This article is a modified English version of Chapter 5 of my book Waza kara shiru (Learning from Skill), Tokyo University Press, 1987, pp. 93–105.

