Similar Documents
20 similar documents found (search time: 46 ms)
1.
This paper reports on the first steps towards the formal verification of correctness proofs of real-life protocols in process algebra. We show that such proofs can be verified, and partly constructed, by a general-purpose proof checker. The process algebra we use is μCRL, ACP augmented with data, which is expressive enough for the specification of real-life protocols. The proof checker we use is Coq, which is based on the Calculus of Constructions, an extension of simply typed lambda calculus. The focus is on the translation of the proof theory of μCRL and μCRL specifications to Coq. As a case study, we verified the Alternating Bit Protocol.
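The Alternating Bit Protocol itself is easy to illustrate outside μCRL. Below is a minimal Python sketch of its core idea — retransmission over lossy channels with a one-bit sequence number — written purely for illustration; the names and the random-loss model are our own assumptions, not part of the paper's μCRL specification.

```python
import random

def abp_transfer(messages, loss_prob=0.3, seed=0):
    """Deliver `messages` over lossy channels using the Alternating Bit
    Protocol idea: tag each frame with a one-bit sequence number and
    retransmit until the matching acknowledgement arrives."""
    rng = random.Random(seed)
    lossy = lambda frame: None if rng.random() < loss_prob else frame
    delivered, bit = [], 0
    for msg in messages:
        while True:
            frame = lossy((msg, bit))          # data channel may drop the frame
            if frame is not None:
                if not delivered or delivered[-1][1] != bit:
                    delivered.append((msg, bit))   # receiver accepts new frame
                ack = lossy(bit)               # receiver (re-)acknowledges
            else:
                ack = None
            if ack == bit:                     # correct ack received: flip bit
                bit ^= 1
                break                          # move on to the next message
    return [m for m, _ in delivered]

print(abp_transfer(["a", "b", "c"]))  # -> ['a', 'b', 'c'] despite losses
```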

2.
Summary. A single multiaccess channel is studied in which the outcome of a transmission is either idle, success, or collision (a ternary channel). Packets involved in a collision must be retransmitted, and an efficient way to resolve a collision is known in the literature as the Gallager-Tsybakov-Mikhailov algorithm. Performance analysis of the algorithm is quite hard: it rests on a numerical solution of some recurrence equations and on a numerical evaluation of some series, and the obvious drawback of this is the lack of insight into the behavior of the algorithm. We present a new way of looking at the algorithm and discuss some attempts at analyzing its performance. In particular, the expected lengths of a resolution interval and a conflict resolution interval, as well as the throughput of the algorithm, are discussed using an asymptotic approximation and a small-input-rate approximation.
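As a rough illustration of the mechanism being analyzed, here is a Python sketch of basic binary tree-splitting under ternary feedback (idle/success/collision), estimating the mean conflict resolution interval by simulation. This is the generic splitting scheme, not the exact Gallager-Tsybakov-Mikhailov variant, and all parameter names are our own.

```python
import random

def resolve(n, rng):
    """Return the number of slots needed to resolve a collision among
    n packets with fair binary splitting and ternary feedback."""
    if n <= 1:
        return 1                      # idle (n == 0) or success (n == 1)
    left = sum(rng.random() < 0.5 for _ in range(n))  # each packet flips a coin
    return 1 + resolve(left, rng) + resolve(n - left, rng)

rng = random.Random(1)
for n in (2, 3, 5, 10):
    mean = sum(resolve(n, rng) for _ in range(10_000)) / 10_000
    print(f"n={n:2d}  mean conflict resolution interval ~ {mean:.2f} slots")
```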

3.
We formalize natural deduction for first-order logic in the proof assistant Coq, using de Bruijn indices for variable binding. The main judgment we model is of the form Γ ⊢ d : φ, stating that d is a proof term of formula φ under hypotheses Γ; by the Curry–Howard isomorphism it can be viewed as a typing relation. This relation is proved sound with respect to Coq's native logic and is amenable to the manipulation of formulas and of derivations. As an illustration, we define a reduction relation on proof terms with permutative conversions and prove the property of subject reduction.
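To give a feel for the de Bruijn encoding the paper relies on, the following Python sketch represents λ-terms with de Bruijn indices and implements the lifting and substitution operations that make capture-avoiding substitution purely arithmetic. It illustrates the binding technique only; the paper's actual formalization is in Coq and covers full first-order proof terms.

```python
from dataclasses import dataclass

@dataclass
class Var:  # de Bruijn index: 0 refers to the nearest enclosing binder
    k: int

@dataclass
class Lam:  # binder; its bound variable needs no name
    body: object

@dataclass
class App:
    f: object
    a: object

def lift(t, d=0):
    """Increment indices >= d (the free variables) by one."""
    if isinstance(t, Var):
        return Var(t.k + 1) if t.k >= d else t
    if isinstance(t, Lam):
        return Lam(lift(t.body, d + 1))
    return App(lift(t.f, d), lift(t.a, d))

def subst(t, s, d=0):
    """Substitute s for index d in t; capture-avoiding by construction."""
    if isinstance(t, Var):
        return s if t.k == d else Var(t.k - 1) if t.k > d else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, lift(s), d + 1))
    return App(subst(t.f, s, d), subst(t.a, s, d))

# beta-reduce (\x. \y. x) applied to (\z. z):
redex = App(Lam(Lam(Var(1))), Lam(Var(0)))
print(subst(redex.f.body, redex.a))  # Lam(body=Lam(body=Var(k=0))), i.e. \y.\z.z
```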

4.
Some Observations on the Semantics of "Information"
The term Information is widely used in the rhetoric of the Information Society, a rhetoric which some critics have judged to be empty, at least in part because of the overextension and inconsistent use of this word. We review the emergence of the concept of Information, identify a number of dimensions of similarity and difference in the way the concept of Information is used and defined, and point out some specific areas where further critical attention is needed, of special concern to the field of Information Systems. In particular, we highlight some problems in identifying and understanding Information Work, and some research themes relevant to Virtual Organisations and Computer Supported Collaborative Work.

5.
This paper presents the latest developments of the MadeIn'Coop method for modelling the human-machine and human-human co-operation process, and an application of this method to the design of a more co-operative version of the C3I (Command, Control, Communication, Intelligence) system CHEOPS. We first consider that the design of software systems for organizations is tied more and more to the perspective of compound Knowledge Production Systems that link humans and machines engaged in a co-operative problem-solving process. After presenting the four principles upon which MadeIn'Coop rests for modelling co-operation, we present an artificial problem-solving dialogue between CHEOPS and its users. Consistent with the Group Cognitive Processes Theory framework, we propose a dialogue analysis according to two complementary points of view: the Collective Problem Solving model and the Coordination model. This analysis should help system designers identify new system functionalities to assist problem solving.

6.
Fast Theta-Subsumption with Constraint Satisfaction Algorithms
Relational learning and Inductive Logic Programming (ILP) commonly use as covering test the θ-subsumption test defined by Plotkin. Based on a reformulation of θ-subsumption as a binary constraint satisfaction problem, this paper describes a novel θ-subsumption algorithm named Django, which combines well-known CSP procedures and θ-subsumption-specific data structures. Django is validated using the stochastic complexity framework developed for CSPs and imported into ILP by Giordana and Saitta. Principled and extensive experiments within this framework show that Django improves on earlier θ-subsumption algorithms by several orders of magnitude, and that different procedures are better in different regions of the stochastic complexity landscape. These experiments allow for building a control layer over Django, termed Meta-Django, which determines the best procedures to use depending on the order parameters of the θ-subsumption problem instance. The performance gains and good scalability of Django and Meta-Django are finally demonstrated on a real-world ILP task (emulating the search for frequent clauses in the mutagenesis domain), though the smaller size of the problems results in smaller gain factors (ranging from 2.5 to 30).
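θ-subsumption asks whether a clause C subsumes a clause D, i.e. whether there is a substitution θ with Cθ ⊆ D. Viewing each variable of C as a CSP variable whose domain is the set of terms of D, a naive backtracking solver already captures the reformulation; the sketch below (our own minimal Python rendering, far simpler than Django) makes that concrete.

```python
def subsumes(clause_c, clause_d):
    """Does clause C theta-subsume clause D?  Literals are tuples like
    ('p', 'X', 'a'); arguments starting with an uppercase letter are
    variables.  Every literal of C must map, under one substitution, to
    some literal of D."""
    is_var = lambda t: t[0].isupper()

    def match(lits, theta):
        if not lits:
            return True                       # every literal of C is placed
        first, rest = lits[0], lits[1:]
        for lit in clause_d:                  # candidate image of `first` in D
            if lit[0] != first[0] or len(lit) != len(first):
                continue
            new, ok = dict(theta), True
            for s, t in zip(first[1:], lit[1:]):
                if is_var(s):
                    if new.setdefault(s, t) != t:   # variable bound consistently?
                        ok = False; break
                elif s != t:                  # constants must match exactly
                    ok = False; break
            if ok and match(rest, new):
                return True
        return False                          # backtrack

    return match(list(clause_c), {})

C = [('p', 'X', 'Y'), ('q', 'Y')]
D = [('p', 'a', 'b'), ('q', 'b'), ('r', 'c')]
print(subsumes(C, D))   # True: theta = {X: a, Y: b}
```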

7.
Games such as CHESS, GO and OTHELLO can be represented by minimax game trees. Among the various search procedures for solving such game trees, α-β and SSS* are perhaps the best known. Although it has been proved that SSS* explores only a subset of the nodes explored by α-β, α-β is commonly believed to be faster in real applications, since it requires very little memory space and hence its storage management cost is low. Contrary to this folklore, however, this paper reports, using the OTHELLO game as an example, that SSS* is much faster than α-β. It is also demonstrated that SSS* can be modified to make the required memory space controllable to some extent, while retaining the high efficiency of the original SSS*. This research was partially supported by the Ministry of Education, Science and Culture of Japan, under a Scientific Grant-in-Aid.
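For reference, here is the α-β procedure the paper compares against, as a compact Python sketch over an explicit game tree (a leaf is a number, an internal node a list of children). SSS*, a best-first search driven by a priority queue of solution-tree bounds, is longer and omitted here; the sketch and its toy tree are ours.

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax value of `node` with alpha-beta pruning.
    Leaves are numbers; internal nodes are lists of children."""
    if not isinstance(node, list):
        return node                          # leaf: static evaluation
    best = -math.inf if maximizing else math.inf
    for child in node:
        v = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, v); alpha = max(alpha, v)
        else:
            best = min(best, v); beta = min(beta, v)
        if beta <= alpha:                    # cutoff: remaining children pruned
            break
    return best

tree = [[3, 5], [6, [9, 1]], [1, 2]]         # MAX at the root
print(alphabeta(tree))                        # -> 6
```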

8.
Optimal shape design problems for an elastic body made from physically nonlinear material are presented. Sensitivity analysis is done by differentiating the discrete equations of equilibrium. Numerical examples are included.
Notation:
- U^ad — set of admissible continuous design parameters
- U_h^ad — set of admissible discrete design parameters
- α — function from U^ad defining the shape of the body
- α_h — function from U_h^ad defining the approximated shape of the body
- α (nodal vector) — vector of nodal values of α_h
- {α_n} — sequence of functions tending to α
- Ω(α) — domain defined by α
- K — bulk modulus
- μ — shear modulus
- ε — penalty parameter for the contact condition
- V(α) — space of virtual displacements on Ω(α)
- V_h(α_h) — finite element approximation of V(α)
- J — cost functional
- J_h — discretized cost functional
- J (algebraic) — algebraic form of J_h
- σ(u) — stress tensor
- e(u) — strain tensor
- K (stiffness) — stiffness matrix
- f — force vector
- b(q) — term arising from nonlinear boundary conditions
- q — vector of nodal degrees of freedom
- p — vector of adjoint state variables
- J (isoparametric) — Jacobian of the isoparametric mapping
- |J| — determinant of J
- N — vector of shape function values on the parent element
- L — matrix of shape function derivatives on the parent element
- G — matrix of Cartesian derivatives of shape functions
- X — matrix of nodal coordinates of the element
- D — matrix of elastic coefficients
- B — strain–displacement matrix
- Γ_P — part of the boundary where tractions are prescribed
- Γ_u — part of the boundary where displacements are prescribed
- Γ — variable part of the boundary
- γ — strain invariant
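Since the abstract says sensitivity analysis is done by differentiating the discrete equilibrium equations, the standard adjoint calculation, written here in the notation above, is a plausible reading; this is our reconstruction for orientation, not a formula quoted from the paper.

```latex
% Hedged reconstruction: discrete adjoint sensitivity for J(q(alpha), alpha),
% subject to the equilibrium residual R(q, alpha) = K q + b(q) - f = 0.
\[
  \frac{\mathrm{d}J}{\mathrm{d}\alpha}
  = \frac{\partial J}{\partial \alpha}
  - \mathbf{p}^{\top} \frac{\partial R}{\partial \alpha},
  \qquad\text{where } \mathbf{p} \text{ solves }
  \left(\frac{\partial R}{\partial \mathbf{q}}\right)^{\!\top}\! \mathbf{p}
  = \left(\frac{\partial J}{\partial \mathbf{q}}\right)^{\!\top}.
\]
```

One adjoint solve then yields the sensitivities with respect to all design parameters at once, which is why the adjoint vector p appears in the notation list.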

9.
This paper aims to provide a basis for renewed talk about use in computing. Four current discourse arenas are described. Different intentions manifest in each arena are linked to failures in translation, as different terminologies cross disciplinary and national boundaries non-reflexively. Analysis of transnational use-discourse dynamics shows much miscommunication. Conflicts like that between the Scandinavian System Development School and the usability approach have less current salience. Renewing our talk about use is essential to a participatory politics of information technology and will lead to clearer perception of the implications of letting new systems become primary media of social interaction.

10.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined, and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. An algorithm implementing the obviousness decision procedure follows directly from the definition.

11.
The consistency problem associated with a concept class C is to determine, given two sets A and B of examples, whether there exists a concept c in C such that each x in A is a positive example of c and each y in B is a negative example of c. We explore in this paper the following intuition: for a concept class C, if the membership problem of determining whether a given example is positive for a concept is NP-complete, then the corresponding consistency problem is likely to be Σ₂ᵖ-complete. To support this intuition, we prove that the following three consistency problems for concept classes of patterns, graphs and generalized Boolean formulas, whose membership problems are known to be NP-complete, are Σ₂ᵖ-complete: (a) given two sets A and B of strings, determine whether there exists a pattern p such that every string in A is in the language L(p) and every string in B is not in L(p); (b) given two sets A and B of graphs, determine whether there exists a graph G such that every graph in A is isomorphic to a subgraph of G and every graph in B is not isomorphic to any subgraph of G; and (c) given two sets A and B of Boolean formulas, determine whether there exists a 3-CNF Boolean formula ψ such that for every φ in A, φ ∧ ψ is satisfiable and for every φ in B, φ ∧ ψ is not satisfiable. These results suggest that consistency problems in machine learning are natural candidates for Σ₂ᵖ-complete problems when the corresponding membership problems are known to be NP-complete. In addition, we prove that the corresponding prediction problems for concept classes of polynomial-time nondeterministic Turing machines, nondeterministic Boolean circuits, generalized Boolean formulas, patterns and graphs are prediction-complete for the class R^NP of all concept classes whose membership problems are in NP.
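To make case (a) concrete: a pattern is a string over constants and variables, and w ∈ L(p) iff some substitution of nonempty strings for the variables turns p into w — already an NP-complete membership test. The consistency problem adds an existential search over patterns on top of that test, which is the source of the Σ₂ᵖ jump. Below is a small Python membership checker of our own, for illustration only.

```python
def member(pattern, w):
    """Is w in L(pattern)?  `pattern` mixes constant characters (lowercase)
    and variables (uppercase); every variable must be replaced, consistently,
    by a nonempty string.  Exponential in general -- membership for pattern
    languages is NP-complete."""
    def go(i, j, binding):
        if i == len(pattern):
            return j == len(w)
        sym = pattern[i]
        if sym.islower():                        # constant: match literally
            return j < len(w) and w[j] == sym and go(i + 1, j + 1, binding)
        if sym in binding:                       # variable already bound
            img = binding[sym]
            return w.startswith(img, j) and go(i + 1, j + len(img), binding)
        return any(                              # bind to every nonempty slice
            go(i + 1, k, {**binding, sym: w[j:k]})
            for k in range(j + 1, len(w) + 1)
        )
    return go(0, 0, {})

print(member("XX", "abab"))   # True: X -> "ab"
print(member("XX", "aba"))    # False: no consistent nonempty image for X
```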

12.
A parametric model of systems is regarded as a geometric manifold imbedded in the enveloping manifold consisting of all the linear systems. The present paper aims at establishing a new geometrical method and framework for analyzing properties of manifolds of systems. A Riemannian metric and a pair of dual affine connections, called the α-connections, are introduced on a system manifold. The duality of connections is a new concept in differential geometry. The manifold of all the linear systems is α-flat, so that it admits natural and invariant α-divergence measures. Such geometric structures are useful for treating the problems of approximation, identification, and stochastic realization of systems. By using the α-divergences, we solve the problem of approximating a given system by one included in a model. For a sequence of α-flat nesting models such as AR models and MA models, it is shown that the approximation errors decompose additively, one term per dimension of the model.
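For orientation, the α-divergence between two densities p and q is commonly written in the standard form below; we supply this well-known formula for context, and it is not quoted from the abstract.

```latex
% Standard form of the alpha-divergence (supplied for context):
\[
  D_{\alpha}(p \,\|\, q)
  = \frac{4}{1-\alpha^{2}}
    \left( 1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}
    \,\mathrm{d}x \right),
  \qquad \alpha \neq \pm 1,
\]
% with the Kullback-Leibler divergence recovered in the limits alpha -> +/- 1.
```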

13.
In this paper we use a free-fall approach to develop a high-level control/command strategy for a bipedal robot called BIPMAN, based on a multi-chain mechanical model with a general control architecture. The strategy is composed of three levels: the legs-and-arms level, the coordinator level and the supervisor level. The coordinator level is devoted to controlling leg movements and to ensuring the stability of the whole biped. In practice, perturbation effects threaten the equilibrium of the robot and can only be compensated using a dynamic control strategy, based on dynamic stability studies with control of the center-of-mass acceleration and a force distribution on each leg and arm. Free fall in the gravity field is assumed to be deeply involved in human locomotor control. From studies of this specific motion through a direct dynamic model, the notion of equilibrium classes is introduced: these classes define time intervals in which the biped is able to maintain its posture. This notion is used for the definition of a reconfigurable high-level control of the robot.

14.
We study the approximation of the smallest eigenvalue of a Sturm–Liouville problem in the classical and quantum settings. We consider a univariate Sturm–Liouville eigenvalue problem with a nonnegative function q from the class C²([0,1]) and study the minimal number n(ε) of function evaluations or queries that are necessary to compute an ε-approximation of the smallest eigenvalue. We prove that n(ε) = Θ(ε^{-1/2}) in the (deterministic) worst case setting, and n(ε) = Θ(ε^{-2/5}) in the randomized setting. The quantum setting offers a polynomial speedup with bit queries and an exponential speedup with power queries. Bit queries are similar to the oracle calls used in Grover's algorithm, appropriately extended to real-valued functions. Power queries are used for a number of problems, including phase estimation. They are obtained by considering the propagator of the discretized system at a number of different time moments. They allow us to use powers of the unitary matrix exp(½iM), where M is an n×n matrix obtained from the standard discretization of the Sturm–Liouville differential operator. The quantum implementation of power queries by a number of elementary quantum gates that is polylog in n is an open issue. In particular, we show how to compute an ε-approximation with probability 3/4 using n(ε) = O(ε^{-1/3}) bit queries. For power queries, we use the phase estimation algorithm as a basic tool and present an algorithm that solves the problem using n(ε) = O(log ε^{-1}) power queries, O(log² ε^{-1}) quantum operations, and (3/2) log ε^{-1} quantum bits. We also prove that the minimal number of qubits needed for this problem (regardless of the kind of queries used) is at least roughly (1/2) log ε^{-1}. The lower bound on the number of quantum queries is proven in Bessen (in preparation). We derive a formula that relates the Sturm–Liouville eigenvalue problem to a weighted integration problem. Many computational problems may be recast as this weighted integration problem, which allows us to solve them with a polylog number of power queries. Examples include Grover's search, the approximation of the Boolean mean, NP-complete problems, and many multivariate integration problems. In this paper we only provide the relationship formula; the implications are covered in a forthcoming paper (in preparation). PACS: 03.67.Lx, 02.60.-x.
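On the classical side, the matrix M mentioned above comes from the standard three-point discretization of −u″ + q(x)u on [0,1] with zero boundary conditions, and its smallest eigenvalue approximates the smallest Sturm–Liouville eigenvalue. A short NumPy sketch (ours; the grid size and the choice of q are arbitrary):

```python
import numpy as np

def smallest_eigenvalue(q, n=500):
    """Approximate the smallest eigenvalue of -u'' + q(x) u = lambda u
    on [0,1] with u(0) = u(1) = 0, via the standard finite-difference
    discretization on n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)              # interior nodes
    main = 2.0 / h**2 + q(x)                  # diagonal: -u'' part + potential
    off = -np.ones(n - 1) / h**2              # off-diagonals of -u''
    M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(M)[0]           # M is symmetric: use eigvalsh

# With q = 0 the exact smallest eigenvalue is pi^2 ~ 9.8696.
print(smallest_eigenvalue(lambda x: np.zeros_like(x)))
```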

15.
This paper is an informal introduction to the theory of types which uses a connective for the intersection of two types and a constant ω for a universal type, besides the usual connective for function types. This theory was first devised in about 1977 by Coppo, Dezani and Sallé in the context of λ-calculus, and its main development has been by Coppo and Dezani and their collaborators in Turin. With suitable axioms and rules to assign types to λ-calculus terms, they obtained a system in which (i) the set of types given to a term does not change under β-conversion, (ii) some interesting sets of terms, for example the solvable terms and the terms with normal form, can be characterised exactly by the types of their members, and (iii) the type apparatus is not so complex as polymorphic systems with quantifier-containing types, and therefore probably not so expensive to implement mechanically as those systems. There are in fact several variant systems with different detailed properties. This paper defines and motivates the simplest one, from which the others are derived, and describes its most basic properties. No proofs are given, but the motivation is shown by examples. A comprehensive bibliography is included.
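The characteristic rules of such a system, stated here in the standard form used in this literature for orientation (the symbols ∧ and ω are the conventional choices, not necessarily the paper's exact notation), are the intersection introduction and elimination rules together with the axiom making ω universal:

```latex
% Characteristic rules of an intersection type system with universal type omega:
\[
  \frac{\Gamma \vdash M : \sigma \qquad \Gamma \vdash M : \tau}
       {\Gamma \vdash M : \sigma \wedge \tau}
  \qquad
  \frac{\Gamma \vdash M : \sigma \wedge \tau}{\Gamma \vdash M : \sigma}
  \qquad
  \frac{\Gamma \vdash M : \sigma \wedge \tau}{\Gamma \vdash M : \tau}
  \qquad
  \frac{}{\Gamma \vdash M : \omega}
\]
```

The ω axiom is what allows every term, solvable or not, to receive at least one type, which underlies the exact characterisations mentioned in point (ii).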

16.
Pushing Convertible Constraints in Frequent Itemset Mining
Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. Constraint-pushing techniques have been developed for mining frequent patterns and associations with antimonotonic, monotonic, and succinct constraints. In this paper, we study constraints which cannot be handled with existing theory and techniques in frequent pattern mining. For example, avg(S) θ v, median(S) θ v, and sum(S) θ v (where S can contain items of arbitrary values, θ ∈ {<, ≤, ≥, >}, and v is a real number) are customarily regarded as tough constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed.
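The trick behind convertible constraints is an item order: if items are enumerated in value-descending order, avg(S) ≥ v becomes anti-monotone over prefix extensions, so whole branches of the search can be pruned. A tiny Python sketch of that pruning idea (ours, ignoring frequency counting and the paper's FP-growth integration):

```python
def avg_constrained_itemsets(values, v):
    """Enumerate itemsets S with avg(S) >= v.  Sorting items by value in
    descending order makes the constraint anti-monotone along prefix
    extensions: once the running average drops below v, later (smaller)
    items can never restore it, so the branch is pruned."""
    items = sorted(values, key=values.get, reverse=True)  # value-descending

    out = []
    def extend(start, chosen, total):
        for i in range(start, len(items)):
            new_total = total + values[items[i]]
            if new_total / (len(chosen) + 1) < v:
                break            # prune: items[i:] are no larger, avg only drops
            out.append(chosen + [items[i]])
            extend(i + 1, chosen + [items[i]], new_total)
    extend(0, [], 0.0)
    return out

vals = {"a": 9, "b": 5, "c": 4, "d": 1}
print(avg_constrained_itemsets(vals, 5))
# -> [['a'], ['a','b'], ['a','b','c'], ['a','b','d'], ['a','c'], ['a','d'], ['b']]
```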

17.
On the Characteristics of Growing Cell Structures (GCS) Neural Network
In this paper, a self-developing neural network model, namely the Growing Cell Structures (GCS), is characterized. In GCS each node (or cell) is associated with a local resource counter τ(t). We show that GCS has a conservation property by which the summation of all resource counters always equals s/α, where s is the increment added to τ(t) of the winning node after each input presentation and α (0 < α < 1.0) is the forgetting (i.e., decay) factor applied to τ(t) of the non-winning nodes. The conservation property provides an insight into how GCS can maximize information entropy. The property is also employed to unveil the chain-reaction effect and race condition which can greatly influence the performance of GCS. We show that GCS can perform better in terms of the equi-probable criterion if the resource counters are decayed by a smaller α.
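A quick numerical check of the conservation property is easy to write. The sketch below (ours) uses a simplified update that decays every counter by α and then adds s to the winner's; under this rule the total provably converges to s/α regardless of which node wins. The abstract's exact rule may differ in whether the winner is also decayed.

```python
import random

def gcs_counters(n_nodes=10, s=1.0, alpha=0.05, steps=2000, seed=0):
    """Simplified GCS resource-counter dynamics: every step, decay all
    counters by factor (1 - alpha), then add s to a random winner.
    The total T obeys T <- (1 - alpha) * T + s, with fixed point s / alpha."""
    rng = random.Random(seed)
    tau = [0.0] * n_nodes
    for _ in range(steps):
        tau = [(1 - alpha) * t for t in tau]   # forgetting on all nodes
        tau[rng.randrange(n_nodes)] += s       # winner receives the increment
    return sum(tau)

print(gcs_counters(), 1.0 / 0.05)   # total ~ 20.0, i.e. s / alpha
```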

18.
In this paper we show, for the first time, how Radial Basis Function (RBF) network techniques can be used to explore questions surrounding the authorship of historic documents. The paper illustrates the technical and practical aspects of RBFs, using data extracted from works written in the early 17th century by William Shakespeare and his contemporary John Fletcher. We also present benchmark comparisons with other standard techniques for contrast and comparison. David Lowe is Professor of Neural Computing at Aston University, UK. His research interests span from the theoretical aspects of dynamical systems theory and statistical pattern processing to a wide range of application domains, from financial market analysis ("Novel Exploitation of Neural Network Methods in Financial Markets", invited paper, World Conference on Computational Intelligence, vol. VI, pp. 3623-28, 1994) to the artificial nose ("Novel Topographic Nonlinear Feature Extraction using Radial Basis Functions for Concentration Coding in the Artificial Nose", 3rd IEE International Conference on Artificial Neural Networks, pp. 95-99, Conference Publication number 372, The Institution of Electrical Engineers, 1993). Robert Matthews is a visiting research fellow at Aston University. His research interests include probability, number theory and astronomy. His recent paper in Nature (vol. 374, pp. 681-82, 1995) somehow managed to combine all three.
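An RBF network of the kind used here is simple to sketch: fix Gaussian centres, then solve a linear least-squares problem for the output weights. In the Python sketch below (ours), the two-feature "stylometric" data is invented purely to show the shape of the computation; real studies would use features such as function-word frequencies.

```python
import numpy as np

def rbf_fit(X, y, centres, width):
    """Fit the output weights of a Gaussian RBF network by least squares."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width**2))          # design matrix of activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centres, width, w):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2)) @ w

# Invented 2-feature data: rates of two function words per 1000 words.
rng = np.random.default_rng(0)
author_a = rng.normal([2.0, 5.0], 0.4, (30, 2))    # label +1
author_b = rng.normal([3.5, 3.5], 0.4, (30, 2))    # label -1
X = np.vstack([author_a, author_b])
y = np.r_[np.ones(30), -np.ones(30)]

centres = X[rng.choice(len(X), 8, replace=False)]  # centres drawn from the data
w = rbf_fit(X, y, centres, width=1.0)
pred = np.sign(rbf_predict(X, centres, width=1.0, w=w))
print("training accuracy:", (pred == y).mean())
```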

19.
I discuss the attitude of Jewish law sources from the 2nd-5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumption of simultaneous occurrences or equally measured values is also a source of embarrassment to modern legal systems, in situations exemplified in the paper where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions are only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary set Ā − A̲ (the upper approximation minus the lower approximation) to which the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably captured by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut of A defined by the Lebesgue integral of its characteristic function.
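The U-uncertainty above is straightforward to evaluate numerically. The Python sketch below (ours) computes it for a triangular fuzzy number, whose α-cuts are intervals, so μ(A_α) is just the cut's length; we use the natural logarithm, since the abstract does not fix the base.

```python
import math

def u_uncertainty(alpha_cut_measure, h=1.0, steps=10_000):
    """U(A) = (1/h) * integral_0^h log(1 + mu(A_alpha)) d(alpha),
    approximated by the midpoint rule.  `alpha_cut_measure(a)` returns the
    Lebesgue measure (here: interval length) of the alpha-cut at level a."""
    da = h / steps
    total = sum(
        math.log(1 + alpha_cut_measure((i + 0.5) * da)) * da
        for i in range(steps)
    )
    return total / h

# Triangular fuzzy number peaked at 5 with support [3, 7]:
# its alpha-cut is [3 + 2*alpha, 7 - 2*alpha], of length 4 * (1 - alpha).
tri_cut_len = lambda a: 4.0 * (1.0 - a)
print(u_uncertainty(tri_cut_len))   # ~ (1/4)*(5*ln 5 - 4) ~ 1.0118
```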

20.
This study demonstrates an objective method for evaluating the enhanceability of commercial software. It examines the relationship between enhancement and repair, and suggests that enhancement be considered when developing formal models of defect cause. An alternative definition of defect-prone software is presented that concentrates attention on software requiring unusually high repair effort relative to the magnitude of planned enhancement.
