Similar Documents
1.
2.
3.
4.
5.
Fuzzy models of the reliability of an algorithmic procedure are considered, based on membership functions that depend on the parameters affecting the algorithm's correct performance. Optimization problems for the correction and control resources of linear algorithms with respect to reliability-cost criteria are formalized. The solution of the stated problems is illustrated by an example. The novelty of the approach is that it uses no statistical data and optimizes the parametric reliability of the system by choosing the control and correction resources so as to ensure the required, or the maximal possible, correction level of performance for a given instability of the input parameters.
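For illustration only, a rough Python sketch of the kind of reliability-cost trade-off described above; the membership function, the candidate resource levels, and the cost model are assumptions made for the example, not the paper's model.

```python
# Illustrative sketch only: reliability of correct performance is modeled as a
# fuzzy membership value that grows with the allotted correction/control
# resource and shrinks with the instability of the input parameters; we pick
# the cheapest resource level that reaches a required reliability.

def reliability(resource, instability):
    """Membership degree in "performs correctly": in [0, 1], increasing in the
    resource, decreasing in input-parameter instability (assumed shape)."""
    return max(0.0, min(1.0, resource / (resource + instability)))

def choose_resource(levels, cost, instability, required=0.9):
    feasible = [r for r in levels if reliability(r, instability) >= required]
    if feasible:
        return min(feasible, key=cost)   # cheapest level meeting the requirement
    # otherwise fall back to the maximal achievable correction level
    return max(levels, key=lambda r: reliability(r, instability))

levels = [1.0, 2.0, 5.0, 10.0, 20.0]
print(choose_resource(levels, cost=lambda r: r, instability=1.5))   # -> 20.0
```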

6.
Randomized algorithms are widely used for finding efficient approximate solutions to complex problems, for instance in primality testing, and for obtaining good average-case behavior. Proving properties of such algorithms requires subtle reasoning about both the algorithmic and the probabilistic aspects of programs, so providing tools that mechanize this reasoning is an important issue. This paper presents a new method for proving properties of randomized algorithms in a proof assistant based on higher-order logic. It rests on the monadic interpretation of randomized programs as probabilistic distributions (Giry, Ramsey and Pfeffer). It requires neither an operational semantics for the language nor a complex formalization of measure theory; instead it uses functional and algebraic properties of the unit interval. Using this model, we show the validity of general rules for estimating the probability that a randomized algorithm satisfies specified properties. The approach addresses only discrete distributions and gives rules for analyzing general recursive functions. We apply this theory to the formal proof of a program implementing a Bernoulli distribution from a coin flip and to the (partial) termination of several programs. All the theories and results presented in this paper have been fully formalized and proved in the Coq proof assistant.
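As a rough illustration of the underlying idea (a Python sketch rather than the Coq development), discrete distributions can be represented as finite maps from outcomes to probabilities with a monadic unit and bind, and a Bernoulli(p) distribution can be derived from fair coin flips; the fuel bound mirrors the treatment of partial termination.

```python
from fractions import Fraction

def unit(x):
    """The distribution that returns x with probability 1."""
    return {x: Fraction(1)}

def bind(dist, f):
    """Sequence a distribution with a probabilistic continuation f."""
    out = {}
    for x, px in dist.items():
        for y, py in f(x).items():
            out[y] = out.get(y, Fraction(0)) + px * py
    return out

flip = {0: Fraction(1, 2), 1: Fraction(1, 2)}   # a fair coin

def bernoulli(p, fuel=20):
    """Bernoulli(p) from fair coin flips: lazily compare a uniform number U,
    generated bit by bit, against p (return 1 iff U < p).  `fuel` truncates the
    almost-surely-terminating recursion, so the probability of 1 in the result
    under-approximates p by at most 2**-fuel."""
    p = Fraction(p)
    if p <= 0 or fuel == 0:
        return unit(0)
    if p >= 1:
        return unit(1)
    return bind(flip, lambda b: bernoulli(2 * p - b, fuel - 1))

print(bernoulli(Fraction(1, 3), fuel=8))   # {0: ~2/3, 1: ~1/3}
```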

7.
8.
Continuous random variables are widely used to mathematically describe random phenomena in engineering and the physical sciences. In this paper, we present a higher-order-logic formalization of the Standard Uniform random variable as the limit of the sequence of its discrete approximations. We then show the correctness of this specification by proving the corresponding probability distribution properties within the HOL theorem prover, summarizing the proof steps. The formalized Standard Uniform random variable can be transformed to formalize other continuous random variables, such as Uniform, Exponential and Normal, by using various non-uniform random number generation techniques. The formalization of these continuous random variables enables error-free probabilistic analysis of systems within a higher-order-logic (HOL) theorem prover. For illustration purposes, we present the formalization of the Continuous Uniform random variable based on the formalized Standard Uniform random variable, and then use it to perform a simple probabilistic analysis of roundoff error in HOL.
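A small Python sketch of the same construction (an informal illustration, not the HOL formalization): the Standard Uniform variable as the limit of its n-bit discrete approximations, and the Continuous Uniform variable obtained from it by the affine transformation a + (b - a) * U.

```python
import random

def std_uniform(n_bits=32):
    """n-bit discrete approximation sum_{k=1..n} b_k / 2**k with independent
    fair bits b_k; its distribution converges to Uniform(0, 1) as n_bits grows."""
    return sum(random.getrandbits(1) / 2.0 ** k for k in range(1, n_bits + 1))

def uniform(a, b, n_bits=32):
    """Continuous Uniform(a, b) obtained from the Standard Uniform variable."""
    return a + (b - a) * std_uniform(n_bits)

samples = [uniform(2.0, 5.0) for _ in range(100_000)]
print(min(samples), max(samples), sum(samples) / len(samples))   # mean close to 3.5
```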

9.
Complexity analysis of algorithms is formalized in terms of the recognition of their classification properties. The formalization procedure and a discipline of analysis by the recognition method are described. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 53–69, November–December, 1991.

10.
We analyze three proof strategies commonly used in deductive verification of deterministic sequential programs formalized with operational semantics. The strategies are (i) stepwise invariants, (ii) clock functions, and (iii) inductive assertions. We show how to formalize the strategies in the logic of the ACL2 theorem prover. Based on our formalization, we prove that each strategy is both sound and complete. The completeness result implies that, given any proof of correctness of a sequential program, one can derive a proof in each of the above strategies. The soundness and completeness theorems have been mechanically checked with ACL2.
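For intuition, a small Python sketch (an informal illustration, not ACL2) of the stepwise-invariant strategy on a toy operational semantics: the machine sums the numbers 1..n, and the invariant is checked to hold initially, to be preserved along concrete runs, and to give the postcondition at the halting state. ACL2 proves preservation symbolically for all states; this sketch only tests it on small inputs.

```python
# States are (pc, i, acc, n); the machine sums 1..n into acc.
def step(s):
    pc, i, acc, n = s
    if pc == 0:                                   # loop test
        return (1, i, acc, n) if i <= n else (2, i, acc, n)
    if pc == 1:                                   # body: acc += i; i += 1
        return (0, i + 1, acc + i, n)
    return s                                      # pc == 2: halted

def inv(s):                                       # the stepwise invariant
    pc, i, acc, n = s
    return (acc == sum(range(1, i))
            and (pc != 1 or i <= n)
            and (pc != 0 or i <= n + 1)
            and (pc != 2 or i == n + 1))

def post(s):                                      # desired property at halt
    pc, i, acc, n = s
    return acc == n * (n + 1) // 2

def check(n_max=6, fuel=200):
    for n in range(n_max + 1):
        s = (0, 1, 0, n)
        assert inv(s)                             # invariant holds initially
        for _ in range(fuel):
            if s[0] == 2:
                break
            s = step(s)
            assert inv(s)                         # invariant preserved by each step
        assert s[0] == 2 and post(s)              # at halt it yields the postcondition

check()
print("stepwise invariant checked on small inputs")
```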

11.
The binary representation of each classification from a subset of a space of admissible classifications is considered. A metric on the unit cube is introduced, and a correct algebra of classification algorithms is constructed. The correctness and completeness of a model of classification algorithms are proved. An example of the construction of a complete model for a classification problem is considered.
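A minimal sketch of the encoding, with details assumed for illustration: a classification over n objects is represented as a 0/1 vector, i.e. a vertex of the unit cube, and classifications are compared with a normalized L1 (Hamming) metric.

```python
def encode(members, n):
    """Binary representation of a classification: bit k is 1 iff object k is in the class."""
    return tuple(1 if k in members else 0 for k in range(n))

def dist(u, v):
    """Normalized L1 metric on the unit cube (Hamming distance on 0/1 vertices)."""
    assert len(u) == len(v)
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

a = encode({0, 2, 3}, n=5)     # (1, 0, 1, 1, 0)
b = encode({0, 1, 3}, n=5)     # (1, 1, 0, 1, 0)
print(dist(a, b))              # 0.4
```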

12.
Combining goal-oriented and use case modeling has been proven to be an effective method in requirements elicitation and elaboration. To ensure the quality of such modeled artifacts, a detailed model analysis needs to be performed. However, current requirements engineering approaches generally lack reliable support for automated analysis of consistency, correctness and completeness (3Cs problems) between and within goal models and use case models. In this paper, we present a goal–use case integration framework with tool support to automatically identify such 3Cs problems. Our new framework relies on the use of ontologies of domain knowledge and semantics and our goal–use case integration meta-model. Moreover, functional grammar is employed to enable the semiautomated transformation of natural language specifications into Manchester OWL Syntax for automated reasoning. The evaluation of our tool support shows that for representative example requirements, our approach achieves over 85 % soundness and completeness rates and detects more problems than the benchmark applications.

13.
In this work, the problem of synthesizing a learning algorithm for point classification in planar configurations is considered. A modification of the problem formulation, based on the use of metrics for comparing neighborhoods and on the introduction of a continuous space of marks, is proposed. The solvability, regularity, and completeness of such problems are investigated.

14.
Correctness of compilers is a vital precondition for the correctness of the software translated by them. In this paper, we present two approaches to the formalization of static single assignment (SSA) form, together with two corresponding formal proofs in the Isabelle/HOL system, each showing the correctness of code generation. Our comparison of the two proofs shows that it is very important to find adequate formalizations in formal proofs, since they can simplify the verification task considerably. Our formal correctness proofs not only verify the correctness of a certain class of code generation algorithms but also give us sufficient, easily checkable correctness criteria characterizing correct compilation results obtained from implementations (compilers) of these algorithms. These correctness criteria can be used in a compiler result checker.
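As a toy illustration of such a result checker (a sketch under assumed details, not the Isabelle/HOL development), the following Python function verifies the defining SSA property, namely that every variable has exactly one static definition and is defined before it is used, on a straight-line three-address program; phi nodes and dominance over branching control flow are omitted.

```python
def check_ssa(instructions):
    """Each instruction is (dest, op, args); string args are variable uses."""
    defined = set()
    for dest, op, args in instructions:
        for a in args:
            if isinstance(a, str) and a not in defined:
                return False, f"use of {a!r} before its definition"
        if dest in defined:
            return False, f"{dest!r} is assigned more than once"
        defined.add(dest)
    return True, "ok"

prog = [
    ("x1", "const", [1]),
    ("x2", "add",   ["x1", "x1"]),
    ("y1", "mul",   ["x2", "x1"]),
]
print(check_ssa(prog))                             # (True, 'ok')
print(check_ssa(prog + [("x1", "const", [2])]))    # duplicate definition rejected
```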

15.
The development cycle of high-performance optimization algorithms requires the algorithm designer to make several design decisions. These decisions range from implementation details to the setting of parameter values for testing intermediate designs. Proper parameter setting can be crucial for the effective assessment of algorithmic components because a bad parameter setting can make a good algorithmic component perform poorly. This situation may lead the designer to discard promising components that just happened to be tested with bad parameter settings. Automatic parameter tuning techniques are being used by practitioners to obtain peak performance from already designed algorithms. However, automatic parameter tuning also plays a crucial role during the development cycle of optimization algorithms. In this paper, we present a case study of a tuning-in-the-loop approach for redesigning a particle swarm-based optimization algorithm for tackling large-scale continuous optimization problems. Rather than just presenting the final algorithm, we describe the whole redesign process. Finally, we study the scalability behavior of the final algorithm in the context of this special issue.
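For reference, a minimal particle swarm optimization loop in Python; the inertia weight w and the acceleration coefficients c1 and c2 are exactly the kind of parameters a tuning-in-the-loop process would set, and the values shown are common defaults, not the tuned configuration from the paper.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)              # simple test function
print(pso(sphere, dim=10, bounds=(-5.0, 5.0))[1])     # best value found, near 0
```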

16.
Algorithms for Distributed Constraint Satisfaction: A Review
When multiple agents share an environment, there usually exist constraints among the possible actions of these agents. A distributed constraint satisfaction problem (distributed CSP) is the problem of finding a consistent combination of actions that satisfies these inter-agent constraints. Various application problems in multi-agent systems can be formalized as distributed CSPs. This paper gives an overview of the existing research on distributed CSPs. First, we briefly describe the problem formalization and algorithms of normal, centralized CSPs. Then, we present the problem formalization of distributed CSPs and several MAS application problems. Furthermore, we describe a series of algorithms for solving distributed CSPs: asynchronous backtracking, asynchronous weak-commitment search, distributed breakout, and distributed consistency algorithms. Finally, we show two extensions of the basic problem formalization of distributed CSPs: handling multiple local variables and dealing with over-constrained problems.
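To make the formalization concrete, here is a small Python sketch: each agent owns one variable with a finite domain, and binary inter-agent constraints restrict the combinations of values. The solver is a plain centralized backtracking search, shown only to illustrate what a consistent combination of actions is; the surveyed algorithms solve the same problem by message passing among the agents.

```python
def solve(domains, constraints, assignment=None):
    """domains: {agent: [values]}; constraints: {(a1, a2): predicate(v1, v2)}."""
    assignment = assignment or {}
    unassigned = [a for a in domains if a not in assignment]
    if not unassigned:
        return assignment
    agent = unassigned[0]
    for value in domains[agent]:
        trial = dict(assignment, **{agent: value})
        if all(pred(trial[a1], trial[a2])
               for (a1, a2), pred in constraints.items()
               if a1 in trial and a2 in trial):
            result = solve(domains, constraints, trial)
            if result is not None:
                return result
    return None

# Three agents whose values must be pairwise different (a graph coloring instance).
domains = {"a1": [1, 2], "a2": [1, 2], "a3": [1, 2, 3]}
constraints = {(x, y): (lambda u, v: u != v)
               for x in domains for y in domains if x < y}
print(solve(domains, constraints))   # e.g. {'a1': 1, 'a2': 2, 'a3': 3}
```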

17.
We report on an experiment in combining the theorem prover Isabelle with automatic first-order arithmetic provers to increase automation in the verification of distributed protocols. As a case study we verify several averaging clock synchronization algorithms. We present a formalization of Schneider’s generalized clock synchronization protocol [Sch87] in Isabelle/HOL. Then, we verify that the convergence functions used in two clock synchronization algorithms, namely the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith [LMS85] and the Fault-tolerant Midpoint algorithm of Lundelius–Lynch [LL84], satisfy Schneider’s general conditions for correctness. The proofs are completely formalized in Isabelle/HOL. We identify the parts of the proofs that are not proven fully automatically by Isabelle’s built-in tactics and show that these proofs can be handled by automatic first-order provers with support for arithmetic.
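For intuition, Python sketches of the two convergence functions (a paraphrase of the standard definitions, not the Isabelle/HOL formalization); readings are the clock values collected from the n processes and f is the number of tolerated faulty clocks, with n > 3f assumed.

```python
def fault_tolerant_midpoint(readings, f):
    """Lundelius-Lynch: drop the f smallest and f largest readings and take
    the midpoint of the extremes that remain."""
    s = sorted(readings)
    trimmed = s[f:len(s) - f]
    return (trimmed[0] + trimmed[-1]) / 2

def ica(own, readings, delta):
    """Interactive Convergence Algorithm (Lamport & Melliar-Smith): average the
    readings, but replace any reading farther than delta from the node's own
    clock by the own value (the "egocentric average")."""
    adjusted = [r if abs(r - own) <= delta else own for r in readings]
    return sum(adjusted) / len(adjusted)

print(fault_tolerant_midpoint([10.0, 10.2, 10.1, 99.0, 9.9], f=1))   # 10.1
print(ica(10.0, [10.0, 10.2, 10.1, 99.0, 9.9], delta=0.5))           # 10.04
```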

18.
WS-CDL defines, from a global viewpoint, the common externally observable behavior of peer-to-peer collaborating entities, expressed as sequences of message exchanges that achieve a shared business goal. Although WS-CDL claims to be based on the Pi-calculus, little substantive work has demonstrated the effectiveness of the Pi-calculus for modeling WS-CDL. We therefore propose a Pi-calculus-based method to formalize and verify WS-CDL choreographies. The method guarantees the correctness of a WS-CDL choreography and thereby reduces the cost incurred by failed Web service executions; it improves deployment efficiency and lowers the risk of implementing and invoking unsuitable Web services. The contributions of the paper are threefold: first, the semantics of WS-CDL is formalized in the Pi-calculus; second, the validity of the formalization is confirmed, and important properties are verified against it; third, the soundness of the WS-CDL model design is validated with a concrete business scenario.
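As a loose illustration only (a finite labelled transition system rather than the Pi-calculus formalization, with hypothetical state names and message labels), one can represent a choreography's allowed message-exchange sequences and check a recorded interaction trace against it.

```python
# The choreography as state -> {message: next_state}; names are hypothetical.
choreography = {
    "start":   {"Buyer->Seller:order":  "ordered"},
    "ordered": {"Seller->Buyer:quote":  "quoted"},
    "quoted":  {"Buyer->Seller:accept": "done",
                "Buyer->Seller:reject": "start"},
}
accepting = {"done"}

def conforms(trace, start="start"):
    """Check that a trace of message exchanges is an accepted run of the choreography."""
    state = start
    for msg in trace:
        nxt = choreography.get(state, {}).get(msg)
        if nxt is None:
            return False, f"unexpected message {msg!r} in state {state!r}"
        state = nxt
    return state in accepting, state

print(conforms(["Buyer->Seller:order", "Seller->Buyer:quote", "Buyer->Seller:accept"]))
print(conforms(["Buyer->Seller:order", "Buyer->Seller:accept"]))   # rejected
```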

19.
Incidence Calculus is a technique for associating uncertainty values with logical sentences. These uncertainty values are called incidences; they are sets of points, which may be thought of as representing equivalence classes of situations, Tarskian models, or possible worlds. Incidence Calculus, originally introduced in [1], was designed to overcome various inherent problems of purely numeric mechanisms for uncertain reasoning [2]. In particular, incidences can represent the dependence between sentences, which numbers cannot, and hence Incidence Calculus can provide genuine probabilistic reasoning. In this paper we prove soundness and completeness results for some algorithms introduced in [1] and hence establish some of the correctness criteria for Incidence Calculus. These algorithms can be used for probabilistic reasoning and to check the consistency of the subjective probabilities of sentences.
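A minimal Python sketch of the idea (the worlds and weights are made-up examples): each sentence gets an incidence, the set of possible worlds in which it holds; its probability is the total weight of that set, and logical connectives become set operations, which is how dependence between sentences is preserved.

```python
worlds = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}   # possible worlds and weights

def prob(incidence):
    """Probability of a sentence = total weight of its incidence."""
    return sum(worlds[w] for w in incidence)

def conj(i, j): return i & j                 # incidence of (A and B)
def disj(i, j): return i | j                 # incidence of (A or B)
def neg(i):     return set(worlds) - i       # incidence of (not A)

rain = {"w1", "w2"}          # i(rain)
wet  = {"w1", "w2", "w3"}    # i(wet)

print(prob(rain), prob(wet))          # 0.5 0.75
print(prob(conj(rain, wet)))          # 0.5, not 0.375: rain entails wet here
print(prob(disj(rain, neg(wet))))     # 0.75
```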

20.
Much of the world’s quantitative data reside in scattered web tables. For a meaningful role in Big Data analytics, the facts reported in these tables must be brought into a uniform framework. Based on a formalization of header-indexed tables, we proffer an algorithmic solution to end-to-end table processing for a large class of human-readable tables. The proposed algorithms transform header-indexed tables to a category table format that maps easily to a variety of industry-standard data stores for query processing. The algorithms segment table regions based on the unique indexing of the data region by header paths, classify table cells, and factor header category structures of two-dimensional as well as the less common multidimensional tables. Experimental evaluations substantiate the algorithmic approach to processing heterogeneous tables. As demonstrable results, the algorithms generate queryable relational database tables and semantic-web triple stores. Application of our algorithms to 400 web tables randomly selected from diverse sources shows that the algorithmic solution automates end-to-end table processing.
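To illustrate the target representation (assumed details, not the paper's exact algorithms), the sketch below flattens a small header-indexed table (hierarchical column headers plus a row-header column) into a category table in which each data cell becomes one tuple indexed by its header paths; such tuples map directly onto relational rows or RDF triples.

```python
# A small 2-D table: two column-header rows (year / quarter) and one
# row-header column (region).
col_headers = [("2022", "Q1"), ("2022", "Q2"), ("2023", "Q1"), ("2023", "Q2")]
rows = [
    ("North", [10, 12, 14, 13]),
    ("South", [ 8,  9, 11, 12]),
]

def to_category_table(col_headers, rows):
    tuples = []
    for region, values in rows:
        for (year, quarter), value in zip(col_headers, values):
            # Each tuple maps onto a relational row or a set of RDF triples.
            tuples.append({"Region": region, "Year": year,
                           "Quarter": quarter, "Value": value})
    return tuples

for t in to_category_table(col_headers, rows)[:3]:
    print(t)
# {'Region': 'North', 'Year': '2022', 'Quarter': 'Q1', 'Value': 10} ...
```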
