Similar Documents
 20 similar documents found (search time: 15 ms)
1.
One of the principal impediments to widespread use of automated program verification methodology is the user burden of creating appropriate inductive assertions. In this paper, we investigate a class of programs for which such inductive assertions can be mechanically generated from input-output specifications. This class of programs, called accumulating programs, consists of iterative realizations of problems in which the required output information is accumulated during successive passes over the input data structures. Obtaining invariant assertions for such programs is shown to be equivalent to the problem of generalizing the specification to one over an extended closed data domain. For this purpose, a set of basis data elements is conceived of as generating the extended domain. An arbitrary data element is thus considered uniquely decomposable into a sequence of basis elements. The structural relations between the components of a data element are used to extend program behavior and thus obtain the desired invariant.
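As an illustration only (not taken from the paper), the idea behind an accumulating program can be sketched in Python: the loop invariant is the output specification `result == sum(xs)` generalized to every prefix of the input, and it is checked at each pass. The function name and spec are hypothetical.

```python
def accumulate_sum(xs):
    """Accumulating program for summation. The invariant acc == sum(xs[:i])
    is the input-output spec (result == sum(xs)) generalized over prefixes
    of the input -- a hand-made instance of the mechanical generalization
    the abstract describes."""
    acc = 0
    for i, x in enumerate(xs):
        assert acc == sum(xs[:i])  # invariant holds before processing element i
        acc += x
    assert acc == sum(xs)  # the original output specification
    return acc
```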

2.
3.
A Method for Measuring the Authority of Classes
Software metrics is an important research area of software engineering and bears directly on the cost of developing and maintaining software. Considerable work has been done on cohesion and coupling metrics, but comparatively little on measuring the authority of a class. Based on the relationships between classes, this paper proposes a method for measuring class authority: starting from the class diagram itself, the authority value of each class is computed iteratively, and the theory of linear algebra is used to prove that the authority values have a minimum and a maximum. Finally the method is validated with an example.
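The iterative computation the abstract describes can be sketched with a PageRank-style fixpoint over the class dependency relation. The function, the damping scheme, and the dependency encoding below are illustrative assumptions, not the paper's formula:

```python
def class_authority(depends_on, iters=100, damping=0.85):
    """Iteratively compute an authority value per class from a class diagram
    given as depends_on[c] = set of classes that c uses. A class gains
    authority from every class depending on it. With damping, every value
    stays within fixed bounds, echoing the abstract's minimum/maximum claim
    (hypothetical measure, in the spirit of the abstract only)."""
    classes = sorted(depends_on)
    auth = {c: 1.0 for c in classes}
    for _ in range(iters):
        new = {c: 1 - damping for c in classes}      # base authority
        for c in classes:
            if depends_on[c]:
                share = damping * auth[c] / len(depends_on[c])
                for d in depends_on[c]:
                    new[d] += share                  # c confers authority on what it uses
        auth = new
    return auth
```

For example, if classes `A` and `B` both depend on `C`, the fixpoint assigns `C` a strictly higher authority value than either client.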

4.
5.
In this work we focus on a formalisation of the algorithms of lazy exact arithmetic à la Edalat–Potts in type theory. We choose constructive type theory extended with coinductive types as our formal verification tool. We show examples of how infinite objects such as streams and expression trees can be formalised as coinductive types. We study the type-theoretic notion of productivity, which ensures the infiniteness of the outcome of the algorithms on infinite objects. Syntactical methods are not always strong enough to ensure productivity. However, if some information about the complexity of a function is provided, one may be able to show the productivity of that function. In the case of the normalisation algorithm we show that such information can be obtained from the choice of real number representation used for the input and the output. Furthermore, we show how to employ this semantic information to formalise a restricted case of the normalisation algorithm in our type theory.
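Productivity can be illustrated outside type theory with a Python generator over an infinite stream of digits: each `next()` does a bounded amount of work, so a consumer always receives the next element in finite time. This is a loose analogue only; the function and representation below are assumptions for illustration, not the paper's formalisation.

```python
from itertools import islice

def digits_of(x, base=2):
    """Productive infinite stream of base-`base` digits of x in [0, 1):
    every iteration multiplies, truncates, and yields, so the stream never
    stalls -- the generator analogue of a productive corecursive definition."""
    while True:
        x *= base
        d = int(x)
        yield d
        x -= d

# Consuming a finite prefix of the infinite stream: 0.625 = 0.101 in binary.
first = list(islice(digits_of(0.625), 4))
```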

6.
7.
We explore the theory of regular language representations in the constructive type theory of Coq. We cover various forms of automata (deterministic, nondeterministic, one-way, two-way), regular expressions, and the logic WS1S. We give translations between all representations, show decidability results, and provide operations for various closure properties. Our results include a constructive decidability proof for the logic WS1S, a constructive refinement of the Myhill-Nerode characterization of regularity, and translations from two-way automata to one-way automata with verified upper bounds for the increase in size. All results are verified with an accompanying Coq development of about 3000 lines.
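One of the classical translations between the representations mentioned above, from a nondeterministic to a deterministic automaton, is the subset construction. A minimal executable sketch (the encoding via dicts and frozensets is an illustrative choice, unrelated to the paper's Coq development):

```python
def nfa_to_dfa(states, alphabet, delta, start, accept):
    """Subset construction: delta maps (state, symbol) to a set of NFA
    states; the resulting DFA's states are frozensets of NFA states."""
    start_set = frozenset([start])
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accept = {S for S in seen if S & accept}
    return start_set, dfa_delta, dfa_accept

def dfa_accepts(start, dfa_delta, dfa_accept, word):
    """Run the determinized automaton on a word."""
    S = start
    for a in word:
        S = dfa_delta[(S, a)]
    return S in dfa_accept

# Example NFA for words over {a, b} ending in "ab".
start, dd, acc = nfa_to_dfa(
    {0, 1, 2}, "ab",
    {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}},
    0, {2})
```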

8.
9.
We improve both main results of the paper "Non-Deterministic Communication Complexity with Few Witnesses" by M. Karchmer, I. Newman, M. Saks, and A. Wigderson, which appeared in Journal of Computer and System Sciences, Vol. 49, pp. 247–257.

10.
The foundation for process evolution research is the modelling of process structural complexity. With such a model, process systems can be directed toward acquiring good maintainability attributes according to the principles of engineering. In this paper a process management (PM) model is developed to acquire directly from the target process codes the knowledge hidden among and within components of process systems. With the knowledge acquired by the PM model, process structural complexity measures in terms of partitioning, restructuring and rewriting criteria are developed; a systematic process remodularization schema is derived, and algorithms for scheduling process changes are presented. These criteria and mechanisms are used to guide the process production chain.

11.
We illustrate a methodology for formalizing and reasoning about Abadi and Cardelli’s object-based calculi, in (co)inductive type theory, such as the Calculus of (Co)Inductive Constructions, by taking advantage of natural deduction semantics and coinduction in combination with weak higher-order abstract syntax and the Theory of Contexts. Our methodology allows us to implement smoothly the calculi in the target metalanguage; moreover, it suggests novel presentations of the calculi themselves. In detail, we present a compact formalization of the syntax and semantics for the functional and the imperative variants of the ς-calculus. Our approach simplifies the proof of subject reduction theorems, which are proved formally in the proof assistant Coq with a relatively small overhead. Supported by EU project IST-CA-510996 Types and French grant CNRS ACI Modulogic.

12.
谢雪平, 王鸿绪. 《软件》 2012, 33(4): 126-128
The similarity measure between Vague sets plays an extremely important role in applications of Vague sets. This paper analyses the three existing classes of similarity measures between Vague sets (values) and points out that their classification stems from how each handles the special Vague value s = [0, 1]. A new definition of similarity measure between Vague sets is proposed, and a new similarity formula is given; the structure of the formula is concise.
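For reference, one of the classical similarity measures between Vague values that this line of work builds on is Chen's measure; the abstract's new formula is not reproduced here. A Vague value is encoded as a pair (t, f) with t + f ≤ 1, so the special value s = [0, 1] corresponds to (0, 0):

```python
def chen_similarity(x, y):
    """Chen's classical similarity between Vague values x = (t_x, f_x) and
    y = (t_y, f_y): M(x, y) = 1 - |S(x) - S(y)| / 2 with score S(v) = t_v - f_v.
    Identical values score 1; total true vs. total false scores 0."""
    (tx, fx), (ty, fy) = x, y
    sx, sy = tx - fx, ty - fy
    return 1 - abs(sx - sy) / 2
```

Note that the fully unknown value (0, 0), i.e. s = [0, 1], gets score 0 and hence similarity 0.5 to both (1, 0) and (0, 1), which is exactly the kind of behavior the competing measures disagree on.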

13.
In attempting to describe the quality of computer software, one of the more frequently mentioned measurable attributes is complexity of the flow of control. During the past several years, there have been many attempts to quantify this aspect of computer programs, approaching the problem from such diverse points of view as graph theory and software science. Most notable measures in these areas are McCabe's cyclomatic complexity and Halstead's software effort. More recently, Woodward et al. proposed a complexity measure based on the number of crossings, or "knots," of arcs in a linearization of the flowgraph.
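McCabe's cyclomatic complexity, mentioned above, has a simple closed form over the control-flow graph, V(G) = E - N + 2P. A minimal sketch (the graph encoding is an illustrative choice):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity of a control-flow graph:
    V(G) = E - N + 2P, where E is the number of edges, N the number of
    nodes, and P the number of connected components."""
    return len(edges) - len(nodes) + 2 * components

# A single if/else has one decision point, so V(G) = 2:
# entry -> then -> exit and entry -> else -> exit.
diamond_edges = [("entry", "then"), ("entry", "else"),
                 ("then", "exit"), ("else", "exit")]
diamond_nodes = ["entry", "then", "else", "exit"]
```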

14.
We briefly survey a number of important recent achievements in Theoretical Computer Science (TCS), especially Computational Complexity Theory. We discuss the PCP Theorem and its implications for the inapproximability of combinatorial optimization problems; space-bounded computations, especially the deterministic logspace algorithm for the undirected graph connectivity problem; the deterministic polynomial-time primality test; lattice complexity and worst-case to average-case reductions; pseudorandomness and extractor constructions; and Valiant's new theory of holographic algorithms and reductions.

15.
陆陪, 于大川, 吕建. 《软件学报》 1998, 9(2): 111-114
In a dynamic programming environment for an object-oriented language, dynamically modifying a class leaves the existing live objects of that class in a state that is hard to handle. This paper proposes a method that implements dynamic class modification through derived classes, and compares it with other approaches.
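The derived-class idea can be sketched in Python, where a live object can be migrated to a modified subclass by rebinding its `__class__` instead of editing the original class in place. The class names and `migrate` helper are hypothetical, for illustration only:

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
    def describe(self):
        return f"balance={self.balance}"

# The modification lives in a derived class; Account itself is untouched,
# so code holding old Account objects keeps working.
class AccountV2(Account):
    def describe(self):
        return f"[v2] balance={self.balance}"

def migrate(obj, new_cls):
    """Migrate a live object to the modified derived class (Python permits
    reassigning __class__ on an instance; state is preserved)."""
    obj.__class__ = new_cls
    return obj

a = Account(10)
migrate(a, AccountV2)   # a now behaves as an AccountV2, balance intact
```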

16.
A Note on Similarity Measures between Vague Sets
闫德勤, 迟忠先. 《计算机科学》 2005, 32(10): 170-171
Ever since Hong et al. pointed out that the similarity measure between Vague sets given by Chen is unreasonable and proposed an improved measure, many further improved methods have been published. This paper argues that Chen's similarity measure and the measure proposed by Hong et al. capture different notions of similarity, and that Hong's refutation of Chen is therefore mistaken.

17.
Partial order optimality theory (PoOT) (Anttila and Cho in Lingua 104:31–56, 1998) is a conservative generalization of classical optimality theory (COT) (Prince and Smolensky in Optimality theory: constraint interaction in generative grammar, Blackwell Publishers, Malden, 1993/2004) that makes possible the modeling of free variation and quantitative regularities without any numerical parameters. Solving the ranking problem for PoOT has so far remained an outstanding problem: allowing for free variation, given a finite set of input/output pairs, i.e., a dataset \(\Delta \), that a speaker S knows to be part of some language L, how can S learn the set of all grammars G under some constraint set C compatible with \(\Delta \)? Here, allowing for free variation, given the set of all PoOT grammars \(G_{PoOT}\) over a constraint set C, for an arbitrary \(\Delta \), I provide set-theoretic means for constructing the actual set G compatible with \(\Delta \). Specifically, I determine the set of all strict orders of C that are compatible with \(\Delta \). As every strict total order is a strict order, our solution is applicable in both PoOT and COT, showing that the ranking problem in COT is a special instance of a more general one in PoOT.
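The COT special case can be illustrated by brute force: enumerate all strict total orders of the constraints and keep those whose lexicographic winner matches every attested output. This sketch assumes candidates are given with violation counts per constraint; it does not reproduce the paper's set-theoretic construction for strict partial orders.

```python
from itertools import permutations

def optimal(ranking, candidates):
    """COT winner under a strict total order of constraints: compare
    violation profiles lexicographically in ranking order."""
    return min(candidates, key=lambda c: [candidates[c][k] for k in ranking])

def compatible_rankings(constraints, dataset):
    """All strict total orders of `constraints` that make the attested
    output win on every datum. Each datum is a pair
    (candidates, attested_output), where candidates maps a candidate
    name to its per-constraint violation counts."""
    return [r for r in permutations(constraints)
            if all(optimal(r, cands) == out for cands, out in dataset)]

# Two constraints, two candidates: only ranking B >> A makes "x" win.
cands = {"x": {"A": 1, "B": 0}, "y": {"A": 0, "B": 1}}
grammars = compatible_rankings(("A", "B"), [(cands, "x")])
```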

18.
The problem of the existence of predictive complexity for the absolute loss game is studied. Predictive complexity is a generalization of Kolmogorov complexity which bounds the ability of any algorithm to predict elements of a sequence of outcomes. For perfectly mixable loss functions (logarithmic and squared difference are among them) predictive complexity is defined, like Kolmogorov complexity, to within an additive constant. The absolute loss function is not perfectly mixable, and the question of the existence of the corresponding predictive complexity, defined to within an additive constant, is open. We prove that in the case of the absolute loss game the predictive complexity can be defined to within an additive term O( ), where n is the length of a sequence of outcomes. We also prove that in some restricted settings this bound cannot be improved.

19.
20.
Inductive learning is the most important, most central, and most mature branch of machine learning, yet many problems in applying the knowledge it acquires and in improving inductive learning algorithms are hard to solve with traditional methods. Starting from the essence of inductive learning, namely that induction depends on the similarity between data, this paper attempts to apply fuzzy theory, which can quantitatively reflect the degree of similarity between data, to inductive learning, offering a new research approach for inductive learning and machine learning.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号