Similar Documents
 20 similar documents found (search time: 593 ms)
1.
This paper concerns calculational methods of refinement in state-based specification languages. Data refinement is a well-established technique for transforming specifications of abstract data types into ones that are closer to an eventual implementation. The conditions under which a transformation is a correct refinement are encapsulated in two simulation rules: downward and upward simulations. One approach to refining an abstract system is to specify the concrete data type and then attempt to verify that it is a valid refinement of the abstract type. An alternative approach is to calculate the concrete specification from the abstract specification and a retrieve relation, which links the abstract and concrete states. In this paper we generalise existing calculational methods for downward simulations and derive similar results for upward simulations; we also document their use and application in a particular specification language, namely Z.
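For finite data types, the downward simulation conditions mentioned in this abstract can be checked by exhaustive enumeration. The sketch below uses a hypothetical relational encoding (the retrieve relation and each operation are sets of state pairs) and, for brevity, checks only the initialisation and correctness clauses, omitting the applicability condition on preconditions:

```python
def is_downward_simulation(retrieve, a_init, c_init, a_ops, c_ops):
    """Downward simulation check for finite data types (sketch).
    retrieve: set of (abstract, concrete) state pairs.
    a_ops / c_ops: dicts mapping an operation name to a set of
    (before, after) state pairs. The applicability condition on
    preconditions is omitted for brevity."""
    # initialisation: every concrete initial state must be linked
    # to some abstract initial state
    if not all(any((a, c) in retrieve for a in a_init) for c in c_init):
        return False
    # correctness: every concrete step from a linked state must be
    # matched by an abstract step that re-establishes the retrieve relation
    for name, c_op in c_ops.items():
        for (a, c) in retrieve:
            for (c1, c2) in c_op:
                if c1 == c and not any((a, a2) in a_ops[name]
                                       and (a2, c2) in retrieve
                                       for a2 in {x for (_, x) in a_ops[name]}):
                    return False
    return True
```

For example, with an abstract counter stepping 0 to 1 and a concrete renaming "a" to "b", the retrieve relation {(0, "a"), (1, "b")} passes the check, while dropping the pair (1, "b") makes it fail.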

2.
Verifying data refinements using a model checker
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching-time temporal logic CTL can be used to encode the standard simulation conditions. We do this both for a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) and for the more common non-blocking interpretation used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate how refinements between Z specifications can be checked using the SAL CTL model checker on a small example.

3.
Fractional differential masks of digital images and their numerical calculation rules
蒲亦非  王卫星 《自动化学报》2007,33(11):1128-1135
This paper proposes fractional differential masks for digital images and their numerical calculation rules. First, the characteristics of fractional differential masks for digital images are discussed from a signal-processing perspective. Second, the construction of n×n fractional differential masks of a digital image in eight mutually centrally symmetric directions (negative x, positive x, negative y, positive y, lower-left diagonal, upper-left diagonal, lower-right diagonal, and upper-right diagonal) is described in detail. Finally, the numerical calculation rules of these masks are discussed. Computer numerical experiments show that, for image signals rich in texture detail, fractional differentiation extracts the texture detail in smooth regions with little gray-level variation markedly better than integer-order differentiation.
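Fractional differential masks of this kind are commonly built from the Grünwald–Letnikov expansion of the fractional derivative; the sketch below computes that coefficient sequence (a standard construction, not necessarily the paper's exact masks):

```python
def gl_coefficients(v, n):
    """First n Grünwald–Letnikov coefficients (-1)^k * C(v, k) for a
    fractional derivative of order v, via the recurrence
    c_0 = 1,  c_k = c_{k-1} * (k - 1 - v) / k.
    Placing these along a direction (e.g. negative x) yields a 1-D
    fractional differential mask."""
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs
```

For v = 1 the sequence degenerates to the ordinary backward difference [1, -1, 0, ...], while fractional orders give long, slowly decaying tails, which is why such masks preserve more texture detail than integer-order operators.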

4.
In this paper we investigate how standard model checkers can be applied to checking refinement relationships between Z specifications. The major obstacle to such a use is the (potentially) infinite data domains in specifications. Consequently, we examine the application of data abstraction techniques for reducing the infinite to a finite state space. Since data abstractions decrease the amount of information in a specification, refinement can, in general, no longer be proven on the abstractions; it can only be disproved. The model checker can thus be used to generate counterexamples to a refinement relationship. Here, we show how abstract specifications can be systematically constructed (from a given data abstraction) and how a standard model checker (FDR) can be applied to find counterexamples when refinement is absent. We especially discuss the applicability of the construction method: it constructs abstract specifications which are either upward or downward simulations of the original specifications, and depending on the operations in the specification and the data abstraction chosen, such a construction might succeed or fail. The construction abstracts both the input/output and the state.

5.
Attribute-oriented induction (AOI) is a useful data mining method for extracting generalized knowledge from relational data and users' background knowledge. Concept hierarchies can be integrated with the AOI method to induce multi-level generalized knowledge. However, the existing AOI approaches are only capable of mining positive knowledge from databases; thus, rare but important negative generalized knowledge that is unknown, unexpected, or contradictory to what the user believes can be missed. In this study, we propose a global negative attribute-oriented induction (GNAOI) approach that can generate comprehensive, multiple-level negative generalized knowledge at the same time. Two pruning properties, the downward level closure property and the upward superset closure property, are employed to improve the efficiency of the algorithm, and a new interest measure, nim(cl), is exploited to measure the degree of the negative relation. Experiment results from a real-life dataset show that the proposed method is effective in finding global negative generalized knowledge.
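The core AOI step of climbing a concept hierarchy and merging generalized tuples can be sketched in a few lines (a minimal illustration of attribute generalization, not the GNAOI algorithm itself):

```python
from collections import Counter

def generalize(tuples, hierarchies):
    """One attribute-oriented induction step, minimal sketch:
    climb each attribute's concept hierarchy one level
    (hierarchies maps attribute index -> {value: parent}),
    then merge identical generalized tuples, keeping counts."""
    out = Counter()
    for t in tuples:
        g = tuple(hierarchies.get(i, {}).get(v, v) for i, v in enumerate(t))
        out[g] += 1
    return dict(out)
```

Repeating this step per level yields the multi-level generalized relation on which positive (and, in GNAOI, negative) rules are measured.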

6.
The development of test cases is an important issue in testing software, communication protocols and other reactive systems. A number of methods are known for developing a test suite from a formal specification given in the form of a finite state machine. In this paper, we survey and experiment with these methods to assess their complexity, applicability, completeness, and fault detection capability, as well as the length and derivation time of their test suites. The experiments are conducted on randomly generated specifications and on two realistic protocols: the Simple Connection Protocol and the ITU-T V.76 Recommendation.
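A common building block of such FSM-based methods is the transition cover: one input sequence per transition, reaching its source state and then firing it. A minimal sketch (assuming a deterministic, initially connected machine; this is an ingredient of methods such as the W-method, not any single method from the paper):

```python
from collections import deque

def transition_cover(fsm, start):
    """Transition cover test suite: for every transition, one input
    sequence that reaches its source state from `start` via BFS and
    then fires it. fsm maps (state, input) -> next_state; every state
    must be reachable from `start`."""
    # shortest input sequence from start to every state
    access = {start: ()}
    q = deque([start])
    while q:
        s = q.popleft()
        for (t, i), nxt in fsm.items():
            if t == s and nxt not in access:
                access[nxt] = access[s] + (i,)
                q.append(nxt)
    # append each transition's input to its source's access sequence
    return sorted(access[s] + (i,) for (s, i) in fsm)
```

The suite's total length and derivation time are exactly the kinds of metrics the experiments above compare across methods.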

7.
The paper focuses on the adaptive relational association rule mining problem. Relational association rules are a particular type of association rule that describes frequent relations occurring between the features characterizing the instances within a data set. We aim to re-mine a previously mined object set when the feature set characterizing the objects grows. An adaptive relational association rule method, based on the discovery of interesting relational association rules, is proposed. This method, called ARARM (Adaptive Relational Association Rule Mining), adapts the set of rules established by mining the data before the feature set changed, preserving completeness. We aim to reach the result more efficiently than by running the mining algorithm again from scratch on the feature-extended object set. Experiments testing the method's performance on several case studies are also reported. The obtained results highlight the efficiency of the ARARM method and confirm the potential of our proposal.

8.
A core problem in formal methods is the transition from informal requirements to formal specifications. Especially when specifying the behavior of reactive systems, many formalisms require the user either to understand a complex mathematical theory and notation or to derive details not given in the requirements, such as the state space of the problem. Many approaches also need a consistent set of requirements, which forces requirements conflicts to be resolved prior to formalization. This paper describes a specification technique in which the main elements are not states but signal patterns. The notation is based on tables of regular expressions and supports a piece-wise formalization of potentially inconsistent requirements. Many properties, such as input completeness and consistency, can be checked automatically for these specifications. The detection and resolution of conflicts can be performed within our framework after formalization. Besides the formal foundation of our approach, this paper presents prototypical tool support and results from an industrial case study.

9.
Full-potential density functional theory (DFT) calculations have been performed for the MgB2 superconductor. Results show that applying positive (negative) pressure leads to a decrease (increase) in the DOS at the Fermi level; these calculations therefore suggest a decrease in Tc under positive pressure. In the Γ-A path of momentum space, the band that plays the dominant role in conduction moves upward when c increases or a decreases, and moves downward when c decreases or a increases. By relaxing the system under in-plane strain, we have studied the behavior of the axial lattice parameter c. Our results show that the change in the axial lattice constant c is one third of the change in the planar lattice constant a. Under small in-plane tensile strain, the DOS at the Fermi level increases, but it decreases for higher applied strain. Under negative in-plane strain (compression), the DOS at the Fermi level decreases monotonically. Tension makes the electronic bands move downward in the Γ-A direction of the reciprocal lattice, while compression makes them move upward. Based on these results, it can be concluded that applying a small tension can enhance Tc in the MgB2 compound.

10.
It is widely believed that showing a problem to be NP-complete is tantamount to proving its computational intractability. In this paper we show that a number of NP-complete problems remain NP-complete even when their domains are substantially restricted. First we show the completeness of Simple Max Cut (Max Cut with edge weights restricted to value 1), and, as a corollary, the completeness of the Optimal Linear Arrangement problem. We then show that even if the domains of the Node Cover and Directed Hamiltonian Path problems are restricted to planar graphs, the two problems remain NP-complete, and that these and other graph problems remain NP-complete even when their domains are restricted to graphs with low node degrees. For Graph 3-Colorability, Node Cover, and Undirected Hamiltonian Circuit, we determine essentially the lowest possible upper bounds on node degree for which the problems remain NP-complete.
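Since Simple Max Cut is NP-complete, no polynomial-time exact algorithm is known; a brute-force solver that tries every bipartition makes the exponential cost concrete (a didactic sketch, fine for tiny graphs only):

```python
from itertools import combinations

def simple_max_cut(nodes, edges):
    """Exhaustive Simple Max Cut (all edge weights 1): try every
    bipartition of the nodes and count crossing edges. Runs in
    O(2^n * m) time, consistent with the problem's NP-completeness."""
    nodes = list(nodes)
    best = 0
    for r in range(len(nodes) + 1):
        for side in combinations(nodes, r):
            s = set(side)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            best = max(best, cut)
    return best
```

On a triangle the best cut crosses 2 of the 3 edges; on an even cycle every edge can cross.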

11.
The notions of bisimulation and simulation are used for graph reduction and are widely employed in many areas: modal logic, concurrency theory, set theory, formal verification, and so forth. In particular, in the context of formal verification they are used to tackle the so-called state-explosion problem. The fastest algorithms for computing the maximum bisimulation on a given labeled graph are based on the crucial equivalence between the maximum bisimulation and the relational coarsest partition problem. As far as simulation is concerned, many algorithms have been proposed that turn out to be relatively inexpensive in terms of either time or space. In this paper we first revisit the state of the art on bisimulation and simulation, pointing out the analogies and differences between the two problems. Then, we propose a generalization of the relational coarsest partition problem which is equivalent to the simulation problem. Finally, we present an algorithm that exploits this characterization and improves on previously proposed algorithms for simulation.
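The equivalence between maximum bisimulation and the relational coarsest partition problem can be illustrated with a naive refinement loop: split blocks until each block's states reach the same set of blocks (a sketch of the fixpoint, not the Paige-Tarjan O(m log n) algorithm or the paper's simulation algorithm):

```python
def coarsest_partition(states, trans, init):
    """Naive relational coarsest partition refinement (= maximum
    bisimulation on an unlabeled graph): repeatedly split blocks by
    the set of blocks reachable in one step, until stable.
    trans maps a state to its list of successors."""
    part = [frozenset(b) for b in init]
    while True:
        block_of = {s: i for i, b in enumerate(part) for s in b}
        # signature of a state: the set of blocks its successors hit
        sig = {s: frozenset(block_of[t] for t in trans.get(s, ()))
               for s in states}
        new = {}
        for s in states:
            new.setdefault((block_of[s], sig[s]), set()).add(s)
        if len(new) == len(part):
            return sorted(map(sorted, new.values()))
        part = [frozenset(b) for b in new.values()]
```

States that can take a step and deadlocked states end up in different blocks, exactly as bisimulation demands.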

12.
This paper proposes a hybrid framework composed of a filtering module and a clustering module to identify six common types of control chart patterns: natural pattern, cyclic pattern, upward shift, downward shift, upward trend, and downward trend. In particular, a multi-scale wavelet filter is designed for denoising, and its performance is compared to single-scale filters, including the mean filter and the exponentially weighted moving average (EWMA) filter. Moreover, three fuzzy clustering algorithms, based on fuzzy c-means (FCM), entropy fuzzy c-means (EFCM) and kernel fuzzy c-means (KFCM), are adopted to compare their pattern classification performance. Experimental results demonstrate the excellent performance of EFCM and KFCM against outliers, especially at high noise levels in the input data. Therefore, a hybrid framework combining a wavelet filter with robust fuzzy clustering is proposed. Compared to neural-network-based approaches, the proposed method provides a promising way to recognize control chart patterns on-line because of its efficient computation and robustness against outliers.

13.
E. M. Clarke. Computing, 1979, 21(4): 273-294
We argue that soundness and relative completeness theorems for Floyd-Hoare axiom systems ([3], [5], [18]) are really fixedpoint theorems. We give a characterization of program invariants as fixedpoints of functionals which may be obtained in a natural manner from the text of a program. We show that within the framework of this fixedpoint theory, soundness and relative completeness results have a particularly simple interpretation. Completeness of a Floyd-Hoare axiom system is equivalent to the existence of a fixedpoint for an appropriate functional, and soundness follows from the maximality of this fixedpoint. The functionals associated with regular procedure declarations are similar to the predicate transformers of Dijkstra; for nonregular recursions it is necessary to use a generalization of the predicate transformer concept which we call a relational transformer.
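The idea of an invariant as a fixedpoint of a functional derived from the program text can be made concrete over a finite state space with Kleene iteration (a toy illustration of the fixedpoint view, not the paper's relational transformers):

```python
def lfp(f, bottom):
    """Least fixedpoint by Kleene iteration over finite sets:
    bottom, f(bottom), f(f(bottom)), ... stabilizes for a
    monotone f on a finite lattice."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Invariant (reachable-state set) of `x = 0; while x < 5: x += 1`,
# as the least fixedpoint of F(S) = S ∪ {0} ∪ {x + 1 | x in S, x < 5}:
reachable = lfp(lambda s: s | {0} | {x + 1 for x in s if x < 5},
                frozenset())
```

The resulting set {0, ..., 5} is exactly the strongest invariant of the loop, and any other set closed under F contains it, mirroring the maximality argument in the abstract.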

14.

Functional completeness of a relational language is the ability to express linear recursive queries. We present such a language that also has the property of relational completeness (relational algebra is, in fact, embedded in it). The transitive closure of a binary relation is carried out by means of the projection function and its inverse.
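Transitive closure is the canonical linear recursive query that plain relational algebra cannot express; as a point of reference, the textbook fixpoint iteration over a relation-as-set-of-pairs looks like this (a sketch, not the paper's projection-based construction):

```python
def transitive_closure(r):
    """Transitive closure of a binary relation by repeated relational
    join: iterate R := R ∪ (R ; R) until no new pairs appear.
    (Semi-naive evaluation would avoid re-deriving old pairs.)"""
    closure = set(r)
    while True:
        step = {(a, d) for a, b in closure for c, d in closure if b == c}
        if step <= closure:
            return closure
        closure |= step
```

For a chain 1→2→3→4 the closure adds exactly the three "shortcut" pairs.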

15.
Models are the core assets in model-driven engineering, and are therefore subject to all kinds of manipulations, such as refactorings, animations, transformations into other languages, comparisons and merging. This set of model-related activities is known as model management. Even though many languages and approaches have been proposed for model management, most of them are type-centric and specific to concrete meta-models, leading to specifications with a low level of abstraction that are difficult to reuse in practice. In this paper, we introduce ideas from generic programming into model management to raise the level of abstraction of model manipulation specifications and facilitate their reuse. In particular, we adopt generic meta-model concepts as an intermediate, abstract meta-model over which model management specifications are defined. Such meta-model concepts are mapped to concrete meta-models, so that specifications can be applied to families of meta-models satisfying the concept requirements. As a proof of concept, we show an implementation of these ideas using the Eclipse Modeling Framework and the Epsilon family of model management languages.

16.
Triangle algebras are equationally defined structures that are equivalent to certain residuated lattices on a set of intervals, called interval-valued residuated lattices (IVRLs). Triangle algebras have been used to construct triangle logic (TL), a formal fuzzy logic that is sound and complete w.r.t. the class of IVRLs. In this paper, we prove that the so-called pseudo-prelinear triangle algebras are subdirect products of pseudo-linear triangle algebras. This can be compared with MTL-algebras (prelinear residuated lattices) being subdirect products of linear residuated lattices. As a consequence, we are able to prove the pseudo-chain completeness of pseudo-linear triangle logic (PTL), an axiomatic extension of TL introduced in this paper. This kind of completeness is the analogue of the chain completeness of monoidal t-norm based logic (MTL). This result also provides better insight into the structure of triangle algebras; it enables us, among other things, to prove properties of pseudo-prelinear triangle algebras more easily. It is known that there is a one-to-one correspondence between triangle algebras and couples (L, α), in which L is a residuated lattice and α an element of that residuated lattice. We give a schematic overview of some properties of pseudo-prelinear triangle algebras (and a number of others that can be imposed on a triangle algebra), together with the corresponding necessary and sufficient conditions on L and α.

17.
Creating High Confidence in a Separation Kernel
Separation of processes is the foundation for security and safety properties of systems. This paper reports on a collaborative effort of government, industry and academia to achieve high confidence in the separation of processes. To this end, this paper discusses (1) what a separation kernel is, (2) why the separation of processes is fundamental to security systems, (3) how high confidence in the separation property of the kernel was obtained, and (4) some of the ways government, industry, and academia cooperated to achieve high confidence in a separation kernel. What is separation? Strict separation is the inability of one process to interfere with another. In a separation kernel, the word separation is interpreted very strictly. Any means for one process to disturb another, be it by communication primitives, by sharing of data, or by subtle uses of kernel primitives not intended for communication, is ruled out when two processes are separated. Why is separation fundamental? Strict separation between processes enables the evaluation of a system to check that the system meets its security policy. For example, if a red process is strictly separated from a black process, then it can be concluded that there is no flow of information from red to black. How was high confidence achieved? We have collaborated and shared our expertise in the use of SPECWARE. SPECWARE is a correct-by-construction method, in which high-level specifications are built up from modules using specification combinators. Refinements of the specifications are made until an implementation is achieved. These refinements are also subject to combinators. The high confidence in the separation property of the kernel stems from the use of formal methods in the development of the kernel. How did we collaborate? Staff from the Kestrel Institute (developers of SPECWARE), the Department of Defense (DoD), and Motorola (developers of the kernel) cooperated in the creation of the Mathematically Analyzed Separation Kernel (MASK). DoD provided the separation kernel concept and expertise in computer security and high confidence development. Kestrel provided expertise in SPECWARE. Motorola combined its own expertise with that of DoD and Kestrel in creating MASK.

18.
Much like relational probabilistic models, the need for relational preference models naturally arises in real-world applications involving multiple, heterogeneous, and richly interconnected objects. On the one hand, relational preferences should be represented as statements which are natural for human users to express. On the other hand, relational preference models should be endowed with a structure that supports tractable forms of reasoning and learning. Based on these criteria, this paper introduces the framework of relational conditional preference networks (RCP-nets), which maintains the spirit of the popular "CP-nets" by expressing relational preferences in a natural way using the ceteris paribus semantics. We show that acyclic RCP-nets support tractable inference for optimization and ranking tasks. In addition, we show that in the online learning model, tree-structured RCP-nets (with bipartite orderings) are efficiently learnable from both optimization tasks and ranking tasks, using linear loss functions. Our results are corroborated by experiments on a large-scale movie recommendation dataset.

19.
Interconnection networks based on the k-ary n-tree topology are widely used in high-performance parallel computers. However, this topology is expensive and complex to build. In this paper we evaluate an alternative tree-like topology that is cheaper in terms of cost and complexity because it uses fewer switches and links. This alternative topology leaves unused upward ports on switches, which can be rearranged to serve as downward ports. The increased locality can be efficiently exploited by applications. We test the performance of these thin trees and compare it with that of regular trees. Evaluation is carried out using a collection of synthetic traffic patterns that emulate the behavior of scientific applications and functions within message-passing libraries, not only in terms of the sources and destinations of messages, but also considering the causal relationships among them. We also propose a methodology for cost and performance analysis of different networks. Our main conclusion is that, for the set of studied workloads, the performance drop in thin trees is less noticeable than the cost savings.
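To see where the cost of a full k-ary n-tree comes from, the commonly cited resource counts can be tabulated as follows (these formulas are an assumption based on the standard k-ary n-tree definition, not figures taken from the paper, and the thin-tree savings depend on the chosen slimming):

```python
def kary_ntree_cost(k, n):
    """Commonly cited resource counts for a full k-ary n-tree
    (assumed formulas): k**n terminal nodes, n levels of k**(n-1)
    switches each, and n * k**n links (one layer of k**n links per
    stage, counting the terminal links)."""
    return {"nodes": k ** n,
            "switches": n * k ** (n - 1),
            "links": n * k ** n}
```

For example, a 2-ary 3-tree connects 8 nodes with 12 switches and 24 links; a thin tree trims the upper stages, trading bisection bandwidth for fewer switches and links.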

20.
Problem statements often resort to superlatives, as in "… the smallest such number", "… the best approximation", or "… the longest such list", which lead to specifications made of two parts: one defining a broad class of solutions (the easy part) and the other requesting one particular such solution, optimal in some sense (the hard part). This article introduces a binary relational combinator which mirrors this linguistic structure and exploits its potential for calculating programs by optimization. This applies in particular to specifications written in the form of Galois connections, in which one of the adjoints delivers the optimal solution. The framework encompasses a re-factoring of results previously developed by Bird and de Moor for greedy and dynamic programming, in a way which makes them less technically involved and therefore easier to understand and play with.
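A tiny concrete instance of such a two-part specification is whole division: a // b is the largest x with x * b <= a, the upper adjoint of the Galois connection between multiplication and division on naturals. A brute-force sketch of the "hard part" (illustrative only, unrelated to the article's combinator calculus):

```python
def largest_such(pred, upper):
    """Generic 'hard part': the largest x in 0..upper satisfying
    pred -- a brute-force stand-in for the optimal solution that an
    adjoint of a Galois connection delivers."""
    return max(x for x in range(upper + 1) if pred(x))

def floor_div(a, b):
    """Whole division specified via its Galois connection:
    x * b <= a  iff  x <= a // b  (for b > 0), so a // b is the
    largest x with x * b <= a."""
    return largest_such(lambda x: x * b <= a, a)
```

The point of the calculational approach is precisely to refine such an optimal-solution specification into an efficient program instead of searching.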
