Similar Documents
1.
Atomic transactions are a widely accepted technique for organizing computation in fault-tolerant distributed systems. In most languages and systems based on transactions, atomicity is implemented through atomic objects: typed data objects that provide their own synchronization and recovery. Hence, atomicity is the key correctness condition required of a data type implementation. This paper presents a technique for verifying the correctness of implementations of atomic data types. The significant aspect of this technique is the extension of Hoare's abstraction function to map to a set of sequences of abstract operations, not just to a single abstract value. We give an example of a proof for an atomic queue implemented in the programming language Avalon/C++.
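A minimal Python sketch of the central idea, under simplifying assumptions (the paper's construction, in Avalon/C++, maps a representation to a set of sequences of abstract operations; this sketch maps to the set of abstract queue values reachable under the possible serialization orders of uncommitted transactions):

```python
from itertools import permutations

def abstraction(committed, uncommitted):
    """Map a queue representation to the SET of abstract values it may
    denote: one per possible serialization order of the uncommitted
    transactions. A Hoare-style abstraction function would return a
    single abstract value instead."""
    states = set()
    for order in permutations(uncommitted):
        q = list(committed)
        for ops in order:              # each transaction is a list of ops
            for op, arg in ops:
                if op == "enq":
                    q.append(arg)
                elif op == "deq":
                    q.pop(0)
        states.add(tuple(q))
    return states

# Two uncommitted enqueuing transactions: either serialization is possible.
print(abstraction([1], [[("enq", 2)], [("enq", 3)]]))
# {(1, 2, 3), (1, 3, 2)}
```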

2.
Feature models are widely used in domain engineering to capture common and variant features among systems in a particular domain. However, the lack of a formal semantics and reasoning support for feature models has hindered the development of this area. Industrial experience also shows that methods and tools that can support feature model analysis are badly needed. Such a reasoning tool should be fully automated and efficient. At the same time, the reasoning tool should scale well, since it may need to handle the hundreds or even thousands of features that modern software systems may have. This paper presents an approach to modeling and verifying feature diagrams using Semantic Web OWL ontologies. We use OWL DL ontologies to precisely capture the inter-relationships among the features in a feature diagram. OWL reasoning engines such as FaCT++ are deployed to check for inconsistencies in feature configurations fully automatically. Furthermore, a general OWL debugger has been developed to tackle the lack of debugging aids in current OWL reasoners and to complement our verification approach. We also developed a CASE tool to facilitate visual development, interchange, and reasoning about feature diagrams in the Semantic Web environment.
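The paper encodes feature diagrams in OWL DL and checks them with FaCT++; as a language-neutral illustration of what such a consistency check establishes, here is a brute-force propositional analogue in Python over a made-up toy feature model (not the paper's encoding):

```python
from itertools import product

# Toy feature model: car has a mandatory engine and an optional gps,
# an xor group {gasoline, electric} under engine, and a cross-tree
# constraint: gps requires electric.
features = ["car", "engine", "gps", "gasoline", "electric"]

def well_formed(s):
    return (s["car"]
            and s["engine"] == s["car"]                          # mandatory
            and (not s["gps"] or s["car"])                       # optional child
            and ((s["gasoline"] ^ s["electric"]) == s["engine"]) # xor group
            and (not s["gps"] or s["electric"]))                 # requires

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))]
valid = [c for c in configs if well_formed(c)]
print(len(valid))   # a non-empty result means the model is consistent
```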

3.
For common data flow schemes, the number of copies of tokens made during a computation is shown to be a Blum complexity measure.(1) Results from abstract complexity theory (see Ref. 2) then hold for the copy measure, indicating, for example, that any implementation of a data flow processor will be constrained by its ability to copy tokens. The copy measure is a natural measure of complexity for data flow computations, and is distinct from the usual time or space measures. The result is generalized to a wider class of data flow schemes, including those with an apply operator. An example is also given of a data flow scheme which makes no copies. Supported in part by NSF Grant MCS 83-01536 and NSA OCREAE Grant MDA904-85-H-0002.
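A hedged Python sketch of one plausible way to count copies (the paper's formal definition over data flow schemes may differ): charge a copy whenever a node's output token must fan out to more than one consumer.

```python
# Count token copies in a toy data flow graph: when a node's result
# feeds k > 1 successors, the token must be duplicated k - 1 times.
graph = {            # node -> list of successor nodes
    "in": ["f", "g"],   # the input token fans out: 1 copy made
    "f":  ["h"],
    "g":  ["h"],
    "h":  [],
}

def copies(graph):
    return sum(max(len(succs) - 1, 0) for succs in graph.values())

print(copies(graph))   # 1 copy for this run of the scheme
```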

4.
Methods are presented for verifying loops which iterate over the elements of data structures. This verification is done in the functional style developed by Mills and others, in which code is verified against the function that the code is intended to compute. The methods allow the verifier to concentrate on the essential computation performed on each element of the structure, and to separate out such concerns as data-structure access and termination so that they do not need to be verified again for every loop in the program. The methods are applicable to a large class of data structures and iterations over them.
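A small Python illustration of the functional style, with the intended function and the loop kept separate so the check focuses on the essential per-element computation (the names and sample checks are illustrative, not the paper's notation):

```python
# Functional-style check (in the style of Mills): the loop's cumulative
# effect should equal the intended function applied to the structure.
# The intended function here is the sum of the elements; the essential
# per-element computation is `acc + x`, separated from structure access.

def intended(xs):
    return sum(xs)

def loop(xs):
    acc = 0
    for x in xs:        # iteration over the data structure
        acc = acc + x   # essential per-element computation
    return acc

# Spot-check the equivalence on sample structures (a real proof would
# argue it for all inputs by induction over the iteration order).
for xs in ([], [1], [3, 1, 4, 1, 5]):
    assert loop(xs) == intended(xs)
print("loop matches its intended function on the samples")
```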

5.
Safety-critical embedded systems are often subject to multiple certification requirements from different certification authorities, giving rise to the concept of mixed-criticality systems. Preemption Threshold Scheduling (PTS) is an effective technique for reducing stack memory usage by selectively disabling preemption between pairs of tasks. In this paper, we consider the AUTOSAR standard for automotive embedded software development, where each task consists of multiple runnables that are scheduled with static priorities and preemption thresholds. We address the problem of design synthesis from an AUTOSAR model to minimize stack usage for mixed-criticality systems with preemption threshold scheduling, and present algorithms for schedulability analysis and system stack usage minimization. Experimental results demonstrate that our approach can significantly reduce system stack usage.
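As a rough illustration of why PTS saves stack, here is a Python sketch that computes worst-case system stack as the heaviest chain of pairwise-preemptable tasks; the task set and the simplification to preemption chains are assumptions for illustration, not the paper's algorithm:

```python
# Worst-case system stack under Preemption Threshold Scheduling (PTS):
# task j can preempt task i only if prio(j) > threshold(i), so the
# worst case is the heaviest chain of pairwise-preemptable tasks.
from functools import lru_cache

tasks = {  # name: (priority, preemption threshold, stack bytes) - made up
    "A": (1, 1, 512),
    "B": (2, 3, 256),   # threshold 3 blocks preemption by C
    "C": (3, 3, 384),
}

def can_preempt(hi, lo):
    return tasks[hi][0] > tasks[lo][1]

@lru_cache(maxsize=None)
def chain(t):
    # Thresholds >= priorities, so chains strictly increase in priority
    # and the recursion terminates.
    above = [chain(u) for u in tasks if can_preempt(u, t)]
    return tasks[t][2] + max(above, default=0)

print(max(chain(t) for t in tasks))  # worst-case stack in bytes: 896
```

Raising B's threshold from 2 to 3 is exactly the PTS move: it stops C from stacking a frame on top of B, shrinking the worst-case chain.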

6.
Verifying data refinements using a model checker
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching-time temporal logic CTL can be used to encode the standard simulation conditions. We do this both for a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) and for the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate how refinements between Z specifications can be checked using the SAL CTL model checker on a small example.
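For reference, the standard downward-simulation proof obligations under the non-blocking interpretation, relating an abstract specification (AInit, AOp) to a concrete one (CInit, COp) via a retrieve relation R; these first-order conditions are what get encoded as CTL properties:

```latex
\begin{align*}
\text{Initialisation:} \quad & \forall\, c'.\; CInit(c') \Rightarrow
    \exists\, a'.\; AInit(a') \land R(a', c') \\
\text{Applicability:}  \quad & \forall\, a, c.\; R(a, c) \Rightarrow
    (\mathrm{pre}\, AOp(a) \Rightarrow \mathrm{pre}\, COp(c)) \\
\text{Correctness:}    \quad & \forall\, a, c, c'.\; R(a, c) \land
    \mathrm{pre}\, AOp(a) \land COp(c, c') \\
& \qquad \Rightarrow \exists\, a'.\; AOp(a, a') \land R(a', c')
\end{align*}
```

Under the blocking interpretation the applicability condition changes (guards must be preserved exactly), which is why the paper treats the two interpretations separately.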

7.
Information and Computation, 2007, 205(9): 1413-1425
In this paper, we study secure information flow policies in the sense of Meadows [C. Meadows, Extending the Brewer–Nash model to a multi-level context. IEEE Computer Society Symposium on Research in Security and Privacy (1990) 95–102] and others for aggregated datasets, collectively. We first present a method for combining different sensitivity levels over a common dataset and investigate its ramifications for information flow policies. Next, safe-flow policies are formulated in full generality using domain-theoretic tools, and systematically derived as closure operators from Scott-continuous functions. Maximum safe-flow policies correspond to the top element of the lattice of the derived closure operators. We then introduce a categorical framework for information flow, in which amalgamation is used to formulate and characterize information-flow policy merging. Our methods for mediating information flow policies should be of practical interest for information sharing among multiple agencies. Our formulation of safe-flow policies as closure operators derived from Scott-continuous functions, and the associated categorical formulation of safe-flow policy merging, provide for the first time a sound and general theoretical foundation for this topic, setting the stage for further development in this area.
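For readers unfamiliar with the domain-theoretic vocabulary, a closure operator c on a poset (D, ⊑), the shape into which the paper casts safe-flow policies, is characterized by three standard laws:

```latex
\begin{align*}
x \sqsubseteq c(x)
    && \text{(extensive)} \\
x \sqsubseteq y \;\Rightarrow\; c(x) \sqsubseteq c(y)
    && \text{(monotone)} \\
c(c(x)) = c(x)
    && \text{(idempotent)}
\end{align*}
```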

8.
This paper presents an embedded security sublanguage for enforcing information-flow policies in the standard Haskell programming language. The sublanguage provides useful information-flow control mechanisms, including dynamic security lattices, run-time code privileges, and declassification, all without modifying the base language. This design avoids the redundant work of producing new languages, lowers the threshold for adopting security-typed languages, and also provides great flexibility and modularity for using security-policy frameworks. The embedded security sublanguage is designed using a standard combinator interface called arrows. Computations constructed in the sublanguage have static and explicit control-flow components, making it possible to implement information-flow control using static-analysis techniques at run time, while providing strong security guarantees. This paper presents a formal proof that our embedded sublanguage provides noninterference, a concrete Haskell implementation, and an example application demonstrating the proposed techniques.
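The paper's enforcement is built into Haskell via arrows; purely to illustrate the lattice-and-labels idea, here is a tiny dynamic analogue in Python (all names hypothetical; this is not the paper's mechanism, which checks flows statically):

```python
# Tiny dynamic information-flow sketch: a two-point security lattice,
# labeled values, label joins on computation, and a no-write-down
# check on output channels.
LEVELS = {"public": 0, "secret": 1}

class Labeled:
    def __init__(self, value, level):
        self.value, self.level = value, level

def join(a, b):
    return a if LEVELS[a] >= LEVELS[b] else b

def apply2(f, x, y):
    """Computation over labeled data: the result label is the join."""
    return Labeled(f(x.value, y.value), join(x.level, y.level))

def output(chan_level, lv):
    """Reject secret data on a public channel."""
    if LEVELS[lv.level] > LEVELS[chan_level]:
        raise PermissionError("information-flow violation")
    print(lv.value)

s = Labeled(42, "secret")
p = Labeled(1, "public")
output("secret", apply2(lambda a, b: a + b, s, p))  # ok: prints 43
# output("public", s)  # would raise PermissionError
```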

9.
A. C. Fleck, Software, 1982, 12(7): 627-640
The presentation of an abstract data type by a series of equational axioms has become an accepted specification mechanism. Verifying the correctness of such specifications has been recognized as a problem that troubles their use. A means is presented for experimenting with a directly executable version of the axioms without having to choose representations for the data structures or describe algorithms for the operations.
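A minimal sketch of what "directly executable axioms" can look like, using naive term rewriting in Python over a stack ADT (an illustrative encoding, not the paper's mechanism):

```python
# Executable equational axioms for a stack ADT via naive innermost
# rewriting over nested tuples. No representation is chosen for stacks
# and no algorithms are written for the operations: the axioms compute.
def reduce_term(term):
    if not isinstance(term, tuple):
        return term
    op, *args = (term[0], *map(reduce_term, term[1:]))
    if op == "top" and isinstance(args[0], tuple) and args[0][0] == "push":
        return args[0][2]                 # axiom: top(push(s, x)) = x
    if op == "pop" and isinstance(args[0], tuple) and args[0][0] == "push":
        return args[0][1]                 # axiom: pop(push(s, x)) = s
    return (op, *args)

# top(pop(push(push(empty, 1), 2))) reduces to 1 by the axioms alone.
print(reduce_term(("top", ("pop", ("push", ("push", ("empty",), 1), 2)))))
```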

10.
Homeland security requires that organizations share sensitive data, but both suppliers and users must typically restrict data access for security, legal, or business reasons. Matchbox database servers provide highly secure, fine-grained access control using digitally cosigned contracts to enforce sharing restrictions. To handle security operations, Matchbox uses the tamper-responding, programmable IBM 4758 cryptographic coprocessor. Matchbox servers can be distributed on a network for high availability, and parties can communicate with Matchbox over public networks - including hostile environments with untrusted hardware, software, and administrators.

11.
12.
A Semi-Automated Secure Data Exchange Model
This paper discusses the characteristics of data exchange in the Internet environment and analyzes the characteristics of a class of applications that exchange data between databases; on this basis, it proposes a data exchange model that is independent of both the runtime and the development platform. The model is based on direct connections between databases and covers a broad class of data exchange applications. Using a working prototype system as the basis, the paper discusses the model in detail with respect to its architecture, data description, Data Exchange Communication Protocol (DECP), security, and semi-automation, and gives implementation guidance. Finally, it summarizes the work and outlines directions for further research on the model.

13.
We present a comprehensive model of structured communications in which self-adaptation and security concerns are jointly addressed. More specifically, we propose a model of multiparty, self-adaptive communications with access control and secure information flow guarantees. In our model, multiparty protocols (choreographies) are described as global types; security violations occur when process implementations of protocol participants attempt to read or write messages of inappropriate security levels within directed exchanges. Such violations trigger adaptation mechanisms that prevent the violations from occurring and/or from propagating their effects in the choreography. Our model is equipped with local and global adaptation mechanisms for reacting to security violations of different gravity; type soundness results ensure that the overall multiparty protocol is still correctly executed while the system adapts itself to preserve the participants' security.
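An illustrative global type in the style of security-annotated multiparty session types (the notation is assumed for illustration; the paper's exact syntax may differ). A violation such as Bob forwarding the secret integer to Carol on the public-level exchange is the kind of event that would trigger the adaptation mechanisms:

```latex
\[
G \;=\; \mathsf{Alice} \to \mathsf{Bob} :
    \langle \mathrm{Int} \rangle^{\mathsf{secret}}.\;
\mathsf{Bob} \to \mathsf{Carol} :
    \langle \mathrm{Bool} \rangle^{\mathsf{public}}.\;
\mathsf{end}
\]
```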

14.
This paper surveys protocols that verify remote data possession. These protocols have been proposed as a primitive for ensuring the long-term integrity and availability of data stored at remote untrusted hosts. Externalizing data storage to multiple network hosts is becoming widely used in several distributed storage and P2P systems, which creates a pressing need for new solutions that provide security properties for the remote data. Replication techniques cannot on their own ensure data integrity and availability, since they only offer probabilistic guarantees. Moreover, peer dynamics (i.e., peers join and leave at any time) and their potential misbehavior (e.g., free-riding) exacerbate the difficult challenge of securing remote data. To this end, remote data integrity verification protocols have been proposed with the aim of detecting faulty and misbehaving storage hosts in dynamic and open settings such as P2P networks. In this survey, we analyze several of these protocols, compare them with respect to expected security guarantees, and discuss their limitations.
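The simplest protocol in this family is a precomputed challenge-response check; this Python sketch shows only the basic exchange, whereas the surveyed protocols achieve unbounded challenges and constant verifier state with homomorphic tags and related techniques:

```python
import hashlib, os

# Generic possession check: before outsourcing, the verifier precomputes
# the expected digest for a handful of random nonces; later, each
# challenge forces the host to hash the nonce with the full file.
data = b"outsourced file contents"

def precompute_challenges(data, n=3):
    table = {}
    for _ in range(n):
        nonce = os.urandom(16)
        table[nonce] = hashlib.sha256(nonce + data).hexdigest()
    return table

def prove(data, nonce):                  # run by the (untrusted) host
    return hashlib.sha256(nonce + data).hexdigest()

table = precompute_challenges(data)
nonce, expected = next(iter(table.items()))
assert prove(data, nonce) == expected    # host still possesses the data
print("possession verified")
```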

15.
16.
This paper discusses standards-based approaches for secure data sharing across organizations. In particular, current standards as well as standardization trends for data integration, multimedia data management, active real-time data management, data warehousing and mining, expert data management, semantic web data management, knowledge management, visualization, metadata extraction and management, and security management for data sharing are discussed. We illustrate the ideas with an example from the emergency response and public health awareness application domain.

17.
Data Warehouses (DW), Multidimensional (MD) databases, and On-Line Analytical Processing (OLAP) applications provide companies with many years of historical information for the decision-making process. Owing to the critical information managed by these systems, strong security and confidentiality measures should be specified from the early stages of a DW project, in the MD modeling, and then enforced. In recent years, there have been some proposals for accomplishing MD modeling at the conceptual level. Nevertheless, none of them considers security measures as an important element in its models, and therefore they do not allow us to specify confidentiality constraints to be enforced by the applications that will use these MD models. In this paper, we present an Access Control and Audit (ACA) model for conceptual MD modeling. We then extend the Unified Modeling Language (UML) with this ACA model, representing the security information (gathered in the ACA model) in the conceptual MD model, thereby allowing us to obtain secure MD models. Moreover, we use the OSCL (Object Security Constraint Language) to specify our ACA model constraints, thereby avoiding arbitrary use of them. Furthermore, we align our approach with the Model-Driven Architecture, Model-Driven Security, and the Model-Driven Data Warehouse, offering a proposal highly compatible with recent technologies.

18.
19.
In storage outsourcing applications, oblivious RAM (ORAM) allows a client to hide its data access pattern from an untrusted server. This paper proposes a new oblivious RAM construction that achieves oblivious access with only a constant cost per client request and a small amount of client-side storage.
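For contrast with the constant-overhead construction claimed here, the baseline is the trivial (linear-scan) ORAM, sketched below in Python: the server sees an identical access pattern for every request, but at O(n) cost per access (all names are illustrative):

```python
import os

# Trivial ORAM: every request reads and rewrites every block under
# fresh one-time pads, so the observed pattern is independent of the
# requested index. Client storage here is the pad table; real schemes
# derive pads from a single key.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class TrivialORAM:
    def __init__(self, n, block=16):
        self.block = block
        self.key = [os.urandom(block) for _ in range(n)]   # per-block pads
        self.server = [xor(b"\0" * block, k) for k in self.key]

    def access(self, i, new_value=None):
        """Read block i, optionally overwriting it; values must be
        exactly `block` bytes. Touches EVERY server block."""
        result = None
        for j in range(len(self.server)):
            plain = xor(self.server[j], self.key[j])
            if j == i:
                result = plain
                if new_value is not None:
                    plain = new_value
            self.key[j] = os.urandom(self.block)           # re-pad all
            self.server[j] = xor(plain, self.key[j])
        return result

oram = TrivialORAM(4)
oram.access(2, b"secret data 1234")
print(oram.access(2))   # b'secret data 1234'
```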

20.
Information Systems, 1999, 24(5): 377-400
Multilevel relations, based on the current multilevel secure (MLS) relational data models, can present a user with information that is difficult to interpret and may display an inconsistent outlook about the views of other users. Such ambiguity is due to the lack of a comprehensive method for asserting and interpreting beliefs about information at lower security levels. In this paper we present a belief-consistent MLS relational database model which provides an unambiguous interpretation of all visible information and gives the user access to the beliefs of users at lower security levels, neither of which was possible in any of the existing models. We identify different beliefs that can be held by users at higher security levels about information at lower security levels, and introduce a mechanism for asserting beliefs about all accessible tuples. This mechanism provides every user with an unambiguous interpretation of all viewable information and presents a consistent account of the views at all levels visible to the user. In order to implement this assertion mechanism, new database operations, such as verify true and verify false, are presented. We specify the constraints for the write operations, such as update and delete, that maintain belief consistency and redefine the relational algebra operations, such as select, project, union, difference and join.
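A toy Python sketch of the belief-assertion idea (the table layout, level names, and the verify operation below are illustrative simplifications of the paper's MLS relational operators):

```python
# A high user asserts belief (verify true / verify false) about a
# lower-level tuple; views then expose those beliefs unambiguously.
LEVELS = {"L": 0, "H": 1}

# Each tuple: key, value, classification level, beliefs by level.
db = [{"key": "k1", "value": "v1", "class": "L", "belief": {}}]

def verify(user_level, key, truth):
    """verify true / verify false: assert a belief about a lower tuple."""
    for t in db:
        if t["key"] == key and LEVELS[t["class"]] < LEVELS[user_level]:
            t["belief"][user_level] = truth

def view(user_level):
    """A user sees tuples at or below their level, beliefs included,
    giving one unambiguous interpretation of the visible data."""
    return [t for t in db if LEVELS[t["class"]] <= LEVELS[user_level]]

verify("H", "k1", True)   # H asserts the low tuple is true
print(view("H"))
```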
