Similar Literature
A total of 20 similar documents found (search time: 15 ms)
1.
2.
Real-time update of access control policies, that is, updating policies while they are in effect and enforcing the changes immediately and automatically, is necessary for many dynamic environments. Examples of such environments include disaster relief and war zones. In such situations, system resources may need re-configuration or operational modes may change, necessitating a change of policies. For the system to continue functioning, the policies must be changed immediately and the modified policies automatically enforced. In this paper, we propose a solution to this problem: we consider real-time update of access control policies in the context of a database system.

3.
Delegation in role-based access control
User delegation is a mechanism for assigning access rights available to one user to another user. A delegation can be either a grant or a transfer operation. Existing work on delegation in the context of role-based access control models has extensively studied grant delegations, but transfer delegations have largely been ignored, mainly because enforcing transfer delegation policies is more complex than enforcing grant delegation policies. This paper primarily studies transfer delegations for role-based access control models; we also include grant delegations in our model for completeness. We present various mechanisms that authorize delegations in our model. In particular, we show that the use of administrative scope for authorizing delegations is more efficient than using relations. We also discuss the enforcement and revocation of delegations. Finally, we study delegation in the context of workflow systems. In particular, we demonstrate the application of the administrative scope and administrative domain concepts to control delegation of tasks in worklist-based workflow systems.
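As a rough illustration of the grant/transfer distinction (the data structures and function names below, such as grant_delegate and transfer_delegate, are assumptions made for this sketch and not the paper's model), a grant leaves the delegator's rights intact while a transfer removes them:

```python
# Minimal illustration of grant vs. transfer delegation in an RBAC-style model.
# The data structures and function names are illustrative, not the paper's model.

user_roles = {
    "alice": {"manager", "employee"},
    "bob": {"employee"},
}

def grant_delegate(delegator, delegatee, role):
    """Grant delegation: the delegatee gains the role, the delegator keeps it."""
    if role not in user_roles[delegator]:
        raise PermissionError(f"{delegator} does not hold role {role!r}")
    user_roles.setdefault(delegatee, set()).add(role)

def transfer_delegate(delegator, delegatee, role):
    """Transfer delegation: the role moves, so the delegator can no longer use it."""
    if role not in user_roles[delegator]:
        raise PermissionError(f"{delegator} does not hold role {role!r}")
    user_roles[delegator].remove(role)
    user_roles.setdefault(delegatee, set()).add(role)

grant_delegate("alice", "bob", "employee")      # alice and bob both hold "employee"
transfer_delegate("alice", "bob", "manager")    # only bob holds "manager" afterwards
print(user_roles)
```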

4.
Computing Science is a new subject, and we have not yet achieved the unification of theories that should support a proper understanding of its structure (C.A.R. Hoare and He Jifeng, 1998). In this paper we use Priestley duality and Jónsson/Tarski duality to translate between four versions of program semantics: the relational model, predicate transformer semantics, information systems, and powerdomains. Our point of entry is the relational model, a kind of Kripke semantics, in which programs are thought of as input-output relations over a structured state space. Specifically, we present the state space as a certain kind of Priestley space, and programs as certain structure-preserving relations over such a space. We then derive, in a circular fashion, a predicate transformer semantics from the relational model, an information system from the predicate transformer semantics, a powerdomain from the information system, and the original relational model back again from the powerdomain. The information system is also shown to be related to Hoare logic. To clarify the intuition behind this approach we present a case study, which is a ‘Priestley version’ of the so-called universal domain due to Plotkin, and we explicate various ideas about properties of programs and predicates in terms of this case study.
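As a toy illustration of the step from the relational model to predicate transformer semantics (the finite state space and relation below are invented for the sketch; the paper works over Priestley spaces, not finite sets), the weakest precondition of a postcondition Q under an input-output relation R is the set of states all of whose successors lie in Q:

```python
# Toy illustration: deriving a predicate transformer (weakest precondition)
# from a program given as an input-output relation over a finite state space.
# The state space and relation are made up for illustration.

states = {0, 1, 2, 3}

# Program as a relation: (input state, output state) pairs.
program = {(0, 1), (1, 2), (1, 3), (2, 2), (3, 0)}

def wp(relation, post):
    """Weakest precondition: states from which every successor satisfies `post`."""
    return {s for s in states
            if all(t in post for (u, t) in relation if u == s)}

# The predicate "the result is even", represented as a set of states.
even = {0, 2}
print(wp(program, even))   # states guaranteed to end up in an even state: {2, 3}
```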

5.
Any implementation should be proven to meet its specification. In this paper, we address the issue of checking the temporal correctness of a real-time program implemented on an RTX microcontroller. Our real-time programs are first written in a high-level synchronous language (Esterel) and then automatically translated into T-forth. Under these assumptions, the temporal behaviour of the generated RTX program can be predicted at compile-time. We present algorithms that compute realistic durations of system reactions. These algorithms use an abstract representation of the RTX program based on a behavioural semantics given to T-forth.

6.
We describe an approach to verifying bit-level pipelined machine models using a combination of deductive reasoning and decision procedures. While theorem-proving systems such as ACL2 have been used to verify bit-level designs, they typically require extensive expert user support. Decision procedures such as those implemented in UCLID can be used to automatically and efficiently verify term-level pipelined machine models, but these models use numerous abstractions, implement a subset of the instruction set, and are far from executable. We show that by integrating UCLID with the ACL2 theorem-proving system, we can use ACL2 to reduce the proof that an executable, bit-level machine refines its instruction set architecture to a proof that a term-level abstraction of the bit-level machine refines the instruction set architecture, which is then handled automatically by UCLID. We demonstrate the efficiency of our approach by applying it to verify a complex, seven-stage, bit-level interface pipelined machine model that implements 593 instructions and has features such as branch prediction, exceptions, and predicated instruction execution. Such a proof is not possible using UCLID and would require prohibitively more effort using just ACL2. This research was funded in part by NSF grants CCF-0429924, IIS-0417413, and CCF-0438871.

7.
In this paper we initiate the generalization of the Adversarial Queueing Theory (AQT) model to capture the dynamics of continuous scenarios in which the usually assumed synchronicity of the evolution is no longer required. We propose an asynchronous model, named continuous AQT (CAQT), in which packets can have arbitrary lengths, and the network links may have different speeds (or bandwidths) and propagation delays. With respect to the standard AQT model, these new features turn out to be significant for the stability of packet scheduling policies that take them into account, but not so much for the stability of networks. From the network point of view, we show that networks with directed acyclic topologies are universally stable, i.e., stable independently of the scheduling policies and traffic patterns used in them. Interestingly enough, this even holds for traffic patterns that make links fully loaded. Finally, it turns out that the set of universally stable networks remains the same as in the AQT model and, therefore, the property of universal stability of networks is decidable in polynomial time. Concerning packet scheduling policies, we show that the well-known LIS, SIS, FTG and NFS scheduling policies remain universally stable in the CAQT model. We also introduce other scheduling policies that, although universally stable in the AQT model, are unstable under the CAQT model. This work was partially supported by EU IST-2004-15964 (AEOLUS), by the Spanish Ministry of Education and Science under CICYT TIN2005-09198 (ASCE) and CICYT TIN2005-25859-E, by the Regional Government of Madrid under contract No. 07T/0022/2003, and by the Universidad Rey Juan Carlos under project No. PPR-2004-42. Also partially supported by the Universidad de Chile (Mecesup fellowship and Anillo en Redes). A preliminary version of this work was presented at the 30th International Symposium on Mathematical Foundations of Computer Science (MFCS), Gdansk, Poland, LNCS, vol. 3618, pp. 144–155, Springer.

8.
Context: In many organizational environments critical tasks exist which – in exceptional cases such as an emergency – must be performed by a subject although he/she is usually not authorized to perform these tasks. Break-glass policies have been introduced as a sophisticated exception handling mechanism to resolve such situations. They enable certain subjects to break or override the standard access control policies of an information system in a controlled manner.
Objective: In the context of business process modeling a number of approaches exist that allow for the formal specification and modeling of process-related access control concepts. However, corresponding support for break-glass policies is still missing. In this paper, we aim at specifying a break-glass extension for process-related role-based access control (RBAC) models.
Method: We use model-driven development (MDD) techniques to provide an integrated, tool-supported approach for the definition and enforcement of break-glass policies in process-aware information systems. In particular, we provide modeling support on the computation independent model (CIM) layer as well as on the platform independent model (PIM) and platform specific model (PSM) layers.
Results: Our approach is generic in the sense that it can be used to extend process-aware information systems or process modeling languages with support for process-related RBAC and corresponding break-glass policies. Based on the formal CIM layer metamodel, we present a UML extension on the PIM layer that allows for the integrated modeling of processes and process-related break-glass policies via extended UML Activity diagrams. We evaluated our approach in a case study on real-world processes. Moreover, we implemented our approach at the PSM layer as an extension to the BusinessActivity library and runtime engine.
Conclusion: Our integrated modeling approach for process-related break-glass policies allows for specifying break-glass rules in process-aware information systems.
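As a minimal sketch of break-glass enforcement at the access-decision level (not the paper's UML/MDD machinery; the policy structure, role names and logging scheme below are assumptions), an ordinary RBAC check can fall back to an override that is granted only in a logged, controlled manner:

```python
# Rough sketch of break-glass enforcement on top of ordinary RBAC checks.
# Policy structure, role names, and the logging scheme are illustrative assumptions.
import datetime

role_permissions = {"nurse": {"read_record"}, "doctor": {"read_record", "prescribe"}}
break_glass_permissions = {"nurse": {"prescribe"}}   # override allowed in emergencies
override_log = []

def check_access(user, role, permission, break_glass=False, reason=None):
    if permission in role_permissions.get(role, set()):
        return True                                   # ordinary RBAC decision
    if break_glass and permission in break_glass_permissions.get(role, set()):
        # Override is permitted, but only in a controlled, auditable manner.
        override_log.append((datetime.datetime.now(), user, role, permission, reason))
        return True
    return False

print(check_access("eve", "nurse", "prescribe"))                              # False
print(check_access("eve", "nurse", "prescribe",
                   break_glass=True, reason="emergency, no doctor on site"))  # True
print(override_log)
```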

9.
Access control policies may contain anomalies such as incompleteness and inconsistency, which can result in security vulnerabilities. Detecting such anomalies automatically in large sets of complex policies is a difficult and challenging problem. In this paper, we propose a novel method for detecting inconsistency and incompleteness in access control policies with the help of data classification tools well known in data mining. Our proposed method consists of three phases: first, we parse the policy data set; this includes ordering of attributes and normalization of Boolean expressions. Second, we generate decision trees with the help of our proposed algorithm, which is a modification of the well-known C4.5 algorithm. Third, we execute our proposed anomaly detection algorithm on the resulting decision trees. The results of the anomaly detection algorithm are presented to the policy administrator, who will take remediation measures. In contrast to other known policy validation methods, our method provides means for handling incompleteness, continuous values and complex Boolean expressions. In order to demonstrate the efficiency of our method in discovering inconsistencies, incompleteness and redundancies in access control policies, we also provide a proof-of-concept implementation.
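The paper detects these anomalies with decision trees built by a modified C4.5; purely to illustrate the two anomaly classes themselves, the following brute-force sketch (toy attribute domains and rules are assumptions, not the paper's data) flags requests that match conflicting rules or no rule at all:

```python
# Brute-force illustration of inconsistency and incompleteness in an access control
# policy; the paper's actual method uses decision trees (a modified C4.5 algorithm).
from itertools import product

roles = ["doctor", "nurse"]
actions = ["read", "write"]

# Each rule: (condition over (role, action), decision).
rules = [
    (lambda r, a: r == "doctor",                  "permit"),
    (lambda r, a: r == "doctor" and a == "write", "deny"),    # conflicts with rule 1
    (lambda r, a: r == "nurse" and a == "read",   "permit"),
]

for role, action in product(roles, actions):
    decisions = {d for cond, d in rules if cond(role, action)}
    if len(decisions) > 1:
        print(f"inconsistent: ({role}, {action}) matches rules with {decisions}")
    elif not decisions:
        print(f"incomplete: ({role}, {action}) is matched by no rule")
```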

10.
Modern multicore processors, such as the Cell Broadband Engine, achieve high performance by equipping accelerator cores with small “scratch-pad” memories. The price for increased performance is higher programming complexity – the programmer must manually orchestrate data movement using direct memory access (DMA) operations. Programming using asynchronous DMA operations is error-prone, and DMA races can lead to nondeterministic bugs which are hard to reproduce and fix. We present a method for DMA race analysis in C programs. Our method works by automatically instrumenting a program with assertions modeling the semantics of a memory flow controller. The instrumented program can then be analyzed using state-of-the-art software model checkers. We show that bounded model checking is effective for detecting DMA races in buggy programs. To enable automatic verification of the correctness of instrumented programs, we present a new formulation of k-induction geared towards software, as a proof rule operating on loops. Our techniques are implemented as a tool, Scratch, which we apply to a large set of programs supplied with the IBM Cell SDK, in which we discover a previously unknown bug. Our experimental results indicate that our k-induction method performs extremely well on this problem class. To our knowledge, this marks both the first application of k-induction to software verification, and the first example of software model checking in the context of heterogeneous multicore processors.
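To illustrate the k-induction proof rule itself, here is an explicit-state sketch on a tiny hand-made transition system (real tools discharge these checks with a solver over instrumented C programs, not by path enumeration): with k = 1 the inductive step fails, while k = 2 proves the property.

```python
# Explicit-state illustration of k-induction on a tiny transition system.
# The system and property are made up; real tools discharge these checks with a solver.

INIT = {0}
SUCC = {0: {0, 1}, 1: {0}, 2: set(), 3: {2}}   # state 3 is unreachable
P = lambda s: s != 2                            # safety property: never reach state 2

def paths(length, start_states):
    """All state sequences of the given length whose steps follow SUCC."""
    if length == 1:
        return [[s] for s in start_states]
    return [p + [t] for p in paths(length - 1, start_states) for t in SUCC[p[-1]]]

def k_induction(k):
    # Base case: P holds on every execution of up to k states from an initial state.
    base = all(P(s) for i in range(1, k + 1)
                     for p in paths(i, INIT) for s in p)
    # Inductive step: any k consecutive P-states are followed only by P-states.
    step = all(P(p[-1]) for p in paths(k + 1, SUCC.keys())
                        if all(P(s) for s in p[:-1]))
    return base and step

print(k_induction(1))   # False: state 3 satisfies P but can step to state 2
print(k_induction(2))   # True: nothing reaches 3, so the 2-step rule closes the proof
```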

11.
In this paper we describe the successful application of the ProB tool for data validation in several industrial applications. The initial case study centred on the San Juan metro system installed by Siemens. The control software was developed and formally proven with B. However, the development contains certain assumptions about the actual rail network topology which have to be validated separately in order to ensure safe operation. For this task, Siemens has developed custom proof rules for Atelier B. Atelier B, however, was unable to deal with about 80 properties of the deployment (running out of memory). These properties thus had to be validated by hand at great expense, and they need to be revalidated whenever the rail network infrastructure changes. In this paper we show how we were able to use ProB to validate all of the about 300 properties of the San Juan deployment, detecting automatically, in a few minutes, exactly the same faults that were uncovered manually in about one man-month. We have repeated this task for three ongoing projects at Siemens, notably the ongoing automation of line 1 of the Paris Métro. Here again, about a man-month of effort has been replaced by a few minutes of computation. This achievement required the extension of the ProB kernel for large sets as well as an improved constraint propagation algorithm. We also outline some of the effort and features that were required in moving from a tool capable of dealing with medium-sized examples towards a tool able to deal with actual industrial specifications. We also describe the issue of validating ProB itself, so that it can be integrated into the SIL4 development chain at Siemens.

12.
The software model checker Blast
Blast is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, Blast either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). Blast constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to Blast and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use Blast to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use Blast to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use Blast to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, Blast determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that Blast can provide automated, precise, and scalable analysis for C programs.

13.
Applying semantic knowledge to real-time update of access control policies
Real-time update of access control policies, that is, updating policies while they are in effect and enforcing the changes immediately, is necessary for many security-critical applications. In this paper, we consider real-time update of access control policies in a database system. Updating policies while they are in effect can lead to potential security problems, such as access to database objects by unauthorized users. We propose several algorithms that not only prevent such security breaches but also ensure the correctness of execution. The algorithms differ from each other in the degree of concurrency provided and the semantic knowledge used. Of the algorithms presented, the most concurrency is achieved when transactions are decomposed into atomic steps. Once transactions are decomposed, the atomicity, consistency, and isolation properties no longer hold. Since the traditional transaction processing model can no longer be used to ensure the correctness of the execution, we use an alternative semantics-based transaction processing model. To ensure correct behavior, our model requires an application to satisfy a set of necessary properties, namely semantic atomicity, consistent execution, sensitive transaction isolation, and policy compliance. We show how one can verify an application statically to check for the existence of these properties.

14.
The Internet has provided users with access to a wealth of information available on the Web, a considerable amount of which is semi-structured. In recent times, a need has grown to access this information in an interpretable and meaningful way. One way to access this information is by transforming it into relations and integrating the relations to arrive at a universal relation, thus ensuring simple and powerful searching of the underlying semi-structured data. Several operations exist for integrating these relations, such as outer union, natural inner join, natural outer join, etc., of which natural outer join has been found to be the most effective in terms of achieving meaningful results, i.e., the full disjunction of the relations. Since natural outer join is non-associative, for more than two relations only some of the natural outer join orderings may lead to full disjunction. The Correct Outer Join Ordering (COJO) strategy proposed in this paper computes such a natural outer join ordering of relations, leading to full disjunction. It is further shown that the natural outer join ordering of relations produced by algorithm COJO compares well with the well-known algorithm SOJO on performance parameters. This paper is an expanded version of [24].
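As a toy illustration of the kind of computation whose ordering COJO determines (the relations are made up and pandas stands in for a relational engine; this sketch does not itself find the full-disjunction ordering), a chain of natural full outer joins is evaluated in one particular order:

```python
# Toy illustration of chaining natural (full) outer joins in a chosen order.
# The relations are made up; COJO's contribution is choosing an ordering that
# yields the full disjunction, which this sketch does not compute itself.
import pandas as pd

r1 = pd.DataFrame({"A": [1, 2], "B": [10, 20]})
r2 = pd.DataFrame({"B": [10, 30], "C": [100, 300]})
r3 = pd.DataFrame({"C": [100, 400], "D": ["x", "y"]})

# Natural outer join = full outer join on the common attributes.
step1 = r1.merge(r2, on="B", how="outer")      # join on the shared attribute B
result = step1.merge(r3, on="C", how="outer")  # then on the shared attribute C
print(result)
```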

15.
Locating potential execution errors in software is gaining more attention due to the economic and social impact of software crashes. For this reason, many software engineers are now in need of automatic debugging tools in their development environments. Fortunately, the work on formal method technologies during the past 25 years has produced a number of techniques and tools that can make the debugging task almost automatic, using standard computer equipment and with a reasonable response time. In particular, verification techniques like model checking, traditionally employed for formal specifications of the software, can now be applied directly to real source code. Due to the maturity of model-checking technology, its application to real software is now a promising and realistic approach to increase software quality. There are already some successful examples of tools for this purpose that mainly work with self-contained programs (programs with no system calls). However, verifying software that uses external functionality provided by the operating system via APIs is currently a challenging trend. In this paper, we propose a method for using the tool Spin to verify C software systems that use services provided by the operating system through a given API. Our approach consists in building a model of the underlying operating system to be joined with the original C code in order to obtain the input for the model checker Spin. The whole modeling process is transparent for the C programmer, because it is performed automatically and without special syntactic constraints in the input C code. Regarding verification, we consider optimization techniques suitable for this application domain, and we guarantee that the system only reports potential (non-spurious) errors. We present the applicability of our approach focusing on the verification of distributed software systems that use the Socket API and the TCP/IP network protocol stack for communications. In order to ensure correctness, we define and use a formal semantics of the API to guide the construction of correct models.

16.
In this work we present Zeus, a distributed timed model checker that evolves from the TCTL model checker Kronos [13] and that currently can handle backwards computation of reachability properties [2] over timed automata [3]. Zeus was developed following a software architecture-centric approach. Its conceptual architecture was conceived to be sufficiently modular to house several features such as a priori graph partitioning, synchronous and asynchronous computation, communication piggybacking, delayed messaging, and dead-time utilization. Surprisingly enough, early experiments pinpointed the difficulties of obtaining speedups with the asynchronous versions and showed interesting results for the synchronous counterpart, despite it being intuitively less attractive.

17.

Local occlusion cues have been successfully exploited to infer depth ordering from a monocular image. However, due to the uncertainty of occlusion relations, inconsistent results frequently arise, especially for images of complex scenes. We propose a depth propagation mechanism which incorporates local occlusion and global ground cues together in a probabilistic-to-energetic Bayesian framework. By maximizing the posterior, namely minimizing the energy of latent relative depth variables with a well-defined pairwise occlusion prior, we recover the correct depth ordering in the monocular setting. Our model can guarantee the consistency of relative depth labeling in an automatically constructed topological graph by transferring more confident aligned multi-depth cues among different segments. Experiments demonstrate that more reasonable and accurate outcomes can be achieved by our depth propagation mechanism, and that it is also superior to commonly used occlusion-based approaches in complex natural scenes.

18.
The aim of process mining is to discover the process model from the event log recorded by an information system. The typical steps of a process mining algorithm can be described as: (1) generating event traces from the event log, (2) analyzing event traces and obtaining ordering relations of tasks, (3) generating the process model from the ordering relations of tasks. The first two steps can be very time consuming, involving millions of events and thousands of event traces. This paper presents a novel algorithm (the λ-algorithm) which almost eliminates these two steps (generating event traces from the event log and analyzing event traces), so as to reduce the running time of process mining. First, we retrieve the event multiset (the input data of the algorithm, denoted MS), which records the frequency of each event but ignores their order when extracted from the event log; each event in the event multiset carries information about its post-activities. Second, we obtain ordering relations from the event multiset; the ordering relations comprise causal dependency, potential parallelism and non-potential parallelism. Finally, we discover a process model from the ordering relations. The complexity of the λ-algorithm depends only on the event classes (the set of event types in the event log), which significantly improves the performance over existing process mining algorithms and is expected to be more practical in real-world process mining based on event logs, while still being able to detect SWF-nets, short loops and most implicit dependencies (generated by non-free-choice constructions).
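Purely as an illustration of the ordering relations involved (causal dependency versus potential parallelism), the following sketch derives them from traces in the style of the classic alpha-algorithm footprint; note that the paper's λ-algorithm instead works from an event multiset, and the log below is made up:

```python
# Illustration of deriving ordering relations from an event log.
# This is the classic trace-based "footprint" computation, not the paper's
# λ-algorithm, which works from an event multiset instead of traces.
from collections import defaultdict

log = [["a", "b", "c", "d"], ["a", "c", "b", "d"], ["a", "e", "d"]]  # made-up traces

follows = defaultdict(bool)           # direct-follows relation x > y
for trace in log:
    for x, y in zip(trace, trace[1:]):
        follows[(x, y)] = True

events = {e for trace in log for e in trace}
for x in sorted(events):
    for y in sorted(events):
        if follows[(x, y)] and not follows[(y, x)]:
            print(f"{x} -> {y}   (causal dependency)")
        elif follows[(x, y)] and follows[(y, x)]:
            print(f"{x} || {y}   (potential parallelism)")
```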

19.
Improved Parameterized Set Splitting Algorithms: A Probabilistic Approach
In this paper, we study parameterized algorithms for the set splitting problem, for both weighted and unweighted versions. First, we develop a new and effective technique based on a probabilistic method that allows us to develop a simpler and more efficient deterministic kernelization algorithm for the unweighted set splitting problem. We then propose a randomized algorithm for the weighted set splitting problem that is based on a new subset partition technique and has its running time bounded by O*(2^k), which is significantly better than that of the previous best deterministic algorithm (which only works for the simpler unweighted set splitting problem) of running time O*(2.65^k). We also show that our algorithm can be de-randomized, which leads to a deterministic parameterized algorithm of running time O*(4^k) for the weighted set splitting problem and gives the first proof that the problem is fixed-parameter tractable. A preliminary version of this paper was presented at the 13th Annual International Computing and Combinatorics Conference (COCOON 2007), Banff, Canada, July 2007, LNCS vol. 4598, pp. 537–547. This work was supported in part by the National Science Foundation under Grant CCF-0430683.
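The elementary probabilistic idea behind such algorithms can be sketched as random 2-coloring of the elements (this is not the paper's O*(2^k) algorithm or its derandomization, only the basic random-coloring step; the instance below is made up):

```python
# The basic probabilistic idea behind randomized set splitting: color elements
# with two colors at random and count the sets that receive both colors.
# This is only the elementary random-coloring step, not the paper's algorithm.
import random

sets = [{1, 2, 3}, {2, 4}, {3, 5, 6}, {1, 6}]   # made-up instance
k = 3                                           # want at least k split sets

def random_split(sets, k, tries=1000):
    elements = set().union(*sets)
    for _ in range(tries):
        color = {e: random.randint(0, 1) for e in elements}
        split = [s for s in sets if len({color[e] for e in s}) == 2]
        if len(split) >= k:
            return color, split
    return None

result = random_split(sets, k)
if result:
    coloring, split_sets = result
    print(f"{len(split_sets)} sets split by coloring {coloring}")
```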

20.
The purpose of this paper is to introduce a theory of fuzzily defined complement operations on nonempty sets equipped with fuzzily defined ordering relations. Many-valued equivalence relation-based fuzzy ordering relations (also called vague ordering relations) provide a powerful and comprehensive mathematical modelling of fuzzily defined partial ordering relations. For this reason, starting with a nonempty set X equipped with a many-valued equivalence relation and a vague ordering relation, a fuzzily defined complement operation (called a vague complement operation) on X will be formulated by means of the underlying many-valued equivalence relation and vague ordering relation. Because the practical implementations of vague complement operations depend essentially on their representation properties, a considerable part of this paper is devoted to the representations of vague complement operations. In addition, the present paper provides various nontrivial examples of vague complements and introduces a many-valued logical interpretation of quantum logic as a real application of vague complements.

