Found 20 similar documents; search took 31 ms.
1.
Nicoletta De Francesco Luca Martini 《International Journal of Information Security》2007,6(2-3):85-106
We present a method based on abstract interpretation to check secure information flow in programs with dynamic structures
where input and output channels are associated with security levels. In the concrete operational semantics each value is annotated
by a security level dynamically taking into account both the explicit and the implicit information flows. We define a collecting
semantics which associates with each program point the set of concrete states of the machine when the point is reached. The
abstract domains are obtained from the concrete ones by keeping the security levels and forgetting the actual values. Using
this framework, we define an abstract semantics, called instruction-level security typing, that allows us to certify a larger
set of programs than the typing approaches to checking secure information flow. An efficient implementation is shown,
operating a fixpoint iteration similar to that of the Java bytecode verification.
This work was partially supported by the Italian COFIN 2004 project “AIDA: Abstract Interpretation Design and Application”.
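The dynamic annotation of values with security levels that this abstract describes can be sketched in a few lines (a minimal illustration, not the authors' formalism; the two-point lattice, the `Tainted` wrapper, and the `pc` parameter for implicit flows are all illustrative assumptions):

```python
from enum import IntEnum

class Level(IntEnum):
    """A two-point security lattice: LOW < HIGH."""
    LOW = 0
    HIGH = 1

def join(a: Level, b: Level) -> Level:
    """Least upper bound of two security levels."""
    return Level(max(a, b))

class Tainted:
    """A value annotated with a security level, mirroring a concrete
    semantics in which every value carries its level."""
    def __init__(self, value, level: Level):
        self.value = value
        self.level = level

def add(x: Tainted, y: Tainted, pc: Level = Level.LOW) -> Tainted:
    """Explicit flow: the result's level joins the operands' levels.
    The 'pc' (program counter) level accounts for implicit flows
    from the guards of enclosing conditionals."""
    return Tainted(x.value + y.value, join(join(x.level, y.level), pc))
```

Abstracting this semantics then amounts to keeping only the `level` component and forgetting `value`, as the abstract explains.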
2.
邵志清 《计算机科学技术学报》1993,8(2):155-161
In this paper we introduce a new approach to the operational semantics of recursive programs using ideas from the “priority method”, a fundamental tool in Recursion Theory. Instead of modelling partial functions by introducing undefined values, as in the traditional approach, we define a priority derivation tree for every term and, by following the rule “attack the subterm of the highest priority first”, we define transition relations, computation sequences, etc. directly on a standard interpretation which includes no undefined value in its domain. Finally, we prove that our new approach generates the same operational semantics as the traditional one. We also point out that our strategy can be used to refute a claim of Loeckx and Sieber that the operational semantics of recursive programs cannot be built on predicate logic.
3.
Steve Lipner 《Datenschutz und Datensicherheit - DuD》2010,34(3):135-137
The increasing adoption of “client and cloud” computing raises several important concerns about security. This article discusses
security issues that are associated with “client and cloud” and their impact on organizations that host applications “in the
cloud.” It describes how Microsoft minimizes the security vulnerabilities in these, possibly mission-critical, platforms and
applications by following two complementary approaches: developing the policies, practices, and technologies to make their
“client and cloud” applications as secure as possible, and managing the security of the platform environment through clearly
defined operational security policies.
4.
Jesper M. Johansson 《Information Technology and Management》2000,1(3):183-194
Research in distributed database systems to date has assumed a “variable cost” model of network response time. However, network
response time has two components: transmission time (variable with message size) and latency (fixed). This research improves
on existing models by incorporating a “fixed plus variable cost” model of the network response time. In this research, we:
(1) develop a distributed database design approach that incorporates a “fixed plus variable cost”, network response time function;
(2) run a set of experiments to create designs using this model, and
(3) evaluate the impact the new model had on the design in various types of networks.
This revised version was published online in July 2006 with corrections to the Cover Date.
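The “fixed plus variable cost” model described above can be stated directly (a sketch; the function names, parameters, and units are illustrative assumptions):

```python
def response_time(message_bytes: int,
                  latency_s: float,
                  bandwidth_bps: float) -> float:
    """'Fixed plus variable cost' network response time: a fixed
    latency term plus a transmission term proportional to size."""
    return latency_s + (message_bytes * 8) / bandwidth_bps

def variable_only_time(message_bytes: int,
                       bandwidth_bps: float) -> float:
    """The traditional 'variable cost' model ignores fixed latency."""
    return (message_bytes * 8) / bandwidth_bps
```

For small messages on a high-latency link the two models diverge sharply, which is why a distributed database design optimized under the variable-only model can perform poorly on a wide-area network.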
5.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of “crypt-equivalence”
is introduced and studied w.r.t. two “loose” approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly
defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting
unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate
logic formula with the same properties for the other specification. We speak of “first-order crypt-equivalence” if this holds
for all first-order models, and of “inductive crypt-equivalence” if this holds for all term-generated models. Characterizations
and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent
to the existence of explicit definitions and that in case of “positive definability” two first-order crypt-equivalent specifications
admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent
via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared
with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence
is strictly coarser than “abstract semantic equivalence” and that inductive crypt-equivalence is strictly finer than “inductive
simulation equivalence” and “implementation equivalence”.
6.
Distributed authorization is an essential issue in computer security. Recent research shows that trust management is a promising
approach for the authorization in distributed environments. There are two key issues for a trust management system: how to
design an expressive high-level policy language and how to solve the compliance-checking problem (Blaze et al. in Proceedings
of the Symposium on Security and Privacy, pp. 164–173, 1996; Proceedings of 2nd International Conference on Financial Cryptography
(FC’98). LNCS, vol.1465, pp. 254–274, 1998), where ordinary logic programming has been used to formalize various distributed
authorization policies (Li et al. in Proceedings of the 2002 IEEE Symposium on Security and Privacy, pp. 114–130, 2002; ACM
Trans. Inf. Syst. Secur. (TISSEC) 6(1):128–171, 2003). In this paper, we employ Answer Set Programming to deal with many complex
issues associated with the distributed authorization along the trust management approach. In particular, we propose a formal
authorization language providing its semantics through Answer Set Programming. Using language , we cannot only express nonmonotonic delegation policies which have not been considered in previous approaches, but also
represent the delegation with depth, separation of duty, and positive and negative authorizations. We also investigate basic
computational properties related to our approach. Through two case studies. we further illustrate the application of our approach
in distributed environments. 相似文献
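Delegation with bounded depth, one of the policy features this abstract mentions, can be illustrated with a plain imperative check (a sketch only; the paper encodes such policies in Answer Set Programming, and the data layout here is an assumption):

```python
def can_authorize(delegations, issuer, subject, permission, max_depth):
    """Check whether 'issuer' (transitively) delegates 'permission'
    to 'subject' within 'max_depth' delegation steps.
    'delegations' maps (grantor, grantee) -> set of permissions."""
    frontier = {issuer}
    for _ in range(max_depth):
        next_frontier = set()
        for (grantor, grantee), perms in delegations.items():
            if grantor in frontier and permission in perms:
                if grantee == subject:
                    return True
                next_frontier.add(grantee)
        frontier = next_frontier
    return False
```

An ASP encoding would express the same reachability declaratively and could additionally handle negative authorizations and defaults, which this imperative sketch does not.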
7.
Ensuring causal consistency in a Distributed Shared Memory (DSM) means that all operations executed at each process comply with a causality order relation. This paper first introduces an optimality criterion for a protocol P, based on complete replication of variables at each process and propagation of write updates, that enforces causal consistency.
This criterion measures the capability of a protocol to update the local copy as soon as possible while respecting causal
consistency. Then we present an optimal protocol built on top of a reliable broadcast communication primitive and we show
how previous protocols based on complete replication presented in the literature are not optimal. Interestingly, we prove
that the optimal protocol embeds a system of vector clocks which captures the read/write semantics of a causal memory. From
an operational point of view, an optimal protocol strongly reduces its message buffer overhead. Simulation studies show that the optimal protocol buffers roughly an order of magnitude fewer messages than non-optimal ones based on the same communication primitive.
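The role of vector clocks in such a complete-replication protocol can be sketched with the classical causal-delivery condition (a minimal illustration, not the paper's optimal protocol):

```python
def causally_ready(msg_vc, sender, local_vc):
    """Standard causal-delivery condition: a write update carrying
    vector clock 'msg_vc' from process 'sender' may be applied at a
    replica with local vector clock 'local_vc' iff it is the
    sender's next update and every update the sender had already
    seen has been applied locally."""
    if msg_vc[sender] != local_vc[sender] + 1:
        return False
    return all(msg_vc[k] <= local_vc[k]
               for k in range(len(local_vc)) if k != sender)
```

Updates that are not yet ready are buffered; the abstract's point is that an optimal protocol applies them as early as causality allows, keeping this buffer small.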
R. Baldoni Roberto Baldoni is a Professor of Distributed Systems at the University of Rome “La Sapienza”. He has published more than one hundred papers (from theory to practice) in the fields of distributed and mobile computing, middleware platforms, and information systems. He is the founder of the MIDdleware LABoratory (MIDLAB, www.dis.uniroma1.it/~midlab), whose members participate in national and European research projects. He regularly serves as an expert for the EU commission in the evaluation of EU projects. Roberto Baldoni chaired the program committee of the “distributed algorithms” track of the 19th IEEE International Conference on Distributed Computing Systems (ICDCS-99) and was PC Co-chair of the ACM International Workshop on Principles of Mobile Computing (POMC). He has also been involved in the organizing and program committees of many premier international conferences and workshops.
A. Milani Alessia Milani is currently pursuing a joint research doctorate between the Department of Computer and Systems Science of the University of Rome “La Sapienza” and the University of Rennes I, IRISA. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in May 2003. Her research activity is in the area of distributed systems. Her current research interests include communication paradigms, in particular distributed shared memories, replication, and consistency criteria.
S. Tucci Piergiovanni Sara Tucci Piergiovanni is currently a Ph.D. student at the Department of Computer and Systems Science of the University of Rome “La Sapienza”. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in March 2002 with marks 108/110. Her Laurea thesis was awarded the Italian national “Federcommin-AICA” prize 2002 for the best Laurea thesis in Information Technology. Her research activity is in the area of distributed systems. Her early work addressed fault tolerance in asynchronous systems and software replication. Currently, her main focus is on communication paradigms that provide “anonymous” communication, such as publish/subscribe and distributed shared memories. The core contributions are several papers published in international conferences and journals.
8.
Timed transition systems are one of the most popular real-time models for concurrency. In the paper, various behavioral equivalences
of timed transition systems are defined and studied. In particular, categories of this model are constructed, and their properties
are studied. In addition, based on the open maps concept, abstract characterization of the considered equivalences is given.
Such an approach makes it possible to develop a metatheory designed for unified definition and study of timed behavioral equivalences
in the “linear time-branching time” spectrum of semantics.
9.
Handling message semantics with Generic Broadcast protocols
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,”
that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about
messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages
is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic
Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
Received: August 2000 / Accepted: August 2001
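The conflict relation at the heart of Generic Broadcast can be illustrated as follows (a hypothetical example for a replicated register, not the paper's algorithms): two reads commute and need no mutual order, while any pair involving a write must be ordered. With an empty conflict relation Generic Broadcast degenerates to Reliable Broadcast; with the total relation it behaves like Atomic Broadcast.

```python
def conflict(m1, m2):
    """Example conflict relation: two operations conflict unless
    both are reads (reads commute on a replicated register)."""
    return not (m1["op"] == "read" and m2["op"] == "read")

def pairs_to_order(batch):
    """Pairs of messages the protocol must totally order;
    non-conflicting pairs may be delivered in any relative order."""
    return [(a["id"], b["id"])
            for i, a in enumerate(batch)
            for b in batch[i + 1:]
            if conflict(a, b)]
```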
10.
Christian Bassac Bruno Mery Christian Retoré 《Journal of Logic, Language and Information》2010,19(2):229-245
After a quick overview of the field of study known as “Lexical Semantics”, where we advocate the need of accessing additional
information besides syntax and Montague-style semantics at the lexical level in order to complete the full analysis of an
utterance, we summarize the current formulations of a well-known theory of that field. We then propose and justify our own
model of the Generative Lexicon Theory, based upon a variation of classical compositional semantics, and outline its formalization.
Additionally, we discuss the theoretical place of informational, knowledge-related data supposed to exist within the lexicon
as well as within discourse and other linguistic constructs. The formalization of the structure of natural language utterances
around a surface form (phenogrammatics), a deep structure (tectogrammatics) and the meaning thereof as a logical form (semantics)
has developed from the original theories of Curry and Montague to form coherent, type-driven models. Most of these new theories
rely upon variations of the compositional analysis of the sentence: from pheno to tectogrammatics, and then to semantics.
Our contribution to this work aims at giving such a model a means to overcome the problems posed by polysemous lexical units during the semantic analysis of the tectogrammatical form. Building upon an assumed “deep structure”, we formalize parts
of Pustejovsky’s Generative Lexicon Theory, linguistically motivated in Pustejovsky (The generative lexicon, MIT Press, Cambridge,
MA, 1995), in a pre-processing of the semantics of the sentence. The mechanisms of Lexical Semantics we propose are an additional
layer of classical Montague compositional semantics, and, as such, integrate smoothly within such an analysis; we proceed
by converting the lexical data to modifiers of the logical form. This treatment of Lexical Semantics furthermore induces us
to think that some sort of non-evident background knowledge of the common use of words is necessary to perform a correct semantic
analysis of an utterance. This “commonsense metaphysics” would therefore not be strictly confined to pragmatics, as is often
assumed.
11.
The class of software which is “surreptitiously installed on a user’s computer and monitors a user’s activity and reports back to a third party on that behavior” is referred to as spyware (Stafford and Urbaczewski in Communications of the AIS 14:291–306, 2004). It is a strategic imperative that software vendors, who either embed surreptitious data collection and
other operations in legitimate software applications or whose software is unwittingly used as a delivery vehicle for surreptitious
operations, understand users’ perceptions of trust, privacy, and legal protection of such software to remain competitive.
This paper develops and tests a research model to explore application software users’ perceptions in the use of software with
embedded surreptitious operations. An experiment was undertaken to examine whether the presence of spyware in application
software impacts users’ perceptions and beliefs about trustworthiness of the application software, privacy control of the
software vendor, United States legal protection, and overall trust of the software vendor. The results indicate users of software
with spyware, versus users of software without spyware, have lower trust perceptions of a software vendor. Further examination
of trustworthiness as a multi-dimensional construct reveals a software vendor’s competence in appropriately using private
user information collected and the user’s belief that the vendor will abide by acceptable principles in information exchange
are important influences in gaining users’ overall trust in a vendor. User trust in software utilization is critical for a
software vendor’s success because without it, users may avoid a vendor’s software should the presence of spyware be discovered.
Software vendors should respond to the strategic necessity to gain users’ trust. Vendors must institute proactive and protective
measures to demonstrate that their software should be trusted. These protections could take the form of technological approaches
or government legislation, or both.
12.
Roberto A. Flores Philippe Pasquier Brahim Chaib-draa 《Autonomous Agents and Multi-Agent Systems》2007,14(2):165-186
We propose an operational model that combines message meaning and conversational structure in one comprehensive approach.
Our long-term research goal is to lay down principles uniting message meaning and conversational structure while providing
an operational foundation that could be implemented in open computer systems. In this paper we explore our advances in one
aspect of meaning that in theories of language use is known as “signal meaning”, and propose a layered model in which the
meaning of messages can be defined according to their fitness to advance the state of joint activities. Messages in our model
are defined in terms of social commitments, which have been shown to entice conversational structure.
13.
14.
Gu Junzhong 《计算机科学技术学报》1993,8(4):3-20
In object-oriented database systems (OOBSs), the traditional transaction models are no longer suitable because of the difference between the object-oriented data model (OODM) and the conventional data models (e.g. the relational data model). In this paper, transaction models for advanced database applications are reviewed and their shortcomings are analyzed. Exchangeability of operations is proposed in place of commutativity and recoverability, so as to use more semantics in transaction management. As a result, an object-oriented transaction model (in short, OOTM) is presented. It is not modeled for some special application, but is based directly on object-oriented paradigms. A transaction is regarded as an interpretation of a method. Each transaction (even subtransactions) keeps relative ACID (Atomicity, Consistency, Isolation, Durability) properties; therefore the special problems appearing in OOBSs, such as “long transactions” and “visibility of inconsistent database state”, can be solved.
15.
Martin Grohe Yuri Gurevich Dirk Leinders Nicole Schweikardt Jerzy Tyszkiewicz Jan Van den Bussche 《Theory of Computing Systems》2009,44(4):533-560
We introduce a new abstract model of database query processing, finite cursor machines, that incorporates certain data streaming aspects. The model describes quite faithfully what happens in so-called “one-pass”
and “two-pass query processing”. Technically, the model is described in the framework of abstract state machines. Our main
results are upper and lower bounds for processing relational algebra queries in this model, specifically, queries of the semijoin
fragment of the relational algebra.
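A flavor of the model: a semijoin over inputs sorted on the join key can be computed in one pass, advancing a single forward cursor over each input with no random access (a sketch of the idea, not the paper's formal machine model):

```python
def semijoin_sorted(r, s):
    """One-pass semijoin of R with S over streams sorted on the join
    key: emit each R-tuple (key, payload) whose key appears in the
    stream of S-keys, using one forward cursor per input."""
    out, it_s = [], iter(s)
    s_key = next(it_s, None)
    for r_key, payload in r:
        # Advance the S-cursor past keys smaller than the current R-key.
        while s_key is not None and s_key < r_key:
            s_key = next(it_s, None)
        if s_key == r_key:
            out.append((r_key, payload))
    return out
```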
16.
Luca Aceto 《Formal Aspects of Computing》1994,6(2):201-222
This paper proposes alternative, effective characterizations for nets of automata of the location equivalence and preorder presented by Boudol et al. in the companion paper [BCHK]. Contrary to the technical development in the above given reference, where locations are dynamically associated to the subparts of a process in the operational semantics, the equivalence and preorder we propose are based on a static association of locations to the parallel components of a net. Following this static approach, it is possible to give these distributed nets a standard operational semantics which associates with each net a finite labelled transition system. Using this operational semantics for distributed nets, we introduce effective notions of equivalence and preorder which are shown to coincide with those proposed in [BCHK]. 相似文献
17.
Carlo Combi Giuseppe Pozzi 《The VLDB Journal The International Journal on Very Large Data Bases》2001,9(4):294-311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (The term is the transliteration of an ancient Greek poetical word meaning “day”.), a temporal data model extending the capability
of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of data. In this paper, we also discuss the complexity of algorithms, allowing
us to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
Received 6 August 1998 / Accepted 13 July 2000 / Published online: 13 February 2001
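The three-valued logic for indeterminate temporal relationships can be sketched as follows (an illustrative rendering, not HMAP's actual formula language; representing each indeterminate bound as a known (lo, hi) range is an assumption):

```python
from enum import Enum

class TV(Enum):
    """Three truth values for temporal relations under indeterminacy."""
    TRUE = "true"
    FALSE = "false"
    UNDEFINED = "undefined"

def before(end_a, start_b):
    """Does interval A end strictly before interval B starts, when
    each bound is only known to lie within a (lo, hi) range?"""
    a_lo, a_hi = end_a
    b_lo, b_hi = start_b
    if a_hi < b_lo:
        return TV.TRUE       # certainly before
    if a_lo >= b_hi:
        return TV.FALSE      # certainly not before
    return TV.UNDEFINED      # indeterminate: could be either
```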
18.
FGSPEC is a wide-spectrum specification language intended to facilitate software specification and the expression of the transformation process from the functional specification, which describes “what to do”, to the corresponding design (operational) specification, which describes “how to do it”. The design emphasizes the coherence of multi-level specification mechanisms, and a tree structure model is provided which unifies the wide-spectrum specification styles from “what” to “how”.
19.
Arvind Arasu Shivnath Babu Jennifer Widom 《The VLDB Journal The International Journal on Very Large Data Bases》2006,15(2):121-142
CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative
language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics
that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general
interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to
relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from
relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query
execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing
of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide
variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language
contains an appropriate set of constructs for data stream processing.
Edited by M. Franklin
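The stream-to-relation mappings the abstract describes can be sketched with a time-based sliding window (a minimal illustration of the idea behind CQL's RANGE windows; the tuple layout and window boundary convention here are assumptions):

```python
def sliding_window(stream, range_s):
    """Stream-to-relation mapping: given a stream of (timestamp,
    tuple) pairs, return a function that, evaluated at time 'now',
    yields the relation of tuples from the last 'range_s' seconds."""
    def at(now):
        return [t for ts, t in stream if now - range_s < ts <= now]
    return at
```

A relation-to-stream operator would then turn changes in this time-varying relation back into a stream, completing the mapping cycle the abstract outlines.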