20 similar documents found (search time: 0 ms)
1.
An algebra for probabilistic databases    Cited by: 3 (self-citations: 0, by others: 3)
An algebra is presented for a simple probabilistic data model that may be regarded as an extension of the standard relational model. The probabilistic algebra is developed in such a way that (restricted to α-acyclic database schemes) the relational algebra is a homomorphic image of it. Strictly probabilistic results are emphasized. Variations on the basic probabilistic data model are discussed. The algebra is used to explicate a commonly used statistical smoothing procedure and is shown to be potentially very useful for decision support with uncertain information.
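The paper's algebra is not reproduced in this listing, but the core idea of attaching probabilities to tuples can be sketched in a few lines. The following is an illustrative simplification under an independence assumption, not the paper's actual definitions; the names `p_select` and `p_project` are invented for the example.

```python
# Illustrative sketch: a probabilistic relation maps each tuple to a
# probability of membership in the relation.

def p_select(rel, pred):
    """Selection keeps qualifying tuples with their probabilities."""
    return {t: p for t, p in rel.items() if pred(t)}

def p_project(rel, attrs):
    """Projection merges tuples that coincide on `attrs`; assuming
    independence, P(t1 or t2) = 1 - (1 - p1)(1 - p2)."""
    out = {}
    for t, p in rel.items():
        key = tuple(t[i] for i in attrs)
        out[key] = 1 - (1 - out.get(key, 0.0)) * (1 - p)
    return out

employees = {("alice", "sales"): 0.9, ("bob", "sales"): 0.5}
depts = p_project(employees, attrs=(1,))
# P that ("sales",) appears = 1 - 0.1 * 0.5 = 0.95
```

Projection is where the probabilistic case departs from the relational one: duplicates do not simply collapse, their probabilities combine.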
2.
An extended authorization model for relational databases    Cited by: 3 (self-citations: 0, by others: 3)
Bertino E., Samarati P., Jajodia S. IEEE Transactions on Knowledge and Data Engineering, 1997, 9(1):85-101
We propose two extensions to the authorization model for relational databases defined originally by P.G. Griffiths and B. Wade (1976). The first extension concerns a new type of revoke operation, called noncascading revoke operation. The original model contains a single, cascading revoke operation, meaning that when a privilege is revoked from a user, a recursive revocation takes place that deletes all authorizations granted by this user that do not have other supporting authorizations. The new type of revocation avoids the recursive revocation of authorizations. The second extension concerns negative authorization, which permits specification of explicit denial for a user to access an object under a particular mode. We also address the management of views and groups with respect to the proposed extensions.
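The difference between the two revoke semantics can be sketched with a toy model of grant chains (not the paper's formal system; the owner name `dba` and the reattribution rule in the noncascading variant are my own simplifications):

```python
# Toy model: each authorization is a (grantor, grantee) edge; the
# owner "dba" is the root of all grant chains.

def holders(grants, root="dba"):
    """Users reachable from the owner via grant edges hold the privilege."""
    held, frontier = {root}, [root]
    while frontier:
        g = frontier.pop()
        for grantor, grantee in grants:
            if grantor == g and grantee not in held:
                held.add(grantee)
                frontier.append(grantee)
    return held

def revoke_cascading(grants, grantor, grantee, root="dba"):
    """Griffiths-Wade semantics: remove the grant, then recursively
    drop grants whose grantor no longer holds the privilege via some
    other supporting chain."""
    grants = grants - {(grantor, grantee)}
    while True:
        held = holders(grants, root)
        dangling = {(g, u) for (g, u) in grants if g not in held}
        if not dangling:
            return grants
        grants = grants - dangling

def revoke_noncascading(grants, grantor, grantee, root="dba"):
    """The paper's noncascading variant, roughly: grants made by the
    revokee survive, reattributed here to the revoker so that their
    chains stay supported."""
    grants = grants - {(grantor, grantee)}
    return {(grantor if g == grantee else g, u) for (g, u) in grants}

grants = {("dba", "a"), ("a", "b"), ("b", "c")}
```

On this chain, `revoke_cascading(grants, "dba", "a")` wipes out every downstream authorization, whereas `revoke_noncascading(grants, "dba", "a")` revokes only a's own access and keeps b and c authorized.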
3.
VASA: An algebra for vague spatial data in databases    Cited by: 1 (self-citations: 0, by others: 1)
Many geographical applications deal with objects in space that cannot be adequately described by determinate, crisp spatial concepts because of their intrinsically indeterminate and vague nature. Geographical information systems and spatial database systems are currently unable to cope with this kind of data. To support the efficient representation, querying, and manipulation of vague spatial data in a database context, we present a formal data model called vague spatial algebra (VASA). This algebra comprises a set of vague spatial data types for vague points, vague lines, and vague regions together with a comprehensive collection of vague spatial operations and vague topological predicates. One of VASA's main benefits is that its formal framework is based on well known, general, and exact models of crisp spatial data types. This enables an exact definition of the vague spatial model since we can build upon an already existing theory of spatial data types. In particular, crisp spatial data types turn out to be a special case of their vague counterparts. In addition, our approach enables executable specifications for the operations, which can be immediately used as implementations. The article offers a precise and conceptually clean foundation for implementing a DBMS extension for vague spatial data and demonstrates the embedding of these new data types as attribute data types in a database schema as well as the incorporation of vague spatial operations and predicates into queries formulated in an SQL-like query language. All concepts have been verified in a prototype implementation.
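A minimal flavor of the "vague = kernel plus conjecture" idea can be given with plain sets (my own toy rendering; VASA's actual types are point, line, and region structures over exact crisp spatial models, not finite sets):

```python
# Toy rendering: a vague set is a pair (kernel, conjecture) of
# disjoint crisp sets -- elements definitely in, and elements
# possibly in. A crisp set is the special case with an empty
# conjecture part, mirroring the abstract's remark that crisp types
# are a special case of their vague counterparts.

def v_union(a, b):
    (ka, ca), (kb, cb) = a, b
    kernel = ka | kb                      # definitely in either
    return (kernel, (ca | cb) - kernel)   # possibly in, not certain

def v_intersection(a, b):
    (ka, ca), (kb, cb) = a, b
    kernel = ka & kb                      # definitely in both
    possible = (ka | ca) & (kb | cb)      # possibly in both
    return (kernel, possible - kernel)

a = ({1, 2}, {3})
b = ({2, 4}, {3, 5})
```

Here `v_union(a, b)` yields kernel `{1, 2, 4}` with conjecture `{3, 5}`, and `v_intersection(a, b)` yields kernel `{2}` with conjecture `{3}`.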
4.
Paul S., Prakash A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, 22(3):202-217
Querying source code is an essential aspect of a variety of software engineering tasks such as program understanding, reverse engineering, program structure analysis and program flow analysis. In this paper, we present and demonstrate the use of an algebraic source code query technique that blends expressive power with query compactness. The query framework of Source Code Algebra (SCA) permits users to express complex source code queries and views as algebraic expressions. Queries are expressed on an extensible, object-oriented database that stores program source code. The SCA algebraic approach offers multiple benefits such as an applicative query language, high expressive power, seamless handling of structural and flow information, clean formalism and potential for query optimization. We present a case study where SCA expressions are used to query a program in terms of program organization, resource flow, control flow, metrics and syntactic structure. Our experience with an SCA-based prototype query processor indicates that an algebraic approach to source code queries combines the benefits of expressive power and compact query formulation.
5.
Information Systems, 1997, 22(8):423-446
In today's technologically diverse corporate environment, it is common to find several different databases being used to accomplish the organization's operational data management functions. Providing interoperability among these databases is important to the successful operation of the organization. One approach to providing interoperability among heterogeneous database systems is to define one or more schemas which represent a coherent view of the underlying databases. In the past, most approaches have used schematic knowledge about the underlying databases to generate integrated representations of the databases. In this paper we present a seven-step methodology for utilizing integrity constraint knowledge from heterogeneous databases. Specifically, we describe how we can generate a set of integrity constraints applicable at the integrated level from constraints specified on local databases. We introduce the concept of constraint-based relationships between objects in heterogeneous databases and describe the role that these relationships play in integrity constraint integration. Finally, we describe how the integrated set of constraints generated using our methodology can be used to facilitate semantic query processing in a heterogeneous database environment.
6.
Chomicki J., Goldin D., Kuper G., Toman D. IEEE Transactions on Knowledge and Data Engineering, 2003, 15(6):1422-1436
In this paper, we study constraint databases with variable independence conditions (vics). Such databases occur naturally in the context of temporal and spatiotemporal database applications. Using computational geometry techniques, we show that variable independence is decidable for linear constraint databases. We also present a set of rules for inferring vics in relational algebra expressions. Using vics, we define a subset of relational algebra that is closed under restricted aggregation.
7.
Levent V. Orman. Electronic Commerce Research, 2008, 8(1-2):29-56
Electronic markets rely on database technology for efficient search for products, services, customers, and trading partners. Yet database technology was not designed to build electronic markets, and it often fails to fully support market search activities. Market search often requires browsing and incremental search, attribute and constraint tradeoffs, approximate matches, and incorporation of user and task characteristics into searches. None of these requirements is directly supported by database technology. Three major extensions to database technology are proposed to effectively support electronic markets. First, incremental search techniques are proposed to allow users to browse databases, and to move from solution to solution by following attribute and constraint tradeoffs. These techniques require systems to have some knowledge of similarity, distance, direction, and tradeoffs. Second, typing and stereotyping techniques are introduced to allow users to relate task and user characteristics to product attributes. These techniques require integration of typing systems with database searches. Third, a comprehensive search strategy is developed that combines incremental search and stereotyping techniques by utilizing product, task, and user ontologies.
8.
Peter Nightingale. Artificial Intelligence, 2011, (2):586-614
The Extended Global Cardinality Constraint (EGCC) is a vital component of constraint solving systems, since it is very widely used to model diverse problems. The literature contains many different versions of this constraint, which trade strength of inference against computational cost. In this paper, I focus on the highest strength of inference usually considered, enforcing generalized arc consistency (GAC) on the target variables. This work is an extensive empirical survey of algorithms and optimizations, considering both GAC on the target variables, and tightening the bounds of the cardinality variables. I evaluate a number of key techniques from the literature, and report important implementation details of those techniques, which have often not been described in published papers. Two new optimizations are proposed for EGCC. One of the novel optimizations (dynamic partitioning, generalized from AllDifferent) was found to speed up search by 5.6 times in the best case and 1.56 times on average, while exploring the same search tree. The empirical work represents by far the most extensive set of experiments on variants of algorithms for EGCC. Overall, the best combination of optimizations gives a mean speedup of 4.11 times compared to the same implementation without the optimizations.
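The flow-based GAC algorithms the survey evaluates are too involved to sketch here; the toy propagator below only illustrates the cardinality reasoning a global cardinality constraint performs. It is a naive fixpoint loop invented for this example and is strictly weaker than GAC.

```python
# Each value v must occur between lb[v] and ub[v] times across the
# variables; `domains` gives each variable's candidate values.

def prune_gcc(domains, lb, ub):
    """Naive propagation: if a value already fills its upper bound
    with singleton domains, strip it from the other domains; if the
    variables that can still take it number exactly its lower bound,
    force them all to it. Repeat to a fixpoint."""
    domains = [set(d) for d in domains]
    changed = True
    while changed:
        changed = False
        for v in lb:
            fixed = [i for i, d in enumerate(domains) if d == {v}]
            can = [i for i, d in enumerate(domains) if v in d]
            if len(fixed) == ub[v]:        # v is saturated
                for i in can:
                    if i not in fixed:
                        domains[i].discard(v)
                        changed = True
            elif len(can) == lb[v]:        # barely enough takers
                for i in can:
                    if domains[i] != {v}:
                        domains[i] = {v}
                        changed = True
    return domains

pruned = prune_gcc([{1}, {1, 2}, {2, 3}],
                   lb={1: 0, 2: 1, 3: 0}, ub={1: 1, 2: 2, 3: 1})
```

With value 1 saturated by the first variable, the second variable's domain collapses to `{2}`.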
9.
10.
With the current interest in using parallel computers as database servers to provide a scalable parallel application that satisfies a real commercial need, there is a corresponding interest in performance prediction of parallel database systems. Both analytical and simulation approaches have been used and reported in the literature. This paper reports on an investigation into how a stochastic extension to classical process algebra (performance evaluation process algebra, PEPA) may be used for this purpose. This paradigm has a small but powerful set of elements which offers great flexibility for performance modelling. The paper describes how the approach has been adapted to handle database models, including the development of a technique, the decompositional approach, to handle the state-space explosion of parallel database models. It concludes with a comparison between the results obtained using this approach and those obtained using a different analytical approach.
11.
The Journal of Logic Programming, 1987, 4(4):331-343
We prove the correctness of a simplification method for checking static integrity constraints in stratified deductive databases.
12.
13.
Journal of Computer and System Sciences, 2003, 66(1):169-206
It is known that standard query languages for constraint databases lack the power to express connectivity properties. Such properties are important in the context of geographical databases, where one naturally wishes to ask queries about connectivity (What are the connected components of a given set?) or reachability (Is there a path from A to B that lies entirely in a given region?). No existing constraint query languages that allow closed-form evaluation can express these properties. In the first part of the paper, we show that, in principle, there is no obstacle to getting closed languages that can express connectivity and reachability queries. In fact, we show that adding any topological property to standard languages like FO+Lin and FO+Poly results in a closed language. In the second part of the paper, we look for tractable closed languages for expressing reachability and connectivity queries. We introduce path logic, which allows one to state properties of paths with respect to given regions. We show that it is closed, has polynomial time data complexity for linear and polynomial constraints, and can express a large number of reachability properties beyond simple connectivity. Query evaluation in the logic involves obtaining a discrete abstraction of a continuous path, and model-checking of temporal formulae on the discrete structure.
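Path logic itself is beyond a few lines, but the underlying question (is there a path from A to B lying entirely in a given region?) can be approximated in the spirit of the discrete abstraction the abstract mentions: discretize the region into grid cells and run BFS over adjacent cells. The grid, step size, and bounding box below are my own illustrative choices.

```python
# Grid abstraction of a reachability query over a constraint-defined
# region: BFS over 4-connected cells whose centers satisfy `inside`.
from collections import deque

def reachable(inside, a, b, lo=-2.0, hi=2.0, step=0.1):
    """inside: predicate on (x, y); a, b: points in the plane."""
    to_cell = lambda p: (round(p[0] / step), round(p[1] / step))
    def ok(c):
        x, y = c[0] * step, c[1] * step
        return lo <= x <= hi and lo <= y <= hi and inside(x, y)
    start, goal = to_cell(a), to_cell(b)
    if not (ok(start) and ok(goal)):
        return False
    seen, q = {start}, deque([start])
    while q:
        c = q.popleft()
        if c == goal:
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (c[0] + dx, c[1] + dy)
            if n not in seen and ok(n):
                seen.add(n)
                q.append(n)
    return False

# Annulus 0.5 <= r <= 1.5: opposite sides are connected, but the
# hole in the middle is not part of the region.
ring = lambda x, y: 0.25 <= x * x + y * y <= 2.25
```

The abstraction is sound only up to the grid resolution, which is exactly why the paper's closed-form, exact treatment is nontrivial.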
14.
Data & Knowledge Engineering, 1989, 4(3):223-244
A method is presented for checking integrity constraints in a deductive database in which verification of the integrity constraints in the updated database is reduced to the process of constructing paths from update literals to the heads of integrity constraints. In constructing such paths, the method takes advantage of the assumption that the database satisfies the integrity constraints prior to the update. If such a path can be constructed, the integrity of the database has been violated. Correctness of the method has been proved for checking integrity constraints in stratified deductive databases. An explanation of how this method may be realised efficiently in Prolog is given.
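The key observation (since the constraints held before the update, only constraints reachable from the update literal can now be violated) can be illustrated with a one-step dependency test standing in for the paper's path construction. All names and the relation encoding here are invented for the example.

```python
# Illustrative reduction: a constraint is rechecked only if its body
# mentions the updated predicate; everything else is known to hold.

def check_after_update(db, updated_pred, constraints):
    """constraints: (name, predicates_used, check_fn) triples.
    Returns the names of constraints violated after the update."""
    return [name for name, preds, check in constraints
            if updated_pred in preds and not check(db)]

# Every employee's department must exist; updating "emp" triggers a
# recheck of the first constraint but never of the dept-only one.
constraints = [
    ("emp_dept", {"emp", "dept"},
     lambda db: all((d,) in db["dept"] for _, d in db["emp"])),
    ("dept_named", {"dept"},
     lambda db: all(n != "" for (n,) in db["dept"])),
]
db = {"emp": {("alice", "sales"), ("bob", "hr")}, "dept": {("sales",)}}
```

After inserting ("bob", "hr") into `emp`, only `emp_dept` is reported violated; `dept_named` is never even evaluated.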
15.
Journal of Computer and System Sciences, 2006, 72(4):576-591
We study the efficient approximation of queries in linear constraint databases using sampling techniques. We define the notion of an almost uniform generator for a generalized relation and extend the classical generator of Dyer, Frieze and Kannan for convex sets to the union and the projection of relations. For the intersection and the difference, we give sufficient conditions for the existence of such generators. We show how such generators give relative estimations of the volume and approximations of generalized relations as the composition of convex hulls obtained from the samples.
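As a back-of-envelope illustration only: the paper builds on the Dyer-Frieze-Kannan almost uniform generator, but even plain rejection sampling from a bounding box shows how samples yield a relative volume estimate for a linear-constraint relation (here, a union of half-plane intersections in 2D). The encoding and bounding box are my own choices.

```python
# Monte Carlo volume estimate of a union of convex polygons, each
# given as linear constraints a*x + b*y <= c.
import random

def estimate_volume(constraint_sets, box, n=100_000, seed=0):
    """box = (x0, x1, y0, y1); the relation is the union of the
    constraint sets. Returns hits/n scaled by the box area."""
    rng = random.Random(seed)
    x0, x1, y0, y1 = box
    hits = 0
    for _ in range(n):
        x, y = rng.uniform(x0, x1), rng.uniform(y0, y1)
        if any(all(a * x + b * y <= c for a, b, c in cs)
               for cs in constraint_sets):
            hits += 1
    return hits / n * (x1 - x0) * (y1 - y0)

# Unit square 0<=x<=1, 0<=y<=1 sampled inside the box [0,2]x[0,2]:
# the estimate should be close to the true volume 1.
unit = [[(1, 0, 1), (-1, 0, 0), (0, 1, 1), (0, -1, 0)]]
```

Rejection sampling degrades badly when the relation is a tiny fraction of the box, which is precisely the regime where the almost uniform generators of the paper matter.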
16.
Let F1,…,Fs ∈ R[X1,…,Xn] be polynomials of degree at most d, and suppose that F1,…,Fs are represented by a division-free arithmetic circuit of non-scalar complexity L. Let A be the arrangement of R^n defined by F1,…,Fs. For any point x ∈ R^n, we consider the task of determining the signs of the values F1(x),…,Fs(x) (sign condition query) and the task of determining the connected component of A to which x belongs (point location query). By an extremely simple reduction to the well-known case where the polynomials F1,…,Fs are affine linear (i.e., polynomials of degree one), we first show that there exists a database of (possibly enormous) size s^O(L+n) which allows the evaluation of the sign condition query using only (Ln)^O(1) log(s) arithmetic operations. The key point of this paper is the proof that this upper bound is almost optimal. Along the way, we show that the point location query can be evaluated using d^O(n) log(s) arithmetic operations. Based on a different argument, analogous complexity upper bounds are exhibited with respect to the bit model in case F1,…,Fs belong to Z[X1,…,Xn] and satisfy a certain natural genericity condition. Mutatis mutandis, our upper-bound results may be applied to the sparse and dense representations of F1,…,Fs.
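The sign condition query itself is easy to state in code; the naive O(s) evaluation below is the baseline that the paper's precomputed database beats for huge s. The example polynomials are my own.

```python
# The sign condition query: the tuple of signs of F1..Fs at a point,
# computed naively with one polynomial evaluation per Fi.

def sign_condition(polys, x):
    """polys: callables R^n -> R; returns the tuple of signs at x."""
    signs = []
    for f in polys:
        v = f(*x)
        signs.append(0 if v == 0 else (1 if v > 0 else -1))
    return tuple(signs)

F = [lambda x, y: x * x + y * y - 1,   # unit circle
     lambda x, y: x - y]               # diagonal line
```

At the origin the circle polynomial is negative and the line polynomial vanishes, so the sign vector is (-1, 0); the point (2, 1) lies outside the circle and below the diagonal, giving (1, 1).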
17.
In this paper, we study the issue of process creation from an algebraic perspective. The key to our approach, which is inspired by work of America and De Bakker, consists of giving a new interpretation to the operator symbol · (sequential composition) in the axiom system BPA of Bergstra and Klop. We present a number of other models for BPA and show how the new interpretation of · naturally generalises the usual interpretation in ACP. We give an operational semantics based on Plotkin style inductive rules for a simple language with process creation and communication, and give a complete finite axiomatisation of the associated bisimulation model.
Note: Partial support received from the European Communities under ESPRIT contract 432, An Integrated Formal Approach to Industrial Software Development (METEOR), and RACE contract 1046, Specification and Programming Environment for Communication Software (SPECS).
18.
Protocols enable unambiguous, smooth interactions among agents. Commitments among agents are a powerful means of developing protocols. Commitments enable flexible execution of protocols and help agents reason about protocols and plan their actions accordingly, while at the same time providing a basis for compliance checking. Multiagent systems based on commitments can conveniently and effectively model business interactions because the autonomy and heterogeneity of agents mirrors real-world businesses. Such modeling, however, requires multiagent systems to host a rich variety of protocols that can capture the needs of different applications. We show how a commitment-based semantics provides a basis for refining and aggregating protocols. We propose an approach for designing commitment protocols wherein traditional software engineering notions such as refinement and aggregation are extended to apply to protocols. We present an algebra of protocols that can be used to compose protocols by refining and merging existing ones, and does this at a level of abstraction high enough to be useful for real-world applications.
We thank Amit Chopra, Nirmit Desai, and anonymous referees for valuable comments. This research was supported partially by the NSF under grant DST-0139037, and partially by DARPA under contract F30603-00-C-0178.
19.
Advanced Engineering Informatics, 2012, 26(2):361-382
Architectural floor plan layout design is what architects and designers do when they conceptually combine design units, such as rooms or compartments. At the end of this activity, they deliver precise geometric schemas as solutions to particular problems. More research on this topic is needed to develop productive tools. The authors propose orthogonal compartment placement (OCP) as a new approach to this activity. OCP includes a problem formulation and a solution method in which qualitative and quantitative knowledge are combined. Topological knowledge underlies human spatial reasoning. Computers can adequately perform repetitive topological reasoning. We believe that OCP is the first approach in CAAD to incorporate a full relational algebra to generate floor plan layouts. Based on block algebra (BA) and constraint satisfaction (CS), OCP can generate candidate solutions that correspond to distinct topological options. The analysis of a case study using a prototype tool is included.
20.
The RAPID-1 (relational access processor for intelligent data), an associative accelerator that recognizes tuples and logical formulas, is presented. It evaluates logical formulas instantiated by the current tuple, or record, and operates on whole relations or on hashing buckets. RAPID-1 uses a reduced instruction set and hardwired control and executes all comparisons in a bit-parallel mode. It speeds up the database by a significant factor and will adapt to future generations of microprocessors. The principal design issues, data structures, instruction set, architecture, environments and performance are discussed.