20 similar documents found.
1.
A software organization that provides for data definition and manipulation in a distributed data base system is presented by describing the functions and interrelations of its component processes; under its access methodology, the physical location of the data is transparent to the user program. The concepts of distributed data bases are discussed and current research is summarized as a means of establishing a method for the data placement and location mechanism. Procedures for the movement of data in a distributed data base system are presented, along with the data manipulation procedures, in terms of their performance and integrity effects. Enhancements to the mechanisms are suggested.
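To make the location-transparency idea concrete, here is a minimal Python sketch in which a catalog maps relation names to nodes and the user program never mentions a physical location. The catalog layout and all names (`Catalog`, `query`, `NODES`) are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of location-transparent data access: the user program
# names a relation; a catalog resolves which node physically holds it.
# The placement below is invented for illustration.

NODES = {
    "node_a": {"employees": [("alice", 30), ("bob", 25)]},
    "node_b": {"projects": [("p1", "db"), ("p2", "net")]},
}

class Catalog:
    """Maps each relation to the node that currently stores it."""
    def __init__(self, nodes):
        self._location = {rel: node for node, rels in nodes.items() for rel in rels}

    def locate(self, relation):
        return self._location[relation]

def query(catalog, relation):
    # The caller never names a node: physical placement is transparent.
    node = catalog.locate(relation)
    return NODES[node][relation]

catalog = Catalog(NODES)
print(query(catalog, "projects"))  # works regardless of where 'projects' lives
```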
2.
Frank Steyer 《Information Systems》1980,5(2):127-135
This paper attempts for the first time to formulate the majority of data base functions in a homogeneous and formal manner, in contrast to other methods, which treat little more than the data manipulation functions. The main task of a data base management system is to execute certain uniform operations on a given set of data resources. Most problems of data base management systems can be deduced from this main task, which provides a framework both for investigating existing systems and for designing new ones.
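As a rough illustration of the "uniform operations on a given set of data resources" idea (not the paper's actual formalism), the sketch below expresses both selection and update as instances of a single operation; all names are invented:

```python
# One uniform operation applied to a set of data resources: pick the
# tuples satisfying a predicate and, if a transform is given, replace
# them with transform(tuple). Selection and update become special cases.

def apply(relation, predicate, transform=None):
    if transform is None:
        return [t for t in relation if predicate(t)]           # selection
    return [transform(t) if predicate(t) else t for t in relation]  # update

employees = [{"name": "a", "dept": 1}, {"name": "b", "dept": 2}]
print(apply(employees, lambda t: t["dept"] == 1))              # select
print(apply(employees, lambda t: t["dept"] == 2,
            transform=lambda t: {**t, "dept": 3}))             # update
```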
3.
In this paper, the catalog management strategy of the successfully integrated and running DDBMS C-POREL is summarized. The new catalog management strategy and its implementation scheme are based on an analysis of the catalog management methods of earlier DDBMSs. The goal of the new strategy is to improve system efficiency; analysis and practice show that this strategy is successful.
4.
5.
Cloud computing is increasingly being seen as a way to reduce infrastructure costs and add elasticity, and is being used by a wide range of organizations. Cloud data management systems today need to serve a range of different workloads, from analytical read-heavy workloads to transactional (OLTP) workloads. For both the service providers and the users, it is critical to minimize the consumption of resources like CPU, memory, communication bandwidth, and energy, without compromising on service-level agreements if any. In this article, we develop a workload-aware data placement and replication approach, called SWORD, for minimizing resource consumption in such an environment. Specifically, we monitor and model the expected workload as a hypergraph and develop partitioning techniques that minimize the average query span, i.e., the average number of machines involved in the execution of a query or a transaction. We empirically justify the use of query span as the metric to optimize, for both analytical and transactional workloads, and develop a series of replication and data placement algorithms by drawing connections to several well-studied graph theoretic concepts. We introduce a suite of novel techniques to achieve high scalability by reducing the overhead of partitioning and query routing. To deal with workload changes, we propose an incremental repartitioning technique that modifies data placement in small steps without resorting to complete repartitioning. We propose the use of fine-grained quorums defined at the level of groups of data items to control the cost of distributed updates, improve throughput, and adapt to different workloads. We empirically illustrate the benefits of our approach through a comprehensive experimental evaluation for two classes of workloads. For analytical read-only workloads, we show that our techniques result in significant reduction in total resource consumption. For OLTP workloads, we show that our approach improves transaction latencies and overall throughput by minimizing the number of distributed transactions.
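The query-span metric is easy to state in code. The toy sketch below, with an invented placement and workload, computes the average number of machines a query touches; this is the quantity the partitioning aims to minimize (the sketch is illustrative and is not SWORD's implementation):

```python
# Toy illustration of the query-span metric: the span of a query is the
# number of distinct machines that hold the data items it accesses.

placement = {  # data item -> machine
    "a": 0, "b": 0, "c": 1, "d": 1, "e": 2,
}

workload = [  # each query is the set of items it touches (a hyperedge)
    {"a", "b"},       # span 1: both items on machine 0
    {"a", "c"},       # span 2
    {"c", "d", "e"},  # span 2
]

def query_span(query, placement):
    return len({placement[item] for item in query})

avg_span = sum(query_span(q, placement) for q in workload) / len(workload)
print(avg_span)  # (1 + 2 + 2) / 3 = 1.67; partitioning tries to drive this down
```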
6.
《Artificial Intelligence》1987,32(1):1-55
Reasoning about time typically involves drawing conclusions on the basis of incomplete information. Uncertainty arises in the form of ignorance, indeterminacy, and indecision. Despite the lack of complete information, a problem solver is continually forced to make predictions in order to pursue hypotheses and plan for the future. Such predictions are frequently contravened by subsequent evidence. This paper presents a computational approach to temporal reasoning that directly confronts these issues. The approach centers around techniques for managing a data base of assertions corresponding to the occurrence of events and the persistence of their effects over time. The resulting computational framework performs the temporal analog of (static) reason maintenance by keeping track of dependency information involving assumptions about the truth of facts spanning various intervals of time. The system described in this paper extends classical predicate-calculus data bases, such as those used by PROLOG, to deal with time in an efficient and natural manner.
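A minimal sketch of the persistence bookkeeping described above, assuming a deliberately simplified representation in which a fact persists from an event's start time until clipped by contradicting evidence; the names and structure are illustrative, not the paper's system:

```python
# A fact becomes true at an event's time and is assumed to persist until
# a later assertion contradicts it, at which point the open interval is
# clipped. All names here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TemporalDB:
    # fact -> list of [start, end] intervals; end=None means "still persists"
    intervals: dict = field(default_factory=dict)

    def assert_fact(self, fact, start):
        self.intervals.setdefault(fact, []).append([start, None])

    def clip(self, fact, end):
        # New evidence contradicts the fact at time `end`: close open intervals.
        for iv in self.intervals.get(fact, []):
            if iv[1] is None and iv[0] <= end:
                iv[1] = end

    def holds(self, fact, t):
        return any(s <= t and (e is None or t < e)
                   for s, e in self.intervals.get(fact, []))

db = TemporalDB()
db.assert_fact("door_open", start=1)   # prediction: stays open by default
db.clip("door_open", end=5)            # later evidence: closed at t=5
print(db.holds("door_open", 3))  # True
print(db.holds("door_open", 7))  # False: the default persistence was retracted
```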
7.
8.
Building knowledge base management systems
John Mylopoulos, Vinay Chaudhri, Dimitris Plexousakis, Adel Shrufi, Thodoros Topaloglou 《The VLDB Journal》1996,5(4):238-263
Advanced applications in fields such as CAD, software engineering, real-time process control, corporate repositories and digital libraries require the construction, efficient access and management of large, shared knowledge bases. Such knowledge bases cannot be built using existing tools such as expert system shells, because these do not scale up, nor can they be built in terms of existing database technology, because such technology does not support the rich representational structure and inference mechanisms required for knowledge-based systems. This paper proposes a generic architecture for a knowledge base management system intended for such applications. The architecture assumes an object-oriented knowledge representation language with an assertional sublanguage used to express constraints and rules. It also provides for general-purpose deductive inference and special-purpose temporal reasoning. Results reported in the paper address several knowledge base management issues. For storage management, a new method is proposed for generating a logical schema for a given knowledge base. Query processing algorithms are offered for semantic and physical query optimization, along with an enhanced cost model for query cost estimation. On concurrency control, the paper describes a novel concurrency control policy which takes advantage of knowledge base structure and is shown to outperform two-phase locking for highly structured knowledge bases and update-intensive transactions. Finally, algorithms for compilation and efficient processing of constraints and rules during knowledge base operations are described. The paper describes original results, including novel data structures and algorithms, as well as preliminary performance evaluation data. Based on these results, we conclude that knowledge base management systems which can accommodate large knowledge bases are feasible.
9.
10.
How to control data base integrity is a difficult problem: for many types of integrity assertions it is still unsolved how to verify them in justifiable time. This paper deals with methods for improving the testing of semantic integrity assertions. First, a classification of integrity assertions is given and a proposal for the implementation of a subsystem for controlling semantic integrity is described. Special algorithms are presented for testing delayed integrity assertions at the end of transactions, for testing integrity assertions that contain functions, and for testing integrity assertions that define relationships between attribute values of different tuples. Using these algorithms, the integrity test can in many cases be performed without any access to secondary storage, so that the cost of the tests becomes justifiable.
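As a hedged illustration of deferred (delayed) integrity testing at transaction end, the sketch below checks an assertion against only the in-memory delta of a transaction, which is how such a test can avoid secondary-storage access; the assertion, class names, and data are all invented:

```python
# Sketch of delayed integrity checking at transaction end: only the tuples
# the transaction touched are examined, rather than re-scanning the whole
# stored relation on secondary storage.

def check_salary_cap(tuples, cap=100_000):
    # An assertion over attribute values of individual tuples.
    return all(t["salary"] <= cap for t in tuples)

class Transaction:
    def __init__(self):
        self.inserted = []   # delta kept in memory during the transaction

    def insert(self, tup):
        self.inserted.append(tup)

    def commit(self, assertions):
        # Deferred test: only the in-memory delta is examined, so in many
        # cases no access to secondary storage is needed.
        for check in assertions:
            if not check(self.inserted):
                raise ValueError("integrity assertion violated; rolling back")
        return self.inserted  # would be written to the data base here

tx = Transaction()
tx.insert({"name": "smith", "salary": 50_000})
tx.insert({"name": "jones", "salary": 90_000})
tx.commit([check_salary_cap])  # passes without touching stored tuples
```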
11.
《Computers and Standards》1982,1(1):49-59
Auditing becomes increasingly difficult in a data base environment. Data base management systems standardization efforts, both nationally and worldwide, are significant advances for auditing, but so far they offer only partial solutions to the auditor's difficulties. The enterprise view of data and the International Standards Organization's (ISO) emphasis on the conceptual schema are applied to the audit environment. The findings suggest that auditors must become involved in the standardization discussions and in further attempts to define requirements for a special audit schema.
12.
13.
In the last decade, a new class of data management systems, collectively called NoSQL systems, has emerged and is now under intensive development. The main feature of these systems is that they abandon the relational data model and SQL, do not fully support ACID transactions, and use a distributed architecture (even though there are non-distributed NoSQL systems as well). As a result, such systems outperform conventional SQL-oriented DBMSs in some applications; in addition, they are highly scalable under increasing workloads and huge amounts of data, which is important, in particular, for Web applications. Unfortunately, the absence of transactional semantics imposes certain constraints on the class of applications where NoSQL systems can be used effectively, and the choice of a particular system depends significantly on the application. In this paper, a review of the main classes of NoSQL data management systems is given, and examples of systems and applications where they can be used are discussed.
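A minimal key-value sketch of the trade-off the review describes: schemaless single-key reads and writes are trivial, but there is no multi-key atomicity, which is what constrains transactional applications. This is a generic illustration, not any particular NoSQL system's API:

```python
# Key-value flavor of the NoSQL model: schemaless puts and gets on single
# keys, with no cross-key transaction support.

store = {}

def put(key, value):
    store[key] = value          # single-key write; no multi-key atomicity

def get(key, default=None):
    return store.get(key, default)

put("user:1", {"name": "alice", "cart": ["book"]})
put("user:2", {"name": "bob"})
# Moving an item between two carts takes two puts; if the second fails,
# nothing rolls the first one back; the application must cope with that.
print(get("user:1"))
```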
14.
15.
Myron Miller 《Information & Management》1978,1(6)
A survey of the literature on Distributed Data Base Management Systems is presented. The problems associated with distributing data throughout a network are summarized into two major areas: data distribution and data transfer. Each area is described, detailing some of the major proposed solutions to its problems. The intention is to provide the reader with an overview and an extensive bibliography for further study on any aspect of distributed DBMSs.
16.
Elementary algebra is taught at the University of Tennessee at Chattanooga in a computer-managed, self-paced course. This paper describes the computer management system, including the hardware, the data base, the principal management program, and some report generation using an inquiry language.
17.
An architectural approach is outlined toward the long-range goal of a far-reaching data base communication system capable of supporting a network in which any user at any network node can be given an integrated and tailored view or schema (e.g., hierarchical, relational), while in reality the data may reside in one single data base or in physically separated data bases, managed individually by the same type of GDBMS (e.g., CODASYL, IMS, relational) or by different GDBMSs. A series of data base model layers and mappings or translations between these layers are proposed. The entity-relationship model is used as the basis for the highly logical model layers of the integrated system, and a modified DIAM is used for the physical distribution and access-path-oriented layers. A comprehensive example of an integrated network of heterogeneous data bases is outlined, showing for a set of queries their formulation through different layers of the system, from the virtual user realm to the physical data bases. Major challenges and issues are discussed.
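A toy sketch of the layered-mapping idea, assuming an invented mapping table: the user queries one integrated relational view, and a mapping layer resolves which heterogeneous backend actually holds the data (the backends and names are illustrative, not the paper's DIAM-based layers):

```python
# A user-level relational view is translated to the node and local schema
# where the data actually lives; the two backends stand in for different
# GDBMS types managing physically separated data bases.

BACKENDS = {
    "codasyl_node": {"EMP-REC": [("e1", "alice")]},   # record-oriented store
    "relational_node": {"projects": [("p1", "db")]},  # relational store
}

VIEW_MAP = {  # user-level relation -> (node, local name)
    "employees": ("codasyl_node", "EMP-REC"),
    "projects": ("relational_node", "projects"),
}

def query_view(relation):
    # The user sees one integrated relational schema; the mapping layer
    # decides which heterogeneous data base actually answers.
    node, local_name = VIEW_MAP[relation]
    return BACKENDS[node][local_name]

print(query_view("employees"))  # served by the CODASYL-style node
```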
18.
19.
For three different types of temperature sensors, three corresponding designs for a temperature data acquisition system are presented. Based on an analysis and comparison of the three designs, and considering the different application scenarios, a reference for the design and selection of temperature data acquisition systems is provided.
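As a loose illustration only: the abstract does not name the three sensor types, so the sketch below assumes the three common classes (thermocouple, RTD, digital IC) and shows how three acquisition paths can sit behind one read interface for comparison; all constants are stand-ins, not measured values:

```python
# Three acquisition paths behind one interface, so designs can be
# compared for a given application. The sensor classes and constants
# are assumptions for illustration.

class Thermocouple:
    def read_c(self):
        mv = 1.02                      # stand-in for an amplified ADC reading
        return mv / 0.041              # rough type-K sensitivity, mV per deg C

class RTD:
    def read_c(self):
        r = 109.73                     # stand-in for a measured resistance (ohms)
        return (r / 100.0 - 1) / 0.00385   # PT100 linear approximation

class DigitalSensor:
    def read_c(self):
        raw = 0x0190                   # stand-in for a 16-bit register read
        return raw / 16.0              # typical fixed-point scaling

for sensor in (Thermocouple(), RTD(), DigitalSensor()):
    print(type(sensor).__name__, round(sensor.read_c(), 1))
```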