Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Integrity constraints (including key, referential and domain constraints) are unique features of database applications. Integrity constraints are crucial for ensuring accuracy and consistency of data in a database. It is important to perform integrity constraint enforcement (ICE) at the application level to reduce the risk of database corruption. We have conducted an empirical analysis of open-source PHP database applications and found that ICE does not receive enough attention in real-world programming practice. We propose an approach for automatic detection of ICE violations at the application level based on identification of code patterns. We define four patterns that characterize the structures of code implementing integrity constraint enforcement. Violations of these patterns indicate missing integrity constraint enforcement. Our work contributes to quality improvement of database applications. It also demonstrates that it is feasible to effectively identify bugs or problematic code by mining code patterns in a specific domain or application area.
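The kind of application-level pattern such an approach mines for can be sketched as follows. This is my own minimal illustration (the table names and functions are hypothetical, not from the paper): the first function shows a referential-integrity check the patterns would match, the second a write path whose missing check would be flagged as a missing-ICE candidate.

```python
# Hypothetical sketch of an application-level referential-integrity check,
# of the kind a pattern-based ICE detector would look for before an INSERT.
# Table/column names (orders, customers) are illustrative only.

def insert_order(db, order_id, customer_id):
    """Insert an order only if the referenced customer exists (ICE present)."""
    if customer_id not in db["customers"]:  # enforcement code before the write
        raise ValueError("unknown customer: referential constraint violated")
    db["orders"][order_id] = {"customer": customer_id}

def insert_order_unchecked(db, order_id, customer_id):
    """Pattern violation: no check before the write -- a missing-ICE candidate."""
    db["orders"][order_id] = {"customer": customer_id}
```

The detector's job, in this framing, is to report functions shaped like the second one.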

2.
Many of the Web applications around us are data-intensive; their main purpose is to present a large amount of data to their users. Most online trading and e-commerce sites fall into this category, as do digital libraries and institutional sites describing private and public organizations. Several commercial Web development systems aid rapid creation of data-intensive applications by supporting semiautomatic data resource publishing. Automatic publishing is typically subject to the constraints of database schemas, which limit an application designer's choices. Thus, Web application development often requires adaptation through programming, and programs end up intricately mixing data, navigation, and presentation semantics. Presentation is often a facade for elements of structure, composition, and navigation. Despite this frequently unstructured development process, data-intensive applications, based on large data sets organized within a repository or database, generally follow some typical patterns and rules. We describe these patterns and rules using WebML as a conceptual tool to make such notions explicit. WebML is a conceptual Web modeling language that uses the entity-relationship (ER) model for describing data structures and an original, high-level notation for representing Web content composition and navigation in hypertext form.

3.
When updating a knowledge base, several problems may arise. One of the most important problems is that of integrity constraints satisfaction. The classic approach to this problem has been to develop methods for checking whether a given update violates an integrity constraint. An alternative approach consists of trying to repair integrity constraint violations by performing additional updates that maintain knowledge base consistency. Another major problem in knowledge base updating is that of view updating, which determines how an update request should be translated into an update of the underlying base facts. We propose a new method for updating knowledge bases while maintaining their consistency. Our method can be used for both integrity constraint maintenance and view updating. It can also be combined with any integrity checking method for view updating and integrity checking. The kinds of updates handled by our method are: updates of base facts, view updates, updates of deductive rules, and updates of integrity constraints. Our method is based on events and transition rules, which explicitly define the insertions and deletions induced by a knowledge base update. Using these rules, an extension of the SLDNF procedure allows us to obtain all possible minimal ways of updating a knowledge base without violating any integrity constraint.
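As a rough illustration of the event-based idea (the insertions and deletions induced by an update), the following toy sketch (my own simplification, not the paper's transition rules or its SLDNF extension) models a database as a set of facts and rejects updates that violate denial-style constraints:

```python
# Toy model: a database is a set of facts; a denial constraint is a set of
# facts that must not all hold simultaneously.  An update induces insertion
# events (facts made true) and deletion events (facts made false).

def violates(db, denial):
    """True if every fact of the denial holds in db."""
    return denial <= db

def apply_update(db, insertions, deletions, denials):
    """Apply the update, returning the new state plus induced events,
    or reject it if any integrity constraint would be violated."""
    new = (db - deletions) | insertions
    if any(violates(new, d) for d in denials):
        raise ValueError("update rejected: integrity constraint violated")
    return new, new - db, db - new  # state, insertion events, deletion events
```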

4.
《Information Systems》, 2005, 30(2): 89-118
Business rules are the basis of any organization. From an information systems perspective, these business rules function as constraints on a database, helping ensure that the structure and content of the real world—sometimes referred to as miniworld—is accurately incorporated into the database. It is important to elicit these rules during the analysis and design stage, since the captured rules are the basis for subsequent development of a business constraints repository. We present a taxonomy for set-based business rules, and describe an overarching framework for modeling rules that constrain the cardinality of sets. The proposed framework results in various types of constraints, i.e., attribute, class, participation, projection, co-occurrence, appearance and overlapping constraints, on a semantic model that supports abstractions like classification, generalization/specialization, aggregation and association. We formally define the syntax of our proposed framework in Backus-Naur Form and explicate the semantics using first-order logic. We describe partial ordering in the constraints and define the concept of metaconstraints, which can be used for automatic constraint consistency checking during the design stage itself. We demonstrate the practicality of our approach with a case study and show how our approach to modeling business rules seamlessly integrates into existing database design methodology. Via our proposed framework, we show how explicitly capturing data semantics will help bridge the semantic gap between the real world and its representation in an information system.
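A set-cardinality rule such as a participation constraint can be checked mechanically. The sketch below is a hypothetical illustration in the spirit of such a framework (the rule shape and names are mine, not the paper's BNF): "every department must have between 1 and 10 employees".

```python
# Illustrative check of a set-cardinality (participation) business rule.
# memberships maps each set (e.g. a department) to its member collection;
# the rule requires lo <= |members| <= hi for every set.

def check_participation(memberships, lo, hi):
    """Return, for each set, whether its cardinality satisfies the rule."""
    return {s: lo <= len(members) <= hi for s, members in memberships.items()}
```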

5.
Revision programming  (Cited by: 2; self-citations: 0; cited by others: 2)
In this paper we introduce revision programming — a logic-based framework for describing constraints on databases and providing a computational mechanism to enforce them. Revision programming captures those constraints that can be stated in terms of the membership (presence or absence) of items (records) in a database. Each such constraint is represented by a revision rule α ← α1, …, αk, where α and all αi are of the form in(a) or out(b). Collections of revision rules form revision programs. Like logic programs, revision programs admit both declarative and imperative (procedural) interpretations. In our paper, we introduce a semantics that reflects both interpretations. Given a revision program P, this semantics assigns to any database B a collection (possibly empty) of P-justified revisions of B. The paper contains a thorough study of revision programming. We exhibit several fundamental properties of revision programming. We study the relationship of revision programming to logic programming. We investigate the complexity of reasoning with revision programs as well as algorithms to compute P-justified revisions. Most importantly from the practical database perspective, we identify two classes of revision programs, safe and stratified, with a desirable property that they determine for each initial database a unique revision.
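The flavor of revision rules can be conveyed with a toy evaluator. The greedy fixpoint below is my own simplification for intuition only: it does not compute P-justified revisions exactly, and it need not terminate on contradictory programs.

```python
# Toy evaluator for revision rules (head, body) where head and each body
# literal is ("in", atom) or ("out", atom).  Rules whose bodies hold are
# applied until a fixpoint is reached.  A simplification of the semantics,
# for illustration only.

def holds(lit, db):
    kind, atom = lit
    return (atom in db) if kind == "in" else (atom not in db)

def revise(db, rules):
    db = set(db)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if all(holds(l, db) for l in body) and not holds(head, db):
                kind, atom = head
                db.add(atom) if kind == "in" else db.discard(atom)
                changed = True
    return db
```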

6.
Most research on semantic integrity has taken place in the traditional database fields, specifically the relational data model. Advanced models, such as semantic and object-oriented data models, have developed higher level abstractions to increase their expressive power in order to meet the needs of newly emerging application domains. This allows them to incorporate some semantic constraints directly into their schemas. There are, however, many types of restrictions that cannot be expressed solely by these high-level constructs. Therefore, we extend the potential of advanced models by augmenting their abstractions with useful set restrictions. In particular, we identify and formulate four of their most common semantic groupings: set groupings, is-a related set groupings, power set groupings, and Cartesian product groupings. For each, we define a number of restrictions that control its structure and composition. We exploit the notion of object identity for the definition of these semantic restrictions. This permits each grouping to capture more subtle distinctions of the concepts in the application environment, as demonstrated by numerous examples throughout this paper. The resulting set of restrictions forms a general framework for integrity constraint management in advanced data models.

7.
We present a logic programming based asynchronous multi-agent system in which agents can communicate with one another, update themselves and each other, abduce hypotheses to explain observations, and use them to generate actions. The knowledge base of the agents comprises generalized logic programs, integrity constraints, active rules, and abducibles. We characterize the interaction among agents via an asynchronous transition rule system, and provide a stable models based semantics. An example is developed to illustrate how our approach works.

8.
Discusses a paradigm and prototype system for the design-time expression, checking and automatic implementation of the semantics of database updates. Enforcement rules are viewed as the implementation of constraints and are specified, checked for consistency, and then finally mapped to object-oriented code during database design. A classification of enforcement rule types is provided as a basis for these design activities, and the general strategy for specification, analysis and implementation of these rules within a semantic modeling paradigm is discussed. SORAC (Semantics, Objects, Relationships And Constraints), a prototype database design system of the University of Rhode Island, is also described.

9.
Gal A., Etzion O. 《Computer》, 1995, 28(1): 28-38
A new model with an invariant-based language effectively handles data-driven rules in databases and uses the rules' inherent semantic properties and supporting mechanisms to meet high-level language requirements. It is an extension of the basic PARDES model developed by Opher Etzion in 1990 to support derivations and integrity constraints in databases. The model's invariant-based language, unlike other programming languages, can follow data-driven rules' semantic properties. Such rules are activated by modifications of data items in a database, and they play an important role in many applications that maintain complex relationships between data items or interdependencies between parts of the database. Applications include expert systems, real-time databases, simulations, and decision-support systems. The authors present requirements for choosing an adequate programming style that uses data-driven rules. These requirements are based on software-engineering criteria such as compatibility with a high-level language and verifiability of the rule language. The authors show that contemporary database programming styles fail to meet these requirements, and they present the invariant-based language as a viable solution.

10.
A rule-based approach for the automatic enforcement of consistency constraints is presented. In contrast to existing approaches that compile consistency checks into application programs, the approach centralizes consistency enforcement in a separate module called a knowledge-base management system. Exception handlers for constraint violations are represented as rule entities in the knowledge base. For this purpose, a new form of production rule called the activation pattern controlled rule is introduced: in contrast to classical forward chaining schemes, activation pattern controlled rules are triggered by the intent to apply a specific operation but not necessarily by the result of applying this operation. Techniques for implementing this approach are discussed, and experiments in speeding up the system performance are described. Furthermore, an argument is made for more tolerant consistency enforcement strategies, and how they can be integrated into the rule-based approach to consistency enforcement is discussed.
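The distinctive trigger, firing on the intent to apply an operation rather than on its result, can be sketched as follows. All class, rule, and handler names here are hypothetical illustrations, not the system's API.

```python
# Sketch: handlers fire on the *intent* to apply a named operation, before
# its effects exist, so a handler can repair the operation's arguments or
# veto it by raising.  Names are illustrative only.

class KnowledgeBase:
    def __init__(self):
        self.data = {}
        self.rules = []  # list of (operation_name, handler)

    def on_intent(self, op_name, handler):
        """Register an activation-pattern-style rule for an operation."""
        self.rules.append((op_name, handler))

    def apply(self, op_name, key, value):
        for name, handler in self.rules:
            if name == op_name:
                value = handler(self.data, key, value)  # may repair or raise
        self.data[key] = value
```

A usage example: a rule registered for `"set_salary"` can clamp an out-of-range value before the update ever takes effect, which is the "repair instead of reject" style the abstract argues for.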

11.
The authors address the problem of providing a homogeneous framework for integrating, in a database environment, active rules, which allow the specification of actions to be executed whenever certain events take place, and deductive rules, which allow the specification of deductions in a logic programming style. Indeed, it is widely recognized that both kinds of rules enhance the capabilities of database systems since they provide very natural mechanisms for the management of various important activities (e.g., knowledge representation, complex data manipulation, integrity constraint enforcement, view maintenance). However, in spite of their strong relationship, little work has been done on the unification of these powerful paradigms. The authors present a rule-based language with an event-driven semantics that allows programmers to express both active and deductive computations. The language is based on a new notion of production rules whose effect is both a change of state and an answer to a query. Using several examples, they show that this simple language schema allows one to uniformly define different computations on data, including complex data manipulations, deductive evaluations, and active rule processing. They define the semantics of the language and then describe the architecture of a preliminary implementation. Finally, they report on the application and experience of using the language.

12.
Operational semantics is often presented in a rather syntactic fashion using relations specified by inference rules or, equivalently, by clauses in a suitable logic programming language. As is well known, various syntactic details of specifications involving bound variables can be greatly simplified if that logic programming language has term-level abstractions (λ-abstraction) and proof-level abstractions (eigenvariables) and the specification encodes object-level binders using λ-terms and universal quantification. We shall attempt to extend this specification setting to include the problem of specifying not only relations capturing operational semantics, such as one-step evaluation, but also properties and relations about the semantics, such as simulation. Central to our approach is the encoding of generic object-level judgments (universally quantified formulas) as suitable atomic meta-level judgments. We shall encode both the one-step transition semantics and simulation of the (finite) π-calculus to illustrate our approach.

13.
Modelling data secrecy and integrity  (Cited by: 1; self-citations: 0; cited by others: 1)
The paper describes a semantic data model used as a design environment for multilevel secure database applications. The proposed technique is built around the concept of security classification constraints (security semantics) and takes into account that security restrictions may have effects on the static part of a system, on the behavior of the system (the system functions), or on both. As security constraints may influence each other, appropriate integrity mechanisms are necessary, and modelling of a multilevel application must be both data and function driven. This functionality is included in the proposed semantic data model for multilevel security by developing secure data schemas, secure function schemas, a procedure for alternating iterative refinements on either schema, and a powerful integrity system to check the consistency of the classification constraints and of the multilevel secure database application.

14.
The relational data model has become the standard for mainstream database processing despite its well-known weakness in the area of representing application semantics. The research community's response to this situation has been the development of a collection of semantic data models that allow more of the meaning of information to be presented in a database. The primary tool for accomplishing this has been the use of various data abstractions, most commonly: inclusion, aggregation and association. This paper develops a general model for analyzing data abstractions, and then applies it to these three best-known abstractions.

15.
Flow models underlie popular programming languages and many graphical behavior specification tools. However, their semantics is typically ambiguous, causing miscommunication between modelers and unexpected implementation results. This article introduces a way to disambiguate common flow modeling constructs, by expressing their semantics as constraints on runtime sequences of behavior execution. It also shows that reduced ambiguity enables more powerful modeling abstractions, such as partial behavior specifications. The runtime representation considered in this paper uses the Process Specification Language (PSL), which is defined in first-order logic, making it amenable to automated reasoning. The activity diagrams of the Unified Modeling Language are used for example flow models.

16.
In recent years, generalization-based data mining techniques have become an interesting topic for many data scientists. Generalized itemset mining is an exploration technique that focuses on extracting high-level abstractions and correlations in a database. However, the problem that domain experts must always deal with is how to manage and interpret a large number of extracted patterns from a massive database of transactions. In generalized pattern mining, taxonomies that contain abstraction information for each dataset are defined, so the number of frequent patterns can grow enormously. Therefore, exploiting knowledge turns into a difficult and costly process. In this article, we introduce an approach that uses cardinality-based constraints with transaction ids and numeric encoding to mine generalized patterns. We applied transaction ids to support the computation of each frequent itemset as well as to encode taxonomies into a numeric type using two simple rules. We also attempted to apply the combination of cardinality constraints and closed or maximal patterns. Experiments show that our optimizations significantly improve the performance of the original method, and that the comprehensive information within closed and maximal patterns is worth considering in generalized frequent pattern mining.
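The transaction-id idea can be illustrated with a minimal tid-set support counter: the support of an itemset is the size of the intersection of its items' tid-sets. Taxonomy encoding and the cardinality constraints themselves are omitted, and all names below are my own, not the paper's.

```python
# Sketch of transaction-id (tid) based support counting.  Each item maps to
# the set of transaction ids containing it; itemset support is the size of
# the intersection of those tid-sets.

from functools import reduce

def tidsets(transactions):
    """Build the tid-set index from a list of transactions (sets of items)."""
    tids = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tids.setdefault(item, set()).add(tid)
    return tids

def support(itemset, tids):
    """Support = number of transactions containing every item of the itemset."""
    return len(reduce(set.intersection, (tids[i] for i in itemset)))
```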

17.
Semantic integrity constraints are used for enforcing the integrity of the database as well as for improving the efficiency of the database utilization. Although semantic integrity constraints are usually much more static as compared to the data itself, changes in the data semantics may necessitate corresponding changes in the constraint base. In this paper we address the problems related with maintaining a consistent and non-redundant set of constraints satisfied by the database in the case of updates to the constraint base. We consider implication constraints as semantic integrity constraints. The constraints are represented as conjunctions of inequalities. We present a methodology to determine whether a constraint is redundant or contradictory with respect to a set of constraints. The methodology is based on the partitioning of the constraint base which improves the efficiency of algorithms that check whether a constraint is redundant or contradictory with respect to a constraint base. Received August 19, 1993 / Accepted July 7, 1997
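For intuition, redundancy and contradiction are easy to see in the one-variable case. The sketch below (a drastic simplification of the general conjunctions-of-inequalities setting, with names of my own choosing) classifies a new bound against a constraint base by interval reasoning: a new upper or lower bound is redundant if the base already implies it, and contradictory if the implied interval becomes empty.

```python
# Toy classification of a new one-variable constraint ('<=', c) or ('>=', c)
# against a base of such constraints on the same variable.

def classify(base, new):
    """Return 'contradictory', 'redundant', or 'new'."""
    lo = max((c for op, c in base + [new] if op == ">="), default=float("-inf"))
    hi = min((c for op, c in base + [new] if op == "<="), default=float("inf"))
    if lo > hi:
        return "contradictory"  # combined interval is empty
    op, c = new
    old_lo = max((c2 for o, c2 in base if o == ">="), default=float("-inf"))
    old_hi = min((c2 for o, c2 in base if o == "<="), default=float("inf"))
    # Redundant iff the base already implies the new bound.
    implied = (old_hi <= c) if op == "<=" else (old_lo >= c)
    return "redundant" if implied else "new"
```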

18.
曲云尧, 施伯乐. 《软件学报》(Journal of Software), 1995, 6(10): 582-592
The traditional read/write transaction model (composed of sequences of read(x) and write(x) operations) prevents the scheduling mechanism from fully exploiting the semantic information of application programs to schedule transactions flexibly, and thus cannot effectively improve the system's degree of concurrency. Based on the operational semantics of the SQL language, this paper presents an SQL-based transaction model. Using this model in combination with the two-phase locking (2PL) method, a concurrency control mechanism called Condition-locking is designed. This mechanism can (1) avoid the phantom problem in databases, (2) exploit application semantics and integrity constraints to increase the system's degree of concurrency, and (3) reduce the chance of deadlock. It is therefore a practical concurrency control mechanism.
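The phantom-avoidance idea behind predicate-style locking can be sketched over a single numeric column. This is an illustrative reduction of my own, not the paper's Condition-locking mechanism: by locking whole selection ranges rather than individual rows, a later INSERT falling inside a locked range conflicts and is blocked, which is exactly what prevents phantoms.

```python
# Sketch of predicate ("condition") lock conflict over one numeric column.
# A lock is (mode, (lo, hi)) with mode 'r' or 'w'.  Two locks conflict only
# if their ranges overlap and at least one of them writes.

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def conflicts(lock1, lock2):
    (m1, r1), (m2, r2) = lock1, lock2
    return overlaps(r1, r2) and ("w" in (m1, m2))
```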

19.
20.
Parallel implementations of scientific applications often rely on elaborate dynamic data structures with complicated communication patterns. We describe a set of intuitive geometric programming abstractions that simplify coordination of irregular block-structured scientific calculations without sacrificing performance. We have implemented these abstractions in KeLP, a C++ run-time library. KeLP's abstractions enable the programmer to express complicated communication patterns for dynamic applications and to tune communication activity with a high-level, abstract interface. We show that KeLP's flexible communication model effectively manages elaborate data motion patterns arising in structured adaptive mesh refinement and achieves performance comparable to hand-coded message-passing on several structured numerical kernels.
