Similar Literature
Found 20 similar documents (search time: 15 ms)
1.
Constraint Handling Rules (CHR) is a high-level rule-based programming language commonly used to define constraint solvers. We present a method for automatic implication checking between constraints of CHR solvers. Supporting implication is important for implementing extensible solvers and reification, and for building hierarchical CHR constraint solvers. Our method does not copy the entire constraint store, but performs the check in place using a trailing mechanism. The necessary code enhancements can be made by automatic program transformation based on the rules of the solver. We extend our method to work for hierarchically organized modular CHR solvers. We show the soundness of our method and its completeness for a restricted class of canonical solvers, as well as for specific existing non-canonical CHR solvers. We evaluate our trailing method experimentally by comparing it with the copying approach: runtime is almost halved.
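To make the in-place check concrete, here is a minimal Python sketch of trail-based entailment testing; the store representation, the toy propagate() function, and all names are illustrative assumptions, not the paper's CHR implementation.

```python
# Minimal sketch of trail-based, in-place entailment checking.

class Store:
    def __init__(self):
        self.constraints = set()
        self.trail = []               # records additions so they can be undone

    def add(self, c):
        if c not in self.constraints:
            self.constraints.add(c)
            self.trail.append(c)      # remember for later rollback

    def mark(self):
        return len(self.trail)        # current trail position

    def undo_to(self, mark):
        while len(self.trail) > mark:
            self.constraints.discard(self.trail.pop())

def entails(store, propagate, c):
    """Check store |= c in place: assert the negation of c, propagate,
    test for inconsistency, then roll back via the trail (no copying)."""
    m = store.mark()
    store.add(("not", c))
    failed = propagate(store)         # True means a contradiction was derived
    store.undo_to(m)                  # store is restored unchanged
    return failed

def propagate(store):
    # Toy rule: x <= y, y <= x, and not(x == y) are contradictory.
    cs = store.constraints
    return (("le", "x", "y") in cs and ("le", "y", "x") in cs
            and ("not", ("eq", "x", "y")) in cs)

s = Store()
s.add(("le", "x", "y"))
s.add(("le", "y", "x"))
print(entails(s, propagate, ("eq", "x", "y")))   # True, store unchanged
```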

2.

Unit testing is widely used in software development. One important activity in unit testing is automatic test data generation. Constraint-based test data generation is a technique for the automatic generation of test data that uses symbolic execution to generate constraints. Unit testing exercises individual functions rather than the whole program, and individual functions typically have preconditions imposed on their inputs. Conventional symbolic execution cannot detect these preconditions, let alone convert them into constraints. To overcome these limitations, we propose a novel unit test data generation approach using rule-directed symbolic execution to deal with functions whose input preconditions are missing. Rule-directed symbolic execution uses predefined rules to detect preconditions in an individual function and generates constraints for inputs based on those preconditions. We introduce implicit constraints to represent preconditions, and unify implicit constraints and program constraints into integrated constraints. Test data generated from the integrated constraints can explore previously unreachable code and help developers find more functional and logical faults. We have implemented our approach in a tool called CTS-IC and applied it to real-world projects. The experimental results show that rule-directed symbolic execution can find preconditions (implicit constraints) automatically in an individual function. Moreover, the unit test data generated by our approach achieves higher coverage than similar tools and effectively mitigates the missing-precondition problem when unit testing individual functions.
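A small sketch of the core idea of unifying implicit (precondition) constraints with path constraints before solving, using the Z3 solver for illustration; the rule shown (an array length must be non-negative) and all names are hypothetical, not CTS-IC's actual rules or API.

```python
# Sketch: combine rule-detected implicit constraints with ordinary path
# constraints into one integrated constraint set, then solve.
from z3 import Int, Solver, sat

def test_data_for_path(path_constraints, implicit_constraints):
    s = Solver()
    s.add(*implicit_constraints)  # preconditions, e.g. from a length rule
    s.add(*path_constraints)      # constraints from symbolic execution
    if s.check() == sat:
        return s.model()          # concrete inputs exercising this path
    return None                   # path infeasible under the precondition

n = Int('n')
# Hypothetical rule-detected precondition: n is used as an array length.
implicit = [n >= 0]
# Path constraint for the branch under test: n > 10.
path = [n > 10]
print(test_data_for_path(path, implicit))  # e.g. [n = 11]
```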


3.
Information Systems, 2005, 30(2): 89-118
Business rules are the basis of any organization. From an information systems perspective, these business rules function as constraints on a database, helping ensure that the structure and content of the real world (sometimes referred to as the miniworld) are accurately incorporated into the database. It is important to elicit these rules during the analysis and design stage, since the captured rules are the basis for the subsequent development of a business constraints repository. We present a taxonomy for set-based business rules and describe an overarching framework for modeling rules that constrain the cardinality of sets. The proposed framework yields various types of constraints (attribute, class, participation, projection, co-occurrence, appearance, and overlap constraints) on a semantic model that supports abstractions such as classification, generalization/specialization, aggregation, and association. We formally define the syntax of our proposed framework in Backus-Naur Form and explicate the semantics using first-order logic. We describe partial ordering among the constraints and define the concept of metaconstraints, which can be used for automatic constraint consistency checking during the design stage itself. We demonstrate the practicality of our approach with a case study and show how our approach to modeling business rules integrates seamlessly into existing database design methodology. Via our proposed framework, we show how explicitly capturing data semantics helps bridge the semantic gap between the real world and its representation in an information system.
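As an illustration of one constraint type from such a taxonomy, the following sketch checks a participation cardinality constraint over relationship instances; the schema, bounds, and data are invented for the example.

```python
# Sketch: check a participation cardinality constraint, e.g.
# "every Department participates in 1..5 WorksIn relationships".
from collections import Counter

def violations(pairs, side, lo, hi, population):
    """pairs: relationship instances; side: which component of each pair
    is constrained (0 or 1); population: instances that must participate."""
    counts = Counter(p[side] for p in pairs)
    return [e for e in population if not (lo <= counts.get(e, 0) <= hi)]

works_in = [("alice", "sales"), ("bob", "sales"), ("carol", "hr")]
departments = ["sales", "hr", "legal"]
# Each department must have between 1 and 5 employees.
print(violations(works_in, 1, 1, 5, departments))  # ['legal']
```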

4.
Linguistic rules in natural language are useful and consistent with the human way of thinking. They are very important in multi-criteria decision making because of their interpretability. In this paper, we concentrate on extracting linguistic rules from data sets. We first analyze how to extract complex linguistic data summaries based on fuzzy logic. We then formalize linguistic rules based on complex linguistic data summaries, in which the degree of confidence of a linguistic rule extracted from a data set is expressed by linguistic quantifiers and its linguistic truth is interpreted from the fuzzy-logic point of view. In order to obtain linguistic rules with a higher degree of linguistic truth, a genetic algorithm is used to optimize the number and parameters of the membership functions of the linguistic values. Computational results show that the proposed method is a viable alternative for extracting linguistic rules with linguistic truth from data sets.
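A minimal sketch of the truth-degree computation behind such summaries, following Zadeh's calculus of linguistically quantified propositions ("most values are high"); the membership functions are illustrative placeholders, and the paper additionally tunes such parameters with a genetic algorithm.

```python
# Sketch: degree of truth of the linguistic summary "most values are high".

def mu_high(x):               # trapezoidal membership for "high"
    if x <= 60: return 0.0
    if x >= 80: return 1.0
    return (x - 60) / 20

def mu_most(p):               # fuzzy quantifier "most" over a proportion p
    if p <= 0.3: return 0.0
    if p >= 0.8: return 1.0
    return (p - 0.3) / 0.5

def truth_of_summary(data):
    proportion = sum(mu_high(x) for x in data) / len(data)
    return mu_most(proportion)

data = [85, 90, 72, 65, 95, 88]
print(round(truth_of_summary(data), 3))   # truth degree in [0, 1]
```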

5.
Model transformations are at the heart of model-driven engineering because they allow the automation of diverse kinds of model manipulations. Transformation scheduling is a key issue in the design and implementation of many transformation languages. This paper reports our results using continuations as the underlying technique for building a scheduling mechanism implicitly driven by data dependence among transformation rules. To support our experiments, we have built a proof-of-concept model transformation language, which is also reported here. First, we motivate the problem by analyzing the scheduling mechanisms of current model transformation languages. Then, we introduce the notion of continuation, showing its applicability to model transformations. Afterwards, we present our approach, notably explaining how dependence is specified and giving the scheduling algorithm. We also analyze the lazy resolution of rules and how to deal with collection operations. The approach is validated by an implementation that targets the Java Virtual Machine and by running performance benchmarks that show its efficiency and scalability. In addition, we discuss how the approach can be applied to other existing transformation languages and present several applicability scenarios. Copyright © 2013 John Wiley & Sons, Ltd.
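A sketch of data-dependence-driven scheduling, modeling continuations with Python generators: a rule suspends when a target element it needs does not exist yet and is resumed once another rule has produced it. The rules and model representation are invented for illustration; a real engine would also detect deadlocked dependencies.

```python
# Sketch: continuation-style scheduling driven by data dependence.

def rule_class_to_table(out):
    out["Table_Person"] = {"cols": []}   # produce the target element
    yield

def rule_attr_to_column(out):
    while "Table_Person" not in out:
        yield "Table_Person"             # suspend: dependency unresolved
    out["Table_Person"]["cols"].append("name")

def schedule(rules):
    out, pending = {}, [r(out) for r in rules]
    while pending:
        for cont in list(pending):
            try:
                next(cont)               # run until it suspends or finishes
            except StopIteration:
                pending.remove(cont)
    return out

# Rule order does not matter: the column rule waits for the table rule.
print(schedule([rule_attr_to_column, rule_class_to_table]))
```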

6.
In a multilevel secure distributed database management system, users cleared at different security levels access and share a distributed database consisting of data at different sensitivity levels. One approach to assigning sensitivity levels (also called security levels) to data utilizes constraints, or classification rules. Security constraints provide an effective classification policy: they can be used to assign security levels to data based on content, context, and time. We extend our previous work on security constraint processing in a centralized multilevel secure database management system by describing techniques for processing security constraints in a distributed environment during query, update, and database design operations.
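A toy sketch of content- and context-based classification rules assigning security levels to data; the rules, record format, and level lattice are illustrative, not the paper's constraint language.

```python
# Sketch: assign the highest level demanded by any matching constraint.
LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}   # Unclassified .. Top Secret

def classify(record, rules):
    level = "U"
    for condition, lvl in rules:
        if condition(record) and LEVELS[lvl] > LEVELS[level]:
            level = lvl
    return level

rules = [
    (lambda r: r["field"] == "mission", "S"),               # content-based
    (lambda r: r.get("joined_with") == "location", "TS"),   # context-based
]
print(classify({"field": "mission", "joined_with": "location"}, rules))  # TS
```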

7.
This article describes a new method for building a natural language understanding (NLU) system in which the system's rules are learnt automatically from training data. The method has been applied to the design of a speech understanding (SU) system. Designers of such systems rely increasingly on robust matchers to perform the task of extracting meaning from one or several word-sequence hypotheses generated by a speech recognizer. We describe a new data structure, the semantic classification tree (SCT), that learns semantic rules from training data and can be a building block for robust matchers in NLU tasks. By reducing the need for hand-coding and debugging a large number of rules, this approach facilitates rapid construction of an NLU system. In the case of an SU system, the rules learned by an SCT are highly resistant to errors by the speaker or by the speech recognizer because they depend on a small number of words in each utterance. Our work shows that semantic rules can be learned automatically from training data, yielding successful NLU for a realistic application.
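A hand-built miniature in the spirit of a semantic classification tree: internal nodes test for the presence of a word, leaves carry semantic labels. In the paper such trees are grown from training data; this tree is invented for illustration.

```python
# Sketch: classify an utterance by a small word-presence tree.
TREE = ("flight",                         # word tested at the root
        ("from",                          # present: test the next word
         {"intent": "book_flight"},       # "flight" and "from" present
         {"intent": "flight_info"}),      # "flight" present, "from" absent
        {"intent": "other"})              # "flight" absent

def classify(tree, words):
    if isinstance(tree, dict):            # leaf: semantic label
        return tree
    word, yes, no = tree
    return classify(yes if word in words else no, words)

print(classify(TREE, "i want a flight from boston to denver".split()))
# -> {'intent': 'book_flight'}
```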

8.
To study the problems of modifiable software, the Software Technology project has investigated approaches and methodologies that could improve modifiability. To test these approaches, we built and used tools based on data abstraction: a design and programming language and a module interconnection language. The incorporation of the module interconnection language into design altered the traditional model of system building. Introducing novices to our approach led to the formalization of new models of program design, development, and evaluation.

9.
10.
Discovering Dispatching Rules Using Data Mining (cited by 1: 0 self-citations, 1 other)
This paper introduces a novel methodology for generating scheduling rules using a data-driven approach. We show how to use data mining to discover previously unknown dispatching rules by applying learning algorithms directly to production data. The approach involves preprocessing historic scheduling data into an appropriate data file, discovering key scheduling concepts, and representing the data mining results in a way that enables their use for job scheduling. We also consider how this new approach can yield unexpected knowledge and insights that would not be obtainable if an explicit model of the system or the basic scheduling rules had to be specified beforehand. All of our results are illustrated via numerical examples and experiments on simulated data.
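A sketch of the data-driven step using an off-the-shelf decision tree on invented dispatching records; the features, labels, and data are hypothetical, and the paper's methodology additionally covers preprocessing and representing the mined rules for scheduling use.

```python
# Sketch: learn a dispatching rule directly from historic decisions.
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per historical dispatching decision:
# [processing_time, due_date_slack, queue_length]
X = [[5, 2, 3], [1, 9, 4], [7, 1, 2], [2, 8, 5], [6, 3, 1], [3, 7, 6]]
# Label: which job-selection rule the (good) historical decision matched.
y = ["EDD", "SPT", "EDD", "SPT", "EDD", "SPT"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["proc_time", "due_slack", "queue_len"]))
# The printed tree reads as a dispatching rule, e.g.
# "if due_slack <= 5.0 then EDD else SPT".
```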

11.
In this paper the author uses Alloy as a modeling language to model the elements that form a state machine and the rules that govern how they can be connected. In particular, the model proposed in this paper focuses on the static aspects of state machines. To analyze and detect design errors in the model, the Alloy Analyzer was used; with this tool, design errors can be detected very quickly, and instances of the model can be generated without writing a line of code. The paper presents two models based on the formal approach: a graphical model and a textual model. The graphical model provides an overview of the system, while the textual model establishes further constraints on the graphical model.

12.
Software architecture contains, in addition to its structural part, interaction patterns that can be regarded as part of the architectural solution of the system. The interaction patterns define architecturally significant behavior of the software system. In this paper we propose a visual modeling language, behavioral profiles, for specifying architecturally significant behavioral rules for an application. The language is built on the Unified Modeling Language (UML), a visual language widely used in software development. We show how behavioral profiles can support software designers in creating behavioral models that conform to predefined rules and in ensuring that an application behaves correctly with respect to the rules given in the profiles. A tool called Bebop was built to support software engineers in behavioral-profile-based design and analysis of program behavior. To evaluate the approach and the tool in different application domains, we apply them in three case studies. The applications range from small to quite large software systems, and from academic to industrial ones. The examples demonstrate how the approach can be used in practice at different steps of a software engineering process; they cover specializing an application framework and monitoring program execution at run time. In addition, they show how behavioral profiles can support guided program comprehension and validate program execution by analyzing execution traces. Copyright © 2010 John Wiley & Sons, Ltd.
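As one concrete flavor of the trace analysis mentioned above, here is a toy sketch of validating an execution trace against an ordering rule; the rule format and event names are invented and unrelated to Bebop's actual notation.

```python
# Sketch: check the rule "every open() is eventually followed by close()"
# against a recorded execution trace.

def satisfies(trace, opener, closer):
    pending = 0                        # opens not yet matched by a close
    for event in trace:
        if event == opener:
            pending += 1
        elif event == closer and pending > 0:
            pending -= 1
    return pending == 0

print(satisfies(["open", "read", "close"], "open", "close"))  # True
print(satisfies(["open", "read"], "open", "close"))           # False
```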

13.
Sequential Association Rule Mining with Time Lags (cited by 5: 0 self-citations, 5 other)
This paper presents MOWCATL, an efficient method for mining frequent association rules from multiple sequential data sets. Our goal is to find patterns in one or more sequences that precede the occurrence of patterns in other sequences. Recent work has highlighted the importance of using constraints to focus the mining process on the association rules relevant to the user. To refine the data mining process, this approach introduces separate antecedent and consequent inclusion constraints, in addition to the traditional frequency and support constraints of sequential data mining. Moreover, separate antecedent and consequent maximum window widths are used to specify the antecedent and consequent patterns, which are separated by either a maximal-width or a fixed-width time lag. Multiple time-series drought risk management data sets are used to show that our approach can be effectively employed in real-life problems. The approach is compared to existing methods to show how they complement each other in discovering associations in the drought risk management domain. The experimental results validate the superior performance of our method for efficiently finding relationships between global climatic episodes and local drought conditions. Both maximal and fixed-width time lags prove useful for finding interesting associations.
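A minimal sketch of the lag-constrained counting at the heart of such mining: how often an antecedent episode in one series is followed by a consequent episode in another within a maximal time lag. The data and window width are invented.

```python
# Sketch: support of "antecedent precedes consequent within max_lag steps".

def lagged_support(antecedent_times, consequent_times, max_lag):
    hits = 0
    for t in antecedent_times:
        if any(t < u <= t + max_lag for u in consequent_times):
            hits += 1
    return hits / len(antecedent_times)

# Months in which a climatic episode / a local drought was observed:
el_nino_months = [3, 15, 40]
drought_months = [5, 18, 29, 44]
print(lagged_support(el_nino_months, drought_months, max_lag=4))  # 1.0
```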

14.
We have previously proposed SecureUML, an expressive UML-based language for constructing security-design models: models that combine design specifications for distributed systems with specifications of their security policies. Here, we show how to automate the analysis of such models in a semantically precise and meaningful way. In our approach, models are formalized together with scenarios that represent possible run-time instances. Queries about properties of the modeled security policy are expressed as formulas in UML's Object Constraint Language. The policy may include both declarative aspects, i.e., static access-control information such as the assignment of users and permissions to roles, and programmatic aspects, which depend on dynamic information, namely the satisfaction of authorization constraints in a given scenario. We show how such properties can be evaluated, completely automatically, in the context of the metamodel of the security-design language. We demonstrate, through examples, that this approach can be used to formalize and check non-trivial security properties. The approach has been implemented in the SecureMOVA tool, and all of the examples presented have been checked using this tool.
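A drastically simplified sketch of evaluating one declarative query on a scenario (which users hold a given permission through their roles); the data model is invented and far simpler than SecureUML/OCL.

```python
# Sketch: evaluate an access-control query over a run-time scenario.
assignments = {"alice": ["admin"], "bob": ["clerk"]}          # user -> roles
role_perms = {"admin": ["delete_order"], "clerk": ["read_order"]}

def users_with(perm):
    return [u for u, roles in assignments.items()
            if any(perm in role_perms[r] for r in roles)]

print(users_with("delete_order"))  # ['alice']
```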

15.
Constraint-based testing (CBT) is the process of generating test cases against a testing objective by using constraint solving techniques. When programs contain dynamic memory allocation and loops, constraint reasoning becomes challenging, as new variables and new constraints must be created during the test data generation process. In this paper, we address this problem by proposing a new constraint model of C programs based on operators that model dynamic memory management. These operators apply powerful deduction rules to abstract states of the memory, enhancing the constraint reasoning process. This makes it possible to automatically generate test data that respect complex coverage objectives. We illustrate our approach on a well-known difficult example program that contains dynamic memory allocation/deallocation, structures, and loops. We describe our implementation and provide preliminary experimental results on this example that show the deductive power of the approach.
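A toy sketch of an abstract memory model whose malloc/free/deref operators let infeasible paths (double free, use after free) be detected and pruned during reasoning; the class and its rules are illustrative, not the paper's operators.

```python
# Sketch: abstract-heap operators that flag infeasible memory paths.

class AbstractHeap:
    def __init__(self):
        self.next_loc = 0
        self.alive = set()            # abstract locations still allocated

    def malloc(self):
        loc = self.next_loc           # fresh abstract location
        self.next_loc += 1
        self.alive.add(loc)
        return loc

    def free(self, loc):
        if loc not in self.alive:
            raise ValueError("double free: path infeasible")
        self.alive.remove(loc)

    def deref(self, loc):
        if loc not in self.alive:
            raise ValueError("use after free: path infeasible")
        return loc

h = AbstractHeap()
p = h.malloc()
h.free(p)
try:
    h.deref(p)                        # this path would be pruned
except ValueError as e:
    print(e)
```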

16.
A multi-objective GRASP for partial classification (cited by 4: 1 self-citation, 3 other)
Metaheuristic algorithms have been used successfully in a number of data mining contexts, and specifically in the production of classification rules. Classification rules describe a class of interest or a subset of this class, and as such may also be used as an aid in prediction. The production and selection of classification rules for a particular class of the database is often referred to as partial classification. Since partial classification rules are often evaluated according to a number of conflicting objectives, generating such rules is a task well suited to a multi-objective (MO) metaheuristic approach. In this paper we discuss how to adapt well-known MO algorithms to the task of partial classification. Additionally, we introduce a new MO algorithm for this task based on a greedy randomized adaptive search procedure (GRASP). GRASP has been applied to a number of problems in combinatorial optimization, but it has very seldom been used in an MO setting, and then generally only through repeated optimization of single-objective problems, using either linear combinations of the objectives or additional constraints. The approach presented takes advantage of specific characteristics of the data mining problem being solved, allowing the very effective construction of a set of solutions that form the starting point for the local search phase of the GRASP. The resulting algorithm is guided solely by the concepts of dominance and Pareto-optimality. We present experimental results for our partial classification GRASP and other MO metaheuristics. These show that such algorithms are generally well suited to this data mining task and, furthermore, that the GRASP brings additional efficiency to the search for partial classification rules.
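A skeleton sketch of a dominance-guided multi-objective GRASP on an invented partial-classification toy problem; the construction, evaluation, and local moves below are placeholders for the paper's operators.

```python
# Sketch: multi-objective GRASP guided only by Pareto dominance.
import random

def dominates(a, b):
    """a, b: objective vectors to be maximized."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def update_archive(archive, sol, objs):
    """Keep only non-dominated (solution, objectives) pairs."""
    if any(dominates(o, objs) or o == objs for _, o in archive):
        return archive
    return [(s, o) for s, o in archive if not dominates(objs, o)] + [(sol, objs)]

CONDITIONS = ["age>30", "income=high", "owns_car", "urban"]

def evaluate(rule):
    # Stand-in objectives; a real evaluation would scan the database.
    coverage = max(0.0, 1.0 - 0.2 * len(rule))
    confidence = min(1.0, 0.3 + 0.2 * len(rule))
    return (coverage, confidence)

def construct(alpha=0.3):
    # Greedy randomized construction: keep adding random conditions.
    rule, cands = [], CONDITIONS[:]
    while cands and random.random() > alpha:
        rule.append(cands.pop(random.randrange(len(cands))))
    return rule

def local_moves(rule):
    drops = [[c for c in rule if c != d] for d in rule]
    adds = [rule + [c] for c in CONDITIONS if c not in rule]
    return drops + adds or [rule]

archive = []
for _ in range(200):                       # GRASP main loop
    sol = construct()
    for _ in range(20):                    # local search phase
        cand = random.choice(local_moves(sol))
        if dominates(evaluate(cand), evaluate(sol)):
            sol = cand
    archive = update_archive(archive, sol, evaluate(sol))

print(archive)                             # approximation of the Pareto front
```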

17.
Transformation systems are particularly well suited to implementing modular rules that transform one language feature of the source language into a single feature, or a composition of features, of the target language. In practice, however, transformation rules must be written that take one language feature and transform it into several language features at various locations in the output program. Implementing these so-called local-to-global transformations with rewrite rules is very complex and tightly coupled, which imposes severe constraints on maintainability and evolvability. We present the four main coupling problems of current implementations and indicate how our extension of the rewrite-rule system eliminates or reduces them. Furthermore, we show how complex invasive compositions can be handled by abstract, reusable algorithms and mechanisms, turning the implementation of local-to-global transformations into a semi-automatic process.
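A sketch of one way to decouple a local-to-global transformation: a rule returns its local rewrite plus "remote" contributions, and the engine collects and splices the remote parts at their target locations. The rule, tree representation, and engine are invented for illustration.

```python
# Sketch: a rule emits a local rewrite plus remote contributions.

def rule_local_var_to_field(node):
    local = {"kind": "field_ref", "name": node["name"]}
    remote = [("class_body", {"kind": "field_decl", "name": node["name"]})]
    return local, remote

def apply_rules(tree, rules):
    collected = []                     # remote contributions from all rules
    def rewrite(node):
        for rule in rules:
            if node.get("kind") == "local_var":
                local, remote = rule(node)
                collected.extend(remote)
                return local
        return node
    tree["body"] = [rewrite(n) for n in tree["body"]]
    for target, decl in collected:     # global splice step
        tree[target].append(decl)
    return tree

tree = {"body": [{"kind": "local_var", "name": "x"}], "class_body": []}
print(apply_rules(tree, [rule_local_var_to_field]))
# body holds the local rewrite; class_body received the remote declaration.
```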

18.
Uwe Kastens, William Waite. Software, 2017, 47(11): 1597-1631
Classical strategies for matching identifier uses with declarations cannot handle the complexities of modern languages: arbitrarily qualified superclass names, cyclic dependence among lookup operations, and contextual access constraints. We have developed a language-independent algorithm and supporting data structure that overcome these problems. A well-defined interface allows introduction of arbitrary code to enforce language-specific constraints within the basic lookup operations. This paper explains the limitations of the classical strategies, presents the concepts on which our approach is based, and showcases an implementation based on attribute grammars. We explore the major issues through a series of examples and show how one can deal with those issues in a general framework. Many of the issues are specific to a particular language, and in those cases, we explain the solutions that our general interface supports. Although attribute grammars simplify the task of incorporating the model into a compiler, the model itself is completely independent of attribute grammars. We validated our model by using an implementation to process programs in several representative languages. In particular, we mechanically compared the results produced by that implementation with those produced by the Java SE 8 compiler on complete Java programs that are in general use. Performance data obtained during this processing show that our implementation is efficient. Copyright © 2017 John Wiley & Sons, Ltd.
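A minimal sketch of the general shape of such a lookup: nested scopes with a language-specific accessibility filter injected into the basic operation. The classes and the sample visibility rule are illustrative, not the paper's interface.

```python
# Sketch: scope-chain lookup with an injected language-specific filter.

class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}            # identifier -> definition record

    def define(self, name, info):
        self.bindings[name] = info

    def lookup(self, name, accessible=lambda info: True):
        """Walk the scope chain; 'accessible' is the language-specific
        constraint (e.g. visibility) plugged into the basic operation."""
        scope = self
        while scope is not None:
            info = scope.bindings.get(name)
            if info is not None and accessible(info):
                return info
            scope = scope.parent
        return None

globals_ = Scope()
globals_.define("x", {"kind": "var", "visibility": "private"})
inner = Scope(parent=globals_)
# A Java-like rule: only public definitions are visible from here.
print(inner.lookup("x", accessible=lambda i: i["visibility"] == "public"))
# -> None: the private 'x' is filtered out by the injected constraint.
```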

19.
Machine learning is traditionally formalized and investigated as the study of learning concepts and decision functions from labeled examples, requiring a representation that encodes information about the domain of the decision function to be learned. We are interested in providing a way for a human teacher to interact with an automated learner using natural instructions, thus allowing the teacher to communicate the relevant domain expertise to the learner without necessarily knowing anything about the internal representations used in the learning process. In this paper we suggest viewing the process of learning a decision function as a natural language lesson interpretation problem, as opposed to learning from labeled examples. This view of machine learning is motivated by human learning processes, in which the learner is given a lesson describing the target concept directly, along with a few instances exemplifying it. We introduce a learning algorithm for the lesson interpretation problem that receives feedback from its performance on the final task while jointly learning (1) how to interpret the lesson and (2) how to use this interpretation to do well on the final task. This departs from traditional machine learning by focusing on supplying the learner only with information that can be provided by a task expert. We evaluate our approach by applying it to the rules of the solitaire card game. We show that our learning approach can eventually use natural language instructions to learn the target concept and play the game legally. Furthermore, we show that the learned semantic interpreter also generalizes to previously unseen instructions.

20.