Found 20 similar records; search time: 15 ms
1.
Integrity constraints play a key role in the specification and development of software systems, since they state conditions that must always be satisfied by the system at runtime. Software systems must therefore include some kind of integrity-checking component that ensures all constraints still hold after the execution of any operation that modifies the system state. Integrity checking must be as efficient as possible so as not to seriously slow down system performance at runtime. To this end, this paper proposes a set of techniques to facilitate efficient integrity checking of UML-based software specifications, usually complemented with a set of integrity constraints defined in the Object Constraint Language (OCL) to express all rules that cannot be defined graphically. In particular, our techniques are able to determine, at design time, when and how each constraint must be checked at runtime to avoid irrelevant verifications. We refer to these techniques as incremental because they minimize the subset of the system state that needs to be checked after each change, by assuming that the system was initially in a consistent state and reevaluating only the elements that may have been affected by that change. We also show how the techniques can be integrated in a model-driven development framework to generate a final implementation that automatically checks all constraints in an incremental way.
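The incremental idea described in this abstract can be illustrated with a minimal sketch. This is not the paper's actual technique (which works on UML/OCL models at design time); it is a hypothetical runtime analogue in which each constraint is registered with the element types it reads, so a change only triggers the constraints that could be affected.

```python
# Illustrative sketch (not the paper's implementation): incremental
# constraint checking that re-evaluates only constraints affected by a change.

class Checker:
    def __init__(self):
        # constraint name -> (element types it reads, predicate over the state)
        self.constraints = {}

    def register(self, name, touched_types, predicate):
        self.constraints[name] = (touched_types, predicate)

    def check_after_change(self, state, changed_type):
        """Assume the state was consistent before the change; recheck only
        constraints that read the changed element type."""
        violated = []
        for name, (types, pred) in self.constraints.items():
            if changed_type in types and not pred(state):
                violated.append(name)
        return violated

checker = Checker()
# Hypothetical constraint: every order line must reference an existing product.
checker.register(
    "line_has_product",
    {"OrderLine", "Product"},
    lambda s: all(l["product"] in s["products"] for l in s["lines"]),
)

state = {"products": {"p1"}, "lines": [{"product": "p1"}]}
print(checker.check_after_change(state, "Customer"))   # change is irrelevant -> []
state["lines"].append({"product": "p2"})
print(checker.check_after_change(state, "OrderLine"))  # ['line_has_product']
```

A change to an unrelated element type (here, a hypothetical `Customer`) skips the constraint entirely, which is the point of the incremental strategy.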
2.
Francesco Di Tria Ezio Lefons Filippo Tangorra 《Information and Software Technology》2012,54(4):360-379
Context
Data warehouse conceptual design is based on the metaphor of the cube, which can be derived from either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to user needs, but the schema may not be supported by the data actually available. The second, on the contrary, ensures perfect traceability and consistency with the data sources (indeed, it guarantees the presence of the data to be used in analytical processing), but it does not protect against missing business user needs. To address this issue, the need has emerged in recent years to define hybrid methodologies for conceptual design.
Objective
The objective of this paper is to use a hybrid methodology based on different multidimensional models, in order to gather the advantages of each of them.
Method
The proposed methodology integrates the requirement-driven strategy with the data-driven one, in that order, possibly performing alterations of functional dependencies on UML multidimensional schemas reconciled with the data sources.
Results
As a case study, we illustrate how our methodology can be applied to the university environment. Furthermore, we quantitatively evaluate the benefits of this methodology by comparing it with some popular and conventional methodologies.
Conclusion
In conclusion, we highlight how the hybrid methodology improves conceptual schema quality. Finally, we outline our ongoing work on introducing automatic design techniques into the methodology, based on logic programming.
3.
4.
With the increase of XML-based applications, XML schema design has become an important task. One approach is to consider conceptual schemas as a basis for generating XML documents compliant with the consensual information of specific domains. However, the conversion of conceptual schemas to XML schemas is not a straightforward process, and poor design decisions can lead to poor query processing on the generated XML documents. This paper presents a conversion approach that considers the data and query workload estimated for an XML application in order to generate an XML schema from a conceptual schema. This load information is used to produce XML schemas that respond well to the main queries of the application. We evaluate our approach through a case study carried out on a native XML database. The experimental results demonstrate that the XML schemas generated by our methodology yield better query performance than related approaches.
Ronaldo dos Santos Mello
5.
《Data & Knowledge Engineering》1996,20(1):39-85
Flat graphical, conceptual modeling techniques are widely accepted as visually effective ways in which to specify and communicate the conceptual data requirements of an information system. Conceptual schema diagrams provide modelers with a picture of the salient structures underlying the modeled universe of discourse, in a form that can readily be understood by and communicated to users, programmers and managers. When the complexity and size of applications increase, however, the success of these techniques in terms of comprehensibility and communicability deteriorates rapidly.

This paper proposes a method to offset this deterioration by adding abstraction layers to flat conceptual schemas. We present an algorithm to recursively derive higher levels of abstraction from a given (flat) conceptual schema. The driving force of this algorithm is a hierarchy of conceptual importance among the elements of the universe of discourse.
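One way to picture an importance-driven abstraction step is the following sketch. It is an assumption-laden illustration, not the paper's algorithm: a flat schema is modeled as a graph with a hypothetical importance score per element, and each abstraction level keeps the most important elements while absorbing the rest into their most important surviving neighbour.

```python
# Illustrative sketch (assumed scoring and absorption rule, not the paper's
# algorithm): one abstraction step driven by conceptual importance.

def abstract_once(nodes, edges, importance, keep):
    """nodes: set of element names; edges: set of frozenset pairs;
    keep: number of top-importance elements surviving at the next level."""
    survivors = set(sorted(nodes, key=importance.get, reverse=True)[:keep])
    home = {}  # absorbed element -> surviving neighbour that absorbs it
    for n in nodes - survivors:
        nbrs = [m for e in edges if n in e for m in e if m != n]
        cands = [m for m in nbrs if m in survivors] or list(survivors)
        home[n] = max(cands, key=importance.get)
    # lift edges so they connect surviving elements only
    lifted = {frozenset({home.get(a, a), home.get(b, b)})
              for a, b in (tuple(e) for e in edges)}
    return survivors, {e for e in lifted if len(e) == 2}

nodes = {"Customer", "Order", "OrderLine", "Product", "Warehouse"}
edges = {frozenset(p) for p in [("Customer", "Order"), ("Order", "OrderLine"),
                                ("OrderLine", "Product"), ("Product", "Warehouse")]}
imp = {"Customer": 5, "Order": 4, "Product": 3, "OrderLine": 1, "Warehouse": 1}
level1, edges1 = abstract_once(nodes, edges, imp, keep=3)
print(sorted(level1))  # ['Customer', 'Order', 'Product']
```

Applying the step recursively to its own output yields ever smaller, higher-level views of the schema, which is the layering idea the abstract describes.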
6.
The Conditional Observer and Controller Logic (COCOLOG) is a logical system for the feedback control of finite input-state-output systems wherein the individual first-order logical theories have the properties of consistency, completeness and decidability. The efficiency of automatic theorem proving (ATP) is a crucial issue in the implementation of COCOLOG control systems, and in this paper we present a so-called function evaluation (FE) based resolution-refutation ATP methodology for COCOLOG. FE-resolution ATP replaces the axioms specifying the dynamics of a finite input-state-output machine by a set of defined function relations. The resulting procedure extends to predicates and hence permits the definition of constant and variable FE-resolution inference. It is shown that FE-resolution ATP is complete in the sense that a set of clauses which is unsatisfiable in all models with the given interpretation of the functions will yield the empty clause under resolution, paramodulation and FE-resolution.
This revised version was published online in June 2006 with corrections to the Cover Date.
7.
8.
This paper describes automated reasoning in a Prolog-based Euclidean geometry theorem prover. It brings into focus general topics in automated reasoning and the ability of Prolog to cope with them.
9.
Several safety-related standards exist for developing and certifying safety-critical systems. System safety assessments are common practice, and system certification according to a standard requires submitting relevant system safety information to the appropriate authorities. The RTCA DO-178B standard is a software quality assurance, safety-related standard for the development of the software aspects of aerospace systems. This research introduces an approach to improve communication and collaboration among safety engineers, software engineers, and certification authorities in the context of RTCA DO-178B. This is achieved by utilizing a Unified Modeling Language (UML) profile that allows software engineers to model safety-related concepts and properties in UML, the de facto software modeling standard. A conceptual meta-model is defined based on RTCA DO-178B, and then a corresponding UML profile, which we call SafeUML, is designed to enable its precise modeling. We show how SafeUML improves communication by, for example, allowing monitoring of the implementation of safety requirements during the development process, and supporting system certification per RTCA DO-178B. This is enabled through automatic generation of safety- and certification-related information from UML models. We validate this approach through a case study on developing an aircraft's navigation controller subsystem.
10.
Michael Gelfond 《Annals of Mathematics and Artificial Intelligence》1994,12(1-2):89-116
The purpose of this paper is to expand the syntax and semantics of logic programs and disjunctive databases to allow for the correct representation of incomplete information in the presence of multiple extensions. The language of logic programs with classical negation, epistemic disjunction, and negation by failure is further expanded with new modal operators K and M: for a set of rules T and a formula F, K F stands for "F is known to be true by a reasoner with the set of premises T", and M F means "F may be believed to be true by the same reasoner". Sets of rules in the extended language are called epistemic specifications. We define the semantics of epistemic specifications (which expands the earlier semantics of disjunctive databases) and demonstrate their applicability to the formalization of various forms of commonsense reasoning. In particular, we suggest a new formalization of the closed world assumption which seems to correspond better to the assumption's intuitive meaning.
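The intuition behind the K and M operators can be sketched very compactly. This is our simplified reading, not Gelfond's formal semantics: given a collection of belief sets ("extensions") a reasoner considers possible, K F holds when F is in every belief set, and M F when F is in at least one.

```python
# Illustrative sketch (a simplified reading, not the paper's formal semantics):
# K and M evaluated over a collection of belief sets.

def K(formula, belief_sets):
    """F is known: true in every belief set."""
    return all(formula in s for s in belief_sets)

def M(formula, belief_sets):
    """F may be believed: true in at least one belief set."""
    return any(formula in s for s in belief_sets)

# Hypothetical premises yielding two extensions (e.g. via an epistemic disjunction).
extensions = [{"bird", "flies"}, {"bird", "wounded"}]
print(K("bird", extensions))   # True: holds in all extensions
print(M("flies", extensions))  # True: holds in some extension
print(K("flies", extensions))  # False: not known
```

The interesting cases are formulas like "flies" above, which are possible (M) but not known (K); handling them uniformly across multiple extensions is exactly what the paper's epistemic specifications are designed for.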
11.
Dehak SM Bloch I Maître H 《IEEE transactions on pattern analysis and machine intelligence》2005,27(9):1473-1484
This paper describes a probabilistic method of inferring the position of a point with respect to a reference point, knowing their relative spatial position to a third point. We address this problem in the case of incomplete information, where only the angular spatial relationships are known. The use of probabilistic representations allows us to model prior knowledge. We derive exact formulae expressing the conditional probability of the position given the two known angles in typical cases: uniform or Gaussian random prior distributions within rectangular or circular regions. This result is illustrated with two different simulations: the first is devoted to the localization of a mobile phone using only angular relationships; the second, to geopositioning within a city. The latter example uses angular relationships and some additional knowledge about the position.
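Whereas the paper derives exact closed-form conditional probabilities, the same posterior can be approximated numerically. The sketch below is an assumption-laden illustration (hypothetical landmark position, observed bearing, and tolerance): a uniform prior over a rectangle, conditioned on an angular observation by rejection sampling.

```python
# Illustrative sketch (not the paper's closed-form derivation): approximate the
# conditional distribution of a position given an angular observation, with a
# uniform prior over a rectangle, using simple rejection sampling.
import math
import random

random.seed(0)
landmark = (10.0, 0.0)              # hypothetical third, known point
observed, tol = -math.pi / 4, 0.05  # measured bearing to the landmark +/- tolerance

def sample_posterior(attempts):
    """Keep uniform-prior samples whose bearing to the landmark matches the observation."""
    accepted = []
    for _ in range(attempts):
        x, y = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)  # uniform prior
        bearing = math.atan2(landmark[1] - y, landmark[0] - x)
        if abs(bearing - observed) < tol:
            accepted.append((x, y))
    return accepted

pts = sample_posterior(100_000)
# Accepted positions concentrate along the ray consistent with the observed
# angle, here roughly the line y = 10 - x inside the square.
```

A production version would wrap angle differences into (-pi, pi] before comparing; the closed-form results in the paper avoid sampling error altogether.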
12.
13.
14.
Gian Piero Zarri 《Expert systems with applications》2013,40(8):2872-2888
After recalling some well-known shortcomings of the Semantic Web approach to the creation of (application-oriented) systems of "rules" (e.g., limited expressiveness, adoption of an Open World Assumption (OWA) paradigm, and the absence of variables in the original definition of OWL), this paper examines the technical solutions successfully used for implementing advanced reasoning systems according to the NKRL methodology. NKRL (Narrative Knowledge Representation Language) is a conceptual meta-model and a computer science environment expressly created to deal, in an 'intelligent' and complete way, with complex and content-rich non-fictional 'narrative' data sources. The latter include corporate memory documents, news stories, normative and legal texts, medical records, surveillance videos, actuality photos for newspapers and magazines, etc. In this context, we first expound the need for distinguishing between "plain/static" and "structured/dynamic" knowledge, and for introducing appropriate (and different) knowledge representation structures for these two types of knowledge. In a structured/dynamic context, we then show how the introduction of "functional roles", associated with the possibility of making use of n-ary structures, allows us to build highly 'expressive' rules whose "atoms" can directly represent complex situations, actions, etc., without being restricted to the use of binary clauses. In an NKRL context, "functional roles" are primitive symbols interpreted as "relations", like "subject", "object", "source", and "beneficiary", that link a semantic predicate with its arguments within an n-ary conceptual formula. Functional roles thus contrast with the "semantic roles" that are equated to ordinary concepts like "student", to be inserted into the "non-sortal" (no direct instances) branch of a traditional ontology.
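The contrast between an n-ary formula with functional roles and a binary-clause encoding can be made concrete with a small sketch. The structure below is hypothetical (it does not use actual NKRL notation): a predicate plus role-to-argument bindings, versus the several reified triples a binary encoding would need for the same event.

```python
# Illustrative sketch (hypothetical structure, not actual NKRL notation):
# one n-ary conceptual formula with functional roles linking the predicate
# to its arguments.

event = {
    "predicate": "MOVE",
    "roles": {
        "subject": "company_1",       # who performs the transfer
        "object": "goods_batch_7",    # what is moved
        "source": "warehouse_paris",  # where it comes from
        "beneficiary": "customer_3",  # who receives it
    },
}

# A binary-clause encoding must reify the event and spread it over triples:
triples = [("evt1", "predicate", "MOVE")] + \
          [("evt1", role, arg) for role, arg in event["roles"].items()]
print(len(triples))  # 5 triples to express one n-ary formula
```

Keeping the roles inside a single formula lets a rule atom match the whole situation at once, which is the expressiveness gain the abstract argues for.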
15.
16.
Automated Prototyping of User Interfaces Based on UML Scenarios
User interface (UI) prototyping and scenario engineering have become popular techniques. Yet the transition from scenarios to formal specifications and the generation of UI code is still ill-defined and essentially a manual task, and the two techniques lack integration in the overall requirements engineering process. In this paper, we suggest an approach to requirements engineering that generates a user interface prototype from scenarios and yields a formal specification of the application. Scenarios are acquired in the form of collaboration diagrams as defined by the Unified Modeling Language (UML) and are enriched with user interface information. These diagrams are automatically transformed into UML Statechart specifications of the UI objects involved. From the set of obtained specifications, a UI prototype is generated that is embedded in a UI builder environment for further refinement. Based on end-user feedback, the collaboration diagrams and the UI prototype may be iteratively refined, and the result of the overall process is a specification consisting of the Statechart diagrams of all the objects involved, together with the generated and refined prototype of the UI. The algorithms underlying this process have been implemented and exercised on a number of examples.

This research was mainly conducted at the University of Montreal, where the first two authors were PhD students and the third author a full-time faculty member. Funding was provided in part by FCAR (Fonds pour la formation des chercheurs et l'aide à la recherche au Québec) and by the SPOOL project organized by CSER (Consortium for Software Engineering Research), which is funded by Bell Canada, NSERC (Natural Sciences and Engineering Research Council of Canada), and NRC (National Research Council Canada).
17.
《Information and Software Technology》2006,48(9):901-914
Database applications tend to become more versatile and broader in scope to keep pace with the expansion of organizations. However, naïve users often struggle to access data using formal query languages. We therefore believe that accessing databases through natural language constructs will become a popular interface in the future. Object-oriented modeling allows the real world to be well represented in a logical form. Since the class diagram in UML is used to model the static relationships of databases, in this paper we study how to extend UML class diagram representations to capture natural language queries with fuzzy semantics. By referring to the conceptual schema through the class diagram representation, we propose a methodology to map natural language constructs into the corresponding class diagram, and employ the Structured Object Model (SOM) methodology to transform natural language queries into SQL statements for query execution. Moreover, our approach can handle queries containing vague terms specified with fuzzy modifiers, like 'good' or 'bad'. With our approach, users obtain not only the query answers but also the corresponding degree of vagueness, which reflects the way people naturally reason about such terms.
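The idea of answering a query with both results and a degree of vagueness can be sketched as follows. The membership function and data are hypothetical, not taken from the paper; the point is only that a fuzzy modifier like 'good' maps each row to a matching degree rather than a crisp yes/no.

```python
# Illustrative sketch (hypothetical membership function, not the paper's):
# evaluating a query with the fuzzy modifier 'good' and returning each
# answer together with its matching degree.

def good(grade, lo=60.0, hi=90.0):
    """Degree to which a 0-100 grade counts as 'good' (simple linear ramp)."""
    if grade <= lo:
        return 0.0
    if grade >= hi:
        return 1.0
    return (grade - lo) / (hi - lo)

students = [("Ann", 95), ("Bob", 75), ("Chen", 55)]
# Conceptually: SELECT name FROM students WHERE grade IS 'good'
answers = [(name, round(good(g), 2)) for name, g in students if good(g) > 0]
print(answers)  # [('Ann', 1.0), ('Bob', 0.5)]
```

A crisp SQL translation would draw an arbitrary cutoff; returning the degree alongside each answer preserves the vagueness the user expressed.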
18.
Yasin N. Silva Walid G. Aref Per-Ake Larson Spencer S. Pearson Mohamed H. Ali 《The VLDB Journal: The International Journal on Very Large Data Bases》2013,22(3):395-420
Many application scenarios can significantly benefit from the identification and processing of similarities in the data. Even though some work has been done to extend the semantics of some operators, for example join and selection, to be aware of data similarities, there has not been much study on the role and implementation of similarity-aware operations as first-class database operators. Furthermore, very little work has addressed the problem of evaluating and optimizing queries that combine several similarity operations. The focus of this paper is the study of similarity queries that contain one or multiple first-class similarity database operators such as Similarity Selection, Similarity Join, and Similarity Group-by. Particularly, we analyze the implementation techniques of several similarity operators, introduce a consistent and comprehensive conceptual evaluation model for similarity queries, and present a rich set of transformation rules to extend cost-based query optimization to the case of similarity queries.
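Two of the operators named in this abstract are easy to illustrate in miniature. The sketch below is a naive nested-loop illustration under assumed semantics, not the paper's implementation techniques: an epsilon-based similarity join, and a similarity group-by that clusters values around given centers.

```python
# Illustrative sketch (naive semantics, not the paper's implementations):
# an epsilon similarity join and a center-based similarity group-by.

def sim_join(r, s, eps, dist=lambda a, b: abs(a - b)):
    """Pairs (a, b) from r x s whose distance is within eps."""
    return [(a, b) for a in r for b in s if dist(a, b) <= eps]

def sim_group_by(values, centers):
    """Assign each value to its nearest center."""
    groups = {c: [] for c in centers}
    for v in values:
        groups[min(centers, key=lambda c: abs(c - v))].append(v)
    return groups

print(sim_join([1.0, 5.0], [1.2, 9.0], eps=0.5))    # [(1.0, 1.2)]
print(sim_group_by([1, 2, 8, 9], centers=[0, 10]))  # {0: [1, 2], 10: [8, 9]}
```

Real implementations avoid the quadratic nested loop (e.g. via indexing or sorting), and the paper's transformation rules are what let an optimizer reorder such operators cost-effectively.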
19.
Ronald R. Yager 《Information Sciences》2010,180(8):1390-4427
In order to provide for the representation and manipulation of human-sourced soft information, we turn to the fuzzy-set-based theory of approximate reasoning. We describe how approximate reasoning provides a framework for representing and manipulating a wide body of linguistically expressed information. We then suggest a number of extensions of the theory to enhance its representational capacity. One such extension focuses on the ability to model imprecise variables as well as imprecise values for a variable. We consider the representation of possibility-qualified propositions. We look at the issue of deduction in the face of conflict in our knowledge base and suggest an approach compatible with human behavior.
20.
We develop a learning-based automated assume-guarantee (AG) reasoning framework for verifying ω-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm and show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called ∞-regular systems, that combine finite and infinite behaviors. We show that for ∞-regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for ∞-regular and ω-regular systems, using both AG-NC and AG-C as proof rules.