Similar Documents
20 similar documents found.
1.
Test-Driven Development (TDD) is an extreme programming method in which a software system is developed in short iterations. In this paper we present Test-Driven Conceptual Modeling (TDCM), an application of TDD to conceptual modeling, and we show how to develop a conceptual schema with it. In TDCM, a system's conceptual schema is obtained incrementally by performing three kinds of tasks: (1) write a test the system should pass; (2) change the schema to pass the test; and (3) refactor the schema to improve its quality. We also describe how TDCM can be integrated into a broad range of software development methodologies, including the Unified Process, MDD-based approaches, storytest-driven agile methods, and goal- and scenario-oriented requirements engineering methods. We deal with schemas written in UML/OCL, but TDCM could be adapted to the development of schemas in other languages.
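For intuition only, the TDCM cycle above can be mimicked in a few lines: a "test" is a small assertion about the schema under construction, and the schema is changed only to make a failing test pass. This is a minimal sketch with made-up names and a plain dict standing in for a UML/OCL schema, not the authors' tooling.

```python
# Minimal illustration of the TDCM cycle (hypothetical, not the authors' tooling):
# a "test" asserts something the conceptual schema should support, and the schema
# (here a dict of entity types and their attributes) is changed only to make a
# failing test pass, then refactored while keeping all tests green.

schema = {}  # entity type name -> set of attribute names

def test_customer_has_email(s):
    return "Customer" in s and "email" in s.get("Customer", set())

def failing(tests, s):
    return [t.__name__ for t in tests if not t(s)]

tests = [test_customer_has_email]

# 1. Write a test the schema should pass -> it fails on the empty schema.
assert failing(tests, schema) == ["test_customer_has_email"]

# 2. Change the schema just enough to pass the test.
schema["Customer"] = {"email"}
assert failing(tests, schema) == []

# 3. Refactor (rename attributes, factor out common parts, ...) with the tests still green.
```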

2.
Schemaless databases, and document-oriented databases in particular, are preferred to relational ones for storing heterogeneous data with variable schemas and structural forms. However, the absence of a unique schema adds complexity to analytical applications, in which a single analysis often involves large sets of data with different schemas. In this paper we propose an original approach to OLAP on collections stored in document-oriented databases. The basic idea is to stop fighting against schema variety and welcome it as an inherent source of information wealth in schemaless sources. Our approach builds on four stages: schema extraction, schema integration, FD enrichment, and querying; these stages are discussed in detail in the paper. To make users aware of the impact of schema variety, we propose a set of indicators inspired by the definition of attribute density. Finally, we experimentally evaluate our approach in terms of efficiency and effectiveness.
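As a rough illustration of the density-style indicators mentioned above (not the paper's implementation), attribute density can be read as the fraction of documents in a collection that carry a given attribute path; the sketch below computes it for a small list of JSON-like documents.

```python
from collections import Counter

def attribute_density(docs, sep="."):
    """Fraction of documents that contain each (possibly nested) attribute path."""
    counts = Counter()
    for doc in docs:
        seen = set()
        stack = [("", doc)]
        while stack:
            prefix, value = stack.pop()
            if isinstance(value, dict):
                for key, child in value.items():
                    path = f"{prefix}{sep}{key}" if prefix else key
                    seen.add(path)
                    stack.append((path, child))
        counts.update(seen)            # count each path at most once per document
    return {path: n / len(docs) for path, n in counts.items()}

docs = [
    {"name": "a", "address": {"city": "Rome"}},
    {"name": "b", "price": 10},
    {"name": "c", "address": {"zip": "40100"}},
]
print(attribute_density(docs))
# name -> 1.0, address -> 0.67, address.city -> 0.33, price -> 0.33, address.zip -> 0.33
```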

3.
Integration of geographic information has increased in importance because of new possibilities arising from the interconnected world and the increasing availability of geographic information. Ontologies support the creation of conceptual models and help with information integration. In this paper, we propose a way to link the formal representation of semantics (i.e., ontologies) to conceptual schemas describing information stored in databases. The main result is a formal framework that explains a mapping between a spatial ontology and a geographic conceptual schema. The mapping of ontologies to conceptual schemas is made using three different levels of abstraction: formal, domain, and application levels. At the formal level, highly abstract concepts are used to express the schema and the ontologies. At the domain level, the schema is regarded as an instance of a generic data model. At the application level, we focus on the particular case of geographic applications. We also discuss the influence of ontologies in both the traditional and geographic systems development methodologies, with an emphasis on the conceptual design phase.

4.
Matching large schemas: Approaches and evaluation   (total citations: 1; self-citations: 0; citations by others: 1)
Current schema matching approaches still have to improve for large and complex schemas. The large search space increases the likelihood of false matches as well as execution times. Further difficulties for schema matching are posed by the high expressive power and versatility of modern schema languages, in particular user-defined types and classes, component reuse capabilities, and support for distributed schemas and namespaces. To better assist the user in matching complex schemas, we have developed a new generic schema matching tool, COMA++, providing a library of individual matchers and a flexible infrastructure to combine the matchers and refine their results. Different match strategies can be applied, including a new scalable approach to identify context-dependent correspondences between schemas with shared elements and a fragment-based match approach which decomposes a large match task into smaller tasks. We conducted a comprehensive evaluation of the match strategies using large e-business standard schemas. Besides providing helpful insights for future match implementations, the evaluation demonstrated the practicability of our system for matching large schemas.
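The matcher-combination idea can be pictured with a toy sketch, assuming two made-up name-based matchers whose scores are simply averaged; this is not COMA++ itself, only the flavour of composing individual matchers and filtering the combined result.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """String similarity of the full element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_overlap(a, b):
    """Jaccard overlap of underscore-separated name tokens."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def combine(src, tgt, matchers, threshold=0.5):
    """Average the scores of the individual matchers and keep pairs above a threshold."""
    result = []
    for s in src:
        for t in tgt:
            score = sum(m(s, t) for m in matchers) / len(matchers)
            if score >= threshold:
                result.append((s, t, round(score, 2)))
    return result

src = ["order_id", "customer_name", "ship_date"]
tgt = ["order_ref", "cust_name", "delivery_date"]
print(combine(src, tgt, [name_similarity, token_overlap]))
# [('order_id', 'order_ref', 0.52), ('customer_name', 'cust_name', 0.58)]
```

Note that the semantically matching pair ship_date/delivery_date is missed by these purely name-based matchers, which is one reason real matching systems also combine structural, instance-based and reuse-oriented matchers.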

5.
The typical design process for the relational database model develops the conceptual schema and each of the external schemas separately and independently from each other. This paper proposes a new design methodology that constructs the conceptual schema in such a way that overlappings among external schemas are reflected. If the overlappings of external schemas do not produce transitivity at the conceptual level, then with our design method the relations in the external schemas can be realized as a join over independent components. Thus, a one-to-one function can be defined for the mapping from tuples in the external schemas to tuples in the conceptual schema. If transitivity is produced, then we show that no such function is possible, and a new technique is introduced to handle this special case.

6.
Context: It is critical to ensure the quality of a software system in the initial stages of development, and several approaches have been proposed to ensure that a conceptual schema correctly describes the user’s requirements. Objective: The main goal of this paper is to perform automated reasoning on UML schemas containing arbitrary constraints, derived roles, derived attributes and queries, all of which must be specified by OCL expressions. Method: The UML/OCL schema is encoded in a first order logic formalisation, and an existing reasoning procedure is used to check whether the schema satisfies a set of desirable properties. Due to the undecidability of reasoning in highly expressive schemas, such as those considered here, we also provide a set of conditions that, if satisfied by the schema, ensure that all properties can be checked in a finite period of time. Results: This paper extends our previous work on reasoning on UML conceptual schemas with OCL constraints by considering derived attributes and roles that can participate in the definition of other constraints, queries and derivation rules. Queries formalised in OCL can also be validated to check their satisfiability and to detect possible equivalences between them. We also provide a set of conditions that ensure finite reasoning when they are satisfied by the schema under consideration. Conclusion: This approach improves upon previous work by allowing automated reasoning for more expressive UML/OCL conceptual schemas than those considered so far.
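For intuition only (the paper's exact formalisation is not reproduced here), a class with a single-valued attribute and an OCL invariant such as `context Employee inv: self.salary > 0` might be encoded in first-order logic along these lines:

```latex
% Illustrative encoding of class Employee, a single-valued attribute salary,
% and the OCL invariant "context Employee inv: self.salary > 0".
\begin{align*}
&\forall e\,\bigl(\mathit{Employee}(e) \rightarrow \exists s\,\mathit{Salary}(e,s)\bigr)\\
&\forall e\,\forall s_1\,\forall s_2\,\bigl(\mathit{Salary}(e,s_1)\land\mathit{Salary}(e,s_2)\rightarrow s_1=s_2\bigr)\\
&\forall e\,\forall s\,\bigl(\mathit{Employee}(e)\land\mathit{Salary}(e,s)\rightarrow s>0\bigr)
\end{align*}
```

Checking a desirable property such as class liveliness then amounts to asking whether these formulas admit a model containing at least one Employee instance.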

7.
8.
Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state-of-the-art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process and an overall taxonomy of current techniques and tools. We then classify, compare and evaluate the techniques and tools in two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools and a classification of current clone detectors based on this schema, and (2) a taxonomy of editing scenarios that produce different clone types and a qualitative evaluation of current clone detectors based on this taxonomy.
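As a toy illustration of the first two clone types in the taxonomy (not one of the surveyed detectors): Type-1 clones are identical up to layout and comments, while Type-2 clones additionally allow renamed identifiers and changed literals; both can be spotted by comparing normalized text.

```python
import re

def normalize(code, abstract_names=False):
    code = re.sub(r"#.*", "", code)    # drop comments
    code = re.sub(r"\s+", "", code)    # drop layout differences (enough for Type-1)
    if abstract_names:                 # Type-2: abstract away identifiers and literals
        code = re.sub(r"\b[A-Za-z_]\w*\b", "ID", code)
        code = re.sub(r"\b\d+(\.\d+)?\b", "LIT", code)
    return code

a = "total = price * quantity  # compute cost"
b = "total=price*quantity"
c = "sum = cost * amount"

print(normalize(a) == normalize(b))              # True  -> Type-1 clone (layout/comments differ)
print(normalize(a, True) == normalize(c, True))  # True  -> Type-2 clone (identifiers renamed)
print(normalize(a) == normalize(c))              # False -> not textually identical
```

Mapping every identifier to the same placeholder is a simplification of "consistent renaming", but it conveys why Type-3 (added/removed statements) and Type-4 (semantic) clones need far more sophisticated techniques.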

9.
In schema integration, schematic discrepancies occur when data in one database correspond to metadata in another. We explicitly declare the context, that is, the meta-information relating to the source, classification, property, etc. of entities, relationships or attribute values in entity–relationship (ER) schemas. We present algorithms to resolve schematic discrepancies by transforming metadata into the attribute values of entity types, keeping the information and constraints of the original schemas. Although focusing on the resolution of schematic discrepancies, our technique works seamlessly with the existing techniques resolving other semantic heterogeneities in schema integration.
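A classic case of such a discrepancy is data in one source appearing as attribute names in another. The sketch below (toy data, not the paper's ER-based algorithms) pivots the metadata encoded in column names back into attribute values so the two sources share one schema.

```python
# Source A stores the currency as data; source B encodes it in the column names.
source_a = [
    {"stock": "ACME", "currency": "USD", "price": 12.0},
    {"stock": "ACME", "currency": "EUR", "price": 11.0},
]
source_b = [
    {"stock": "ACME", "price_usd": 12.0, "price_eur": 11.0},
]

def demote_metadata(rows, prefix="price_"):
    """Turn column names like 'price_usd' (metadata) into attribute values."""
    out = []
    for row in rows:
        for col, value in row.items():
            if col.startswith(prefix):
                out.append({"stock": row["stock"],
                            "currency": col[len(prefix):].upper(),
                            "price": value})
    return out

print(demote_metadata(source_b) == source_a)  # True: both sources now share one schema
```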

10.
In this paper, a temporal meta database for three-dimensional (3D) objects whose properties and relationships are supported by valid time is introduced. Based on our proposed temporal object-oriented conceptual schema model, a conceptual schema of the temporal meta database can be generated from a 3D graphical data source and other particular application requirements. Based on our proposed temporal object relational data model with attribute timestamping, logical schemas of the temporal meta database can be systematically and automatically generated from the conceptual schema. From the temporal meta database, non-temporal and temporal metadata about temporal 3D objects are available to temporal information system users, and convenient access can be performed using database languages such as SQL. Queries over 3D objects using a temporal object relational SQL are demonstrated.

11.
On resolving schematic heterogeneity in multidatabase systems   (total citations: 4; self-citations: 0; citations by others: 4)
The objective of a multidatabase system is to provide a single uniform interface for accessing multiple independent databases managed by multiple independent, and possibly heterogeneous, database systems. One crucial element in the design of a multidatabase system is the design of a data definition language for specifying a schema that represents the integration of the schemas of multiple independent databases. The design of such a language in turn requires a comprehensive classification of the conflicts (i.e., discrepancies) among the schemas of the independent databases and the development of techniques for resolving (i.e., homogenizing) all of the conflicts in the classification. An earlier paper provided a comprehensive classification of schematic conflicts that may arise when integrating multiple independent relational database (RDB) schemas into a single multidatabase (MDB) schema. In this paper, we provide a comprehensive classification of techniques for resolving the schematic conflicts that may arise when integrating multiple RDB schemas, or RDB schemas and object-oriented database (OODB) schemas, or multiple OODB schemas. The classification of conflict resolution techniques includes not only those necessary for resolving the schematic conflicts identified in the earlier paper, but also additional conflicts that arise when OODBs become part of the databases to be integrated. Most of the conflict resolution techniques discussed in the paper have already been incorporated into SQL/M, a multidatabase language implemented in UniSQL/M, a commercially available multidatabase system from UniSQL, Inc., which integrates SQL-based relational database systems and the UniSQL/X unified relational and object-oriented database system.

12.
Flat graphical, conceptual modeling techniques are widely accepted as visually effective ways in which to specify and communicate the conceptual data requirements of an information system. Conceptual schema diagrams provide modelers with a picture of the salient structures underlying the modeled universe of discourse, in a form that can readily be understood by and communicated to users, programmers and managers. When complexity and size of applications increase, however, the success of these techniques in terms of comprehensibility and communicability deteriorates rapidly. This paper proposes a method to offset this deterioration by adding abstraction layers to flat conceptual schemas. We present an algorithm to recursively derive higher levels of abstraction from a given (flat) conceptual schema. The driving force of this algorithm is a hierarchy of conceptual importance among the elements of the universe of discourse.
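One way to picture the derivation of abstraction layers (a simplified sketch, not the paper's algorithm) is to repeatedly fold the least important element into its most important neighbour, recording one layer per step; the importance scores and schema below are invented.

```python
def abstract_layers(importance, neighbours, keep=2):
    """Derive coarser abstraction layers from a flat schema by repeatedly
    folding the least important element into its most important neighbour."""
    groups = {e: {e} for e in importance}          # each element starts as its own group
    layers = [{e: {e} for e in importance}]        # layer 0: the flat schema
    while len(groups) > keep:
        victim = min(groups, key=importance.get)   # least important remaining element
        host = max((n for n in neighbours[victim] if n in groups),
                   key=importance.get, default=None)
        if host is None:
            break
        groups[host] |= groups.pop(victim)         # absorb it into that neighbour
        layers.append({g: set(m) for g, m in groups.items()})
    return layers

importance = {"Order": 9, "Customer": 8, "OrderLine": 3, "Address": 2}
neighbours = {"Order": {"Customer", "OrderLine"},
              "Customer": {"Order", "Address"},
              "OrderLine": {"Order"},
              "Address": {"Customer"}}

for level, layer in enumerate(abstract_layers(importance, neighbours)):
    print(level, layer)
# level 2 keeps only the two "major" concepts, each summarizing its absorbed neighbours
```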

13.
This paper presents the system ADDS, which has been developed to assist the database designer in designing a database schema. A distinction is made between the stage of information structure analysis, in which the information structure of the system is defined according to its users' information needs, and the stage of database schema design, in which the record types of the database and the relationships between them are defined. In the first stage a conceptual schema is obtained, represented as an information structure diagram (ISD), and in the latter stage the ISD is used to derive the database schema in the form of a data structure diagram (DSD). ADDS automatically creates the database schema from a conceptual schema expressed as an ISD of the binary-relationship data model. The resulting schema consists of normalized record types, according to the relational model, along with hierarchical/set relationships between ‘owner’ and ‘member’ record types, as in the CODASYL/Network model. ADDS applies algorithms to convert the conceptual schema into the database schema. It is implemented on a micro-computer under MS-DOS using dBASE III.

14.
Ontology-based conceptual schema reuse   (total citations: 9; self-citations: 0; citations by others: 9)
Building on an in-depth study of conceptual schema abstraction, this paper proposes a reuse framework that exploits domain ontologies and a basic conceptual-schema reuse method. Through interaction with the user, reference components can be generated semi-automatically; taking a basic conceptual schema (the basic reference component) as the core and combining it with general reference components, the conceptual schema of a new application domain can be designed. By introducing notions such as domain structure and ontology, the paper addresses the component identification problem and organically combines various abstraction techniques, improving the efficiency of component generation.

15.
XML has become the standard format for representing structured and semi-structured data on the Web, and several XML schema languages have been proposed to describe the structure and content of XML data. Although XML schemas are very useful for validating XML documents, they are not suited to tasks that require representing semantic knowledge about the data; for such tasks a conceptual schema is preferable. Targeting the conceptual modeling of XML schemas, this paper introduces an extended entity-relationship model and a process for converting schemas defined in an XML schema language into extended entity-relationship schemas.

16.
One of the most important challenges that software engineers (designers, developers) still have to face in their everyday work is the evolution of working database systems. As a step towards the solution of this problem, in this paper we propose MeDEA, which stands for Metamodel-based Database Evolution Architecture. MeDEA is a generic evolution architecture that allows us to maintain the traceability between the different artifacts involved in any database development process. MeDEA is generic in the sense that it is independent of the particular modeling techniques being used. In order to achieve this, a metamodeling approach has been followed for the development of MeDEA. The other basic characteristic of the architecture is the inclusion of a specific component devoted to storing the translation of conceptual schemas to logical ones. This component, which is one of the most noteworthy contributions of our approach, enables any modification (evolution) realized on a conceptual schema to be traced to the corresponding logical schema, without having to regenerate this schema from scratch, and furthermore to be propagated to the physical and extensional levels.

17.
An XML Schema Definition (XSD) is the logical schema of an XML model, but there is no standard format for the conceptual schema of an XML model. Therefore, we propose an XML Tree Model (XTM) as an XML conceptual schema for representing data semantics in a diagram, and also as an XML data model validator for confirming the data semantics required by users. An XTM consists of hierarchical nodes representing all the elements, and the data relationships among elements, within the XSD. A rule-based algorithm and an information capacity with pre- and post-conditions are developed as the methodology for reverse engineering. The proposed algorithm consists of two rules, General Information Transformation and Data Semantic Recovering, to construct an XTM. Users can draw an XTM with data relationships among elements as a result of the reverse engineering.
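A minimal sketch of the reverse-engineering direction (hypothetical XSD, not the paper's two rules): the xs:element hierarchy of an XSD is walked and mapped onto nested nodes that an XTM-like conceptual tree could then annotate with data relationships.

```python
import xml.etree.ElementTree as ET

NS = {"xs": "http://www.w3.org/2001/XMLSchema"}

XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="customer" type="xs:string"/>
        <xs:element name="item" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="sku" type="xs:string"/>
              <xs:element name="qty" type="xs:int"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

def to_node(el):
    """Map an xs:element declaration onto a node of a conceptual tree (XTM-like)."""
    return {
        "name": el.get("name"),
        "repeatable": el.get("maxOccurs") == "unbounded",
        "children": [to_node(child) for child in
                     el.findall("./xs:complexType/xs:sequence/xs:element", NS)],
    }

root = ET.fromstring(XSD)
print(to_node(root.find("./xs:element", NS)))
# a nested dict: order -> [customer, item(repeatable) -> [sku, qty]]
```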

18.
Intuitively, data management and data integration tools should be well suited for exchanging information in a semantically meaningful way. Unfortunately, they suffer from two significant problems: they typically require a common and comprehensive schema design before they can be used to store or share information, and they are difficult to extend because schema evolution is heavyweight and may break backward compatibility. As a result, many large-scale data sharing tasks are more easily facilitated by non-database-oriented tools that have little support for semantics. The goal of the peer data management system (PDMS) is to address this need: we propose the use of a decentralized, easily extensible data management architecture in which any user can contribute new data, schema information, or even mappings between other peers' schemas. PDMSs represent a natural step beyond data integration systems, replacing their single logical schema with an interlinked collection of semantic mappings between peers' individual schemas. This paper considers the problem of schema mediation in a PDMS. Our first contribution is a flexible language for mediating between peer schemas that extends known data integration formalisms to our more complex architecture. We precisely characterize the complexity of query answering for our language. Next, we describe a reformulation algorithm for our language that generalizes both global-as-view and local-as-view query answering algorithms. Then we describe several methods for optimizing the reformulation algorithm and an initial set of experiments studying its performance. Finally, we define and consider several global problems in managing semantic mappings in a PDMS.
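For intuition only, the global-as-view side of reformulation can be pictured as unfolding a query over a mediated relation into queries over the peer relations that define it; the peers, relations and mapping below are invented, and the mediation language in the paper is considerably richer.

```python
# Toy global-as-view unfolding (illustrative only).
peer_data = {
    "peerA.books":  [("1984", "Orwell"), ("Emma", "Austen")],
    "peerB.titles": [("Dune", "Herbert")],
}

# GAV mapping: the mediated relation Book(title, author) is the union of two peer views.
gav = {"Book": ["peerA.books", "peerB.titles"]}

def answer(mediated_relation):
    """Unfold a query on a mediated relation into queries over the defining peer relations."""
    tuples = []
    for peer_relation in gav[mediated_relation]:
        tuples.extend(peer_data[peer_relation])   # in a real PDMS this would be a remote query
    return tuples

print(answer("Book"))   # tuples gathered from both peers
```

Local-as-view mappings go the other way (peers described as views over a mediated schema) and require query rewriting rather than simple unfolding, which is why the paper's algorithm generalizes both.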

19.
Establishing interschema semantic knowledge between corresponding elements in a cooperating OWL-based multi-information server grid environment requires deep knowledge, not only about the structure of the data represented in each server, but also about the commonly occurring differences in the intended semantics of this data. The same information could be represented in various incompatible structures, and more importantly the same structure could be used to represent data with many diverse and incompatible semantics. In a grid environment, interschema semantic knowledge can only be detected if both the structural and semantic properties of the schemas of the cooperating servers are made explicit and formally represented in a way that a computer system can process. Unfortunately, there is very often a lack of such knowledge, and the schemas of the underlying grid information servers (ISs), being semantically weak as a consequence of the limited expressiveness of traditional data models, do not help the acquisition of this knowledge. The solution to overcome this limitation is primarily to upgrade the semantic level of the IS local schemas through a semantic enrichment process, by augmenting the local schemas of grid ISs to semantically enriched schema models, and then to use these models in detecting and representing correspondences between classes belonging to different schemas. In this paper, we investigate the possibility of using OWL-based domain ontologies both for building semantically rich schema models and for expressing interschema knowledge and reasoning about it. We believe that the use of OWL/RDF in this setting has two important advantages. On the one hand, it enables a semantic approach to interschema knowledge specification, by concentrating on expressing conceptual and semantic correspondences between both the conceptual (intensional) definition and the set of instances (extension) of classes represented in different schemas. On the other hand, it is exactly this semantic nature of our approach that allows us to devise reasoning mechanisms for discovering and reusing interschema knowledge when the need arises to compare and combine it.
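The flavour of stating interschema correspondences in OWL can be shown with a few triples, here using the rdflib library and made-up class names for two hypothetical grid servers; it only illustrates the kind of axioms involved, not the paper's enrichment process.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Two hypothetical grid information servers expose their enriched schemas under
# different namespaces; interschema knowledge is stated as OWL axioms over them.
S1 = Namespace("http://serverA.example/schema#")
S2 = Namespace("http://serverB.example/schema#")

g = Graph()
g.bind("s1", S1)
g.bind("s2", S2)

g.add((S1.Client, RDF.type, OWL.Class))
g.add((S2.Customer, RDF.type, OWL.Class))
g.add((S2.PremiumCustomer, RDF.type, OWL.Class))

# Intensional correspondences between classes of the two schemas.
g.add((S1.Client, OWL.equivalentClass, S2.Customer))
g.add((S2.PremiumCustomer, RDFS.subClassOf, S2.Customer))

print(g.serialize(format="turtle"))
```

An OWL reasoner can then derive further correspondences (for example, that every PremiumCustomer instance also corresponds to a Client), which is the kind of reuse of interschema knowledge the paper argues for.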

20.
Due to the increase of XML-based applications, XML schema design has become an important task. One approach is to consider conceptual schemas as a basis for generating XML documents compliant with consensual information of specific domains. However, the conversion of conceptual schemas to XML schemas is not a straightforward process, and inconvenient design decisions can lead to poor query processing on the XML documents generated. This paper presents a conversion approach which considers the data and query workload estimated for XML applications in order to generate an XML schema from a conceptual schema. Load information is used to produce XML schemas which can respond well to the main queries of an XML application. We evaluate our approach through a case study carried out on a native XML database. The experimental results demonstrate that the XML schemas generated by our methodology contribute to better query performance than related approaches.
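A toy heuristic (not the paper's conversion rules) shows how an estimated workload can drive a structural choice in the generated XML schema: relationships traversed by most queries are nested, the rest are kept separate and referenced by key.

```python
def choose_structure(relationship, workload, threshold=0.5):
    """Toy workload-driven design decision: nest the related element if the
    estimated fraction of queries traversing the relationship is high enough."""
    return "nest" if workload.get(relationship, 0.0) >= threshold else "reference"

# Estimated fraction of the application's queries that traverse each relationship.
workload = {("order", "item"): 0.8, ("order", "customer"): 0.2}

for rel in workload:
    print(rel, "->", choose_structure(rel, workload))
# ('order', 'item')     -> nest       (items are almost always read with their order)
# ('order', 'customer') -> reference  (customers are mostly queried on their own)
```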
