20 similar records found.
1.
The current business environment changes rapidly, dictated by user requirements and market opportunities. Organisations are therefore driven to continuously adapt their business processes to new conditions. Thus, management of business process schema evolution, particularly process version control, is in great demand to capture the dynamics of business process schema changes. This paper aims to facilitate version control for business process schema evolution, with an emphasis on version compatibility, co-existence of multiple versions and dynamic version shifts. A multi-level versioning approach is established to specify dependencies between business process schema evolutions, and a novel version preserving graph model is proposed to record business process schema evolutions. A set of business process schema updating operations is devised to support the entire set of process change patterns. By maintaining sufficient and necessary schema and version information, our approach provides comprehensive support for navigating process instance executions of different and changing versions, and for deriving the process schema of a certain version. A prototype has also been implemented as a proof of concept.
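As a rough illustration of the version-preserving idea, the sketch below (an assumption-laden reconstruction, not the paper's actual model) tags each process-graph edge with the version range in which it is valid, so that multiple schema versions coexist in one graph and the schema of any version can be derived by filtering:

```python
# A hedged sketch (invented names, not the paper's model): edges carry the
# version range in which they are valid, so several schema versions coexist
# in one version-preserving graph.
class VersionedEdge:
    def __init__(self, src, dst, added_in, removed_in=None):
        self.src, self.dst = src, dst
        self.added_in, self.removed_in = added_in, removed_in

    def valid_in(self, version):
        return (self.added_in <= version and
                (self.removed_in is None or version < self.removed_in))

edges = [
    VersionedEdge("receive_order", "check_stock", added_in=1),
    VersionedEdge("check_stock", "ship", added_in=1, removed_in=2),
    VersionedEdge("check_stock", "approve", added_in=2),  # added by an update
    VersionedEdge("approve", "ship", added_in=2),
]

def schema_of(version):
    """Derive the process schema of a given version from the single graph."""
    return [(e.src, e.dst) for e in edges if e.valid_in(version)]

print(schema_of(1))  # version 1: check_stock flows directly to ship
print(schema_of(2))  # version 2: an approval step has been inserted
```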
2.
A large number of complexly interrelated parameters are involved in the internal schema level design of database systems. Consequently, a single design model is infeasible. A package of three aids is proposed to assist a designer in the step-by-step design of an internal schema. The aids pertain to splitting a relation, merging relations, and selecting an access strategy for a relation.
3.
4.
One of the most important challenges that software engineers (designers, developers) still have to face in their everyday work is the evolution of working database systems. As a step towards the solution of this problem, in this paper we propose MeDEA, which stands for Metamodel-based Database Evolution Architecture. MeDEA is a generic evolution architecture that allows the traceability between the different artifacts involved in any database development process to be maintained. MeDEA is generic in the sense that it is independent of the particular modeling techniques being used; to achieve this, a metamodeling approach has been followed in its development. The other basic characteristic of the architecture is the inclusion of a specific component devoted to storing the translation of conceptual schemas to logical ones. This component, one of the most noteworthy contributions of our approach, enables any modification (evolution) made to a conceptual schema to be traced to the corresponding logical schema, without having to regenerate that schema from scratch, and furthermore to be propagated to the physical and extensional levels.
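The traceability component can be pictured as a stored mapping from conceptual elements to the logical artifacts derived from them; the sketch below is a loose illustration of that idea (the names, artifact encoding, and rename operation are invented for the example, not taken from MeDEA):

```python
# A loose illustration (names invented, not MeDEA's metamodel): keep the
# translation of each conceptual element to its logical artifacts, so a
# conceptual change is propagated rather than regenerating the logical
# schema from scratch.
translation = {
    "Customer": ["TABLE customer", "PK customer_id"],
    "Customer.name": ["COLUMN customer.name VARCHAR(80)"],
}

def rename_conceptual(old, new):
    """Propagate a conceptual rename along the stored traceability links."""
    for key in [k for k in translation if k.split(".")[0] == old]:
        artifacts = [a.replace(old.lower(), new.lower())
                     for a in translation.pop(key)]
        translation[key.replace(old, new, 1)] = artifacts

rename_conceptual("Customer", "Client")
print(translation)  # logical schema evolved in place, not regenerated
```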
5.
Salah Sadou 《Journal of Systems and Software》2009,82(6):932-946
Large information systems are typically distributed and cater to several client programs with different needs. Traditional approaches to software development and deployment cannot handle situations where (i) the needs of one client application evolve over time, diverging from the needs of others, and (ii) the server application cannot be shut down for maintenance. In this paper, we propose an experimental framework for the unanticipated dynamic evolution of distributed objects that enables us to: (i) extend the behavior of distributed objects during run-time, requiring no shutdown, and (ii) offer different functionalities to different applications simultaneously. In our approach, new client programs can invoke behavioral extensions to server objects that are visible only to them, while legacy applications may continue to use the non-extended versions of the server. Our approach has the advantage of: (i) requiring no changes to the host programming language or to the virtual machine, and (ii) providing a transparent programming model to the developer. In this paper, we describe the problem of unanticipated dynamic evolution of distributed objects, the principles underlying our approach, and our prototype implementations for Java and C#. We conclude by discussing related work, and the extent to which our approach can be used to support industrial-strength unanticipated evolution.
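The per-client visibility of extensions can be sketched in a few lines; the following single-process Python toy (the paper's prototypes target distributed Java and C# objects, so everything here is a simplifying assumption) shows extended behavior for one client while a legacy client keeps the original behavior:

```python
# A single-process toy (the paper targets distributed Java/C# objects):
# behavioral extensions are registered per client and are invisible to
# legacy clients, which keep the non-extended server behavior.
class ServerObject:
    def __init__(self):
        self._extensions = {}  # client_id -> {method name: callable}

    def extend_for(self, client_id, name, func):
        self._extensions.setdefault(client_id, {})[name] = func

    def invoke(self, client_id, name, *args):
        ext = self._extensions.get(client_id, {})
        if name in ext:                       # extension visible to this client
            return ext[name](self, *args)
        return getattr(self, name)(*args)     # legacy behavior for the rest

    def price(self, amount):
        return amount

server = ServerObject()
server.extend_for("new_client", "price", lambda self, amt: amt * 0.9)
print(server.invoke("legacy_client", "price", 100))  # 100
print(server.invoke("new_client", "price", 100))     # 90.0 (extension applied)
```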
6.
7.
Research on a heterogeneous database integration scheme based on the OpenURL protocol and XML Schema
Su Zhifang 《Computer Engineering and Design》2008,29(16)
The OpenURL framework is a reference-linking technology that provides location services in an open linking environment, and an important means of integrating networked resources. Building on this framework and on the strong data-description capability of XML Schema, the proposed scheme constructs a conversion platform through a mapping from the ISO 2709 format to XML Schema documents, achieving the integration of heterogeneous resource data centred on an OPAC bibliographic query system, and presents a new method for handling the dynamic modification of reference links to digital resources. The scheme has been applied in a library's integration of its print-collection data and mirrored data for part of its digital resources.
8.
This paper presents a working decision support system for use in the physical design of a database. Physical database design, a structured decision problem, lends itself to a decision support approach because closed form algorithms are computationally infeasible. The paper describes the physical database design problem, presents an overview of a software system for use in solving this problem, and evaluates the use of the system in solving a sample problem.
9.
Michael Mortensen, Sudipto Ghosh, James M. Bieman 《Information and Software Technology》2008,50(7-8):621-640
Aspect-based refactoring, called aspectualization, involves moving program code that implements cross-cutting concerns into aspects. Such refactoring can improve the maintainability of legacy systems. Long compilation and weave times, and the lack of an appropriate testing methodology, are two challenges to the aspectualization of large legacy systems. We propose an iterative test-driven approach for creating and introducing aspects. The approach uses mock systems that enable aspect developers to quickly experiment with different pointcuts and advice while reducing compile and weave times. It also uses weave analysis, regression testing, and code coverage analysis to test the aspects. We developed several tools for unit and integration testing. We demonstrate the test-driven approach in the context of large industrial C++ systems, and we provide guidelines for mock system creation.
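The role of mock systems can be illustrated with a small sketch; the Python decorator below stands in for AspectJ-style advice (the paper works with C++ weaving, so this is only an analogy) to show how advice can be tried against a mock instead of the full legacy system:

```python
# An analogy in Python (the paper weaves aspects into C++ systems): "advice"
# is applied to a small mock that mimics the legacy interface, so pointcut
# and advice changes can be tried without long compile-and-weave cycles.
import functools

def tracing_advice(func):
    """'Before' advice: runs ahead of every join point it is applied to."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

class MockOrderSystem:
    """Mock system standing in for the full legacy system under test."""
    def place_order(self, item):
        return f"ordered {item}"

# "Weave" the advice into the mock and exercise it in isolation.
MockOrderSystem.place_order = tracing_advice(MockOrderSystem.place_order)
assert MockOrderSystem().place_order("widget") == "ordered widget"
```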
10.
Updating the schema is an important facility for object-oriented databases. However, updates should not result in inconsistencies either in the schema or in the database. We propose a classification of basic schema updates and define a set of parametrized primitives for performing schema updates, which the designer can use to define his or her own update semantics.
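One way to picture a parametrized update primitive is shown below; the primitive name and the instance representation are invented for the sketch, with the parameter supplying the designer's chosen semantics for existing instances:

```python
# An invented sketch of a parametrized update primitive: the primitive fixes
# the structural change, while the `init` parameter carries the designer's
# chosen update semantics for existing instances.
schema = {"Person": {"name": str}}
instances = [{"__class__": "Person", "name": "Ada"}]

def add_attribute(cls, attr, attr_type, init):
    """Add an attribute to a class; `init` decides how existing instances
    of `cls` obtain a value for the new attribute."""
    schema[cls][attr] = attr_type
    for obj in instances:
        if obj["__class__"] == cls:
            obj[attr] = init(obj)

# Designer-chosen semantics: default every existing Person's age to 0.
add_attribute("Person", "age", int, init=lambda obj: 0)
print(schema)
print(instances)
```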
11.
We propose an algorithm for executing transactions in object-oriented databases. The object-oriented database model generalizes the classical model of database concurrency control by permitting accesses to class and instance objects, by permitting arbitrary operations on objects as opposed to traditional read and write operations, and by allowing nested execution of transactions on objects. In this paper, we first develop a uniform methodology for treating both classes and instances. We then develop a two-phase locking protocol with a new relationship between locks called ordered sharing for an object-oriented database. Ordered sharing does not restrict the execution of conflicting operations. Finally, we extend the protocol to handle objects that execute methods on other objects, thus resulting in the nested execution of transactions. The resulting protocol permits more concurrency than other known locking-based protocols.
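A minimal sketch of ordered sharing, under heavy simplifying assumptions (no aborts, no lock release before commit), is given below: conflicting lock requests are granted immediately, but an ordering dependency is recorded and commits must respect it:

```python
# A heavily simplified sketch (no aborts, no early lock release): conflicting
# lock requests are granted immediately, but an ordering dependency is
# recorded, and a transaction may commit only after every transaction it was
# ordered behind has committed.
from collections import defaultdict

class OrderedSharingLockManager:
    def __init__(self):
        self.holders = defaultdict(list)  # object -> [(txn, mode)]
        self.after = defaultdict(set)     # txn -> txns it is ordered behind

    def acquire(self, txn, obj, mode):
        for other, other_mode in self.holders[obj]:
            if other != txn and self._conflicts(other_mode, mode):
                self.after[txn].add(other)  # grant anyway, in lock order
        self.holders[obj].append((txn, mode))

    def can_commit(self, txn, committed):
        return self.after[txn] <= committed  # predecessors commit first

    @staticmethod
    def _conflicts(m1, m2):
        return m1 == "write" or m2 == "write"

lm = OrderedSharingLockManager()
lm.acquire("T1", "x", "write")
lm.acquire("T2", "x", "write")                 # not blocked, only ordered
print(lm.can_commit("T2", committed=set()))    # False: T1 has not committed
print(lm.can_commit("T2", committed={"T1"}))   # True
```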
12.
Chin-Feng Lee, S. Wesley Changchien, Wei-Tse Wang, Jau-Ji Shen 《Information Systems Frontiers》2006,8(3):147-161
Data mining can dig out valuable information from databases to assist a business in knowledge discovery and in improving business intelligence. Databases store large volumes of structured data, and the amount of data keeps increasing with advances in database technology and the extensive use of information systems. Despite the falling price of storage devices, it is still important to develop efficient techniques for database compression. This paper develops a database compression method that eliminates redundant data, which often exist in transaction databases. The proposed approach uses a data mining structure to extract association rules from a database. Redundant data are then replaced by means of compression rules. A heuristic method is designed to resolve conflicts among the compression rules. To demonstrate its efficiency and effectiveness, the proposed approach is compared with two other database compression methods.
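A toy version of rule-based compression might look as follows; the rule format and the marker syntax are assumptions made for the sketch, not the paper's exact scheme:

```python
# A toy of rule-based compression (rule format and marker syntax invented):
# a value implied by an association rule is replaced with a marker that
# references the rule, and reinstated on decompression.
RULES = {
    # rule id -> (antecedent attr, antecedent value, implied attr, implied value)
    "r1": ("membership", "gold", "discount", "10%"),
}

def compress(record):
    out = dict(record)
    for rid, (a_attr, a_val, c_attr, c_val) in RULES.items():
        if out.get(a_attr) == a_val and out.get(c_attr) == c_val:
            out[c_attr] = f"@{rid}"  # redundant value replaced by a marker
    return out

def decompress(record):
    out = dict(record)
    for rid, (_, _, c_attr, c_val) in RULES.items():
        if out.get(c_attr) == f"@{rid}":
            out[c_attr] = c_val
    return out

row = {"membership": "gold", "discount": "10%", "item": "book"}
assert decompress(compress(row)) == row
print(compress(row))  # discount value replaced by the marker '@r1'
```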
Chin-Feng Lee is an associate professor with the Department of Information Management at Chaoyang University of Technology, Taiwan, R.O.C. She received her M.S. and Ph.D. degrees in 1994 and 1998, respectively, from the Department of Computer Science and Information Engineering at National Chung Cheng University. Her current research interests include database design, image processing and data mining techniques.
S. Wesley Changchien is a professor with the Institute of Electronic Commerce at National Chung-Hsing University, Taiwan, R.O.C. He received a B.S. degree in Mechanical Engineering (1989) and completed his M.S. (1993) and Ph.D. (1996) degrees in Industrial Engineering at the State University of New York at Buffalo, USA. His current research interests include electronic commerce, internet/database marketing, knowledge management, data mining, and decision support systems.
Jau-Ji Shen received his Ph.D. degree in Information Engineering and Computer Science from National Taiwan University at Taipei, Taiwan, in 1988. From 1988 to 1994, he led the software group in the Institute of Aeronautics, Chung-Sung Institute of Science and Technology. He is currently an associate professor in the Department of Information Management at National Chung Hsing University at Taichung. His research areas focus on digital multimedia, databases and information security; his current work centres on data engineering, database techniques and information security.
Wei-Tse Wang received the B.A. (2001) and M.B.A. (2003) degrees in Information Management at Chaoyang University of Technology, Taiwan, R.O.C. His research interests include data mining, XML, and database compression.
13.
Automating schema mapping is challenging. Previous approaches to automating schema mapping focus mainly on computing direct matches between two schemas. Schemas, however, rarely match directly. Thus, to complete the task of schema mapping, we must also compute indirect matches. In this paper, we present a composite approach for generating a source-to-target mapping that contains both direct and many indirect matches between a source schema and a target schema. Recognizing expected-data values associated with schema elements and applying schema-structure heuristics are the key ideas needed to compute indirect matches. Experiments we have conducted over several real-world application domains show encouraging results, yielding about 90% precision and recall measures for both direct and indirect matches.
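The combination of name similarity and expected-data-value recognition can be sketched as below; the thresholds and value patterns are illustrative stand-ins for the recognizers the paper actually uses:

```python
# Illustrative stand-ins for the paper's recognizers: a match is proposed
# when element names are similar (direct) or when sample data values are
# recognized as the same expected type (indirect); thresholds are assumed.
import re
from difflib import SequenceMatcher

VALUE_PATTERNS = {
    "phone": re.compile(r"^\(?\d{3}\)?[ -]?\d{3}-?\d{4}$"),
    "year": re.compile(r"^(19|20)\d{2}$"),
}

def recognize_type(samples):
    for name, pattern in VALUE_PATTERNS.items():
        if samples and all(pattern.match(s) for s in samples):
            return name
    return None

def match(source, target):
    """source/target: {element name: [sample data values]}"""
    matches = []
    for s, s_vals in source.items():
        for t, t_vals in target.items():
            name_sim = SequenceMatcher(None, s.lower(), t.lower()).ratio()
            same_type = (recognize_type(s_vals) is not None and
                         recognize_type(s_vals) == recognize_type(t_vals))
            if name_sim > 0.8 or same_type:
                matches.append((s, t))
    return matches

src = {"Phone": ["(970) 491-1234"], "Yr": ["1999", "2004"]}
tgt = {"telephone": ["555-123-4567"], "year": ["2010"]}
print(match(src, tgt))  # [('Phone', 'telephone'), ('Yr', 'year')]
```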
14.
15.
A manufacturing XML schema definition and its application to a data management system on the shop floor
Digitization for sharing knowledge on the shop floor in the machinery industry has been given much attention recently. To help engineers use digitization practically and efficiently, this paper proposes a method based on manufacturing case data that has a direct relation to manufacturing operations. The data are represented in XML schema, as it can be easily applied to Web-based systems on the shop floor. Definitions were made for eight manufacturing methods, including machining and welding. The derived definitions consist of four divisions: metadata, work-piece, process and evaluation. All divisions except "process" are common across the manufacturing methods, and the average number of elements for a manufacturing method is about 200. The represented schema is also used to convey knowledge such as operation standards and manufacturing troubleshooting on the shop floor. Using the definitions, a data management system is developed. It is a Web-based Q&A system in which engineers specify the manufacturing case data mainly by selecting from candidates; the system then fills in the blank portions and/or shows messages to help complete the case data. The proposed method is evaluated through practical scenarios of arc welding and machining.
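A case record with the four divisions named above might be assembled as in the following sketch; the element names are invented for illustration (the paper defines roughly 200 elements per manufacturing method):

```python
# Element names invented for illustration; the paper defines ~200 elements
# per manufacturing method across four divisions.
import xml.etree.ElementTree as ET

case = ET.Element("ManufacturingCase", method="arc-welding")
meta = ET.SubElement(case, "Metadata")
ET.SubElement(meta, "Author").text = "line engineer"
work = ET.SubElement(case, "WorkPiece")
ET.SubElement(work, "Material").text = "SS400"
proc = ET.SubElement(case, "Process")  # the method-specific division
ET.SubElement(proc, "Current", unit="A").text = "180"
ev = ET.SubElement(case, "Evaluation")
ET.SubElement(ev, "Result").text = "pass"

print(ET.tostring(case, encoding="unicode"))
```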
16.
Unauthorized changes to databases can result in significant losses for organizations as well as individuals. Watermarking can be used to protect the integrity of databases against unauthorized alterations. Prior work focused on watermarking database tables or relations, where malicious alterations cannot be detected in all cases. In this paper we argue that watermarking database indexes in addition to the database tables would improve the detection of unauthorized alterations. Usually, each database table in commercial applications has more than one index attached to it; thus, watermarking the database table and all its indexes improves the likelihood of detecting malicious attacks. In general, watermarking different indexes such as R-trees, B-trees and hashes requires different watermarking techniques that exploit different redundancies in the underlying data structure. This diversity of watermarking techniques contributes to the overall integrity of the databases. Traditional relational watermarks introduce some error into the watermarked values and thus cannot be applied to all attributes. This paper proposes a novel watermarking scheme for R-tree data structures that does not change the values of the attributes; moreover, the watermark does not change the size of the R-tree. The proposed technique takes advantage of the fact that R-trees do not put conditions on the order of entries inside a node. In the proposed scheme, entries inside R-tree nodes are rearranged, relative to a "secret" initial order (a secret key), in a way that corresponds to the value of the watermark. To achieve that, we propose a one-to-one mapping between all possible permutations of entries in the R-tree node and all possible values of the watermark. Without loss of generality, watermarks are assumed to be numeric values. The proposed mapping employs a numbering system with a variable, factorial base. The detection rate for malicious attacks depends on the nature of the attack, the distribution of the data, and the size of the R-tree node. Our extensive analysis and experimental results show that the proposed technique detects data alteration with high probability (up to 99%) on real datasets using reasonable node sizes and attack models. Watermark insertion and extraction are mainly main-memory operations and thus have minimal effect on the cost of R-tree operations.
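The core mapping is the classical factorial number system (Lehmer code); the sketch below shows a round trip between an integer watermark and a node-entry permutation, with the key derivation and the secret initial order simplified away:

```python
# A minimal sketch of the core idea: a one-to-one mapping between an integer
# watermark and a permutation of the n entries in an R-tree node, via the
# factorial number system (Lehmer code). Key derivation and the secret
# initial order are simplified assumptions here.
from math import factorial

def watermark_to_permutation(w, n):
    """Map integer w in [0, n!) to a permutation of range(n)."""
    assert 0 <= w < factorial(n)
    entries = list(range(n))      # the secret initial order of entries
    perm = []
    for i in range(n, 0, -1):
        base = factorial(i - 1)   # variable base with factorial value
        digit, w = divmod(w, base)
        perm.append(entries.pop(digit))
    return perm

def permutation_to_watermark(perm):
    """Inverse mapping: recover the embedded watermark from entry order."""
    entries = sorted(perm)
    w = 0
    for i, p in enumerate(perm):
        idx = entries.index(p)
        w += idx * factorial(len(perm) - 1 - i)
        entries.pop(idx)
    return w

# Round trip check on a node with 6 entries.
assert permutation_to_watermark(watermark_to_permutation(123, 6)) == 123
```

With n entries per node, a permutation can carry log2(n!) watermark bits, which is one reason detection improves with larger node sizes.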
17.
This paper presents a decomposition approach for the solution of the dynamic programming formulation of the unit loading problem in hydroplant management. This decomposition approach allows the consideration of network and canal constraints without additional computational effort.
18.
Maria Helena L.B. Braz, Sean W.M. Siqueira, Diva de S. e S. Rodrigues, Rubens N. Melo 《Computers in human behavior》2011,27(4):1344-1351
The development of instructional content using Information Technologies is an expensive, time-consuming and complex process that requires new methodologies. It was in this context that the concept of Learning Objects (LOs) was proposed in order to promote reuse. However, this goal has not yet been fully attained, and new contributions to increase reuse are still welcome. Besides, if content is conveyed in LOs that are easier to reuse, they must be combined and sequenced in order to build more elaborate and complex content. This paper presents a strategy to deal with these problems based on the definition of small LOs, here called Component Objects (COs). These COs are structured and combined according to a conceptual metamodel, which is the basis for the definition of conceptual schemas representing the existing material, including not only content but also practice. This strategy for searching, extracting and sequencing COs helps a teacher better control the implementation of complex content, reducing errors in the authoring process. The approach includes a specification language and an algorithm for semi-automatically sequencing learning content and practice. Finally, a case study illustrating the proposed approach and some results of using the algorithm are presented.
19.
Min-Hsiung Hung, Wen-Huang Tsai, Haw-Ching Yang, Yi-Jhong Kao, Fan-Tien Cheng 《Robotics and Computer》2012
The semiconductor and thin-film-transistor–liquid-crystal-display (TFT-LCD) industries place great value on Automatic Virtual Metrology Systems (AVMS). An AVMS needs to handle a large volume of VM-related data, which may cause poor internal database performance. In general, an AVMS adopts efficient but expensive commercial database management systems (DBMSs) to achieve good performance, which usually makes the AVMS construction cost very high. The industries therefore require a novel AVMS architecture with lower cost and greater database efficiency. This paper proposes such an architecture based on Main Memory Database (MMDB) technology. Specifically, the MMDB is used to relieve the performance bottlenecks of the current Disk Resident Database (DRDB). We also design automatic data-backup and automatic data-query source integration mechanisms to effectively cope with the rapidly increasing data volume in the original AVMS architecture. In addition, the novel architecture adopts a free commercial MMDB to significantly reduce total system cost. Integrated testing results show that the proposed AVMS architecture and the developed technologies give the AVMS better data-storage efficiency, superior data-query performance, and lower database cost. The proposed architecture and the research results in this paper can serve as a useful reference for TFT-LCD manufacturing companies constructing their own AVM systems, and can also be applied in the semiconductor and solar-cell industries.
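The MMDB-plus-automatic-backup idea can be illustrated with sqlite3 (purely a stand-in; the paper uses a commercial MMDB): queries are served from memory while a backup mechanism snapshots the contents to a disk-resident database:

```python
# A hedged illustration (not the paper's implementation): serve queries from
# an in-memory database and periodically back it up to a disk-resident one.
# sqlite3 is used only to keep the sketch self-contained.
import sqlite3

mem = sqlite3.connect(":memory:")            # main-memory database (MMDB role)
mem.execute("CREATE TABLE vm_results (lot_id TEXT, value REAL)")
mem.execute("INSERT INTO vm_results VALUES ('LOT001', 0.731)")
mem.commit()

def backup_to_disk(path="avms_backup.db"):
    """Automatic data-backup mechanism: snapshot memory contents to disk."""
    disk = sqlite3.connect(path)             # disk-resident database (DRDB role)
    mem.backup(disk)                         # available in Python 3.7+
    disk.close()

backup_to_disk()
print(mem.execute("SELECT COUNT(*) FROM vm_results").fetchone()[0])
```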
20.
One of the major problems within the software testing area is how to get a suitable set of cases with which to test a software system. This set should assure maximum effectiveness with the least possible number of test cases. There are now numerous testing techniques available for generating test cases. However, many are never used, while just a few are used over and over again. Testers have little (if any) information about the available techniques, their usefulness and, generally, how suited they are to the project at hand, upon which to base their decision about which testing techniques to use. This paper presents the results of developing and evaluating an artefact (specifically, a characterisation schema) to assist with testing technique selection. When instantiated for a variety of techniques, the schema provides developers with a catalogue containing enough information for them to select the techniques best suited to a given project. This assures that the decisions they make are based on objective knowledge of the techniques rather than on perceptions, suppositions and assumptions.
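A characterisation schema of this kind can be pictured as a catalogue of structured entries; the attribute names below are invented for illustration, not the schema the paper defines:

```python
# A hedged sketch of a characterisation schema as a catalogue entry; the
# attributes are illustrative assumptions, not the paper's evaluated schema.
from dataclasses import dataclass

@dataclass
class TechniqueCharacterisation:
    name: str
    cost_of_application: str   # e.g. "low", "medium", "high"
    sources_of_info: str       # e.g. "specification", "source code"
    defect_types_targeted: list
    tool_support: bool

CATALOGUE = [
    TechniqueCharacterisation("boundary value analysis", "low",
                              "specification", ["boundary errors"], True),
    TechniqueCharacterisation("mutation testing", "high",
                              "source code", ["logic errors"], True),
]

def select(catalogue, max_cost, needed_source):
    """Filter the catalogue by project constraints instead of guesswork."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [t for t in catalogue
            if order[t.cost_of_application] <= order[max_cost]
            and t.sources_of_info == needed_source]

print([t.name for t in select(CATALOGUE, "medium", "specification")])
```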