20 similar documents were found (search time: 0 ms)
1.
Katarina Grolinger Miriam A.M. Capretz 《Information and Software Technology》2011,53(2):159-170
Context
The constant changes in today's business requirements demand continuous database revisions. Hence, database structures, not unlike software applications, deteriorate during their lifespan and thus require refactoring to extend their useful life. Although unit tests support changes to application programs and refactoring, there is currently a lack of testing strategies for database schema evolution.
Objective
This work examines the challenges of database schema evolution and explores the possibility of using various testing strategies to assist with it. Specifically, it proposes a novel unit-test approach for application code that accesses databases, with the objective of proactively evaluating the code against the altered database.
Method
The approach was validated through the implementation of a testing framework in conjunction with a sample application and a relatively simple database schema. Although the database schema in this study was simple, it was nevertheless sufficient to demonstrate the advantages of the proposed approach.
Results
After changes in the database schema, the proposed approach found all SELECT statements, as well as the majority of other statements, requiring modification in the application code. Given its efficiency with SELECT statements, the approach is expected to be even more successful with data warehouse applications, where SELECT statements are dominant.
Conclusion
The proposed unit-test approach has proven successful in evaluating application code against the evolved database. In particular, the approach is simple and straightforward to implement, which makes it easily adoptable in practice.
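The proactive evaluation described above can be approximated with an ordinary unit test: prepare every SQL statement the application issues against the evolved schema and report the ones that break. A minimal sketch, assuming a hypothetical query list and an in-memory SQLite schema (the paper's actual framework and schema are not shown here):

```python
import sqlite3

# Hypothetical application queries to validate against the evolved schema.
APP_QUERIES = [
    "SELECT id, email FROM customers",
    "SELECT id, total FROM orders",
]

def broken_queries(queries):
    """Run each query against the evolved schema and collect failures."""
    conn = sqlite3.connect(":memory:")
    # Evolved schema (hypothetical): customers.email was renamed to contact_email.
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, contact_email TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
    """)
    failures = []
    for q in queries:
        try:
            conn.execute(q)  # stale column references raise OperationalError
        except sqlite3.OperationalError as exc:
            failures.append((q, str(exc)))
    conn.close()
    return failures
```

Running this flags the first query, whose `email` column no longer exists, before the application ever hits the altered database in production.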
2.
An Ontology-Based Approach to Integrating Heterogeneous Databases
As the variety of databases grows, queries across different databases frequently run into mismatches between user terminology and database terminology. This paper proposes an ontology-based method for integrating heterogeneous databases. The method uses a global dictionary to build an ontology for each local database, and processes user requests through middleware and local query agents, allowing users to query in their own (imprecise) terms. The ontology model, the query-processing algorithm, and a prototype implementation are presented.
3.
Moisés Gomes de Carvalho Alberto H.F. Laender Marcos André Gonçalves Altigran S. da Silva 《Information Systems》2013
The schema matching problem can be defined as the task of finding semantic relationships between schema elements existing in different data repositories. Despite the existence of elaborated graphic tools for helping to find such matches, this task is usually manually done. In this paper, we propose a novel evolutionary approach to addressing the problem of automatically finding complex matches between schemas of semantically related data repositories. To the best of our knowledge, this is the first approach that is capable of discovering complex schema matches using only the data instances. Since we only exploit the data stored in the repositories for this task, we rely on matching strategies that are based on record deduplication (aka, entity-oriented strategy) and information retrieval (aka, value-oriented strategy) techniques to find complex schema matches during the evolutionary process. To demonstrate the effectiveness of our approach, we conducted an experimental evaluation using real-world and synthetic datasets. The results show that our approach is able to find complex matches with high accuracy, similar to that obtained by more elaborated (hybrid) approaches, despite using only evidence based on the data instances.
4.
5.
Research on a Heterogeneous Database Integration Scheme Based on the OpenURL Protocol and XML Schema
苏志芳 《计算机工程与设计》2008,29(16)
The OpenURL framework is a reference-linking technology that provides location services in an open linking environment, and it is an important means of integrating networked resources. Building on this framework and on the strong data-description capability of XML Schema, the proposed scheme constructs a conversion platform by mapping the ISO 2709 format to XML Schema documents, achieving integration of heterogeneous resource data centered on an OPAC bibliographic query system, and introduces a new method for dynamically updating the reference links of digital resources. The scheme has been applied in a library to integrate print-catalogue data and mirrored data for part of its digital resources.
6.
7.
I-Ching Hsu Li-Pin Chi Sheau-Shong Bor 《Journal of Network and Computer Applications》2009,32(3):616-629
As new standards, markup languages, protocols, and client devices continue to emerge, the main problem of existing transcoding systems is their lack of intelligence to cope with heterogeneous effects, including various transcoding policies, markup documents, device constraints, and server platforms. This study proposes a new approach, called hybrid transcoding, which combines traditional transcoding technologies with ontology-based metadata to address these heterogeneity problems. Additionally, the heterogeneous markup document transcoding (HMDT) platform, based on the proposed hybrid transcoding and web services technologies, is presented; it serves as a transcoding service broker to facilitate interoperability between distributed heterogeneous transcoders. To demonstrate the feasibility of the HMDT platform, an application scenario of hybrid transcoding is implemented to adapt HTML forms to various client devices.
8.
Carol Small 《Information Systems》1993,18(8):581-595
PFL is a functional database language in which functions are defined equationally and bulk data is stored using a special class of functions called selectors. It is a lazy language, supports higher-order functions, has a strong polymorphic type inference system, and allows new user-defined data types and values to be declared. All functions, types and values persist in a database. Functions can be written which update all aspects of the database: by adding data to selectors, by defining new equations, and by introducing new data types and values. PFL is “semi-referentially transparent”, in the sense that whilst updates are referentially opaque and are executed destructively, all evaluation is referentially transparent. Similarly, type checking is “semi-static” in the sense that whilst updates are dynamically type checked at run time, expressions are type checked before they are evaluated and no type errors can occur during their evaluation.
In this paper we examine the expressiveness of PFL with respect to updates, and illustrate the language by developing a number of general purpose update functions, including functions for restructuring selectors, for memoisation, and for generating unique system identifiers. We also provide a translation mechanism between Datalog programs and equations, and show how different Datalog evaluation strategies can be supported.
9.
K. L. Kwast 《Annals of Mathematics and Artificial Intelligence》1993,9(1-2):205-238
The logical theory of database integrity is developed as an application of deontic logic. After a brief introduction to database theory and Kripke semantics, a deontic operator X, denoting what should hold, non-trivially, given a set of constraints, is defined and axiomatized. The theory is applied to updates, to dynamic constraints and to databases extended with nulls.
10.
Automating schema mapping is challenging. Previous approaches to automating schema mapping focus mainly on computing direct matches between two schemas. Schemas, however, rarely match directly. Thus, to complete the task of schema mapping, we must also compute indirect matches. In this paper, we present a composite approach for generating a source-to-target mapping that contains both direct and many indirect matches between a source schema and a target schema. Recognizing expected-data values associated with schema elements and applying schema-structure heuristics are the key ideas needed to compute indirect matches. Experiments we have conducted over several real-world application domains show encouraging results, yielding about 90% precision and recall measures for both direct and indirect matches.
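The expected-data-value idea can be illustrated with a toy matcher that categorizes columns by their instance values and pairs columns sharing a category. A minimal sketch; the recognizer patterns, column names, and data below are hypothetical, and the paper's full composite approach (schema-structure heuristics, indirect matches) is not reproduced:

```python
import re

# Value recognizers: assign a column a category based on its data instances.
PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?[\d\-\s]{7,}$"),
}

def categorize(values):
    """Return the first category whose pattern matches every instance value."""
    for name, pat in PATTERNS.items():
        if all(pat.match(v) for v in values):
            return name
    return "text"

def match_columns(source, target):
    """Pair source and target columns whose instance data share a category.

    source/target: dict mapping column name -> list of sample values.
    If several target columns share a category, the last one wins (a real
    matcher would score and disambiguate candidates).
    """
    tcats = {col: categorize(vals) for col, vals in target.items()}
    matches = {}
    for scol, svals in source.items():
        scat = categorize(svals)
        for tcol, tcat in tcats.items():
            if scat == tcat:
                matches[scol] = tcol
    return matches
```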
11.
Paolo Atzeni Luigi Bellomarini Francesca Bugiotti Fabrizio Celli Giorgio Gianforme 《Information Systems》2012,37(3):269-287
To support heterogeneity is a major requirement in current approaches to the integration and transformation of data. This paper proposes a new approach to the translation of schema and data from one data model to another, and we illustrate its implementation in the tool MIDST-RT. We build on our previous work on MIDST, a platform conceived to perform translations in an off-line fashion. In that approach, the source database (both schema and data) is imported into a repository, where it is stored in a universal model. The translation is then applied within the tool as a composition of elementary transformation steps, specified as Datalog programs. Finally, the result (again, both schema and data) is exported into the operational system. Here we illustrate a new, lightweight approach in which the database is not imported. MIDST-RT needs to know only the schema of the source database and the model of the target one, and it generates views on the operational system that expose the underlying data according to the corresponding schema in the target model. Views are generated in an almost automatic way, on the basis of the Datalog rules for schema translation. The proposed solution can be applied to different scenarios, including data and application migration, data interchange, and object-to-relational mapping between applications and databases.
12.
Bolanle Ojokoh Ming Zhang Jian Tang 《Information Sciences》2011,181(9):1538-1551
Our objective was to explore an efficient and accurate extraction of metadata such as author, title and institution from heterogeneous references, using hidden Markov models (HMMs). The major contributions of the research were the (i) development of a trigram, full second order hidden Markov model with more priority to words emitted in transitions to the same state, with a corresponding new Viterbi algorithm (ii) introduction of a new smoothing technique for transition probabilities and (iii) proposal of a modification of back-off shrinkage technique for emission probabilities. The effect of the size of data set on the training procedure was also measured. Comparisons were made with other related works and the model was evaluated with three different data sets. The results showed overall accuracy, precision, recall and F1 measure of over 95% suggesting that the method outperforms other related methods in the task of metadata extraction from references.
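For readers unfamiliar with HMM decoding, a first-order Viterbi decoder conveys the core idea of labeling reference tokens with states such as author and title. Note this is a simplification: the paper uses a trigram, full second-order model with custom smoothing. All states, probabilities, and tokens below are invented for illustration, and the `1e-9` floor merely stands in for a real smoothing technique:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """First-order Viterbi: most likely state sequence for a token list."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p][0] * trans_p[p][s])
            prob = V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s].get(obs[t], 1e-9)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    path.reverse()
    return path
```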
13.
Chin-Feng Lee S. Wesley Changchien Wei-Tse Wang Jau-Ji Shen 《Information Systems Frontiers》2006,8(3):147-161
Data mining can dig out valuable information from databases to assist a business in approaching knowledge discovery and improving
business intelligence. Database stores large structured data. The amount of data increases due to the advanced database technology
and extensive use of information systems. Despite the price drop of storage devices, it is still important to develop efficient
techniques for database compression. This paper develops a database compression method by eliminating redundant data, which
often exist in transaction database. The proposed approach uses a data mining structure to extract association rules from
a database. Redundant data will then be replaced by means of compression rules. A heuristic method is designed to resolve
the conflicts of the compression rules. To prove its efficiency and effectiveness, the proposed approach is compared with
two other database compression methods.
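The rule-based replacement can be pictured with a confidence-1.0 association rule: when one attribute's value is fully determined by another, the dependent value can be dropped and restored losslessly. A minimal sketch with a hypothetical zip → city rule; the paper's rule-mining step and conflict-resolution heuristic are omitted:

```python
# Hypothetical confidence-1.0 association rule mined from the data:
# zip -> city (the city value is redundant wherever the rule predicts it).
RULE = {"10115": "Berlin", "75001": "Paris"}

def compress(rows):
    """Drop the 'city' field where the zip -> city rule can restore it."""
    out = []
    for r in rows:
        r = dict(r)
        if RULE.get(r.get("zip")) == r.get("city"):
            del r["city"]  # recoverable from the rule at decompression time
        out.append(r)
    return out

def decompress(rows):
    """Restore 'city' fields that were dropped during compression."""
    out = []
    for r in rows:
        r = dict(r)
        if "city" not in r and r.get("zip") in RULE:
            r["city"] = RULE[r["zip"]]
        out.append(r)
    return out
```

Rows whose city value contradicts the rule are left untouched, so the round trip is lossless even for exceptions.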
Chin-Feng Lee is an associate professor in the Department of Information Management at Chaoyang University of Technology, Taiwan, R.O.C. She received her M.S. and Ph.D. degrees in 1994 and 1998, respectively, from the Department of Computer Science and Information Engineering at National Chung Cheng University. Her current research interests include database design, image processing and data mining techniques.
S. Wesley Changchien is a professor with the Institute of Electronic Commerce at National Chung-Hsing University, Taiwan, R.O.C. He received a B.S. degree in Mechanical Engineering (1989) and completed his M.S. (1993) and Ph.D. (1996) degrees in Industrial Engineering at the State University of New York at Buffalo, USA. His current research interests include electronic commerce, internet/database marketing, knowledge management, data mining, and decision support systems.
Jau-Ji Shen received his Ph.D. degree in Information Engineering and Computer Science from National Taiwan University, Taipei, Taiwan, in 1988. From 1988 to 1994, he led the software group at the Institute of Aeronautics, Chung-Shan Institute of Science and Technology. He is currently an associate professor in the Department of Information Management at National Chung Hsing University, Taichung. His research areas include digital multimedia, databases and information security; his current focus is on data engineering, database techniques and information security.
Wei-Tse Wang received the B.A. (2001) and M.B.A. (2003) degrees in Information Management at Chaoyang University of Technology, Taiwan, R.O.C. His research interests include data mining, XML, and database compression.
14.
Requirements Engineering - With the increase in market needs, game development teams are facing high demand to create new games every year. Although several methodologies and tools were...
15.
Aircraft operators are continually striving to reduce both the amount and the cost of aircraft maintenance, whilst at the same time ensuring that aircraft safety, reliability and integrity are not compromised. One solution which has seen a lot of attention is condition monitoring. The aim of condition monitoring is to develop the ability to detect, diagnose and locate damage, and even to predict the remaining useful life of the structure or system. The difficulties associated with developing aerospace condition monitoring span technical, financial and regulatory concerns. Aerospace legislation requires that any decisions on maintenance, safety and flightworthiness be auditable and that data patterns relate to known information. The use of data, physical models and knowledge approaches can individually produce reliable health-related decisions, but fusing these different solutions within an appropriate framework enhances the intelligence of the decision-making process. This paper reviews such a framework and the design methodology being used for the development of knowledge-based condition monitoring systems for aircraft landing gear actuators.
16.
17.
18.
19.
This paper presents a family of real-time executives, designed by Telettra for telecommunication, telecontrol, process control and supervisory applications. These applications are subject to two different types of real-time requirements: deadlines and throughput. Rather than designing a single executive capable of meeting both requirements, a two-level, hierarchical approach has been taken. Two executives coexist on a single processor: a low-level, periodic executive for tasks with strict deadline constraints, and a higher-level, multitask executive for tasks with throughput constraints, or tasks with weaker deadline constraints that can be specified and dealt with at the application level with little support from the executive. Thanks to the hierarchical approach, simple mechanisms are sufficient to support communication between the two levels. Facilities are also provided to support load control in the deadline-oriented environment, according to policies defined by the multitask-level application. The presence of a multitask environment on all computational nodes is a key characteristic, since it allows a highly modular style of programming and facilitates the construction of distributed systems. The paper shows how these ideas are applied in the design of the peripheral processor of a telephone switching system.
20.
Heterogeneous multiattribute group decision making (MAGDM) problems, which involve multi-granularity linguistic labels, fuzzy numbers, interval numbers and real numbers, are very complex and important in practical applications of decision making theory. Hitherto, no general theoretical treatment exists for solving such problems. The purpose of this paper is to develop a systematic methodology for solving heterogeneous MAGDM problems by introducing a multiattribute ranking index based on a particular measure of closeness to the positive ideal solution (PIS), using the weighted Minkowski distance to measure the differences between each alternative and the PIS as well as the negative ideal solution (NIS). The proposed methodology is shown to have some advantages over fuzzy TOPSIS. The validity and applicability of the proposed methodology are illustrated with a real example of a missile weapon system selection problem.
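The closeness index can be sketched for the crisp-number special case: compute weighted Minkowski distances from each alternative to the PIS and NIS, then rank by relative closeness. This deliberately ignores the heterogeneous attribute types (linguistic labels, fuzzy and interval numbers) that the paper's methodology actually handles; the decision matrix and weights below are illustrative, and all criteria are assumed to be benefit criteria (larger is better):

```python
def topsis(matrix, weights, p=2):
    """Relative-closeness scores for a crisp decision matrix.

    matrix:  one row per alternative, one column per criterion.
    weights: one weight per criterion.
    p:       order of the weighted Minkowski distance (p=2 -> Euclidean-like).
    """
    cols = list(zip(*matrix))
    pis = [max(c) for c in cols]  # positive ideal solution
    nis = [min(c) for c in cols]  # negative ideal solution
    scores = []
    for row in matrix:
        d_pos = sum(w * abs(x - b) ** p for x, b, w in zip(row, pis, weights)) ** (1 / p)
        d_neg = sum(w * abs(x - b) ** p for x, b, w in zip(row, nis, weights)) ** (1 / p)
        scores.append(d_neg / (d_pos + d_neg))  # closer to PIS -> score near 1
    return scores
```

An alternative that coincides with the PIS scores exactly 1; ranking the alternatives then amounts to sorting by score.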