Similar Articles
20 similar articles found
1.
In this contribution, the main characteristics of MUMPS, the host language of AIDA, are briefly discussed. It is essentially an introductory text for readers who have had no previous programming experience with MUMPS. After a short introduction, the history of the language is summarized. The language itself is then introduced and illustrated with examples; for the sake of brevity, not all commands, functions and other features are discussed. The next section introduces MUMPS's greatest strength: its database structure. Finally, we give the main reasons why we chose MUMPS as AIDA's host language. The appendices cover the language standardization process, the visibility and availability of MUMPS, sources of further information, and the most important textbooks on MUMPS (in English).
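To give a feel for the database structure mentioned above, here is a minimal Python sketch that emulates MUMPS's sparse, hierarchical "globals" with nested dictionaries; the global name and subscripts are hypothetical, and real MUMPS syntax appears only in the comments.

```python
# Minimal sketch (not MUMPS itself): emulating a MUMPS-style sparse,
# hierarchical global such as ^PATIENT(id,"visit",n) with nested dicts.
from collections import defaultdict

def tree():
    # Autovivifying nested mapping: intermediate subscripts appear on
    # demand, much as MUMPS creates nodes of a global when first SET.
    return defaultdict(tree)

PATIENT = tree()  # stands in for the MUMPS global ^PATIENT

# SET ^PATIENT(42,"name")="Doe"  and  SET ^PATIENT(42,"visit",1)="1985-03-01"
PATIENT[42]["name"] = "Doe"
PATIENT[42]["visit"][1] = "1985-03-01"

# WRITE ^PATIENT(42,"name")
print(PATIENT[42]["name"])          # -> Doe
print(1 in PATIENT[42]["visit"])    # $DATA-like existence test -> True
```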

2.
3.
A medical information system, VERGYNIA, has been built for the Fertility Department using the 4th-generation software package AIDA, both to make the medical data accessible for research and teaching and to assist the management of the department in the daily routine of patient care. The system runs on a PDP 11/23 computer with a 20 Mbyte hard disk, a 10 Mbyte removable disk and 128 Kbyte of central memory; three visual display terminals and a small printer are connected to it. A dedicated line between this system and the computer facilities of the Department of Medical Informatics allows easy transfer of data for analysis by statistical packages and of new programs to the Fertility Department's computer. The PDP 11/23 hosts a production system, a development system and a separate research environment, all three of which can run simultaneously on the same computer without interfering with one another. VERGYNIA was constructed predominantly with AIDA; only parts of the output programs were written in MUMPS, because the required AIDA tools were not yet available at the time of development. The original version of VERGYNIA, already in operation in 1982, was built with the tools that form the basis of the current AIDA release. After many ad hoc modifications and additions, the system needed a total redesign to make it compatible with the enhancements of the current AIDA release; this redesign was carried out during 1985.

4.
Abstract-driven pattern discovery in databases
The problem of discovering interesting patterns in large volumes of data is studied. Patterns can be expressed not only in terms of the database schema but also in user-defined terms, such as relational views and classification hierarchies. The user-defined terminology is stored in a data dictionary that maps it into the language of the database schema. A pattern is defined as a deductive rule, expressed in user-defined terms, that has a degree of uncertainty associated with it. Methods are presented for discovering interesting patterns based on abstracts, which are summaries of the data expressed in the language of the user.
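As a rough sketch of the idea (the rule, the user-defined terms and the counts below are invented here, not taken from the paper), the certainty of a deductive rule can be estimated from an abstract, i.e. from summary counts, without touching the raw data:

```python
# Illustrative sketch only: estimating the certainty of a deductive rule
# from an "abstract" (summary counts), not from the raw database.
# Rule (in user-defined terms): senior_employee(X) -> high_salary(X)

abstract = {
    # summary of the data: counts per (seniority, salary-band) group
    ("senior", "high"): 180,
    ("senior", "low"):   20,
    ("junior", "high"):  30,
    ("junior", "low"):  270,
}

def rule_confidence(abstract, antecedent, consequent):
    """P(consequent | antecedent) estimated from the group counts."""
    matching = sum(n for (a, _), n in abstract.items() if a == antecedent)
    satisfied = abstract.get((antecedent, consequent), 0)
    return satisfied / matching if matching else 0.0

conf = rule_confidence(abstract, "senior", "high")
print(f"senior -> high salary holds with certainty {conf:.2f}")  # 0.90
```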

5.

6.
This paper describes a set of tools that enables developers to log and analyze the run-time behavior of distributed control systems. The logging tools enable developers to instrument C or C++ programs so that data indicating state changes are logged automatically in a variety of formats; in particular, run-time data from distributed systems can be synchronized into a single relational database. Tools are also provided for visualizing the logged data. Analysis to verify correct program behavior is done using a new interval logic, described in this paper, which lets system engineers express temporal specifications for the autonomous control program that are then checked against the logged data. The data logging, visualization, and interval-logic analysis tools are all fully implemented. Results are given from a NASA distributed autonomous control system application.
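The flavor of checking a temporal specification against logged run-time data can be sketched as follows; the log format and the property are invented for illustration and are much simpler than the paper's interval logic:

```python
# Sketch of an interval-logic style check over logged run-time data.
# Each record is (timestamp, component, new_state).
log = [
    (0.0, "arm", "IDLE"),
    (1.0, "arm", "MOVING"),
    (1.2, "gripper", "OPEN"),
    (3.0, "arm", "IDLE"),
    (3.1, "gripper", "CLOSED"),
]

def intervals(log, component):
    """Turn state-change events into (state, start, end) intervals."""
    events = [(t, s) for t, c, s in log if c == component]
    out = [(s, t0, t1) for (t0, s), (t1, _) in zip(events, events[1:])]
    if events:
        out.append((events[-1][1], events[-1][0], float("inf")))
    return out

def always_within(log, comp_a, state_a, comp_b, state_b):
    """Property: whenever comp_a is in state_a, comp_b is in state_b
    for the whole of that interval."""
    b_ints = intervals(log, comp_b)
    for s, t0, t1 in intervals(log, comp_a):
        if s == state_a and not any(
                sb == state_b and tb0 <= t0 and t1 <= tb1
                for sb, tb0, tb1 in b_ints):
            return False
    return True

print(always_within(log, "arm", "MOVING", "gripper", "OPEN"))  # False here
```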

7.
张雪东  王淮生 《微机发展》2007,17(11):128-130
Data use in information systems faces a fundamental difficulty: data storage is based on the relational model, while software development proceeds with an object model, creating a mismatch in data-access technology. Implementations become mired in converting between the two models, which undermines the object-oriented nature of object-oriented languages, lowers development efficiency, and reduces code reuse. This paper proposes and builds a software framework whose API, sitting between the relational database and the client, supports object-based database access, exploiting the strengths of both models to improve the efficiency of software development.
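A minimal sketch of the kind of API such a framework provides (the class, table and field names below are invented; this illustrates the object-to-relational idea, not the framework itself):

```python
# Sketch: a thin mapping layer that lets client code work with objects
# while the storage stays relational. Schema and API are invented here.
import sqlite3

class Mapper:
    """Maps one class to one table; attribute names double as columns."""
    def __init__(self, conn, table, fields):
        self.conn, self.table, self.fields = conn, table, fields

    def save(self, obj):
        cols = ", ".join(self.fields)
        marks = ", ".join("?" for _ in self.fields)
        self.conn.execute(
            f"INSERT INTO {self.table} ({cols}) VALUES ({marks})",
            [getattr(obj, f) for f in self.fields])

    def find(self, **criteria):
        where = " AND ".join(f"{k} = ?" for k in criteria)
        rows = self.conn.execute(
            f"SELECT {', '.join(self.fields)} FROM {self.table} "
            f"WHERE {where}", list(criteria.values()))
        return [dict(zip(self.fields, r)) for r in rows]

class User:
    def __init__(self, name, dept):
        self.name, self.dept = name, dept

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, dept TEXT)")
users = Mapper(conn, "users", ["name", "dept"])
users.save(User("Li", "R&D"))
print(users.find(dept="R&D"))  # [{'name': 'Li', 'dept': 'R&D'}]
```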

8.
This paper presents new approaches to the validation of the loop optimizations that compilers use to obtain the highest performance from modern architectures. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. As part of an active and ongoing research project on translation validation, we have previously described approaches for validating optimizations that preserve the loop structure of the code and have presented a simulation-based general technique for validating such optimizations. In this paper, for more aggressive optimizations that alter the loop structure of the code, such as distribution, fusion, tiling, and interchange, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the extensions to our tool VOC-64 which are required to validate these structure-modifying optimizations. The paper also discusses preliminary work on the run-time validation of speculative loop optimizations, which uses run-time tests to ensure the correctness of loop optimizations whose correctness cannot be guaranteed at compile time. Unlike compiler validation, run-time validation must not only determine when an optimization has generated incorrect code, but also recover from the optimization without aborting the program or producing an incorrect result. This technique has been applied to several loop optimizations, including loop interchange and loop tiling, and appears to be quite promising. This research was supported in part by NSF grant CCR-0098299, ONR grant N00014-99-1-0131, and the John von Neumann Minerva Center for Verification of Reactive Systems.
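The dependence test underlying such permutation rules can be sketched as follows (a classical simplification, not the VOC-64 implementation): a permutation of loop indices is valid only if every dependence distance vector remains lexicographically positive after its components are permuted the same way.

```python
# Simplified sketch of a permutation rule: a transformation that permutes
# loop indices is legal only if every dependence distance vector stays
# lexicographically positive after the same permutation is applied.

def lex_positive(v):
    for x in v:
        if x > 0:
            return True
        if x < 0:
            return False
    return False  # all-zero vector imposes no ordering constraint

def permutation_is_legal(distance_vectors, perm):
    permuted = [[v[i] for i in perm] for v in distance_vectors]
    return all(lex_positive(v) or all(x == 0 for x in v) for v in permuted)

# Dependence (1, -1): legal as written, illegal after loop interchange.
deps = [(1, -1)]
print(permutation_is_legal(deps, [0, 1]))  # True  (original order)
print(permutation_is_legal(deps, [1, 0]))  # False (interchange rejected)
```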

9.
A simple interactive language named Micro MUMPS has been implemented on a microcomputer system. Its powerful facilities for database manipulation and character handling make programming easy for end users who are unfamiliar with computers. Micro MUMPS is a practical subset of the MUMPS language, which has been implemented on many minicomputers, and it also has some additional capabilities indispensable to microcomputer applications. A modified prefix B-tree is used in the Micro MUMPS database, and its organization can be changed according to the requirements of space and time efficiency. The design criteria of Micro MUMPS and microcomputer-based implementation techniques are discussed in this paper.

10.
The paper presents approaches to the validation of optimizing compilers. The emphasis is on aggressive and architecture-targeted optimizations which try to obtain the highest performance from modern architectures, in particular EPIC-like microprocessors. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. First we survey the standard approach to the validation of optimizations which preserve the loop structure of the code (though they may move code in and out of loops and radically modify individual statements), present a simulation-based general technique for validating such optimizations, and describe a tool, VOC-64, which implements these techniques. For more aggressive optimizations which typically alter the loop structure of the code, such as loop distribution and fusion, loop tiling, and loop interchange, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the extensions to VOC-64 necessary to validate these structure-modifying optimizations. Finally, the paper discusses preliminary work on the run-time validation of speculative loop optimizations, which uses run-time tests to ensure the correctness of loop optimizations whose correctness neither the compiler nor compiler-validation techniques can guarantee. Unlike compiler validation, run-time validation has not only the task of determining when an optimization has generated incorrect code, but also that of recovering from the optimization without aborting the program or producing an incorrect result. This technique has been applied to several loop optimizations, including loop interchange, loop tiling, and software pipelining, and appears to be quite promising.

11.
12.
The acceptance of the C programming language by academia and industry is partially responsible for the ‘software crisis’. The simple, trusting semantics of C mask many common faults, such as range violations, which would be detected and reported at run-time by programs coded in a robust language such as Ada.¹ This needlessly complicates the debugging of C programs. Although the assert macro lets programmers add run-time consistency checks to their programs, the number of instantiations of this macro needed to make a C program robust makes it highly unlikely that any programmer could correctly perform the task. We make some unobtrusive extensions to the C language which support the efficient detection of faults at run-time without reducing the readability of the source code. Examples of the extensions are automatic checking of error codes returned by library routines, constrained subtypes, and detection of references to uninitialized and/or non-existent array elements.
¹ Ada is a registered trademark of the U.S. Government (Ada Joint Program Office).
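The spirit of these checks can be sketched in Python (our analogue for illustration, not the authors' extended C): a constrained subtype that faults on range violations, and an array that faults on reads of never-written elements.

```python
# Rough analogue (Python, not the authors' extended C) of two of the
# proposed run-time checks.

class Constrained:
    """Value restricted to [lo, hi]; out-of-range assignment is a fault."""
    def __init__(self, lo, hi, value):
        self.lo, self.hi = lo, hi
        self.value = value  # routed through the checking setter below
    @property
    def value(self):
        return self._value
    @value.setter
    def value(self, v):
        if not (self.lo <= v <= self.hi):
            raise ValueError(f"range violation: {v} not in [{self.lo}, {self.hi}]")
        self._value = v

class CheckedArray:
    """Fixed-size array that faults on reads of uninitialized elements."""
    _UNSET = object()
    def __init__(self, n):
        self._data = [self._UNSET] * n
    def __setitem__(self, i, v):
        self._data[i] = v              # IndexError already covers bounds
    def __getitem__(self, i):
        v = self._data[i]
        if v is self._UNSET:
            raise RuntimeError(f"read of uninitialized element [{i}]")
        return v

month = Constrained(1, 12, 3)   # ok
a = CheckedArray(4)
a[0] = 10
print(a[0])                      # ok
# month.value = 13               # would raise: range violation
# print(a[1])                    # would raise: uninitialized read
```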

13.
A new computer code is described which calculates the concentrations (or activities) of chemical species that are at chemical equilibrium. The species distribution model (SDM) is structured so that the user has essentially full control in defining a wide variety of problems. Among its capabilities, the user can (at run-time) set up a virtually unlimited number and type of solutions and select the activity-coefficient technique for each, omit species from consideration, limit the reactivity of species, and edit species and the corresponding data (e.g., activity-coefficient or equilibrium-constant data). The code is highly modular, and the routines are short, with well-defined purposes and clean interfaces. The code and data structures are set up for ease of model enhancement.
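For a single reaction, the core computation can be illustrated directly (a toy example; SDM handles many coupled equilibria, activity models and run-time editing): solving the dissociation equilibrium HA <-> H+ + A- from its equilibrium constant.

```python
# Toy illustration of the core computation: solve one equilibrium
# HA <-> H+ + A- by bisection on Ka = x^2 / (C - x), where x = [H+].

def dissociation(Ka, C, tol=1e-12):
    """Return x = [H+] = [A-] at equilibrium for total acid C (mol/L)."""
    lo, hi = 0.0, C
    while hi - lo > tol:
        x = 0.5 * (lo + hi)              # always strictly below C
        if x * x / (C - x) < Ka:         # not enough dissociated yet
            lo = x
        else:
            hi = x
    return 0.5 * (lo + hi)

x = dissociation(Ka=1.8e-5, C=0.1)       # weak acid, 0.1 M (illustrative)
print(f"[H+] = {x:.4e} mol/L")            # ~1.33e-3
```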

14.
Integrity maintenance is essential in integrated engineering information systems because the behaviour of the whole system becomes unpredictable unless the integrity of the shared data is properly maintained. However, validating integrity is often very expensive because it requires many cross-references among entity instances of the same or different types, which must be retrieved from database systems in a shared database environment. One of the goals of this work is to minimize the number of database accesses needed to validate a set of integrity constraints. A heterogeneous database environment is built using EXPRESS as the global schema language. EXPRESS is an international standard information modelling language developed for STEP (STandard for the Exchange of Product data). Based on a set of database interface functions, evaluation sequences are determined by analysing the data dependencies among the database accesses required for integrity validation.
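The scheduling idea can be sketched as follows (our simplification; the constraint names and the greedy strategy below are invented, and real EXPRESS constraints are far richer): group the checks so that each entity type is fetched from the database only once and reused across constraints.

```python
# Simplified sketch: given which entity types each constraint must read,
# order the checks so every type is fetched once and shared.

constraints = {
    # constraint name -> entity types whose instances it cross-references
    "unique_part_id":    {"part"},
    "bom_parts_exist":   {"part", "bom_line"},
    "assembly_acyclic":  {"bom_line"},
    "supplier_has_part": {"part", "supplier"},
}

def plan(constraints):
    """Greedy plan: fetch the most-demanded entity type next, then run
    every constraint as soon as all of its inputs are cached."""
    cached, order, fetches = set(), [], []
    remaining = dict(constraints)
    while remaining:
        needed = [t for ts in remaining.values() for t in ts if t not in cached]
        nxt = max(set(needed), key=needed.count)
        fetches.append(nxt)
        cached.add(nxt)
        for name in [n for n, ts in remaining.items() if ts <= cached]:
            order.append(name)
            del remaining[name]
    return fetches, order

fetches, order = plan(constraints)
print("database fetches:", fetches)   # each entity type retrieved once
print("validation order:", order)
```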

15.
A methodology for constructing the sensitivity of the incompressible Navier–Stokes equations is presented as the context for differentiating high-level Fortran source code using source-transformation automatic differentiation tools. The methodology aims to be scalable and to retain all the compile-time and run-time safety the language offers. The incompressible solver is presented because it is the standard kernel in industrial CFD software. To complement this paper, the software used for this work has been made available at www.gpde.net.
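The principle behind generated sensitivity code can be shown with a tiny forward-mode automatic-differentiation example (dual numbers in Python; the paper transforms Fortran sources of a full flow solver, which is a different mechanism built on the same chain rule):

```python
# Tiny forward-mode AD sketch (dual numbers). Source-transformation AD
# applies the same chain rule by generating new source code instead.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot            # value and derivative
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(u):
    # stand-in for one step of a flow kernel: f(u) = u*sin(u) + 2u
    return u * u.sin() + 2 * u

x = Dual(1.5, 1.0)        # seed du/du = 1 to obtain df/du
y = f(x)
print(y.val, y.dot)       # f(1.5) and f'(1.5) = sin(u) + u*cos(u) + 2
```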

16.
We describe the design and implementation of the Glue-Nail deductive database system. Nail is a purely declarative query language; Glue is a procedural language used for non-query activities. The two languages combined are sufficient to write a complete application. Nail and Glue code are both compiled into the target language IGlue. The Nail compiler uses variants of the magic-sets algorithm and supports well-founded models. The Glue compiler's static optimizer uses peephole techniques and data-flow analysis to improve code. The IGlue interpreter features a run-time adaptive optimizer that reoptimizes queries and automatically selects indexes. We also describe the Glue-Nail benchmark suite, a set of applications developed to evaluate the Glue-Nail language and to measure the performance of the system. Part of this article was presented at the ACM SIGMOD International Conference on Management of Data, Washington, DC, 1993. Much of this research was done while the authors were at Stanford University, Stanford, California, USA.
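A minimal sketch of bottom-up deductive-query evaluation, the kind of computation such a system answers (naive fixpoint iteration here; Glue-Nail's magic-sets compilation is considerably more sophisticated):

```python
# Naive bottom-up evaluation of a recursive Datalog-style rule:
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Y) :- reach(X, Z), edge(Z, Y).

edge = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(edge):
    reach = set(edge)                    # first rule
    while True:
        new = {(x, y2) for (x, y) in reach for (z, y2) in edge if y == z}
        if new <= reach:                 # fixpoint: nothing new derived
            return reach
        reach |= new

print(sorted(reachable(edge)))
# [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
```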

17.
In this article, we present the Laboratory Inventory Network Application (LINA), a software system that helps research laboratories keep track of their collections of biologically relevant materials. This open-source application uses a relational Microsoft Access database as its back end and a Microsoft .NET application as its front end. Preconstructed table templates are provided that contain standardized and customizable data fields. As new samples are added to the inventory, each is given a unique laboratory identifier, assigned automatically and sequentially, allowing rapid retrieval when a given reagent is required. LINA contains a number of useful search tools, including a general search that allows database queries using up to four user-defined criteria. LINA is an easily implemented and useful organizational tool for biological laboratories with large numbers of strains, clones, or other reagents.
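The two features singled out above, automatically assigned sequential identifiers and multi-criteria search, are easy to sketch against any relational back end (SQLite here, with an invented schema; LINA itself uses Access and .NET):

```python
# Sketch (invented schema): sequential sample IDs assigned on insert,
# plus a search over up to four user-chosen criteria, AND-ed together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE samples (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- the unique lab identifier
    kind TEXT, organism TEXT, freezer TEXT)""")

def add_sample(**fields):
    cols, vals = zip(*fields.items())
    cur = conn.execute(
        f"INSERT INTO samples ({', '.join(cols)}) VALUES "
        f"({', '.join('?' for _ in vals)})", vals)
    return cur.lastrowid                 # the automatically assigned ID

def search(**criteria):
    where = " AND ".join(f"{k} = ?" for k in criteria) or "1=1"
    return conn.execute(
        f"SELECT id, kind, organism FROM samples WHERE {where}",
        list(criteria.values())).fetchall()

print(add_sample(kind="plasmid", organism="E. coli", freezer="F2"))  # 1
print(add_sample(kind="strain",  organism="E. coli", freezer="F1"))  # 2
print(search(organism="E. coli", kind="strain"))  # [(2, 'strain', 'E. coli')]
```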

18.
G. F. Levy 《Software》 1997, 27(12): 1369-1384
The automatic conversion of Fortran 77 to C can be accomplished using f2c, but the generated code is difficult to read and maintain, and must be linked with various non-standard run-time libraries which provide input/output and mathematical routines. This paper describes tools developed at NAG Ltd. to automatically replace the non-standard f2c run-time input/output functions with their equivalent <stdio.h> functions. © 1997 John Wiley & Sons, Ltd.

19.
20.
Databases are the core of Information Systems (IS); it is therefore necessary to ensure the quality of databases in order to ensure the quality of the IS. Metrics are useful mechanisms for controlling database quality. This paper presents two metrics related to referential integrity, the number of foreign keys (NFK) and the depth of the referential tree (DRT), for controlling the quality of a relational database. To ascertain the practical utility of such metrics, however, experimental validation is necessary. This validation can be carried out through controlled experiments or through case studies, and the controlled experiments must be replicated in order to obtain firm conclusions. With this objective in mind, we have undertaken different empirical work with metrics for relational databases: a case study with several metrics, and a controlled experiment with the two metrics presented in this paper. The detailed experiment described in this paper is a replication of the latter, carried out to confirm the results obtained from the first experiment.

As a result of all this experimental work, we can conclude that the NFK metric is a good indicator of relational database complexity. However, we cannot draw such firm conclusions regarding the DRT metric.
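Both metrics can be computed directly from a schema's foreign-key graph. A sketch with an invented schema follows; the exact counting convention for DRT (edges rather than nodes) is our assumption.

```python
# Computing the two metrics from a schema's foreign-key graph:
# NFK = number of foreign keys, DRT = depth of the referential tree
# (longest chain of FK references). Schema invented for illustration.

foreign_keys = {
    # table -> tables it references via a foreign key
    "order_line": ["order", "product"],
    "order":      ["customer"],
    "product":    ["supplier"],
    "customer":   [],
    "supplier":   [],
}

def nfk(fks):
    return sum(len(refs) for refs in fks.values())

def drt(fks):
    def depth(table, seen=()):
        if table in seen:                   # guard against FK cycles
            return 0
        return 1 + max((depth(r, seen + (table,)) for r in fks[table]),
                       default=0)
    return max(depth(t) for t in fks) - 1   # count edges, not nodes

print("NFK =", nfk(foreign_keys))  # 4
print("DRT =", drt(foreign_keys))  # 2 (order_line -> order -> customer)
```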

