Similar Literature
20 similar documents found (search time: 31 ms)
1.
An Inductive Reasoning System   Total citations: 7 (self-citations: 0, by others: 7)
Li Weihua, Zhang Qian. Chinese Journal of Computers (计算机学报), 1996, 19(3): 230-236
This paper describes a microcomputer-based inductive reasoning system. With it, the authors have proved the correctness of a number of computer programs and several valuable program properties, including the correctness of an arithmetic-expression compiler, of a FORTRAN compiler, and of a LISP interpreter. The paper outlines the system's theoretical foundations, data types and overall structure, and illustrates its reasoning capabilities with examples.

2.
A FORTRAN IV computer program is presented and described which models the fractionation of trace elements during simple diffusion-controlled crystallization of magmas. Two mathematical techniques are used, the Crank–Nicolson finite-difference method and the Lanczos tau polynomial method, because it was determined that there were regions in which one or the other was unsuitable. The regions of applicability of the respective methods are identified. The program can be used in several ways: (1) it can model diffusion-controlled crystallization in which the melt is initially homogeneous in composition, with K (partition coefficient), D (diffusion coefficient) and R (rate of crystal growth) specified; any of these variables may be changed during crystallization. (2) It can model a situation where the melt has a user-specified compositional heterogeneity, with K, R and D also specified; these variables may be changed during crystallization. (3) If the solid profile is specified, as well as K, the program can be made to calculate best-fit values for the R/D ratio. Output from the program compares favourably with actual analytical data from the Bushveld Complex, South Africa. Although the geological basis for the model is probably conceptually simplistic, the model provides a basis for comparison with natural data, and thus can assist in obtaining greater insight into the processes involved in magmatic crystallization.
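The finite-difference half of such a scheme can be sketched in a few lines. Below is a minimal, illustrative Crank–Nicolson step for 1-D diffusion (u_t = D·u_xx) with fixed boundary values, written in Python rather than the paper's FORTRAN IV; all names and the boundary treatment are our own assumptions, not the original program's.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, D, dt, dx):
    """Advance the interior points one time step; endpoints are held fixed."""
    r = D * dt / (2.0 * dx * dx)
    n = len(u)
    m = n - 2  # interior unknowns u[1..n-2]
    a = [-r] * m
    b = [1.0 + 2.0 * r] * m
    c = [-r] * m
    # Explicit half of the scheme on the right-hand side.
    d = [r * u[i - 1] + (1.0 - 2.0 * r) * u[i] + r * u[i + 1]
         for i in range(1, n - 1)]
    d[0] += r * u[0]    # fold in the fixed left boundary value
    d[-1] += r * u[-1]  # fold in the fixed right boundary value
    return [u[0]] + thomas(a, b, c, d) + [u[-1]]
```

A concentration profile stepped this way diffuses toward its boundary values while remaining symmetric for symmetric initial data.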

3.
One important task of the industrial engineer is to be an interface person: between operations and planning, between workers and their physical environment, between bench-level tasks and systems-level management, and, with growing need, between the computer and the user. The latter has been caused in part by the programming requirement to express how a task is to be done, rather than what is to be done. The IE's role here is to provide the interface to ask the "what" questions. This paper addresses one strategy for constructing the computer-user interface. Using a core set of sixteen short FORTRAN subprograms, a simple procedure for constructing user-oriented conversational computer languages has been developed (seven subprograms are directed at the conversational language and nine are list-handling routines). This core set has been used successfully to develop user packages for medical doctors doing cell-kinetics simulation in cancer research, for energy policy makers to access and manipulate time-series energy data, and for undergraduates to solve statistical and mathematical programming problems. The procedure is machine independent, requiring only a FORTRAN compiler, and can be used by the IE to bridge the gap between the user with a question and the solution power of the computer.
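The "what, not how" interaction style can be illustrated with a toy conversational driver, sketched here in Python rather than the paper's FORTRAN core set; the command names and dispatch scheme are our own illustration, not the original sixteen subprograms.

```python
def converse(commands, lines):
    """A minimal conversational driver: each input line names a command
    (the user's 'what'); a registered subprogram supplies the 'how'."""
    out = []
    for line in lines:
        name, *args = line.split()
        if name in commands:
            out.append(commands[name](*args))
        else:
            out.append(f"unknown command: {name}")
    return out
```

For example, registering a `mean` command lets a non-programmer type `mean 1 2 3` instead of writing any code.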

4.
5.
Early desire     
EARLY DESIRE (Direct Executing SImulation in REal time) is the first of a series of entirely new floating-point equation-language systems for interactive continuous-system simulation. DESIRE systems combine high computing speed (1.3 to 4 times faster than threaded FORTRAN) with immediate direct execution: no external compiler, linker or loader is needed. DESIRE employs an interpreted job-control language (essentially an advanced BASIC dialect) for slower operations such as interactive program entry, editing and file manipulation, and for programming multi-run simulation studies. The 'dynamic' program segment containing differential equations in first-order form is entered just like the BASIC statements and can freely access the same named variables. The relative simplicity of the time-critical dynamic segment permits us to compile it practically instantaneously into efficient machine code, since a simple, super-fast compiler will do. DESIRE utilizes existing, precompiled FORTRAN integration routines; different integration rules can be switched as disk overlays while the program runs. EARLY DESIRE runs on PDP-11 or LSI-11 mini/microcomputers. Future DESIRE systems will download dynamic program segments into a variety of multiple execution processors. "Thereafter rose Desire in the beginning; Desire, the primal germ and seed of Spirit."

6.
One has a large computational workload that is "divisible" (its constituent tasks' granularity can be adjusted arbitrarily) and access to p remote computers that can assist in computing the workload. How can one best utilize the computers? Two features complicate this question. First, the remote computers may differ from one another in speed. Second, each remote computer is subject to interruptions of known likelihood that kill all work in progress on it. One wishes to orchestrate sharing the workload with the remote computers in a way that maximizes the expected amount of work completed. We deal with three versions of this problem. The simplest version ignores communication costs but allows computers to differ in speed (a heterogeneous set of computers). The other two versions account for communication costs, first with identical remote computers (a homogeneous set) and then with computers that may differ in speed. We provide exact expressions for the optimal work expectation for all three versions of the problem: via explicit closed-form expressions for the first two versions, and via a recurrence that computes this optimal value for the last, most general version.
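For intuition, the communication-free case can be sketched with a toy risk model; this is our own illustration under an assumed exponential interruption process, not necessarily the paper's expressions. A computer of speed s given w units of work finishes at time w/s, and the work counts only if no interruption strikes first.

```python
import math

def expected_completed_work(w, s, lam):
    """Expected completed work when a computer of speed s and interruption
    rate lam receives w units: the work survives only if no interruption
    occurs before the finishing time w/s."""
    return w * math.exp(-lam * w / s)

def optimal_allocation(s, lam):
    """Maximizer of w * exp(-lam*w/s): setting the derivative to zero
    gives w* = s/lam."""
    return s / lam
```

The risk term makes over-allocation counterproductive: past w* = s/λ, extra work raises the chance of losing everything faster than it raises the payoff.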

7.
8.
Computer benchmarking is a common method for measuring the parameters of a computational model on any computer. With the emergence of multicore computers, their evaluation came under consideration. Since such computers can be viewed as parallel computers, the evaluation methods for parallel computers may seem appropriate for them. However, because multicore architectures focus heavily on the cache hierarchy, new and different benchmarks are needed to evaluate them correctly. To this end, this paper presents a method for measuring the parameters of one of the best-known multicore computational models, Multi-Bulk Synchronous Parallel (Multi-BSP). The method measures the hardware latency parameters of multicore computers, namely the communication latency (g_i) and the synchronization latency (L_i), for all levels of the cache-memory hierarchy in a bottom-up manner. With these parameters determined, the performance of algorithms on multicore architectures can then be evaluated.
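The measurement idea can be sketched with two toy microbenchmarks. This is our own illustration only: a real Multi-BSP benchmark would pin threads to cores, size buffers to each cache level, and repeat until the estimates stabilize.

```python
import time
import threading

def measure_g(size, reps=20):
    """Estimate a per-word transfer cost by timing copies of a buffer of
    `size` words; choosing `size` to match a cache level probes that level."""
    src = list(range(size))
    t0 = time.perf_counter()
    for _ in range(reps):
        dst = src[:]  # a memory-to-memory copy of `size` words
    dt = time.perf_counter() - t0
    return dt / (reps * size)  # seconds per word

def measure_L(nthreads=4, reps=50):
    """Estimate a synchronization cost: mean time for all threads to pass
    a shared barrier."""
    barrier = threading.Barrier(nthreads)
    times = []
    def worker():
        for _ in range(reps):
            t0 = time.perf_counter()
            barrier.wait()
            times.append(time.perf_counter() - t0)
    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(times) / len(times)
```

Both estimates are noisy wall-clock measurements; the bottom-up structure (small buffers and few threads first, then larger) mirrors the level-by-level measurement the abstract describes.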

9.
In this paper, the authors use AFARS (Algorithm For Analyzing the Reliability of Systems) to solve the mathematical model of a reliability block diagram of a generalized complex system. The block diagram is typical of maintained and non-maintained equipment found in aerospace, nuclear-power, rapid-transit and facility-security systems, and in myriad industrial systems. The solutions obtained are in the form of computer tabulations which may be used for plotting the characteristic curves of the dynamic parameters of interest, namely the predicted operational readiness A(t), interval availability A(t1, t2), reliability or mission success R(t), and maintainability M(t). For the unique "stationary" case, when all times-to-failure and all times-to-repair obey the exponential probability law, the printout includes the predicted equilibrium availability A, the mean time to first system failure MTTFSF, the mean time between failures MTBF, and the mean time to repair MTTR of the system being analyzed. AFARS is an analytical algorithm using Markovian state-transition diagrams and is capable of solving reliability mathematical models of complex systems at least one order of magnitude faster in CPU processing time than simulation algorithms such as GERT by Pritsker and Whitehouse. AFARS is written in ANSI FORTRAN and is totally interactive. Furthermore, the user need not know FORTRAN or have any programming skills: in using AFARS, the user does no programming, writes no equations and does not need to know probability theory. The algorithm is designed to run on mini and large-frame computers. The AFARS program is not restricted to solving reliability models; it may also be used to solve queueing models and any stochastic model that can be described as a birth-and-death process.
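For the simplest building block, the "stationary" quantities above have textbook closed forms. The sketch below gives the standard two-state Markov results for a single repairable unit with exponential failure rate λ and repair rate μ; it illustrates the mathematics, not AFARS output.

```python
import math

def availability(t, lam, mu):
    """Point availability A(t) of a single repairable unit that starts 'up',
    with exponential failure rate lam and repair rate mu:
        A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

def steady_state_availability(mtbf, mttr):
    """Equilibrium availability A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)
```

A(t) starts at 1 and decays monotonically to the equilibrium value μ/(λ+μ), which equals MTBF/(MTBF+MTTR) since MTBF = 1/λ and MTTR = 1/μ.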

10.
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. Effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers. Some sample procedures in DFL are presented. The implementation aspects are not discussed in detail, since no new problems are encountered. The language DFL embodies the concepts of functional programming but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
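The execution model can be sketched directly: a graph of operators fires in "waves" of data-independent operations, which is exactly what a data flow machine would run in parallel. This toy scheduler is our own illustration of the model, not DFL itself.

```python
def dataflow_run(graph, inputs):
    """graph: node -> (fn, [operand names]); operands may be other graph
    nodes or external inputs. Repeatedly fires every node whose operands
    are available; nodes in the same wave have no data dependencies on
    each other and could execute in parallel on a data flow machine."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        wave = [n for n, (fn, args) in pending.items()
                if all(a in values for a in args)]
        if not wave:
            raise ValueError("cycle or missing input in data flow graph")
        for n in wave:
            fn, args = pending.pop(n)
            values[n] = fn(*(values[a] for a in args))
    return values
```

In the first wave below, the sum and product of x and y are independent and fire together; the difference must wait for both.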

11.
The adaptation of the Cooley–Tukey, Pease and Stockham FFTs to vector computers is discussed. Each of these algorithms computes the same result, namely the discrete Fourier transform; they differ only in the way intermediate computations are stored. Yet it is this difference that makes one or the other more appropriate depending on the application. This difference also influences computational efficiency on a vector computer and motivates the development of methods to improve efficiency. Each of the FFTs is defined rigorously by a short expository FORTRAN program which provides the basis for discussions about vectorization. Several methods for lengthening vectors are discussed, including the case of multiple and multi-dimensional transforms, where M sequences of length N can be transformed as a single sequence of length MN using a 'truncated' FFT. The implementation of an in-place FFT on a computer with memory-to-memory architecture is made possible by in-place matrix-vector multiplication.
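The "rigorous definition by a short expository program" translates naturally to Python: below, the O(N²) defining formula of the DFT and a radix-2 Cooley–Tukey FFT that must agree with it. This is our analogue of such an expository program; the storage-layout and vectorization details that distinguish the three algorithms are omitted.

```python
import cmath

def dft(x):
    """The direct O(N^2) defining formula of the discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out
```

Checking the fast algorithm against the defining formula is the same consistency argument the abstract's expository programs serve.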

12.
Consider a distributed system consisting of n computers connected by a number of identical broadcast channels. All computers may receive messages from all channels. We distinguish between two kinds of systems: systems in which the computers may send on any channel (dynamic allocation) and systems where the send port of each computer is statically allocated to a particular channel. A distributed task (application) is executed on the distributed system; a task performs execution as well as communication between its subtasks. We compare the completion time of the communication for such a task using dynamic allocation with the completion time using static allocation. Some distributed tasks benefit greatly from allowing dynamic allocation, whereas others work fine with static allocation. In this paper we define optimal upper and lower bounds on the gain (or loss) of using dynamic allocation compared to static allocation. Our results show that, for some tasks, the gain of permitting dynamic allocation is substantial: e.g. when , there are tasks which will complete 1.89 times faster using dynamic allocation compared to using the best possible static allocation, but there are no tasks with a higher such ratio. Received: 26 February 1998 / 26 July 1999

13.
This paper presents a new conception for a distributed, task-oriented real-time operating system comprising a compiler, an operating-system kernel and communication packages. The system, TOROS, supplies the tools for uniform programming of complex process-control applications on heterogeneous hardware including workstations, PCs, programmable controllers and microcontrollers. The whole control task is split into a set of small modules. These modules are uniformly programmed by defining a state machine and using guarded commands, and are connected logically through calls to tasks provided by other modules. The specification of the modules is done in a hardware-independent language. At compile time the modules are distributed to specified target computers; the system automatically translates each module into the particular code and realizes the communication between the modules, either on the same computer or through the links.
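The "state machine plus guarded commands" module style can be sketched generically; this is our own illustration in Python (TOROS itself generates target-specific code from a dedicated specification language).

```python
class Module:
    """A control module as a state machine with guarded commands: in the
    current state, the first transition whose guard holds for the incoming
    event fires its action and moves the machine to the next state."""

    def __init__(self, state, transitions):
        # transitions: state -> list of (guard, action, next_state)
        self.state = state
        self.transitions = transitions

    def step(self, event):
        for guard, action, nxt in self.transitions.get(self.state, []):
            if guard(event):
                action(event)
                self.state = nxt
                return True
        return False  # no guard matched: event ignored in this state
```

A hypothetical heater controller, for example, turns on below one temperature threshold and off above another, ignoring readings in between.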

14.
What's computation? The received answer is that computation is a computer at work, and a computer at work is that which can be modelled as a Turing machine at work. Unfortunately, as John Searle has recently argued, and as others have agreed, the received answer appears to imply that AI and Cog Sci are a royal waste of time. The argument here is alarmingly simple: AI and Cog Sci (of the "Strong" sort, anyway) are committed to the view that cognition is computation (or brains are computers); but all processes are computations (or all physical things are computers); so AI and Cog Sci are positively silly. I refute this argument herein, in part by defining the locutions 'x is a computer' and 'c is a computation' in a way that blocks Searle's argument but exploits the hard-to-deny link between "What's computation?" and the theory of computation. However, I also provide, at the end of this essay, an argument which, it seems to me, implies not that AI and Cog Sci are silly, but that they are based on a form of computation that is well "beneath" human persons.

15.
Reference [1] proposed a data-fusion optimization that merges arrays, and evaluated its effect on an IA-32 server platform. Those tests showed that on IA-32 machines, data fusion, guided by a performance cost model, can markedly improve the cache utilization of applications with non-contiguous data-access patterns. How, then, does the optimization fare on the new-generation IA-64 architecture? This paper takes an Intel IA-32 server and an HP ITANIUM server as platforms, compiles and runs the programs before and after the data-fusion transformation with the Intel FORTRAN compilers ifc and efc and the free compiler g95, and collects execution times and related performance data on both platforms. The results show that source-level data fusion does not cooperate well with the advanced optimizations of the efc compiler on the IA-64 platform: under the O3 optimization switch, the effect of the optimization is negative. This further indicates that advanced compiler optimizations such as data prefetching, loop transformations and data transformations must be considered jointly with the characteristics of the architecture to achieve good global optimization results. The paper serves as a starting point for studying the performance portability, on the IA-64 architecture, of the various compiler optimization algorithms designed for IA-32.

16.
Communication among the Process Computers of the Pangang (Panzhihua Steel) 1450 Hot Strip Mill Revamp System   Total citations: 1 (self-citations: 0, by others: 1)
Hu Yu, Liu Yachao, Lü Yanfeng. Control Engineering of China (控制工程), 2007, 14(4): 410-412
This paper briefly introduces the three modes of communication among the process computers of the Pangang 1450 hot strip mill revamp system. It describes the functions of the data center and of each process-control computer, and details the communication interfaces among the reheating-furnace, roughing-mill and finishing-mill process computers, as well as between the process computers and the data center. The communication programs employ mechanisms such as automatic retransmission and reconnection to guarantee fast, correct and reliable communication. Practice shows that the communication system runs smoothly and reliably.

17.
Title of program: COULFG: Coulomb, Bessel Functions
Catalogue number: ABNK
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)
Computer: IBM 370/165 and AS/7000; Installation: Daresbury Laboratory, Warrington, Lancs.
Operating system: OS/360 GI compiler and HX compiler (level 2.2.1)
Programming language used: ASA FORTRAN
High-speed storage required: 180 Kbytes
No. of bits in a word: 32
Overlay structure: none
Peripherals used: card reader, printer
No. of cards in combined program and test deck: 432
Card punching code: EBCDIC

18.
19.
Many distributed applications must meet stringent performance requirements even when the performance characteristics of the underlying systems and networks vary significantly at runtime. Runtime adaptation can be used to tolerate such changes, but sophisticated adaptive distributed programs can be extremely challenging to design, implement and debug. This paper proposes a language called Program Control Language (PCL) that provides a novel means of specifying adaptations in distributed applications. PCL is based on an abstract, global representation of a distributed program (a static task graph), which enables a programmer to reason about and to describe a wide range of application-specific adaptation strategies at a high level, using a few key mechanisms. PCL provides simple high-level syntax for local and remote adaptation operations, for local and remote performance monitoring (and aggregation), and for performing adaptations synchronously or asynchronously with respect to the execution of the application process initiating the adaptation. The global task graph representation enables remote performance metrics and adaptation operations to be specified in simple global terms by any process, and the compiler and runtime system automatically perform the communication and synchronization required for the remote operations. The paper describes the conceptual adaptation framework, the PCL language, and our implementation of the PCL compiler and runtime system. It uses three adaptive application examples to illustrate the capabilities and benefits of PCL, and to show experimentally that the performance overheads of using PCL to implement an adaptive application are negligible.

20.
P. H. Ng  G. Young 《Software》1978,8(4):421-427
This paper reports a FORTRAN post-mortem dump system (PMD) for the ICL 1900 computers. The system, jointly implemented by Birmingham and Liverpool Universities, can perform a core/storage dump in terms of the original FORTRAN source, following the segment (subroutine, etc.) history of execution, when the program fails to terminate successfully. The compilation overheads of the new system are very low and the execution overheads practically none.
