Similar Documents
20 similar documents found.
1.
Multiple criteria decision making (MCDM) tools have been used in recent years to solve a wide variety of problems. In this paper we consider a nation-wide crop-planning problem and show how an MCDM tool can be used efficiently and effectively for this type of problem. A crop-planning problem is usually formulated as a single-objective linear programming model, where the objective is either maximization of the return from cultivated land or minimization of the cost of cultivation. This type of problem, however, normally involves more than one goal. We therefore formulate the crop-planning problem as a goal program (an MCDM tool) and discuss the importance of three different goals for a case problem. We solve the goal program with a real-world data set and compare the solution with that of the linear program. We argue that the goal program provides better insight into the problem and thus better decision support.
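As a hedged sketch of the goal-programming formulation described above (the three goals, symbols, and weights below are illustrative assumptions, not taken from the paper), a weighted goal program over crop areas might read:

```latex
\begin{aligned}
\min \quad & w_1\,(d_1^- + d_1^+) + w_2\,(d_2^- + d_2^+) + w_3\,(d_3^- + d_3^+) \\
\text{s.t.} \quad & \textstyle\sum_j r_j x_j + d_1^- - d_1^+ = G_1 \quad \text{(return goal)} \\
 & \textstyle\sum_j c_j x_j + d_2^- - d_2^+ = G_2 \quad \text{(cost goal)} \\
 & \textstyle\sum_j l_j x_j + d_3^- - d_3^+ = G_3 \quad \text{(e.g., labour goal)} \\
 & \textstyle\sum_j x_j \le A, \qquad x_j,\; d_k^-,\; d_k^+ \ge 0,
\end{aligned}
```

where $x_j$ is the area allotted to crop $j$, $G_k$ are goal targets, $d_k^-$ and $d_k^+$ are under- and over-achievement deviations, and $A$ is the total cultivable land; the single-objective LP is the special case that keeps only the return (or cost) term.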

2.
The paper presents a new approach to the problem of completeness of SLDNF-resolution. We propose a different reference theory that we call strict completion. This new notion of completion (comp*(P)) is based on a program transformation that turns any program into a strict one with the same computational behaviour, combined with the usual notion of program completion. We consider it a reasonable reference theory for discussing program semantics and completeness results. Standard 2-valued logic is used. The new comp*(P) is always consistent, and completeness of all allowed programs and goals with respect to comp*(P) is proved.
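For orientation, the standard (Clark) completion on which comp*(P) builds replaces the clauses for each predicate by an if-and-only-if definition; the toy propositional program below is my own illustration, not an example from the paper:

```latex
P:\; p \leftarrow q. \qquad p \leftarrow r. \qquad q.
\qquad\Longrightarrow\qquad
comp(P):\; p \leftrightarrow (q \lor r),\quad q \leftrightarrow \mathit{true},\quad r \leftrightarrow \mathit{false}.
```

The strict completion comp*(P) applies the same construction after first transforming the program into a strict one with the same computational behaviour.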

3.
4.
A solution for Chinese character conversion between IBM mainframes and minicomputers   (Cited by: 1 in total; self-citations: 0; citations by others: 1)
This paper describes the data-conversion problems that arise when Chinese data are transferred via CICS between an IBM ES/9000 mainframe (running MVS/VSE) and an RS/6000 minicomputer (running AIX). It analyzes why Chinese-character EBCDIC codes and Chinese-character ASCII codes cannot be converted correctly through CICS configuration alone, and presents two solutions: the first combines CICS programs, Java programs, and CICS configuration to perform the conversion; the second uses only Java programs and CICS configuration.

5.
We describe a library and an associated set of applications for locating seismic events. The library is called the GENeralized LOCation (GENLOC) library because it is a general library that implements most methods commonly used for single-event location. The library has a flexible implementation of the standard Gauss–Newton method with many options for weighting schemes, inversion methods, and algorithms for choosing an initial location estimate. GENLOC also has a grid-search algorithm that makes no assumptions about the geometry of the grid it is searching, returning only the point with the best-fit solution for the specified residual norm. GENLOC supports both arrival-time and array slowness-vector measurements. A unique feature is the strong separation between the travel-time/earth-model problem and the location estimation. GENLOC can utilize data from any seismic phase for which the user can supply an earth model and a method to compute theoretical travel times and/or slowness values. The GENLOC library has been used in five different working applications: (1) a simple command-line program, (2) an interactive graphical user interface version used in an analyst information system, (3) a database-driven relocation program, (4) a recent implementation of the progressive multiple-event location method, and (5) a real-time location program. We ran a validation test against LOCSAT and found reasonable consistency in estimated locations. We attribute the observed differences in the solutions to roundoff errors in the different calculators used by the two programs.
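A minimal sketch of the damped Gauss–Newton step that such locators iterate, assuming a constant-velocity, straight-ray travel-time model purely for illustration (this is not the GENLOC API; the `travel_time` function below stands in for a user-supplied calculator):

```python
import numpy as np

V = 6.0  # assumed constant P-wave velocity in km/s (illustrative only)

def travel_time(src, station):
    """Straight-ray travel time from a source (x, y, z, t0) to a station at (x, y, z)."""
    return src[3] + np.linalg.norm(src[:3] - station) / V

def gauss_newton_locate(stations, picks, x0, iters=10, damping=1e-3):
    """stations: (N, 3) array; picks: (N,) observed arrival times; x0: initial (x, y, z, t0)."""
    x = np.asarray(x0, dtype=float)
    stations = np.asarray(stations, dtype=float)
    picks = np.asarray(picks, dtype=float)
    for _ in range(iters):
        pred = np.array([travel_time(x, s) for s in stations])
        resid = picks - pred
        # Numerical Jacobian of predicted times with respect to (x, y, z, t0).
        J = np.zeros((len(stations), 4))
        for j in range(4):
            step = np.zeros(4)
            step[j] = 1e-4
            J[:, j] = (np.array([travel_time(x + step, s) for s in stations]) - pred) / 1e-4
        # Damped least-squares update; a crude stand-in for the library's
        # weighting-scheme and inversion-method options.
        x = x + np.linalg.solve(J.T @ J + damping * np.eye(4), J.T @ resid)
    return x
```

Swapping `travel_time` for a table- or model-driven calculator is the analogue of the separation the abstract highlights between the travel-time/earth-model problem and the location estimation.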

6.
Information Systems, 1986, 11(1): 87–100
A CODASYL database (db) statement in an application program can have one or more different semantics associated with it, depending on the path through which execution reaches that statement. This makes the CODASYL operations in a db program context dependent. Previous work on converting db programs from the record-at-a-time CODASYL interface to the set-at-a-time interface of the relational model considers a limited class of programs in which each db statement is assumed to have a unique semantic interpretation. In this paper, we define a framework for analyzing the multiple semantics of CODASYL operations and their context dependencies. We then show how the analysis can be used to convert a CODASYL db program that contains statements with ambiguous, multiply-defined semantics. The method described here allows us to convert any CODASYL db program.

7.
Temporal safety properties of programs can be described by finite state machines (FSMs), and static checking of such properties is an active research topic. This paper proposes an FSM slicing technique that, in a demand-driven fashion, extracts program slices that are equivalent with respect to the temporal safety property. The slices reduce the size of the code to be checked and simplify its structure, thereby reducing the chance of combinatorial explosion during checking and ultimately improving both the precision and the scalability of static checking of temporal safety properties. Experiments show that FSM slicing improves the scalability of Saturn by an average factor of 6.34 and the precision of Fastcheck by an average factor of 1.20.
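A hedged illustration of a temporal safety property expressed as an FSM (the open/read/close protocol below is a stock example, not one from the paper); only statements that can fire FSM transitions are relevant to the property, which is the intuition behind property-equivalent slices:

```python
# Temporal safety property as an FSM: reads are only legal on an opened resource,
# and any transition not listed is a violation.
ERROR = "error"
TRANSITIONS = {
    ("closed", "open"):  "opened",
    ("opened", "read"):  "opened",
    ("opened", "close"): "closed",
}

def check_trace(events, start="closed"):
    """Run an event trace through the FSM and report the first violation, if any."""
    state = start
    for ev in events:
        state = TRANSITIONS.get((state, ev), ERROR)
        if state == ERROR:
            return f"violation at event '{ev}'"
    return "ok"

print(check_trace(["open", "read", "close"]))  # ok
print(check_trace(["read"]))                   # violation at event 'read'
```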

8.
OMEGA is a rule-based program which rapidly generates conformational ensembles of small molecules. We have varied the parameters which control the nature of the ensembles generated by OMEGA in a statistical fashion (D-optimal) with the aim of increasing the probability of generating bioactive conformations. Thirty-six drug-like ligands from different ligand-protein complexes determined by high-resolution (≤2.0 Å) X-ray crystallography have been analyzed. Statistically significant models (Q² ≥ 0.75) confirm that one can increase the performance of OMEGA by modifying the parameters. Twenty-eight of the bioactive conformations were retrieved when using a low-energy cut-off (5 kcal/mol), a low RMSD value (0.6 Å) for duplicate removal, and a maximum of 1000 output conformations. All of those that were not retrieved had eight or more rotatable bonds. The duplicate-removal parameter was found to have the largest impact on retrieval of bioactive conformations, and the maximum number of conformations also affected the results considerably. The input conformation was found to influence the results largely because certain bond angles can prevent the bioactive conformation from being generated as a low-energy conformation. Pre-optimizing the input structures with MMFF94s improved the results significantly. We also investigated the performance of OMEGA in connection with database searching. The shape-matching program Rapid Overlay of Chemical Structures (ROCS) was used as the search tool. Two multi-conformational databases were built from the MDDR database plus the 36 compounds; one large (maximum 1000 conformations/mol) and one small (maximum 100 conformations/mol). Both databases provided satisfactory results in terms of retrieval. ROCS was able to rank 35 out of 36 X-ray structures among the top 500 hits from the large database.
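As a hedged sketch of what the duplicate-removal parameter controls (illustrative only; OMEGA's own pruning superimposes structures and works on heavy atoms, which this skips for brevity):

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays (no alignment)."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def prune_duplicates(conformers, threshold=0.6):
    """Keep a conformer only if it differs by more than `threshold` Å from every kept one."""
    kept = []
    for conf in conformers:
        if all(rmsd(conf, k) > threshold for k in kept):
            kept.append(conf)
    return kept
```

A tighter threshold keeps more closely spaced conformers in the ensemble, which is consistent with the abstract's finding that this parameter has the largest impact on retrieving the bioactive conformation.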

9.
In this paper, we present two algorithms that simultaneously design the flow path and the locations of its pickup and delivery (P/D) stations in a block layout for Automated Guided Vehicles (AGVs). The first is a cutting-plane algorithm that solves the mixed integer linear program modelling the problem. The second is a Simulated Annealing (SA) approach that solves the problem heuristically to a near-optimal solution. Computational results show the performance of both algorithms.
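A minimal sketch of the generic simulated-annealing loop in the spirit of the second algorithm; the cost function, neighbourhood move, and cooling parameters are placeholders, not the paper's formulation:

```python
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=100.0, cooling=0.999, steps=10_000):
    """Generic SA loop: `cost` maps a candidate layout to a scalar, `neighbour` perturbs it."""
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worsenings with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best
```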

10.
When using microarray analysis to determine gene dependence, one of the goals is to identify differentially expressed genes. However, the inherent variations make the analysis challenging. We propose a statistical method (SRA, swapped and regression analysis) designed especially for dye-swapped designs and small sample sizes. Under general assumptions about the structure of the channel, scanner, and target effects of the experiment, we prove that SRA removes bias caused by these effects. We compare our method with ANOVA, using both simulated and real data. The results show that SRA has consistent sensitivity for the identification of differentially expressed genes in dye-swapped microarrays, particularly when the sample size is small. A program implementing the proposed method is available at http://www.ibms.sinica.edu.tw/~csjfann/firstflow/program.htm.

11.
A reduced cover set of the set of full-reducer semijoin programs for an acyclic query graph in a distributed database system is given. An algorithm is presented that determines the minimum-cost full-reducer program. The computational complexity of finding the optimal full reducer for a single relation is of the same order as that of finding the optimal full reducer for all relations. The optimization algorithm is able to handle query graphs where more than one attribute is common between the relations. A method for determining the optimum profitable semijoin program is presented, along with a low-cost algorithm that determines a near-optimal profitable semijoin program. This is done by converting a semijoin program into a partial-order graph, which also allows one to maximize the concurrent processing of the semijoins. It is shown that the minimum response time is given by the largest-cost path of the partial-order graph. This reducibility is used as a post-optimizer for the SDD-1 query optimization algorithm. It is shown that the least upper bound on the length of any profitable semijoin program is N(N-1) for a query graph of N nodes.
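A small sketch of the claim that the minimum response time equals the largest-cost path of the partial-order graph; the semijoin costs and precedence edges below are invented for illustration:

```python
from functools import lru_cache

# Illustrative costs and precedence edges of a tiny partial-order graph of semijoins.
cost = {"sj1": 1, "sj2": 2, "sj3": 3, "sj4": 5}
succ = {"sj1": ["sj3"], "sj2": ["sj3", "sj4"], "sj3": [], "sj4": []}

@lru_cache(maxsize=None)
def longest_from(node):
    """Cost of the most expensive chain of semijoins starting at `node`."""
    return cost[node] + max((longest_from(s) for s in succ[node]), default=0)

response_time = max(longest_from(n) for n in succ)
print(response_time)  # 7, from the chain sj2 -> sj4
```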

12.
Correlated survival outcomes occur quite frequently in biomedical research. Available software is limited, particularly if we wish to obtain a smoothed estimate of the baseline hazard function in the context of a random-effects model for correlated data. The main objective of this paper is to describe an R package called frailtypack that can be used for estimating the parameters in a shared gamma frailty model with possibly right-censored, left-truncated, stratified survival data using penalized likelihood estimation. A time-dependent structure for the explanatory variables and/or an extension of the Cox regression model to recurrent events are also allowed. The program can also be used simply to obtain a smooth estimate of the baseline hazard function directly. To illustrate the program we used two data sets, one with clustered survival times and the other with recurrent events, namely the rehospitalizations of patients diagnosed with colorectal cancer. We show how to fit the model with recurrent events and time-dependent covariates using the Andersen-Gill approach.

13.
14.
An open-source program to generate zero-thickness cohesive interface elements in existing finite element discretizations is presented. This contribution fills a gap in the literature: to the best of the author's knowledge, no such program exists. The program is useful in numerical modeling of material/structure failure using cohesive interface elements. It is able to generate one- and two-dimensional, linear and quadratic cohesive elements (i) at all inter-element boundaries, (ii) at material interfaces, and (iii) at grain boundaries in polycrystalline materials. The algorithms and usage of the program are discussed. Several two-dimensional and three-dimensional fracture mechanics problems are given, including the debonding of material interfaces, multiple delamination of composite structures, and crack propagation in polycrystalline structures.
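A hedged 2-D sketch of the core operation such a generator performs: duplicate the nodes of a shared edge and link the old and new copies with a zero-thickness 4-node element (minimal, linear-triangle-only illustration; the actual program also handles quadratic elements, material interfaces, and grain boundaries):

```python
def insert_interface(nodes, elems, elem_a, elem_b):
    """nodes: list of (x, y); elems: list of node-index triples (modified in place).
    Returns the connectivity of the new 4-node zero-thickness cohesive element."""
    shared = sorted(set(elems[elem_a]) & set(elems[elem_b]))
    assert len(shared) == 2, "elements must share exactly one edge"
    # Duplicate the two shared nodes; element B is rewired to the copies.
    new_ids = []
    for nid in shared:
        nodes.append(nodes[nid])            # same coordinates: zero thickness
        new_ids.append(len(nodes) - 1)
    remap = dict(zip(shared, new_ids))
    elems[elem_b] = [remap.get(n, n) for n in elems[elem_b]]
    # Bottom face on the original nodes, top face on the duplicated copies.
    return shared + new_ids[::-1]

nodes = [(0, 0), (1, 0), (0, 1), (1, 1)]
elems = [[0, 1, 2], [1, 3, 2]]              # two triangles sharing edge (1, 2)
cohesive = insert_interface(nodes, elems, 0, 1)   # -> [1, 2, 5, 4]
```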

15.
The MCS/SEL/BAS program provides a method for group recognition based on a criterion of homogeneity within the groups. The basic aim of this clustering method is not to 'force' data into a number of separate groups, as it allows a given element in the data set to be assigned to more than one group. Moreover, a parsimonious path through the groups is sought by selecting groups on the basis of two suitably chosen, peak-ordered criteria. This selection continues until a covering of the data set is obtained (i.e., until each element in the data set is assigned to at least one group). Relationships among the set of selected groups are then investigated by means of two coefficients, called the overlapping coefficient and the cohesion coefficient. The utility of this program has been demonstrated here in elaborating large sets of data derived from mating-type interactions of ciliates, but it can also be used for analyzing data derived from a wide spectrum of compatibility phenomena exhibited by other living organisms. The algorithms of this program are written in BASIC and formulated in a conversational mode for processing on a Macintosh. The computer program (MCS/SEL/BAS) is available from G. Mancini upon request.

16.
Assuming an electrocardiogram sampling rate of 200 samples/s, the time interval between two successive analog-to-digital conversions is 5 ms. We present a program which recognizes R-wave peaks of electrocardiograms and measures R-R time intervals within this critical period. The program is written in assembly language and runs on a PDP-12. The cycle time is 1.6 μs, and most instructions require one to three cycles for execution on the computer used. Since the whole sequence of this program executes in about 160 μs, electrocardiograms recorded on a data recorder can be analysed at 31 times the recording speed. As an application of this routine, we present a program for displaying an R-R interval histogram and a joint interval histogram.
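A hedged, modern restatement of the idea in Python (the original routine is PDP-12 assembly; the threshold heuristic and refractory period below are assumptions for illustration, not values from the paper):

```python
import numpy as np

FS = 200  # sampling rate in samples/s -> 5 ms between successive A/D conversions

def detect_r_peaks(ecg, threshold=None, refractory_ms=200):
    """Indices of R peaks: local maxima above a threshold, separated by a refractory period."""
    ecg = np.asarray(ecg, dtype=float)
    if threshold is None:
        threshold = 0.6 * ecg.max()  # assumed heuristic threshold
    refractory = int(refractory_ms * FS / 1000)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    return np.array(peaks)

def rr_histogram(peaks, bin_ms=10):
    """Histogram of R-R intervals in milliseconds (assumes at least two detected peaks)."""
    rr_ms = np.diff(peaks) * 1000 / FS
    bins = np.arange(0, rr_ms.max() + bin_ms, bin_ms)
    return np.histogram(rr_ms, bins=bins)
```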

17.
Bitflips, or single-event upsets (SEUs) as they are more formally known, may occur, for instance, when a high-energy particle such as a proton strikes a CPU, thereby corrupting the contents of an on-chip register, e.g., by randomly flipping one or more bits in that register. Such random changes in central registers may lead to critical failures in the execution of a program, which is especially problematic for safety- or security-critical applications. Even though SEUs are well studied in the literature, relatively little attention has been given to the formal modelling of SEUs and the application of formal methods to mitigate their consequences. In this paper we develop a formal semantic framework for easy formal modelling of a large variety of SEUs in a core assembly language capturing the essential features of the ARM assembly language. Based on the semantic framework, we derive, and formally prove correct, a static analysis that enforces so-called blue/green separation in a given program. Static blue/green separation is a language-based fault-detection technique relying on inlined replication of critical code. A program that has proper blue/green separation is shown to be fault-tolerant with respect to SEUs in data registers. However, this technique requires specialised hardware support in order to achieve full coverage. We therefore use our semantic framework to further develop so-called gadgets, essentially small code fragments that emulate the behaviour of blue/green instructions in a safe manner. The gadgets allow us to achieve partial blue/green separation without specialised hardware support. Finally, we show how our semantic framework can be used to extract timed-automata models of ARM assembly programs. We then apply statistical model checking to these timed-automata models, enabling us to model, analyse, and quantify program behaviour in the presence of fault models that go well beyond data-flow SEUs, e.g., bitflips in program counters or in the code itself. We use this approach to provide evidence that our suggested program modifications, i.e., the use of gadgets, significantly decrease the probability of such faults going undetected.
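A hedged sketch of the inlined-replication idea behind blue/green separation, using Python as a stand-in for the ARM-level transformation: the critical value is computed in two independent copies and compared before use, so a single-register SEU does not go undetected:

```python
import random

def flip_random_bit(value, width=32):
    """Model a single-event upset: flip one randomly chosen bit of a register value."""
    return value ^ (1 << random.randrange(width))

def duplicated_add(a, b, inject_fault=False):
    """Compute a + b twice in independent 'registers' and compare before use."""
    blue = a + b                      # primary ("blue") copy
    green = a + b                     # replicated ("green") copy
    if inject_fault:
        blue = flip_random_bit(blue)  # an SEU corrupts one copy only
    if blue != green:
        raise RuntimeError("SEU detected: replicated results disagree")
    return blue

duplicated_add(2, 3)   # returns 5
# duplicated_add(2, 3, inject_fault=True) raises -> the fault does not go undetected
```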

18.
19.
The amount of memory required by a parallel program may be spectacularly larger than the memory required by an equivalent sequential program, particularly for programs that use recursion extensively. Since most parallel programs are nondeterministic in behavior, even when computing a deterministic result, parallel memory requirements may vary from run to run, even with the same data. Hence, parallel memory requirements may be both large (relative to the memory requirements of an equivalent sequential program) and unpredictable. We assume that each parallel program has an underlying sequential execution order that may be used as a basis for predicting parallel memory requirements. We propose a simple restriction that is sufficient to ensure that any program that will run in n units of memory sequentially can run in mn units of memory on m processors, using a scheduling algorithm that is always within a factor of two of optimal with respect to time. Any program can be transformed into one that satisfies the restriction, but some potential parallelism may be lost in the transformation. Alternatively, it is possible to define a parallel programming language in which only programs satisfying the restriction can be written.

20.
We define two normal forms for CSP programs. In the First Normal Form, each process contains only one repetitive I/O command, and all of its I/O commands appear as guards of this command. In the Second Normal Form, all guards of this repetitive command are I/O guards. We describe an inductive method that transforms any CSP program into an equivalent program in first or second normal form. The notion of equivalence is discussed. It is shown that no transformation into second normal form can preserve deadlock freedom.
