Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
We present a formal approach to studying the evolution of biological networks. We use the Beta Workbench and its BlenX language to model and simulate networks in combination with evolutionary algorithms. Mutations are applied to the structure of BlenX programs, and networks are selected at each generation using a fitness function. The feasibility of the approach is illustrated with a simple example.
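A minimal Python sketch of the mutate-and-select loop described above, with a toy string genome standing in for a BlenX program and a hypothetical target-matching fitness function; it is not the Beta Workbench or BlenX API.

```python
import random

random.seed(0)

TARGET = "feedback-loop"   # hypothetical phenotype rewarded by the fitness function
ALPHABET = "abcdefghijklmnopqrstuvwxyz-"

def fitness(genome: str) -> int:
    # Toy fitness: number of positions matching the target description.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: str, rate: float = 0.1) -> str:
    # Point mutations on the program text stand in for structural BlenX mutations.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def evolve(pop_size: int = 30, generations: int = 200) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select the fitter half at each generation, then refill by mutation.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print(evolve())
```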

2.
3.
4.
The R package flexmix provides flexible modelling of finite mixtures of regression models using the EM algorithm. Several new features of the software are introduced, such as fixed and nested varying effects for mixtures of generalized linear models, and multinomial regression for the a priori probabilities given concomitant variables. The use of the software, together with model selection, is demonstrated on a logistic regression example.
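The flexmix interface itself is R; as a language-neutral illustration, the numpy sketch below runs the same kind of EM fit for a two-component mixture of linear regressions on synthetic data (the component count, starting values, and data-generating lines are arbitrary choices for the example, not part of flexmix).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data drawn from two different regression lines mixed together.
n = 400
x = rng.uniform(-3, 3, n)
component = rng.random(n) < 0.5
y = np.where(component, 2.0 * x + 1.0, -1.5 * x + 4.0) + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])        # design matrix with an intercept

K = 2
beta = rng.normal(size=(K, 2))              # per-component regression coefficients
sigma = np.ones(K)                          # per-component residual std. deviations
mix = np.full(K, 1.0 / K)                   # mixing proportions (prior probabilities)

for _ in range(100):
    # E-step: responsibilities from the Gaussian regression likelihoods.
    resid = y[:, None] - X @ beta.T                          # shape (n, K)
    dens = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    w = mix * dens
    w /= w.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per component, then update sigma and mix.
    for k in range(K):
        Xw = X.T * w[:, k]
        beta[k] = np.linalg.solve(Xw @ X, Xw @ y)
        sigma[k] = np.sqrt((w[:, k] * (y - X @ beta[k]) ** 2).sum() / w[:, k].sum())
    mix = w.mean(axis=0)

print("coefficients:\n", np.round(beta, 2))
print("mixing proportions:", np.round(mix, 2))
```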

5.
In this paper we propose a class of flexible weight functions for use in comparing two cumulative incidence functions. The proposed weights allow users to focus the comparison on an early or a late time period after treatment, or to treat all time points with equal emphasis. These weight functions can be used to compare two cumulative incidence functions via their risk difference, their relative risk, or their odds ratio. The proposed method has been implemented in the R package CIFsmry, which is readily available for download and is easy to use, as illustrated in the example.
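As a rough numpy illustration of the idea (the weight family and the summary statistic here are assumptions for the example, not the CIFsmry implementation): a weight w(u) = u^a (1 - u)^b on rescaled follow-up time shifts emphasis toward early (b > a) or late (a > b) differences, or weights all time points equally (a = b = 0), before averaging the pointwise risk difference between two cumulative incidence curves.

```python
import numpy as np

def weighted_cif_difference(times, cif1, cif2, a=0.0, b=0.0):
    """Weighted average of CIF1(t) - CIF2(t) over a common time grid.

    a = b = 0 weights all time points equally; b > a emphasises early
    follow-up, a > b emphasises late follow-up (illustrative weight family).
    """
    times = np.asarray(times, dtype=float)
    u = times / times[-1]                       # rescale follow-up to [0, 1]
    w = u ** a * (1.0 - u) ** b
    diff = np.asarray(cif1) - np.asarray(cif2)  # pointwise risk difference
    # Trapezoidal average of the weighted difference over follow-up.
    return np.trapz(w * diff, times) / np.trapz(w, times)

# Toy example: the treated arm has lower cumulative incidence, mostly late on.
t = np.linspace(0.1, 5.0, 50)
cif_control = 1 - np.exp(-0.30 * t)
cif_treated = 1 - np.exp(-0.20 * t)
print("equal weights :", weighted_cif_difference(t, cif_control, cif_treated))
print("late emphasis :", weighted_cif_difference(t, cif_control, cif_treated, a=2))
```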

6.
A system based on ROOT for handling the micro-DST of the BaBar experiment is described. The purpose of the Kanga system is to have micro-DST data available in a format well suited for data distribution within a world-wide collaboration with many small sites. The design requirements, implementation and experience in practice after three years of data taking by the BaBar experiment are presented.

7.
The semantics of a proof language relies on the representation of the state of a proof after a logical rule has been applied. This information, which is usually meaningless from a logical point of view, is fundamental for describing the control mechanism of the proof search provided by the language. In this paper, we present a monadic datatype to represent the state information of a proof, and we illustrate its use in the PVS theorem prover. We show how this representation can be used to design a new set of powerful tacticals for PVS, called PVS#, with a simpler and clearer semantics than that of the standard PVS tacticals.
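A toy Python sketch of the underlying idea, not the PVS# datatype itself: a tactic is modelled as a partial transformer of an explicit proof state, and tacticals such as sequencing and alternation are built by monad-style composition.

```python
from dataclasses import dataclass, replace
from typing import Callable, Optional

@dataclass(frozen=True)
class ProofState:
    goals: tuple                  # open goals; strings stand in for sequents

# A tactic maps a proof state to a new proof state, or None on failure.
Tactic = Callable[[ProofState], Optional[ProofState]]

def then(t1: Tactic, t2: Tactic) -> Tactic:
    """Sequencing tactical: feed the state produced by t1 into t2."""
    def tac(state):
        mid = t1(state)
        return t2(mid) if mid is not None else None
    return tac

def orelse(t1: Tactic, t2: Tactic) -> Tactic:
    """Alternation tactical: try t1; on failure, apply t2 to the original state."""
    def tac(state):
        out = t1(state)
        return out if out is not None else t2(state)
    return tac

def close_goal(name: str) -> Tactic:
    """Discharge one named goal if it is open; fail otherwise."""
    def tac(state):
        if name in state.goals:
            return replace(state, goals=tuple(g for g in state.goals if g != name))
        return None
    return tac

script = then(close_goal("lemma1"),
              orelse(close_goal("missing"), close_goal("lemma2")))
print(script(ProofState(goals=("lemma1", "lemma2"))))   # ProofState(goals=())
```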

8.
A key feature of infrastructures providing coordination services is the ability to define the behaviour of coordination abstractions according to the requirements identified at design time. We take as a representative for this scenario the logic-based language ReSpecT (Reaction Specification Tuples), used to program the reactive behaviour of tuple centres. ReSpecT specifications are at the core of the engineering methodology underlying the TuCSoN infrastructure, and are therefore the “conceptual place” where formal methods can be fruitfully applied to guarantee relevant system properties. In this paper we introduce ReSpecT nets, a formalism that can be used to describe reactive behaviours that can succeed or fail, and that allows for an encoding into Petri nets with inhibitor arcs. ReSpecT nets are introduced to give a core model to a fragment of the ReSpecT language, and to pave the way for an analysis methodology including formal verification of safety and liveness properties. In particular, we provide a semantics to ReSpecT specifications through a mapping to ReSpecT nets. The potential of this approach for the analysis of ReSpecT specifications is discussed, presenting initial results for the analysis of safety properties.
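A small Python sketch of the target formalism's key ingredient, the inhibitor-arc enabling rule of a Petri net (a toy marking-based simulator; the net below is an invented example, not a ReSpecT-net encoding).

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    inputs: dict                                   # place -> tokens consumed
    outputs: dict                                  # place -> tokens produced
    inhibitors: set = field(default_factory=set)   # places that must be empty

def enabled(marking: dict, t: Transition) -> bool:
    # Standard enabling condition plus the inhibitor-arc rule:
    # every inhibitor place must hold zero tokens.
    return (all(marking.get(p, 0) >= n for p, n in t.inputs.items())
            and all(marking.get(p, 0) == 0 for p in t.inhibitors))

def fire(marking: dict, t: Transition) -> dict:
    assert enabled(marking, t)
    m = dict(marking)
    for p, n in t.inputs.items():
        m[p] -= n
    for p, n in t.outputs.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: a "reaction" consumes a triggering tuple unless a "failed" token
# inhibits it -- a rough analogue of reactions that can succeed or fail.
react = Transition(inputs={"trigger": 1}, outputs={"done": 1}, inhibitors={"failed"})
m0 = {"trigger": 1, "failed": 0}
print(enabled(m0, react), fire(m0, react))
print(enabled({"trigger": 1, "failed": 1}, react))   # inhibited -> False
```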

9.
OntoTrack is an ontology authoring tool that combines a graph-based hierarchical layout and instant reasoning feedback within one single view. Currently OntoTrack can handle ontologies with an expressivity almost comparable to OWL Lite. The graphical representation provides an animated and zoomable subsumption graph with context-sensitive features such as clickable miniature branches or selective detail views, together with drag-and-drop editing. Each editing step is instantly synchronised with an external reasoner in order to provide appropriate graphical feedback about relevant modelling consequences. A recent extension of OntoTrack provides an on-demand textual explanation for subsumption relationships between classes. This paper describes the key features of the current implementation and discusses future work, as well as some development issues. OntoTrack can be downloaded at http://www.informatik.uni-ulm.de/ki/ontotrack/.

10.
aITALC, a new tool for automating loop calculations in high energy physics, is described. The package automatically creates Fortran code for two-fermion scattering processes, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool and the communication between them, and illustrate its use with three examples.

Program summary

Title of the program: aITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on different distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04. Also on Solaris
Programming language used: GNU Make, Diana, Form, Fortran77
Additional programs/libraries used: Diana 2.35 (Qgraf 2.0), Form 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40 926
No. of bytes in distributed program, including test data, etc.: 371 424
Distribution format: tar gzip file
High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: Calculation of differential cross sections for e+e− annihilation in the one-loop approximation.
Method of solution: Generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors.
Restrictions on the complexity of the problem: The limit of application is, for the moment, 2→2 particle reactions in the electroweak Standard Model.
Typical running time: A few minutes, depending strongly on the complexity of the process and the Fortran compiler.

11.
We present in this work a new computational code for the quantum calculation of integral cross sections for atom-molecule (linear) scattering processes. The atom is taken to be structureless while the molecule can be in its singlet, doublet, or triplet spin states and can be treated as either a rigid rotor or a rovibrational target. All the relevant state-to-state integral cross sections, and their sums over final states, can be calculated with the present code, for which we also describe in detail the various component routines.

Program summary

Program title: ASPIN
Catalogue identifier: AEBO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 99 596
No. of bytes in distributed program, including test data, etc.: 1 267 615
Distribution format: tar.gz
Programming language: Fortran/MPI
Computer: AMD Opteron computing systems, model TYAN GX28 (B2882)
Operating system: SuSE Linux Professional 9
RAM: 128 GB
Classification: 2.6
External routines: LAPACK/BLAS
Nature of problem: Scattering of a diatomic molecule in its singlet, doublet, or triplet spin state with a structureless atom. Partial and integral cross sections.
Solution method: The coupled-channel equations that describe the scattering process are solved through propagation of the reactance (K) matrix, employing a modification of the Variable Phase Method [1-3].
Restrictions: Depending on the vib-rotational basis used, the problem may or may not fit into the available RAM, because all runtime-relevant quantities are stored in RAM rather than on disk.
Additional comments: Both serial and parallel implementations of the program are provided. The CPC Librarian was not able to successfully run the parallel version.
Running time: For simple, converged calculations the typical running time is of the order of a few minutes on the computer mentioned above, shorter for the singlet and longer for the triplet case.
References:
[1] F. Calogero, Variable Phase Approach to Potential Scattering, New York, 1967.
[2] A. Degasperis, Il Nuovo Cimento 34 (1964) 1667.
[3] C. Zemach, Il Nuovo Cimento 33 (1964) 939.

12.
Randomized algorithms are widely used to find efficient approximate solutions to complex problems (for instance, primality testing) and to obtain good average-case behavior. Proving properties of such algorithms requires subtle reasoning about both the algorithmic and the probabilistic aspects of programs, so providing tools for mechanizing this reasoning is an important issue. This paper presents a new method for proving properties of randomized algorithms in a proof assistant based on higher-order logic. It is based on the monadic interpretation of randomized programs as probabilistic distributions (Giry, Ramsey and Pfeffer). It requires neither the definition of an operational semantics for the language nor the development of a complex formalization of measure theory; instead it uses functional and algebraic properties of the unit interval. Using this model, we show the validity of general rules for estimating the probability that a randomized algorithm satisfies specified properties. The approach addresses only discrete distributions and gives rules for analyzing general recursive functions. We apply this theory to the formal proof of a program implementing a Bernoulli distribution from a coin flip, and to the (partial) termination of several programs. All the theories and results presented in this paper have been fully formalized and proved in the Coq proof assistant.
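The paper's development is in Coq; as a plain Python illustration of the underlying idea, the sketch below models programs as finite discrete distributions with the monadic return/bind, builds a Bernoulli(1/4) from two fair coin flips as a finite stand-in for the paper's Bernoulli construction, and estimates the probability that the result satisfies a property.

```python
from collections import defaultdict

def dirac(x):
    """'return' of the discrete distribution monad: all mass on one outcome."""
    return {x: 1.0}

def bind(dist, k):
    """Monadic bind: run k on each outcome and mix the results by probability."""
    out = defaultdict(float)
    for x, px in dist.items():
        for y, py in k(x).items():
            out[y] += px * py
    return dict(out)

def coin(p=0.5):
    return {True: p, False: 1.0 - p}

def prob(dist, pred):
    """Probability that an outcome of the program satisfies a property."""
    return sum(px for x, px in dist.items() if pred(x))

# A Bernoulli(1/4) built from two fair coin flips: succeed only on two heads.
two_heads = bind(coin(), lambda a: bind(coin(), lambda b: dirac(a and b)))
print(two_heads)                      # {True: 0.25, False: 0.75}
print(prob(two_heads, lambda x: x))   # 0.25
```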

13.
In a multicore transactional memory (TM) system, concurrent execution threads interact and interfere with each other through shared memory. The less interference a thread provokes, the better for the system. However, since a programmer is primarily interested in optimizing her individual code's performance rather than the system's overall performance, she has no natural incentive to provoke as little interference as possible. Hence, a TM system must be designed to be compatible with good programming incentives (GPI), i.e., writing efficient code for the overall system should coincide with writing code that optimizes an individual thread's performance. We show that with most contention managers (CMs) proposed in the literature so far, TM systems are not GPI compatible. We provide a generic framework for CMs that base their decisions on priorities, and explain how to modify Timestamp-like CMs so as to achieve GPI compatibility. In general, however, priority-based conflict-resolution policies are prone to be exploited by selfish programmers. In contrast, a simple non-priority-based manager that resolves conflicts at random is GPI compatible.
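To make the contrast concrete, here is a toy Python sketch of the two kinds of conflict-resolution policy the abstract compares, a Timestamp-like priority policy and a policy that picks the winner at random; the transaction representation is an assumption made only for this example.

```python
import random

def timestamp_cm(tx_a, tx_b):
    """Priority-based policy: the older transaction (smaller start time) wins."""
    return tx_a if tx_a["start"] < tx_b["start"] else tx_b

def randomized_cm(tx_a, tx_b):
    """Non-priority policy: resolve the conflict by a fair coin flip."""
    return random.choice([tx_a, tx_b])

# Two transactions conflicting on the same memory location.
t1 = {"id": "T1", "start": 10}
t2 = {"id": "T2", "start": 42}

print("timestamp winner :", timestamp_cm(t1, t2)["id"])    # always T1
print("randomized winner:", randomized_cm(t1, t2)["id"])   # T1 or T2, 50/50
```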

14.
Advances in wireless sensing and actuation technology allow embedding significant amounts of application logic inside wireless sensor networks. Such active WSN applications are more autonomous, but are significantly more complex to implement. Event-based middleware lends itself to implementing these applications: it offers developers fine-grained control over how an individual node interacts with the other nodes of the network. However, this control comes at the cost of event handlers that lack composability and violate software engineering principles such as separation of concerns. In this paper, we present CrimeSPOT, a domain-specific language for programming WSN applications on top of event-driven middleware. Its node-centric features enable programming a node's interactions through declarative rules rather than event handlers. Its network-centric features support reusing code within and among WSN applications. Unique to CrimeSPOT is its support for associating application-specific semantics with events that carry sensor readings; this precludes simply transposing existing approaches that address the shortcomings of event-based middleware to the domain of wireless sensor networks. We provide a comprehensive overview of the language and the implementation of its accompanying runtime, which comprises several extensions to the Rete forward-chaining algorithm. We evaluate the expressiveness of the language and the overhead of its runtime using small but representative active WSN applications.
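A rough Python sketch of the rule-based style this enables, using a naive forward-chaining loop over sensor-reading facts; the fact and rule shapes are invented for the example, and this is neither CrimeSPOT syntax nor the Rete algorithm that the runtime extends.

```python
# Facts are tuples such as ("temperature", node_id, value); a rule maps the
# current fact set to new facts. Rules fire repeatedly until nothing changes.

def high_temp_rule(facts):
    # Raise an alarm on any node reporting a temperature above 30 degrees.
    return {("alarm", f[1]) for f in facts
            if f[0] == "temperature" and f[2] > 30.0}

def neighbour_alert_rule(facts):
    # Propagate an alert to the neighbours of any alarming node.
    alarms = {f[1] for f in facts if f[0] == "alarm"}
    return {("alert", f[2]) for f in facts
            if f[0] == "neighbour" and f[1] in alarms}

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        changed = bool(new)
        facts |= new
    return facts

facts = {("temperature", "n1", 34.5), ("neighbour", "n1", "n2")}
print(forward_chain(facts, [high_temp_rule, neighbour_alert_rule]))
```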

15.
The updated version of the Helac-Phegas event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using the color connection representation. Phase-space generation is based on a multichannel approach, including optimization. Helac-Phegas generates parton-level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model at hadron and lepton colliders.

New version program summary

Program title: HELAC-PHEGAS
Catalogue identifier: ADMS_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 35 986
No. of bytes in distributed program, including test data, etc.: 380 214
Distribution format: tar.gz
Programming language: Fortran
Computer: All
Operating system: Linux
Classification: 11.1, 11.2
External routines: Optionally the Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/)
Catalogue identifier of previous version: ADMS_v1_0
Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306
Does the new version supersede the previous version?: Yes, partly
Nature of problem: One of the most striking features of final states at current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!.
Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed in order to overcome these computational obstacles. Calculating the amplitude through Dyson-Schwinger recursive equations results in a computational cost growing asymptotically as 3^n, where n is the number of particles involved in the process. Off-shell subamplitudes are introduced, for which a recursion relation has been obtained that expresses an n-particle amplitude in terms of subamplitudes with 1, 2, … up to (n−1) particles. The color connection representation is used to treat amplitudes involving colored particles. In the present version HELAC-PHEGAS can be used to efficiently obtain helicity amplitudes, total cross sections, and parton-level event samples in LHA format for arbitrary multiparticle processes in the Standard Model in leptonic, and pp collisions.
Reasons for new version: Substantial improvements, major functionality upgrade.
Summary of revisions: Color connection representation, efficient integration over PDFs via the PARNI algorithm, interface to LHAPDF, parton-level events generated in the most recent LHA format, k reweighting for parton-shower matching, numerical predictions of amplitudes for arbitrary processes at phase-space points provided by the user, a new user interface, and the possibility to run over computer clusters.
Running time: Depends on the process studied; usually from seconds to hours.
References:
[1] A. Kanaki, C.G. Papadopoulos, Comput. Phys. Comm. 132 (2000) 306.
[2] C.G. Papadopoulos, Comput. Phys. Comm. 137 (2001) 247.
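To illustrate the recursion idea only (not the HELAC algorithm itself, which handles the full Standard Model with colour and helicities), the Python toy below computes an off-shell current in a massless scalar phi^3 theory by a Dyson-Schwinger/Berends-Giele-style recursion over subsets of external legs; memoising the subcurrents is what turns the factorial sum over graphs into growth of order 3^n. The momenta and coupling are invented for the example.

```python
from functools import lru_cache, reduce
from itertools import combinations

# Toy massless scalar phi^3 theory; momenta are 4-tuples (E, px, py, pz).
MOMENTA = {
    1: (2.0, 0.0, 0.0, 2.0),
    2: (2.0, 0.0, 0.0, -2.0),
    3: (1.5, 0.5, 1.0, 0.0),
    4: (1.0, -0.5, -1.0, 0.3),
}
G = 1.0   # cubic coupling

def psum(legs):
    # Sum of the four-momenta of a set of external legs.
    return reduce(lambda p, q: tuple(a + b for a, b in zip(p, q)),
                  (MOMENTA[i] for i in legs))

def m2(p):
    # Minkowski invariant mass squared with metric (+, -, -, -).
    return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

@lru_cache(maxsize=None)
def current(legs):
    """Off-shell current J(legs), built recursively from smaller subcurrents."""
    if len(legs) == 1:
        return 1.0                       # external 'wavefunction' of a scalar leg
    total = 0.0
    members = set(legs)
    # Join two disjoint subcurrents at one cubic vertex, without double counting.
    for k in range(1, len(legs) // 2 + 1):
        for part in combinations(legs, k):
            rest = tuple(sorted(members - set(part)))
            if len(part) == len(rest) and part > rest:
                continue
            total += G * current(part) * current(rest)
    return total / m2(psum(legs))        # propagator of the off-shell internal line

# Subcurrent of legs 2, 3, 4: the building block of the 4-point amplitude.
print(current((2, 3, 4)))
```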

16.
In this paper, we introduce the concept of an exoskeleton as a new abstraction of arbitrary shapes that succinctly conveys both the perceptual and the geometric structure of a 3D model. We extract exoskeletons via a principled framework that combines segmentation and shape approximation. Our method starts from a segmentation of the shape into perceptually relevant parts and then constructs the exoskeleton using a novel extension of the Variational Shape Approximation method. Benefits of the exoskeleton abstraction to graphics applications such as simplification and chartification are presented.

17.
18.
We consider the issue of exploiting the structural form of Esterel programs to partition the algorithmic RSS (reachable state space) fixpoint construction used in model-checking techniques. The basic idea sounds utterly simple, as seen in the case of sequential composition: in P; Q, first compute entirely the states reached in P, and only then carry on to Q, each time using only the relevant part of the transition relation. A brute-force symbolic breadth-first search would instead have mixed the exploration of P and Q whenever P has behaviors of various lengths, resulting in irregular BDD representations of intermediate state spaces, a major cause of complexity in symbolic model-checking. Difficulties appear in our decomposition approach when scheduling the different transition parts in the presence of parallelism and local signal exchanges. Program blocks (or “macro-states”) put in parallel can be synchronized in various ways, due to dynamic behaviors, and considering all possibilities may lead to excessive division complexity. The goal here is to find a satisfactory trade-off between compositional and global approaches. Concretely, we use some of the features of the TiGeR BDD library, and heuristic orderings between internal signals, to have the transition relation progress through the program behaviors so as to obtain the same effect as a global RSS computation, but with much more localized transition applications. We provide concrete benchmarks showing the usefulness of the approach.
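A small Python sketch of the partitioning idea on sequential composition, using explicit state sets instead of BDDs and a hypothetical two-block program: the reachable states of P are computed to a fixpoint first, and only the states that entered Q are then carried forward under Q's transition relation.

```python
def reachable(init, step):
    """Breadth-first fixpoint: all states reachable from `init` under `step`."""
    seen, frontier = set(init), set(init)
    while frontier:
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return seen

# Hypothetical sequential program "P; Q": states are (block, counter) pairs.
# P counts 0..3 and then terminates into Q, which counts 0..2.
def step_P(state):
    block, n = state
    if block != "P":
        return set()
    return {("P", n + 1)} if n < 3 else {("Q", 0)}   # P's exit enters Q

def step_Q(state):
    block, n = state
    if block != "Q" or n >= 2:
        return set()
    return {("Q", n + 1)}

# Partitioned computation: exhaust P's transition relation first, then continue
# from the states that reached Q using only Q's transition relation.
states_P = reachable({("P", 0)}, step_P)
states_Q = reachable({s for s in states_P if s[0] == "Q"}, step_Q)
print(sorted(states_P | states_Q))
```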

19.
We investigate the integration of C implementations of fast arithmetic operations into Maple, focusing on triangular decomposition algorithms. We show substantial improvements over existing Maple implementations; our code also outperforms Magma on many examples. Profiling data show that data conversion can become a bottleneck for some algorithms, leaving room for further improvements.

20.
We examined the relationships of the constructs in the UTAUT model to determine how they are affected by culture. In our study, we used data from Korea and the U.S. to examine two technologies: the MP3 player and Internet banking. Results showed that the UTAUT model fits our data well. The comparison of Korea and the U.S. revealed that the effects of effort expectancy on behavioral intention and the effects of behavioral intention on use behavior were greater in the U.S. sample. The implications of this are discussed.
